IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 44, NO. 3, MAY 1998

Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding

Sergio Benedetto, Fellow, IEEE, Dariush Divsalar, Fellow, IEEE, Guido Montorsi, Member, IEEE, and Fabrizio Pollara, Member, IEEE
Manuscript received July 26, 1996; revised November 1, 1997. This work was supported in part by NATO under Research Grant CRG 951208. The work of S. Benedetto and G. Montorsi was supported by the Italian National Research Council (CNR) under "Progetto Finalizzato Trasporti (Prometheus)," by MURST (Progetto 40% "Comunicazioni con Mezzi Mobili"), and by Qualcomm. The research described in this paper was performed in part at the Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA, under contract with the National Aeronautics and Space Administration (NASA). The material in this paper was presented in part at the IEEE International Symposium on Information Theory, Ulm, Germany, June 29–July 4, 1997.

S. Benedetto and G. Montorsi are with the Dipartimento di Elettronica, Politecnico di Torino, Italy.

D. Divsalar and F. Pollara are with the Jet Propulsion Laboratory, Pasadena, CA 91109 USA.

Publisher Item Identifier S 0018-9448(98)02352-9.
Abstract—A serially concatenated code with interleaver consists of the cascade of an outer encoder, an interleaver permuting the outer codeword bits, and an inner encoder whose input words are the permuted outer codewords. The construction can be generalized to h cascaded encoders separated by h − 1 interleavers. We obtain upper bounds to the average maximum-likelihood bit error probability of serially concatenated block and convolutional coding schemes. Then, we derive design guidelines for the outer and inner encoders that maximize the interleaver gain and the asymptotic slope of the error probability curves. Finally, we propose a new, low-complexity iterative decoding algorithm. Throughout the paper, extensive comparisons with parallel concatenated convolutional codes known as "turbo codes" are performed, showing that the new scheme can offer superior performance.

Index Terms—Concatenated codes, iterative decoding, serial concatenation, turbo codes.
NOMENCLATURE

CC       Constituent Code.
PCCC     Parallel Concatenated Convolutional Code.
PCBC     Parallel Concatenated Block Code.
SCC      Serially Concatenated Code.
SCBC     Serially Concatenated Block Code.
SCCC     Serially Concatenated Convolutional Code.
ML       Maximum Likelihood.
IOWEF    Input–Output Weight-Enumerating Function.
CWEF     Conditional Weight-Enumerating Function.
SISO     Soft-Input Soft-Output module.
SW-SISO  Sliding-Window Soft-Input Soft-Output module.
LLR      Log-Likelihood Ratio.
MAP      Maximum A Posteriori.
I. INTRODUCTION

IN his goal to find a class of codes whose probability of error decreased exponentially at rates less than capacity, while decoding complexity increased only algebraically, Forney [1] arrived at a solution consisting of the multilevel coding structure known as a concatenated code. It consists of the cascade of an inner code and an outer code which, in Forney's approach, would be a relatively short inner code (typically, a convolutional code) admitting simple maximum-likelihood (ML) decoding, and a long, high-rate, algebraic, nonbinary Reed–Solomon outer code equipped with a powerful algebraic error-correction algorithm, possibly using reliability information from the inner decoder.

Initially motivated only by theoretical research interests, concatenated codes have since evolved into a standard for those applications where very high coding gains are needed, such as (deep-)space applications and many others. Alternative solutions for concatenation have also been studied, such as using a trellis-coded modulation scheme as the inner code [2], or concatenating two convolutional codes [3]. In the latter case, the inner Viterbi decoder employs a soft-output decoding algorithm to provide soft-input decisions to the outer Viterbi decoder. An interleaver was also proposed between the two encoders to separate the bursts of errors produced by the inner decoder.

We find, then, in a "classical" concatenated coding scheme, the main ingredients that formed the basis for the invention of "turbo codes" [4], namely two, or more, constituent codes (CC's) and an interleaver. The novelty of turbo codes, however, consists in the way they use the interleaver, which is embedded into the code structure to form an overall concatenated code with a very large block length, and in the proposal of a parallel concatenation to achieve a higher rate for given rates of the CC's. The latter advantage is obtained by using systematic CC's and not transmitting the information bits entering the second encoder. In the following, we will refer to turbo codes as parallel concatenated convolutional codes (PCCC's). The so-obtained codes have been shown to yield very high coding gains at very low bit error probabilities; in particular, such low bit error probabilities can be obtained at rates well beyond the channel cutoff rate, which had long been regarded as the "practical" capacity.
Quite remarkably, this performance can be achieved by a relatively simple iterative decoding technique whose computational complexity is comparable to that needed to decode the two CC's.

In this paper, we consider the serial concatenation of interleaved codes, or serially concatenated codes (SCC's), called SCBC's or SCCC's according to the nature of the CC's, which can be block (SCBC) or convolutional (SCCC) codes. For this class of codes, we obtain analytical upper bounds to the performance of an ML decoder, propose design guidelines leading to the optimal choice of CC's that maximize the interleaver gain and the asymptotic code performance, and present a new iterative decoding algorithm yielding results close to the capacity limits with limited decoding complexity. Preliminary results have appeared in [5] and [6]. Extensive comparisons with turbo codes of the same complexity and decoding delay are performed. In light of this body of results, we believe that SCCC's can be considered a valid, and in some cases superior, alternative to turbo codes.
In Section II, we derive analytical upper bounds to the bit error probability of both SCBC's and SCCC's, using the concept of the "uniform interleaver," which decouples the output of the outer encoder from the input of the inner encoder. In Section III, we propose design rules for SCCC's through an asymptotic approximation of the bit error probability bound, assuming long interleavers or large signal-to-noise ratios. In Section IV, we compare serial and parallel concatenations of block and convolutional codes in terms of maximum-likelihood analytical upper bounds. Section V is devoted to the presentation of a new iterative decoding algorithm and to its application to some significant codes. Performance comparisons between SCCC's and PCCC's under suboptimum iterative decoding algorithms are presented in Section VI.
II. ANALYTICAL BOUNDS TO THE PERFORMANCE OF SERIALLY CONCATENATED CODES

A. A Union Bound to the Bit Error Probability and Some General Warnings

Consider an (n, k) linear block code C and its input–output weight-enumerating function (IOWEF), defined as

$$A^{C}(W, H) \triangleq \sum_{w,h} A^{C}_{w,h}\, W^{w} H^{h} \qquad (1)$$

where A^C_{w,h} represents the number of codewords with weight h generated by information words of weight w. In (1), we have also implicitly defined the conditional weight-enumerating function (CWEF)

$$A^{C}_{w}(H) \triangleq \sum_{h} A^{C}_{w,h}\, H^{h} \qquad (2)$$

as the function that enumerates the weight distribution of the codewords generated by information words of a given weight w.

A linear block (or convolutional) code possesses the uniform error property [7], stating that its word and bit error probabilities can be evaluated under the assumption that the all-zero codeword has been transmitted. Assume then that the all-zero codeword has been transmitted, and define the pairwise error event as the event in which the likelihood of a codeword with weight h, generated by an information word of weight w, is higher than that of the all-zero codeword.

Using the union bound, the bit error probability P_b(e) under ML soft decoding for binary phase-shift keying (PSK) (or binary pulse-amplitude modulation (PAM)) transmission over an additive white Gaussian noise channel with two-sided noise power spectral density N_0/2 can be upper-bounded as

$$P_b(e) \le \sum_{h} D_h\, \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{\frac{h R_c E_b}{N_0}}\right) \qquad (3)$$

where R_c = k/n is the code rate, E_b is the energy per information bit,¹ and where we have defined the bit error multiplicity

$$D_h \triangleq \sum_{w} \frac{w}{k}\, A^{C}_{w,h}. \qquad (4)$$

¹ It must be noted that E_b/N_0 is not the signal-to-noise ratio that can be measured on the channel, which, indeed, is lower by the factor R_c.

Expressions (3) and (4) suggest that two ways can be followed to improve the bit error probability performance: the first, leading to the more traditional concept of good (and asymptotically good) codes, tries to increase the first, most significant weights in (3); the second, forming the basis of turbo codes and also of serially concatenated codes, aims at reducing the bit error multiplicities (4). To quote Forney's 1995 Shannon Lecture:

    Rather than attacking error exponents, turbo codes attack multiplicities, turning conventional wisdom on its head.

A more compact, but looser, upper bound can be obtained from (3) using the inequality

$$\frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{x}\right) \le \frac{1}{2}\, e^{-x} \qquad (5)$$

which yields

$$P_b(e) \le \frac{1}{2} \sum_{w} \frac{w}{k}\, A^{C}_{w}(H)\Big|_{H = e^{-R_c E_b/N_0}}. \qquad (6)$$

From (3) and (6), we conclude that, in order to upper-bound the bit error probability of any linear block code, we need to evaluate its CWEF. As a consequence, also for concatenated codes with interleavers we can use (3) and (6), provided that we are able to compute the CWEF of the overall code assuming that the CWEF's of the constituent codes (CC's) are known. This has already been done for "turbo codes," i.e., parallel concatenated codes, in [8]. In the following, we will show how to extend those results to the case of serial concatenation.
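As a concrete illustration of how the bounds (3) and (6) are evaluated once the IOWEF coefficients are available, the following Python sketch sums the pairwise terms weighted by the bit error multiplicity (4). The dictionary representation of the coefficients and the function name are illustrative assumptions, not constructs from the paper.

```python
import math

# Minimal sketch of the union bounds (3) and (6), assuming the IOWEF
# coefficients A[(w, h)] of an (n, k) code are available as a dictionary.
def union_bound_ber(A, n, k, EbN0_dB, use_erfc=True):
    """Upper bound on the bit error probability for ML soft decoding on the
    binary-input AWGN channel, per (3) (erfc form) or (6) (exponential form)."""
    Rc = k / n
    EbN0 = 10.0 ** (EbN0_dB / 10.0)
    Pb = 0.0
    for (w, h), Awh in A.items():
        if h == 0:
            continue
        x = h * Rc * EbN0
        pairwise = 0.5 * math.erfc(math.sqrt(x)) if use_erfc else 0.5 * math.exp(-x)
        Pb += (w / k) * Awh * pairwise   # term of the bit error multiplicity (4)
    return Pb
```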
Before starting the analysis leading to the evaluation of the CWEF of a serially concatenated code (SCC) with interleaver, a warning to the reader is necessary. Both (3) and (6) stem from the union bound, stating that the probability of a union of events is less than or equal to the sum of the probabilities of the individual events. The union bound is used extensively as an upper limit to the error probabilities of digital transmission systems. The sums of the individual probabilities on the right-hand sides of (3) and (6), however, are not probabilities themselves, and can thus assume values much greater than one. In fact, it is common knowledge in the field that union bounds are very close to the true probability in the case of maximum-likelihood decoding for medium-to-high signal-to-noise ratios, whereas they tend to diverge for low signal-to-noise ratios. A widely accepted rule of thumb is that the signal-to-noise ratio at which they start to become unreliable is the one yielding the cutoff rate of the channel. The behavior of the bounds is illustrated, as a typical example, in Fig. 1, where we plot the bounds for two different parallel concatenated codes and compare them with simulation results obtained using the suboptimum, iterative decoding algorithm proposed to decode turbo codes. Also drawn in the figure is the E_b/N_0 value corresponding to the channel cutoff rate.

Fig. 1. Comparison of bit error probability curves obtained through union bounds and simulations for two parallel concatenated convolutional codes over an additive Gaussian noise channel. Also indicated is the E_b/N_0 value corresponding to the channel cutoff rate.

Some general comments, partly based on Fig. 1, are appropriate.

• As previously anticipated, the upper bounds based on the union bound diverge at a signal-to-noise ratio close to the channel cutoff rate. Obtaining tighter upper bounds capable of extending the validity interval of the union bounds for concatenated codes is an important, and still widely open, research topic. The new bounds could be based on the technique successfully employed in [9] for convolutional codes, or on the classical Gallager bound [10]. A successful application of the Gallager bound to parallel concatenated codes with interleavers has been described in [11], where it is shown that the new bound extends the validity of the union bound for some range of signal-to-noise ratios below the channel cutoff rate, typically 0.5 dB. On the other hand, those attempts would still be based on the hypothesis of maximum-likelihood decoding; applying them to the suboptimum iterative decoding does not seem realistic.

• To obtain the divergence of the union bound one needs to compute a very large number of terms of the summation on the right-hand side of (3), or (6), and this was indeed the case for the example to which the curves of Fig. 1 refer. In that case, however, the interleaver length was limited. When the interleaver length N becomes very large, as is required to approach the channel capacity, only a limited number of terms in the summations (3) and (6) can be obtained with reasonable computational complexity. As a consequence, the obtained upper bounds are still very accurate above the channel cutoff rate, but may not show the divergence at the cutoff rate. In those cases, the reader should only consider as reliable the bit error probability values above the cutoff rate, or perhaps half a decibel below it, according to the results of [11]. A way to overcome this drawback, and indeed to always show the divergence of the union bound, has been proposed in [12]. It has, however, to rely on the looser bound (6).

• The main tool used in this paper to analytically predict the performance and to find design rules for the main CC parameters is a union bound. We have seen, on the other hand, that the union bound is tight only for medium-to-high signal-to-noise ratios. One could then question the validity of the approach, which suffers from the paradox of using a bound that is not applicable at very low signal-to-noise ratios in order to design coding schemes intended to work near channel capacity. We are conscious of this inconsistency; yet, on the one hand, we simply have nothing better to propose and, on the other hand, we have widely verified by simulation that the parallel concatenated codes designed on the basis of our rules are indeed very good also at very low signal-to-noise ratios (see [13] and [14]). In the remainder of this paper, we will show that this heuristic validation of the design rules also holds for serially concatenated codes with interleavers.

• The last observation concerns still another inconsistency, resulting from the fact that we are using bounds based on maximum-likelihood decoding to design codes that are decoded according to a different, suboptimum algorithm. Also in this case we invoke the heuristic validation stemming from a large number of simulations, which show the convergence of the simulated performance toward the analytical bounds. Fig. 1, where the simulated points have been obtained with the suboptimum, iterative decoding algorithm, shows a nice example of this behavior.
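Since the E_b/N_0 value corresponding to the channel cutoff rate is used above as the reliability threshold for the bounds, a short sketch of how that threshold can be computed may be useful. For binary antipodal signaling on the unquantized AWGN channel, a standard textbook expression (not taken from this paper) is R_0 = 1 − log2(1 + exp(−R_c E_b/N_0)); the helper below, whose name and search interval are assumptions, solves R_0(E_b/N_0) = R_c by bisection.

```python
import math

def cutoff_rate_threshold_dB(Rc, lo_dB=-5.0, hi_dB=20.0, iters=60):
    """Eb/N0 (in dB) at which the binary-input AWGN cutoff rate equals the code rate Rc."""
    def R0(EbN0_dB):
        EbN0 = 10.0 ** (EbN0_dB / 10.0)
        return 1.0 - math.log2(1.0 + math.exp(-Rc * EbN0))
    for _ in range(iters):
        mid = 0.5 * (lo_dB + hi_dB)
        if R0(mid) < Rc:      # R0 is increasing in Eb/N0
            lo_dB = mid
        else:
            hi_dB = mid
    return 0.5 * (lo_dB + hi_dB)

# Example: threshold for a rate-1/3 code
# print(cutoff_rate_threshold_dB(1/3))
```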
Fig. 2. Serially concatenated (n, k, N = mp) block code.
B. Evaluating the Bit Error Probability Upper Bound for Serially Concatenated Block Codes

We will now show how to apply the union bounds (3) and (6) to the case of serially concatenated codes with interleavers. For simplicity of presentation, we begin by considering serially concatenated block codes (SCBC's).

The scheme of two serially concatenated block codes is shown in Fig. 2. It is composed of two cascaded CC's: the outer (p, k) code C^o with rate R^o_c = k/p and the inner (n, p) code C^i with rate R^i_c = p/n, linked by an interleaver of length N that is an integer multiple of the length p of the outer codewords. The scheme works as follows: the N bits of a number m of codewords of the outer code are written into the interleaver of length N = mp, and read in a different order according to the permutation performed by the interleaver. The sequence of N bits at the output of the interleaver is then sent to the inner encoder in blocks of length p. The overall SCBC is then a code with rate R_c = k/n = R^o_c R^i_c, and we will refer to it as the code C_S.

In the following, we will derive an upper bound to the ML performance of the overall code C_S, assuming first that N = p, and then extending the result to the general case. We assume that the outer and inner CC's are linear, so that the SCBC is linear as well and the uniform error property applies, i.e., the bit error probability can be evaluated assuming that the all-zero codeword has been transmitted.

In order to apply the upper bounds (3) and (6) to the SCBC, we need to evaluate the CWEF of the code C_S, assuming that we know the CWEF's of the CC's. If k is small, we can compute the coefficients A^{C_S}_{w,h} of the CWEF (2) by letting each individual information word with weight w be first encoded by the outer encoder and then, after the p bits of the outer codeword have been permuted by the interleaver, be encoded by the inner encoder, originating an inner codeword with a certain weight. After repeating this procedure for all the information words with weight w, we count the inner codewords with weight h, and their number is the value of A^{C_S}_{w,h}. When k is large or, in the case N = mp, when m is large, the previous operation becomes too complex, and we must resort to a different approach.

The key point, here, is that we would like to obtain a simple relationship between the CWEF's of the two CC's, an operation that is prevented by the fact that knowledge of the information word weight is not enough to obtain the weight of the inner codeword, which, instead, depends on the weight of the outer codeword and on the permutation induced by the interleaver.
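The brute-force procedure just described can be written down directly. The sketch below enumerates all information words of a small SCBC with a fixed interleaver and tallies the coefficients A^{C_S}_{w,h}; the encoder functions, the permutation argument, and the function name are illustrative assumptions supplied by the user, not constructs from the paper.

```python
from collections import defaultdict
from itertools import product

def scbc_iowef_bruteforce(outer_enc, inner_enc, perm, k):
    """Exhaustive computation of A[(w, h)] for a small SCBC with a FIXED interleaver.
    outer_enc / inner_enc map bit tuples to codeword tuples; perm is the interleaver
    permutation of the outer codeword bits. Feasible only for small k."""
    A = defaultdict(int)
    for info in product((0, 1), repeat=k):
        outer_cw = outer_enc(info)
        interleaved = tuple(outer_cw[i] for i in perm)
        inner_cw = inner_enc(interleaved)
        A[(sum(info), sum(inner_cw))] += 1   # one SCBC codeword of weight h from input weight w
    return A
```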
Fig. 3. The action of a uniform interleaver of length 4 on sequences of weight 2.
As in [8] and [15], a crucial step in the analysis consists in replacing the actual interleaver, which performs a permutation of the N input bits, with an abstract interleaver called the uniform interleaver, defined as a probabilistic device that maps a given input word of weight l into all of its distinct $\binom{N}{l}$ permutations with equal probability $1/\binom{N}{l}$ (see Fig. 3).

Use of the uniform interleaver permits the computation of the "average" performance of the SCBC, intended as the expectation of the performance of the SCBC's using the same CC's, taken over the set of all interleavers of a given length. A theorem proved in [8] guarantees the meaningfulness of the average performance, in the sense that there will always be, for each value of the signal-to-noise ratio, at least one particular interleaver yielding performance better than or equal to that of the uniform interleaver.
Let us define the IOWEF and the CWEF of the SCBC as A^{C_S}(W, H) and A^{C_S}_w(H). Their definition and meaning are the same as in (1) and (2).

As seen, to apply the bounds (3) and (6) to the bit error probability we need to evaluate the CWEF of the SCBC from the knowledge of the CWEF's of the outer and inner codes, which we call A^{C^o}_w(L) and A^{C^i}_l(H), where the first enumerates the weight distributions of the outer codewords generated by information words of weight w, and the second enumerates the weight distributions of the inner codewords generated by outer codewords of weight l.

To do this, we exploit the properties of the uniform interleaver, which transforms a codeword of weight l at the output of the outer encoder into all of its distinct $\binom{N}{l}$ permutations. As a consequence, each codeword of the outer code C^o of weight l, through the action of the uniform interleaver, enters the inner encoder as each of its $\binom{N}{l}$ permutations with equal probability, and thus contributes, on average, a fraction $1/\binom{N}{l}$ of the inner codewords generated by input words of weight l. Thus the number A^{C_S}_{w,h} of codewords of the SCBC of weight h associated with an information word of weight w is given by

$$A^{C_S}_{w,h} = \sum_{l=0}^{N} \frac{A^{C^o}_{w,l} \times A^{C^i}_{l,h}}{\binom{N}{l}}. \qquad (7)$$

From (7) we derive the expressions of the CWEF and IOWEF of the SCBC as

$$A^{C_S}_{w}(H) = \sum_{l=0}^{N} \frac{A^{C^o}_{w,l} \times A^{C^i}_{l}(H)}{\binom{N}{l}} \qquad (8)$$

$$A^{C_S}(W, H) = \sum_{l=0}^{N} \frac{B^{C^o}_{l}(W) \times A^{C^i}_{l}(H)}{\binom{N}{l}} \qquad (9)$$

where B^{C^o}_l(W) ≜ Σ_w A^{C^o}_{w,l} W^w enumerates the weight distributions of the information words that generate codewords of the outer code with a given weight l.
Example 1: Consider the serially concatenated block code obtained by concatenating an outer parity-check code with an inner Hamming code through an interleaver whose length equals the outer codeword length. The IOWEF's A^{C^o}(W, L) and A^{C^i}(L, H) of the outer and inner codes are computed directly from the two code definitions, and from them the quantities A^{C^o}_{w,l} and A^{C^i}_l(H) follow. Through (9) we then obtain the IOWEF of the overall SCBC.
The previous results (8) and (9) can be easily generalized to the more interesting case of an interleaver with length N = mp, i.e., an integer multiple (by a factor m) of the length of the outer codewords. Denoting by A^{C'^o}(W, L) the IOWEF of the new outer code C'^o, formed by m consecutive codewords of C^o, and similarly by A^{C'^i}(L, H) the IOWEF of the new inner code C'^i, it is straightforward to obtain

$$A^{C'^o}(W, L) = \left[A^{C^o}(W, L)\right]^{m}, \qquad A^{C'^i}(L, H) = \left[A^{C^i}(L, H)\right]^{m}. \qquad (10)$$

From the IOWEF's in (10), we obtain the CWEF's A^{C'^o}_w(L) and A^{C'^i}_l(H) of the new CC's and, finally, through (8) and (9), the CWEF and IOWEF of the new SCBC

$$A^{C_S}_{w}(H) = \sum_{l=0}^{N} \frac{A^{C'^o}_{w,l} \times A^{C'^i}_{l}(H)}{\binom{N}{l}} \qquad (11)$$

$$A^{C_S}(W, H) = \sum_{l=0}^{N} \frac{B^{C'^o}_{l}(W) \times A^{C'^i}_{l}(H)}{\binom{N}{l}}. \qquad (12)$$
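Under the reading of (10) given above — the new constituent code is the m-fold concatenation of the original one — its coefficient table is the m-th "power" of the original table under convolution of (input weight, output weight) pairs. The sketch below, using the same dictionary convention as the earlier sketches (an assumption), implements that operation.

```python
from collections import defaultdict

def iowef_power(A, m):
    """m-fold concatenation of a constituent code: convolve the coefficient table
    A[(input_weight, output_weight)] with itself m times, per (10)."""
    acc = {(0, 0): 1}
    for _ in range(m):
        nxt = defaultdict(int)
        for (w1, l1), c1 in acc.items():
            for (w2, l2), c2 in A.items():
                nxt[(w1 + w2, l1 + l2)] += c1 * c2
        acc = nxt
    return acc
```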
Fig. 4. Analytical bounds for the serially concatenated block code of Example 2 (SCBC1 in Table I).

Fig. 5. Serially concatenated (n, k, N) convolutional code.
Example 2: Consider again the CC's of Example 1, linked by an interleaver of length N = mp, and use (2), (3), and (11). The upper bound to the bit error probability so obtained is plotted in Fig. 4 for various values of the integer m. The curves show the interleaver gain, defined as the factor by which the bit error probability is decreased with the interleaver length at a given signal-to-noise ratio. Contrary to parallel concatenated block codes [8], the curves do not exhibit interleaver gain saturation. Rather, the bit error probability seems to decrease regularly with the interleaver length N. We will explain this behavior in Section III.
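An Example-2-style experiment can be assembled from the earlier sketches: evaluate the bound for several interleaver multiples m and observe how the bit error probability decreases with N = mp at a fixed E_b/N_0. The helper names (iowef_power, scbc_coefficients, union_bound_ber) refer to the hypothetical functions sketched above, and the coefficient tables A_outer and A_inner are placeholders; none of this is machinery from the paper itself, and it is feasible only for small component codes and small m.

```python
def interleaver_gain_curve(A_outer, A_inner, p, n_inner, k_outer, EbN0_dB, ms=(1, 2, 4, 8)):
    """Union-bound BER of the SCBC for several interleaver lengths N = m*p."""
    results = {}
    for m in ms:
        N = m * p
        Ao = iowef_power(A_outer, m)          # new outer code, (10)
        Ai = iowef_power(A_inner, m)          # new inner code, (10)
        As = scbc_coefficients(Ao, Ai, N)     # overall SCBC coefficients, (11)
        results[N] = union_bound_ber(As, m * n_inner, m * k_outer, EbN0_dB)
    return results
```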
C. Serially Concatenated Convolutional Codes

The structure of a serially concatenated convolutional code is shown in Fig. 5. It refers to the case of two convolutional CC's: the outer code C^o with rate R^o_c = k/p and the inner code C^i with rate R^i_c = p/n, joined by an interleaver of length N bits, generating an SCCC C_S with rate R_c = k/n. The interleaver length N will be assumed to be an integer multiple² of p. We assume, as before, that the convolutional CC's are linear, so that the SCCC is linear as well, and the uniform error property applies.

The exact analysis of this scheme can be performed through appropriate modifications of that described in [8] for PCCC's. It requires the use of a hypertrellis having as hyperstates pairs of states of the outer and inner codes. Two hyperstates are joined by a hyperbranch that consists of all pairs of paths, spanning one interleaver block, that join the corresponding states of the inner code and of the outer code, respectively. Each hyperbranch is thus an equivalent SCBC labeled with an IOWEF that can be evaluated as explained in the previous subsection. From the hypertrellis, the upper bound to the bit error probability can be obtained through the standard transfer-function technique employed for convolutional codes [10]. As proved in [8], when the length of the interleaver is significantly greater than the constraint lengths of the CC's, an accurate approximation of the exact upper bound consists in retaining only the branch of the hypertrellis joining the all-zero hyperstates. In the following, we will always use this approximation.

² Actually, this constraint is not necessary. We can in fact choose inner and outer codes of any rates R^i_c = k_i/n_i and R^o_c = k_o/n_o, constraining the interleaver length to be an integer multiple of the least common multiple of n_o and k_i, i.e., N = K · lcm(n_o, k_i). This generalization, though, leads to more complicated expressions and is not considered in the following.
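As a small illustration of the interleaver-length constraint stated in footnote 2, the following one-liner (an assumption relying only on Python's standard library) returns an admissible length:

```python
import math

def valid_interleaver_length(K, no, ki):
    """N = K * lcm(no, ki), per footnote 2, for outer rate ko/no and inner rate ki/ni."""
    return K * math.lcm(no, ki)
```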
Example 3: Consider the SCCC formed by an outer convolutional code and an inner convolutional code, joined by a uniform interleaver (code SCCC1 of Table II). Both encoders are systematic and recursive, and their generator matrices are reported in Table III, first and third rows. Using the previously outlined analysis, we have obtained the bit error probability curves shown in Fig. 6 for increasing interleaver lengths. The performance shows a very significant interleaver gain, i.e., lower values of the bit error probability for higher values of N; the interleaver gain seems to follow a power law in N. This behavior will be explained in the next section.

Fig. 6. Analytical bounds for the serially concatenated convolutional code of Example 3 (SCCC1 of Table II).

III. DESIGN OF SERIALLY CONCATENATED CODES

In the previous section, we have presented an analytical bounding technique to find the ML performance of SCBC's and SCCC's. For practical applications, SCCC's are to be preferred to SCBC's. One reason is that a posteriori probability algorithms are less complex for convolutional than for block codes; another is that the interleaver gain can be greater for convolutional CC's, provided that they are suitably designed. Hence, we deal mainly with the design of SCCC's, extending our conclusions to SCBC's when appropriate.

Before delving deeply into the analytical derivation, it is important to say a few words about the design methodology. The performance of a concatenated code with interleaver, for both parallel and serial concatenation, depends on the constituent codes and on the interleaver in a strictly interdependent manner. The joint design of the CC's and the interleaver, however, is a hopeless goal, and the only way to achieve significantly good results seems to pass through a decoupled design, in which one first designs the CC's and then tailors the interleaver to their characteristics. Our approach to achieve this goal resides once more in the use of the uniform interleaver, which leads to an average optimization of the CC's, followed by a customization of the actual interleaver. Only the first step, i.e., the design of the CC's, will be treated here.
Consider the SCCC depicted in Fig. 5. Its performance can be approximated by that of the equivalent block code whose IOWEF labels the branch of the hypertrellis joining the zero states of the outer- and inner-code trellises. Denoting by A^{C_S}_w(H) the CWEF of this equivalent block code, we can rewrite the upper bound (6) as³

$$P_b(e) \le \frac{1}{2} \sum_{h=h_m}^{\infty} \sum_{w=w_m}^{\infty} \frac{w}{N R^o_c}\, A^{C_S}_{w,h}\, e^{-h R_c E_b/N_0} \qquad (13)$$

where w_m is the minimum weight of an input sequence generating an error event of the outer code, and h_m is the minimum weight⁴ of the codewords of C_S. By an error event of a convolutional code, we mean a sequence diverging from the zero state at time zero and remerging into the zero state at some later discrete time. For constituent block codes, an error event is simply a codeword.

³ In the following, a subscript "m" will denote "minimum," and a subscript "M" will denote "maximum."

⁴ Since the input sequences of the inner code are not unconstrained independent and identically distributed (i.i.d.) binary sequences but, instead, codewords of the outer code, h_m can be greater than the inner code free distance d^i_f.

The coefficients A^{C_S}_{w,h} of the equivalent block code can be obtained from (7), once the corresponding quantities of the CC's are known. To evaluate them, consider a rate-p/n convolutional code C with memory⁵ ν and its equivalent block code, whose codewords are all the sequences of length N bits of the convolutional code starting from and ending at the zero state. By definition, the codewords of the equivalent block code are concatenations of error events of the convolutional code. Let

$$A^{C}(L, H, J) \triangleq \sum_{l,h,j} A^{C}_{l,h,j}\, L^{l} H^{h} J^{j} \qquad (14)$$

be the weight-enumerating function of the sequences of the convolutional code that concatenate j error events with total input information weight l. The coefficient A^C_{l,h,j} represents the number of sequences of weight h, input weight l, and number of concatenated error events j. Its meaning is pictorially clarified in Fig. 7, where it can be noticed that, by "concatenated," we actually mean that the j error events are adjacent, in the sense that each one starts immediately where the previous one ends, without any gap in between.

⁵ By memory, we mean the maximum length of the shift registers contained in the encoder.

Fig. 7. The meaning of the coefficients A_{l,h,j}.
For N much larger than the memory of the convolutional code, the coefficient A^C_{l,h} of the CWEF of the equivalent block code can be upper-bounded by⁶

$$A^{C}_{l,h} \le \sum_{j=1}^{n_M} \binom{N/p}{j} A^{C}_{l,h,j} \qquad (15)$$

where n_M, the largest number of error events concatenated in a codeword of weight h generated by a weight-l information sequence, is a function of h and l that depends on the encoder, as we will see later.

⁶ The upper bound is obtained by neglecting the lengths of the error events compared to N, and by assuming that the number of ways j input sequences producing j error events can be arranged in a register of length N is $\binom{N/p}{j}$. The ratio N/p derives from the fact that the code has rate p/n, and thus N bits correspond to N/p input words or, equivalently, trellis steps.
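To make the bookkeeping behind (14) and (15) concrete, the following Python sketch enumerates the single error events of a small convolutional encoder and then builds, by j-fold concatenation, the coefficients that count sequences of j adjacent error events with given input and output weights (event lengths are neglected, in the spirit of footnote 6). The rate-1/2, 4-state encoder with octal generators (7, 5), the function names, and the truncation length are illustrative assumptions and are not the paper's Table III codes.

```python
from collections import defaultdict

def step(state, bit):
    """One trellis step of a rate-1/2, 4-state nonrecursive encoder, generators (7, 5) octal.
    Returns (next_state, output Hamming weight)."""
    s1, s0 = (state >> 1) & 1, state & 1
    out = (bit ^ s1 ^ s0) + (bit ^ s0)      # generators 1+D+D^2 and 1+D^2
    return ((bit << 1) | s1) & 3, out

def error_event_enumerator(max_len):
    """Count single error events up to max_len trellis steps: paths that leave the zero
    state and return to it for the first time. Returns {(input_weight, output_weight): count}."""
    events = defaultdict(int)
    s, h0 = step(0, 1)                       # an event must start with an input '1'
    frontier = {(s, 1, h0): 1}               # (state, input weight, output weight) -> count
    for _ in range(max_len - 1):
        nxt = defaultdict(int)
        for (state, w, h), cnt in frontier.items():
            for bit in (0, 1):
                ns, oh = step(state, bit)
                if ns == 0:
                    events[(w + bit, h + oh)] += cnt   # event closes at the zero state
                else:
                    nxt[(ns, w + bit, h + oh)] += cnt
        frontier = nxt
    return events

def concatenated_events(events, j):
    """Number of sequences made of j adjacent error events with given total input and
    output weights, i.e., the coefficients counted by (14) for a fixed j."""
    acc = {(0, 0): 1}
    for _ in range(j):
        nxt = defaultdict(int)
        for (w1, h1), c1 in acc.items():
            for (w2, h2), c2 in events.items():
                nxt[(w1 + w2, h1 + h2)] += c1 * c2
        acc = nxt
    return acc
```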
Let us now return to the block code equivalent to the SCCC. Using the previous result (15), with the coefficients A^{C^o}_{w,l,n^o} and A^{C^i}_{l,h,n^i} for the outer and inner code, respectively,⁷ we can write

$$A^{C^o}_{w,l} \le \sum_{n^o=1}^{n^o_M} \binom{N/p}{n^o} A^{C^o}_{w,l,n^o}$$

and

$$A^{C^i}_{l,h} \le \sum_{n^i=1}^{n^i_M} \binom{N/p}{n^i} A^{C^i}_{l,h,n^i}.$$

Substituting them into (7), we obtain an upper bound to the value of the coefficients A^{C_S}_{w,h} of the serially concatenated block code equivalent to the SCCC in the form

$$A^{C_S}_{w,h} \le \sum_{l=d^o_f}^{N} \sum_{n^o=1}^{n^o_M} \sum_{n^i=1}^{n^i_M} \frac{\binom{N/p}{n^o}\binom{N/p}{n^i}}{\binom{N}{l}}\, A^{C^o}_{w,l,n^o}\, A^{C^i}_{l,h,n^i} \qquad (16)$$

where d^o_f is the free distance of the outer code. By free distance we mean the minimum Hamming weight of error events for convolutional CC's, and the minimum Hamming weight of codewords for block CC's.

⁷ In the following, superscripts "o" and "i" will refer to quantities pertaining to the outer and inner code, respectively.

To proceed further, we need to replace the three binomial coefficients in (16). In order to obtain an upper bound to the right-hand side of (16), we replace the two binomials in the numerator using the upper bound

$$\binom{N/p}{j} \le \frac{N^{j}}{p^{j}\, j!}$$

and the binomial in the denominator using the lower bound

$$\binom{N}{l} \ge \frac{(N-l+1)^{l}}{l!}.$$

Notice that the bounds are tight for large N and for l ≪ N, which will be seen to always happen in our case. Substitution of these bounds in (16) yields (17), in which the interleaver length N enters each term through the factor N^{n^o + n^i - l}. Finally, substituting (17) into (13) gives the bit error probability bound in the form (18), whose terms depend on N through the factor N^{n^o + n^i - l - 1}.

Using (18) as the starting point, we will obtain some important design considerations. The bound (18) to the bit error probability is obtained by adding terms of the first summation with respect to the SCCC weights h. The coefficients of the exponentials in E_b/N_0 depend, among other parameters, on the interleaver length N. For large N, and for a given h, the dominant coefficient of the exponentials in E_b/N_0 is the one for which the exponent of N is maximum. Define this maximum exponent as

$$\alpha(h) \triangleq \max_{l,\, n^o,\, n^i} \left\{ n^o + n^i - l - 1 \right\} \qquad (19)$$

where the maximum is taken over the values of l, n^o, and n^i that contribute to codewords of the SCCC with weight h. Evaluating α(h) in general is not possible without specifying the CC's. Thus we will consider two important cases, for which general expressions can be found.

A. The Exponent of N for the Minimum Weight h_m

For large values of E_b/N_0, the performance of the SCC is dominated by the first term of the summation with respect to h, corresponding to the minimum value h = h_m. Remembering that, by definition, n^i_M and n^o_M are the maximum numbers of error events concatenated in codewords of the inner and outer code of weights h and l, respectively, the following inequalities hold true:

$$n^o_M \le \left\lfloor \frac{l}{d^o_f} \right\rfloor \qquad (20)$$

$$n^i_M \le \left\lfloor \frac{h}{d^i_f} \right\rfloor \qquad (21)$$

$$\alpha(h_m) \le \left\lfloor \frac{l_m}{d^o_f} \right\rfloor + \left\lfloor \frac{h_m}{d^i_f} \right\rfloor - l_m - 1 \qquad (22)$$
where l_m is the minimum weight of the codewords of the outer code yielding a codeword of weight h_m of the inner code, and ⌊·⌋ means "integer part of."

In most cases, l_m = d^o_f, ⌊l_m/d^o_f⌋ = 1, and ⌊h_m/d^i_f⌋ = 1, so that (22) becomes

$$\alpha(h_m) \le 1 - d^o_f. \qquad (23)$$

The result (23) shows that the exponent of N corresponding to the minimum weight of the SCCC codewords is always negative for d^o_f ≥ 2, thus yielding an interleaver gain at high E_b/N_0. Substitution of the exponent α(h_m) into (18), truncated to the first term of the summation with respect to h, yields the following result, asymptotic with respect to N:

$$P_b(e) \simeq C_{h_m}\, N^{\alpha(h_m)}\, e^{-h_m R_c E_b/N_0} \qquad (24)$$

where the constant C_{h_m} depends on the CC's through the set of input weights w that generate codewords of the outer code with weight d^o_f and, through them, codewords of the SCCC with weight h_m.

Expression (24) suggests the following conclusions.

• For the values of E_b/N_0 and N where the SCCC performance is dominated by its free distance h_m, increasing the interleaver length yields a gain in performance.

• To increase the interleaver gain, one should choose an outer code with a large free distance d^o_f.

• To improve the performance with E_b/N_0, one should choose an inner- and outer-code combination such that h_m is large.

These conclusions do not depend on the structure of the CC's, and thus they hold for both recursive and nonrecursive encoders.

The curves of Fig. 4, showing the performance of the various SCBC's of Example 1 with increasing interleaver length, however, also show a different phenomenon: for a given E_b/N_0, there is a value of N above which the bound diverges. In other words, there seem to be coefficients of the exponentials in E_b/N_0, for weights h > h_m, that increase with N. To investigate this phenomenon, we will evaluate the largest exponent of N, defined as

$$\alpha_M \triangleq \max_{h} \alpha(h).$$
1) Block and Nonrecursive Convolutional Inner Encoders: Consider the inner encoder and its impact on th
