`Unveiling Turbo Codes: Some Results on
`Parallel Concatenated Coding Schemes
`
`Sergio Benedetto, Senior Member, IEEE, and Guido Montorsi, Member, IEEE
`
Abstract- A parallel concatenated coding scheme consists of
`two simple constituent systematic encoders linked by an inter-
`leaver. The input bits to the first encoder are scrambled by the
`interleaver before entering the second encoder. The codeword
`of the parallel concatenated code consists of the input bits to
`the first encoder followed by the parity check bits of both
`encoders. This construction can be generalized to any number
`of constituent codes. Parallel concatenated schemes employing
`two convolutional codes as constituent codes, in connection with
`an iterative decoding algorithm of complexity comparable to
`that of the constituent codes, have been recently shown to yield
`remarkable coding gains close to theoretical limits. They have
`been named, and are known as, “turbo codes.” We propose a
`method to evaluate an upper bound to the bit error probability
`of a parallel concatenated coding scheme averaged over all
`interleavers of a given length. The analytical bounding technique
`is then used to shed some light on some crucial questions which
`have been floating around in the communications community
`since the proposal of turbo codes.
`Index Terms- Turbo codes, concatenated codes, iterative de-
`coding.
`
`I. INTRODUCTION AND MOTIVATIONS
KNOWLEDGE of the fact that increasing the codeword length n of block codes (or the constraint length of convolutional codes) leads to better performance dates back to Shannon theory. It is also well known that the complexity of maximum-likelihood (ML) decoding algorithms increases with n, up to a point where decoding becomes physically unrealizable.
`Thus the research in coding theory has seen many proposals
`aiming at constructing powerful codes with large equivalent
`block lengths structured so as to permit breaking the ML de-
coding into simpler partial decoding steps. Iterated codes [1],
`product codes and their extension [2], concatenated codes [3]
`and their generalized version [4], and large constraint-length
`convolutional codes with suboptimal decoding strategies, like
`sequential decoding, are nonexhaustive examples of these
`attempts, and some of them have been successfully employed
`in applications where large coding gains are required, such as,
`for example, deep-space communication.
`
`Manuscript received February 27, 1995; revised October 17, 1995. This
`work was supported by European Space Agency and by CNR under Progetto
`Finalizzato Trasporti, subproject “Prometheus.” The material in this paper was
presented in part at the IEEE International Symposium on Information Theory,
`Whistler, BC, Canada, September 17-22, 1995.
`The authors are with the Dipartimento di Elettronica, Politecnico di Torino,
`10129 Torino, Italy.
`Publisher Item Identifier S 0018-9448(96)01685-9.
`
`The most recent successful attempt consists of the so-called
`“turbo codes” [5], whose astonishing performance has given
`rise to a large interest in the coding community. They are
`parallel concatenated codes (PCC) (see Fig. 1 for the case
`of block PCC) whose encoder is formed by two (or more)
`constituent systematic encoders joined through an interleaver.
`The input information bits feed the first encoder and, after
`having been interleaved by the interleaver, enter the second
`encoder. The codeword of the parallel concatenated code
`consists of the input bits to the first encoder followed by the
`parity check bits of both encoders.
`The suboptimal iterative decoder is modular, and consists of
`a number of equal component blocks formed by concatenating
`the decoders of the constituent codes (CC) separated by the
`same interleaver used in the encoder. Each decoder produces
`weighted soft decoding of the input sequence. By increasing
`the number of decoding modules, and thus the number of
decoding iterations, bit error probabilities as low as $10^{-5}$
at $E_b/N_0 = 0.0$ dB have been shown by simulation [6]. A
`version of turbo codes employing two eight-state convolutional
`codes as constituent codes, an interleaver of 32 x 32 bits
`and an iterative decoder performing two and a half iterations
`with a complexity of the order of nine times the ML Viterbi
`decoding of each constituent code is presently available on a
chip yielding a measured bit error probability of $9.0 \cdot 10^{-7}$ at
$E_b/N_0 = 3$ dB [7].
`Bandwidth-efficient versions of turbo codes, compared to
`trellis-coded modulation schemes have also been proposed [8],
as well as turbo codes based on block (instead of convolutional) codes [9], [10].
`A careful examination of the literature shows that, rather
`than being a sudden apparition, turbo codes are the result
`of a clever intuition building on several concepts already
`available. We can cite in general the literature on product
`and concatenated codes in relation with the idea of paral-
`lel concatenation, the pioneering work on symbol-by-symbol
maximum a posteriori decoding of linear codes [11] and the
proposals in [12]-[14] of the soft-decision Viterbi algorithm
`in relation to the way of implementing the iterative decoder.
`Very close to turbo codes are also the separable “filters”
described in [15] to iteratively decode multidimensional codes.
`As for the applications, it must be mentioned that PCC’s,
`like all codes with very long codewords, suffer from one
`important drawback, namely the delay due to the interleaver
`and to the iterative decoding (as an example, the previously
`mentioned “chip” has a latency of 2318 bits). This prevents
`them from being used in applications where the combination
`
0018-9448/96$05.00 © 1996 IEEE
`
`
IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 42, NO. 2, MARCH 1996
`
`of decoding delay and data rate leads to intolerable delays, like
`digital telephony. A broad range of applications still remains,
`such as digital audio and video broadcasting, data packet
`transmission, and space applications. It is also worthwhile
`mentioning that the interleaver inherently present in the PCC
`can prove beneficial for transmission on fading channels [8].
Since the successful proposal of turbo codes, neither a good theoretical explanation of the codes' behavior and performance nor an adequate comprehension of the role and relative importance of the PCC ingredients (constituent codes and interleaver) has appeared.
`In terms of performance of PCC’s, apart from the measure-
`ments on the chip [7], what is known is essentially due to
`simulation [5], [8], [15]-[18], which, in itself, is not at all a
`simple task, as it requires a huge amount of computer time to
obtain reliable results down to bit error probabilities like $10^{-6}$.
`As a consequence, a number of basic questions are still
`unanswered:
1) What is the performance of the ML decoder?
2) What are the relative contributions of the constituent codes and of the interleaver length in determining the PCC performance?
3) For a given interleaver length, how sensitive is the performance to the interleaver choice?
4) How crucial is the use of recursive (feedback) systematic convolutional codes (as opposed to nonrecursive ones) as constituent codes of the PCC scheme?
5) How close are the proposed suboptimal iterative decoding algorithms to ML decoding?
`Answering these questions is certainly important from a the-
`oretical point of view. Some of them, however, have significant
`practical relevance as well. For example, questions 1 and 5 can
`encourage (or discourage) the search for improved decoding
`algorithms, and question 2 may lead to the optimization
`of the PCC for given system constraints such as delay or
`complexity. Question 3, in turn, is related to the importance of
`the interleaver optimization, a topic which has already received
some "cut-and-try" attention [17]-[19]. Finally, question 4 has
`been discussed in [20] where the authors seem to believe
`that recursive convolutional codes have superior merits in
`themselves, rather than only when used as CC of a PCC.
`Formidable complexity obstacles discourage classical the-
`oretical analysis of the PCC’s. As an example, the code
implemented in VLSI in [7], when seen as a whole convolutional code, consists of an equivalent time-varying convolutional code with $2^{1030}$ states, thus preventing any analytical
`evaluation of the main performance parameters.
`In this paper, we will try to shed some light on the theoret-
`ical comprehension of PCC’s. We will propose answers to the
`previous questions, some of which may be only preliminary
`yet indicative of the right direction.
`In particular, we will define and evaluate an upper bound
to the average performance of the ML soft decoder for a
`PCC, stemming from characteristics of the CC’s. Owing to
`its definition, the average performance, expressed in terms of
`bit error probability, turns out to be independent from the
`particular interleaver used, and helps in assessing what can
`
`be gained with given CC’s and with an interleaver of a given
`length.
`We will also present simulation results for PCC’s with
`differently chosen interleavers of the same length and com-
`pare them with the proposed bound. The results show that
`“random” interleavers offer performance close to the average
`ones evaluated through the upper bound, independent, to a
`large extent, from the particular interleaver. Bad interleavers
`are very easy to avoid in practice.
`Moreover, we will show that recursive convolutional codes,
`although providing almost the same performance as nonrecur-
`sive codes when used alone, are indeed crucial when embedded
`in a PCC as CC’s.
`Finally, by comparing our bound on ML performance with
`simulation results based on iterative decoding, we will give
`heuristic evidence that the suboptimal algorithms can come
`very close to the optimum.
`To help the reader, we will accompany the description with
`frequent examples, and will start from the simpler case of
`PCC schemes using block codes as CC’s (parallel concatenated
`block code (PCBC)) leaving for the final sections the more
`complicated case of parallel concatenated convolutional codes
`(PCCC).
`
II. NOTATIONS AND DEFINITIONS
`Notations and definitions will be introduced for the case of
`parallel concatenated block codes, the extension to convolu-
`tional codes being straightforward.
Given an $(n, k)$ systematic block code $C$, its well-known weight enumerating function (WEF) is

$$B^C(H) = \sum_{i=0}^{n} B_i H^i$$

where $B_i$ is the (integer) number of codewords with Hamming weight (number of ones) $i$ and $H$ is a dummy variable. The WEF of a code can be used to compute the exact expression of the probability of undetected errors and an upper bound to the word error probability [21].
We define the input-redundancy weight enumerating function (IRWEF) of the code as

$$A^C(W, Z) = \sum_{w,j} A_{w,j} W^w Z^j$$

where $A_{w,j}$ denotes the (integer) number of codewords generated by an input information word of Hamming weight $w$ whose parity check bits have Hamming weight $j$, so that the overall Hamming weight is $w + j$.
`The IRWEF makes explicit in each term of the WEF the
`separate contributions of the information and of the parity-
`check bits to the total Hamming weight of the codewords, and
`thus provides additional information on the (Hamming) weight
`profile of the code. It will prove crucial in the following when
`dealing with parallel concatenated codes (PCC), since the two
input words to the constituent encoders, the second being
`obtained by interleaving the first, share the same Hamming
`weight, so that the redundant bits generated by the two
`
`
`encoders derive from terms of the IRWEF with the same input
`weight w.
`We mention also that the IRWEF characterizes the whole
`encoder, as it depends on both input information words and
`codewords, whereas the WEF only depends upon the code. As
`a consequence, the WEF is related to the word error probability
`of the code, whereas the IRWEF provides information on the
`bit error probability.
Obviously, the following relationship holds true:

$$B^C(H) = A^C(W = H, Z = H) = \sum_{w,j} A_{w,j} H^{w+j} = \sum_{m} B_m H^m. \qquad (1)$$

Example 1: The (7,4) Hamming code has the following WEF:

$$B^C(H) = 1 + 7H^3 + 7H^4 + H^7.$$

Splitting the contribution of the information and redundancy bits to the total codeword weight we obtain the IRWEF of the code

$$A^C(W, Z) = 1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3. \qquad \square$$

Consider now the conditional weight enumerating function $A_w^C(Z)$ of the parity check bits generated by the code $C$ corresponding to the input words of weight $w$. It can be obtained from the IRWEF as

$$A_w^C(Z) = \sum_{j} A_{w,j} Z^j$$

so that we can also write the inverse relationship

$$A^C(W, Z) = \sum_{w} W^w A_w^C(Z). \qquad (2)$$

Both the IRWEF and the $A_w^C(Z)$ can be used with the union bound to compute an upper bound to the bit error probability for ML soft decoding of the code over a channel with additive white Gaussian noise in the form

$$P_b(e) \le \frac{1}{2} \sum_{w,j} \frac{w}{k}\, A_{w,j}\, \mathrm{erfc}\!\left(\sqrt{(w+j)\, R_c \frac{E_b}{N_0}}\right)$$
$$\le \frac{1}{2} \sum_{w} \frac{w}{k}\, W^w A_w^C(Z)\Big|_{W=Z=e^{-R_c E_b/N_0}}$$
$$= \frac{1}{2} \sum_{m} D_m H^m\Big|_{H=e^{-R_c E_b/N_0}} \qquad (3)$$

with

$$D_m = \sum_{w+j=m} \frac{w}{k}\, A_{w,j} \qquad (4)$$

where $R_c$ is the code rate.
The second and third line of (3) represent two equivalent expressions to bound the bit error probability. The first expression keeps distinct the contributions of information words with different weight $w$, whereas the second sums the contributions according to the overall weight $m$ of the codeword through the coefficient $D_m$ defined in (4).
A tighter bound can also be obtained from (3) [22] exploiting the inequality

$$\frac{1}{2}\,\mathrm{erfc}\left(\sqrt{x + y}\right) \le \frac{1}{2}\,\mathrm{erfc}\left(\sqrt{x}\right) e^{-y}.$$

It assumes the form

$$P_b(e) \le \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d_{\min} R_c \frac{E_b}{N_0}}\right) e^{d_{\min} R_c E_b/N_0} \sum_{m} D_m H^m\Big|_{H=e^{-R_c E_b/N_0}} \qquad (5)$$

with $D_m$ as in (4), where $d_{\min}$ is the minimum distance of the code, and it admits of course a further development like (3).
For parallel concatenated convolutional codes the information and code sequences are semi-infinite and, as a consequence, the summation (explicit in (3) and implicit in (5)) must be truncated to a finite value. For the bound in the second line of (3) the truncation involves the computation of the complete conditional weight distributions up to a given information weight, whereas for the bound in the third line the truncation leads to the computation of the weight multiplicities of the unconditional weight distribution up to a given overall weight of the code sequences. Computing algorithms and a comparison of the two approximations are discussed in [23]. Using a finite number of terms in (5) transforms the upper bound into the approximation

$$P_b(e) \simeq \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d_{\min} R_c \frac{E_b}{N_0}}\right) e^{d_{\min} R_c E_b/N_0} \sum_{m=d_{\min}}^{M} D_m\, e^{-m R_c E_b/N_0}. \qquad (6)$$

In the following, all the results in terms of bit error probability will be computed using (6).
Example 2: The conditional WEF's of the Hamming code (7,4) considered in the previous example are

$$A_0^C(Z) = 1,\quad A_1^C(Z) = 3Z^2 + Z^3,\quad A_2^C(Z) = 3Z + 3Z^2,\quad A_3^C(Z) = 1 + 3Z,\quad A_4^C(Z) = Z^3$$

so that, computing $D_3 = 3$, $D_4 = 4$, and $D_7 = 1$ from (4), the upper bound on the bit error probability computed through (6) and (4) becomes

$$P_b(e) \simeq \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{3 R_c \frac{E_b}{N_0}}\right) e^{3 R_c E_b/N_0} \left(3 e^{-3 R_c E_b/N_0} + 4 e^{-4 R_c E_b/N_0} + e^{-7 R_c E_b/N_0}\right)$$

with $R_c = 4/7$. $\square$
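As a numerical cross-check of Examples 1 and 2, the IRWEF coefficients $A_{w,j}$ and the coefficients $D_m$ of (4) can be obtained by direct enumeration of the $2^4$ information words. The sketch below assumes one common systematic parity assignment for the (7,4) Hamming code; the matrix `P` is an illustrative choice, not taken from the paper (any equivalent code yields the same enumerators):

```python
from itertools import product
from collections import Counter

# Parity part of a systematic (7,4) Hamming generator matrix (an assumed,
# common choice; the enumerators do not depend on which equivalent code is used).
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
k = 4

A = Counter()  # A[(w, j)] = number of codewords with info weight w, parity weight j
for d in product((0, 1), repeat=k):
    parity = [sum(d[r] * P[r][c] for r in range(k)) % 2 for c in range(3)]
    A[(sum(d), sum(parity))] += 1

# Matches the IRWEF of Example 1:
# A^C(W,Z) = 1 + W(3Z^2 + Z^3) + W^2(3Z + 3Z^2) + W^3(1 + 3Z) + W^4 Z^3
assert A[(1, 2)] == 3 and A[(1, 3)] == 1
assert A[(2, 1)] == 3 and A[(2, 2)] == 3
assert A[(3, 0)] == 1 and A[(3, 1)] == 3 and A[(4, 3)] == 1

# Coefficients D_m = sum over w+j=m of (w/k) A_{w,j}, as in (4)
D = Counter()
for (w, j), count in A.items():
    D[w + j] += w / k * count
assert (D[3], D[4], D[7]) == (3.0, 4.0, 1.0)
```

The three nonzero $D_m$ values confirm the coefficients used in Example 2.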
`
`
`
`
`Fig. 1. Parallel concatenated block code.
`
III. PARALLEL CONCATENATED BLOCK CODES
`Consider now a parallel concatenated block code (PCBC)
`obtained as in Fig. 1. Two linear systematic block codes Cl
`with parameters (721, k ) and C, with parameters (nz, k ) , the
`constituent codes (CC), having in common the length k of the
`input information bits, are linked through an interleaver so that
`the information part of the second codeword is just a permuted
`version of the first one. The PCBC codeword is then formed
`by adding to the input bits the parity-check bits generated by
`the first and second encoder. The PCBC, that we denote as
`Cp, is then a (nl + n2 - k , k ) linear code as the interleaver
`performs a linear operation on the input bits.
`If w is the (Hamming) weight of the input word, and z1
`and zz the weights of the parity check bits introduced by
`the first and second encoders, respectively, the weight of the
`corresponding codeword of Cp will be w + z1 + z2.
We want now to obtain the IRWEF $A^{C_P}(W, Z)$ of $C_P$ starting from the knowledge of those of the constituent codes. For
`a given interleaver, this operation is exceedingly complicated,
`as the redundant bits generated by the second encoder will not
`only depend on the weight of the input word, but also on how
`its bits have been permuted by the interleaver. The only viable
`solution, in theory, would be an exhaustive enumeration of all
`possible cases; in practice, this is an impossible achievement
`for large k , and this was precisely the reason for lengthy
`computer simulations.
To overcome this difficulty, we introduce an abstract interleaver called uniform interleaver, defined as follows.
Definition 1: A uniform interleaver of length $k$ is a probabilistic device which maps a given input word of weight $w$ into all distinct $\binom{k}{w}$ permutations of it with equal probability $1/\binom{k}{w}$.
From the definition, it is apparent that the conditional weight enumerating function $A_w^{C_2}(Z)$ of the second code becomes independent from that of the first code thanks to the uniform randomization produced by the interleaver.
As a nice consequence of this, we can easily evaluate the conditional weight enumerating function of the PCBC which uses the uniform interleaver as the product, suitably normalized, of the two conditional weight enumerating functions of the constituent codes

$$A_w^{C_P}(Z) = \frac{A_w^{C_1}(Z)\, A_w^{C_2}(Z)}{\binom{k}{w}}. \qquad (7)$$

Also, from (2) we obtain the IRWEF of the code $C_P$ as

$$A^{C_P}(W, Z) = \sum_{w} W^w A_w^{C_P}(Z). \qquad (8)$$
Example 3: The IRWEF of the PCBC constructed using as constituent codes two identical (7,4) Hamming codes can be obtained plugging the conditional WEF's obtained in the previous example into (7) and applying (8)

$$A^{C_P}(W, Z) = 1 + W(2.25Z^4 + 1.5Z^5 + 0.25Z^6) + W^2(1.5Z^2 + 3Z^3 + 1.5Z^4) + W^3(0.25 + 1.5Z + 2.25Z^2) + W^4 Z^6. \qquad (9)$$

Notice in (9) the presence of fractional coefficients representing the multiplicity of the various terms. They are a direct consequence of the use of the uniform interleaver. $\square$
The introduction of the uniform interleaver permits an easy derivation of the weight enumerating functions of the PCBC. However, in practice, one is confronted with deterministic interleavers, which give rise to one particular permutation of the input bits. So, what is the significance of the preceding definitions and equations?
To answer this question, we prove now the main property of a PCBC which uses the uniform interleaver.
Theorem 1: Let $A^{C_{P_k}}(W, Z)$ be the IRWEF of the code $C_P$ obtained using the particular interleaver $I_k$. Then

$$E_{I_k}\left[A^{C_{P_k}}(W, Z)\right] = A^{C_P}(W, Z) \qquad (10)$$

where $E_{I_k}$ means expectation with respect to the whole class of interleavers.
Proof of Theorem 1: The proof makes use of (2) through the following equality chain:

$$E_{I_k}\left[A^{C_{P_k}}(W, Z)\right] = E_{I_k}\Big[\sum_w W^w A_w^{C_{P_k}}(Z)\Big] = \sum_w W^w E_{I_k}\left[A_w^{C_{P_k}}(Z)\right] = \sum_w W^w A_w^{C_P}(Z) = A^{C_P}(W, Z)$$

where the third equality comes from the definition of the uniform interleaver. Q.E.D.
A second result, which comes as a corollary of the previous one from the linear dependency of (6) on the conditional weight enumerating function, is the following.
Corollary 1: The upper bound computed using the IRWEF $A^{C_P}(W, Z)$ coincides with the average of the upper bounds obtainable with the whole class of deterministic interleavers.
The corollary guarantees that, for each value of the signal-to-noise ratio, the performance obtained with the uniform interleaver is achievable by at least one deterministic interleaver.
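Theorem 1 can also be checked by brute force for small parameters. The sketch below assumes a particular systematic (7,4) Hamming parity matrix (an illustrative choice), averages the exact IRWEF over all $4! = 24$ deterministic interleavers, and compares the result with the uniform-interleaver product formula (7):

```python
from itertools import product, permutations
from collections import Counter
from fractions import Fraction
from math import comb

# Assumed systematic (7,4) Hamming parity part (an illustrative choice).
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
k = 4

def parity_weight(d):
    return sum(sum(d[r] * P[r][c] for r in range(k)) % 2 for c in range(3))

def irwef(pi):
    """Exact IRWEF of the PCBC for one deterministic interleaver pi."""
    A = Counter()
    for d in product((0, 1), repeat=k):
        j = parity_weight(d) + parity_weight(tuple(d[p] for p in pi))
        A[(sum(d), j)] += 1
    return A

# Average of the exact IRWEF over all k! interleavers, with exact fractions
perms = list(permutations(range(k)))
avg = Counter()
for pi in perms:
    for key, n in irwef(pi).items():
        avg[key] += Fraction(n, len(perms))

# Uniform-interleaver prediction, per (7): A_w^{Cp}(Z) = A_w^{C1}(Z) A_w^{C2}(Z) / C(k,w)
Aw = Counter()
for d in product((0, 1), repeat=k):
    Aw[(sum(d), parity_weight(d))] += 1
pred = Counter()
for (w1, j1), n1 in Aw.items():
    for (w2, j2), n2 in Aw.items():
        if w1 == w2:
            pred[(w1, j1 + j2)] += Fraction(n1 * n2, comb(k, w1))

assert avg == pred                       # Theorem 1, verified exhaustively for k = 4
assert pred[(1, 4)] == Fraction(9, 4)    # the 2.25 W Z^4 term of (9)
```

The equality holds exactly, with fractional average multiplicities, which is precisely the content of Theorem 1.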
Example 4: We can check the result (10) by computing, for the simple example of the Hamming code previously examined, the IRWEF's of the PCBC's constructed using all the interleavers originating from the $24 = 4!$ permutations of the input bits. The computed IRWEF's are reported in Table I.

TABLE I
IRWEF OF THE PARALLEL CONCATENATED CODES BASED ON THE (7,4) HAMMING CODE FOR ALL POSSIBLE INTERLEAVERS

Perm.                                | $A^{C_{P_k}}(W, Z)$
0123, 0321, 1023, 1320, 3021, 3120   | $1 + W(3Z^4 + Z^6) + W^2(3Z^2 + 3Z^4) + W^3(1 + 3Z^2) + W^4 Z^6$
0132, 0213, 0231, 0312, 1032, 1203,
1230, 1302, 2013, 2031, 2103, 2130,
2301, 2310, 3012, 3102, 3201, 3210   | $1 + W(2Z^4 + 2Z^5) + W^2(Z^2 + 4Z^3 + Z^4) + W^3(2Z + 2Z^2) + W^4 Z^6$

From the table, it is apparent that, for this scheme, only two types of IRWEF are possible:

$$A^{C_{P_1}}(W, Z) = 1 + W(2Z^4 + 2Z^5) + W^2(Z^2 + 4Z^3 + Z^4) + W^3(2Z + 2Z^2) + W^4 Z^6 \qquad (14)$$

which derives from 18 different permutations and

$$A^{C_{P_2}}(W, Z) = 1 + W(3Z^4 + Z^6) + W^2(3Z^2 + 3Z^4) + W^3(1 + 3Z^2) + W^4 Z^6 \qquad (15)$$

which appears six times. The average computed over all possible interleavers yields

$$A^{C_P}(W, Z) = \frac{18}{24} A^{C_{P_1}}(W, Z) + \frac{6}{24} A^{C_{P_2}}(W, Z)$$

which coincides with (9). The upper bounds obtained substituting (14), (15), and (9) into (6) are plotted in Fig. 2. $\square$

Fig. 2. Upper bounds for Example 4.

Let $(n, k, t)$ denote the parameters of a $t$-error-correcting $(n, k)$ code. We have also analyzed the performance achieved by a PCBC using as CC the (63,57,3) and the (63,51,5) BCH codes. The interleavers have lengths $k = 57$ and 51,
`
`
Fig. 3. Upper bounds to the bit error probability of two PCBC's using the (63,57,3) and (63,51,5) BCH codes as CC's and interleavers of lengths 57 and 51, respectively.
`
`respectively. The results are reported in Fig. 3, where we
`also plot for comparison the results pertaining to the (7,4)
`Hamming code of the previous example. In this case, the
`performance of the PCBC schemes improves over that of the
`CC's, and a gain of 1 dB is obtainable.
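The bound curves of the figures can be reproduced from the $D_m$ coefficients alone. The sketch below uses the plain union bound built from the $D_m$'s (one of the equivalent forms of (3)) for the (7,4) Hamming code; it illustrates the procedure, not the authors' exact program:

```python
from math import erfc, sqrt

# D_m coefficients of the (7,4) Hamming code, from (4): D_3 = 3, D_4 = 4, D_7 = 1
D = {3: 3.0, 4: 4.0, 7: 1.0}
Rc = 4 / 7  # code rate

def pb_union_bound(ebno_db):
    """Union bound on the bit error probability at a given Eb/N0 (in dB)."""
    ebno = 10.0 ** (ebno_db / 10.0)
    return sum(Dm * 0.5 * erfc(sqrt(m * Rc * ebno)) for m, Dm in D.items())

for db in (2, 4, 6, 8):
    print(f"Eb/N0 = {db} dB -> Pb <= {pb_union_bound(db):.3e}")
```

Sweeping $E_b/N_0$ over a grid of dB values produces one curve of the kind plotted in the figures.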
So far, we have used an interleaver with length $k$ equal to
`that of the input word of the block code. It is straightforward to
extend all previous results to the more general case where we use as basic codeword for a PCBC $l$ consecutive codewords of the constituent code $C_1$, as in Fig. 4. In particular, the IRWEF of the new constituent $(ln_1, lk)$ code $C_1'$ is given by

$$A^{C_1'}(W, Z) = \left[A^{C_1}(W, Z)\right]^l \qquad (16)$$

and similarly for the second constituent code $C_2'$. The conditional weight enumerating function of the new component code can still be obtained from the IRWEF as before.
From the conditional weight enumerating functions of the two new constituent codes, owing to the property of the uniform interleaver of length $lk$, we obtain the conditional weight enumerating function of the $(l(n_1 + n_2 - k), lk)$ PCBC as

$$A_w^{C_P'}(Z) = \frac{A_w^{C_1'}(Z)\, A_w^{C_2'}(Z)}{\binom{lk}{w}}.$$
`
Fig. 4. Parallel concatenated block code with interleaver length $N = lk$.
`
From this point on, the performance of the new PCBC can be obtained as before through (6).
It is important to note that the PCBC obtained in this way is in some sense a generalization of the concept of product codes. In fact, a product code which does not transmit the parity checks on parity check bits can be obtained from the PCBC using a row-column interleaver of length $N = k^2$. Product codes with iterative decoding have been considered in [10].
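The coefficients $D_m$ obtained by grouping $l$ consecutive codewords (cf. Table II) follow mechanically from (16), the product formula over a uniform interleaver of length $lk$, and (4). A sketch for $l = 2$, under an assumed systematic (7,4) Hamming parity matrix (an illustrative choice), reproducing the first entries of the $l = 2$ column of Table II:

```python
from itertools import product
from collections import Counter
from fractions import Fraction
from math import comb

# Assumed systematic (7,4) Hamming parity part (an illustrative choice).
P = [(1, 1, 0), (1, 0, 1), (0, 1, 1), (1, 1, 1)]
k, l = 4, 2

# IRWEF coefficients of the basic (7,4) code
base = Counter()
for d in product((0, 1), repeat=k):
    j = sum(sum(d[r] * P[r][c] for r in range(k)) % 2 for c in range(3))
    base[(sum(d), j)] += 1

# IRWEF of l grouped codewords: coefficients of [A^{C1}(W,Z)]^l, per (16)
Ap = Counter({(0, 0): 1})
for _ in range(l):
    nxt = Counter()
    for (w1, j1), n1 in Ap.items():
        for (w2, j2), n2 in base.items():
            nxt[(w1 + w2, j1 + j2)] += n1 * n2
    Ap = nxt

# Uniform interleaver of length lk, product formula (7), then D_m as in (4)
D = Counter()
for (w, j1), n1 in Ap.items():
    for (w2, j2), n2 in Ap.items():
        if w2 == w:
            D[w + j1 + j2] += Fraction(w, l * k) * Fraction(n1 * n2, comb(l * k, w))

assert D[3] == Fraction(3, 112)   # = 0.02678..., the l = 2 entry at distance 3
assert D[4] == Fraction(27, 56)   # = 0.48214..., the l = 2 entry at distance 4
```

Exact rational arithmetic (`Fraction`) keeps the fractional multiplicities produced by the uniform interleaver free of rounding.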
Example 5: Taking as component code the Hamming code of the previous examples we can build different PCBC's of increasing complexity by simply grouping $l$ consecutive codewords. The first coefficients $D_m$ defined in (4) for the resulting PCBC and $l = 1, 2, 3, 4, 5$ are reported in Table II. The effect of longer interleavers is clearly apparent from the table. Namely, the multiplicity of the terms which dominate the
`
performance (those with lower Hamming weight) decreases when $l$ increases. Also, Hamming distances not present in the constituent codes show up. The minimum distance of the PCBC is still 3, as for the constituent codes.

TABLE II
COEFFICIENTS $D_m$ FOR THE EVALUATION OF THE BIT ERROR PROBABILITY OF THE PCBC OBTAINED FROM THE (7,4) HAMMING CODE AND UNIFORM INTERLEAVERS GROUPING $l = 1, 2, 3, 4, 5$ CONSECUTIVE INPUT WORDS

Hamming  |  l=1     |  l=2     |  l=3     |  l=4     |  l=5
distance |          |          |          |          |
3        | 0.1875   | 0.02678  | 0.01022  | 0.00535  | 0.00328
4        | 1.875    | 0.48214  | 0.26590  | 0.18214  | 0.13815
5        | 3.75     | 1.44642  | 1.06363  | 0.91071  | 0.82894
6        | 1.125    | 1.20535  | 0.95259  | 0.81597  | 0.73103
7        | 0.0625   | 3.83928  | 3.11412  | 2.77902  | 2.57720
8        |          | 9.96428  | 6.63116  | 5.44900  | 4.82972
9        |          | 23.7053  | 16.4537  | 13.6749  | 12.2146
10       | 1        | 33.3928  | 32.8558  | 30.8499  | 30.0122
11       |          | 21.5089  | 51.5566  | 52.5036  | 52.2468
12       |          | 12.3214  | 104.044  | 113.328  | 117.619
13       |          | 7.16071  | 192.425  | 219.463  | 231.044
14       |          | 4.40178  | 311.545  | 418.307  | 463.480
15       |          | 6.26785  | 379.400  | 737.146  | 894.473
16       |          | 1.23214  | 316.259  | 1207.75  | 1620.00
17       |          | 0.04464  | 227.272  | 2018.09  | 3016.75
18       |          |          | 148.853  | 3120.65  | 5357.44
19       |          |          | 95.6634  | 4271.70  | 9176.27
20       |          | 1        | 74.2149  | 4865.35  | 14962.0

Applying the upper bound (6) we obtain the results reported in Fig. 5, where the bit error probabilities for the considered code and various interleaver lengths $l = 1, 2, 10, 20, 100$ are plotted versus the signal-to-noise ratio $E_b/N_0$. For comparison, the curve of the constituent code is also reported.

Fig. 5. Upper bounds to the bit error probability referring to Example 5. The different curves refer to interleavers of various lengths $lk$.

The figure shows that a gain of 2 dB can be achieved increasing $l$ at the expense of an increased delay. Also, we notice from the results that the beneficial effect of increasing the interleaver length tends to decrease the larger $l$ becomes. $\square$
The results obtained in the example illustrate an interesting phenomenon that will be observed again: performance improvements for a significant range of signal-to-noise ratios can be obtained with PCC's without increasing (or almost independently from) the minimum distance of the CC's. We ought to mention that independence of bit error probability performance from the code minimum distance for codes with large codeword lengths was already foreseen in [24].

Fig. 6. Encoder structure of a PCCC.
`
`IV. PARALLEL CONCATENATED CONVOLUTIONAL CODES
`The first applications of parallel concatenated coding
`schemes used convolutional codes as constituent codes. The
`resulting codes have been named by the authors turbo codes,
`and the main reason for their successful implementation
`resides in the availability of efficient algorithms for soft
`iterative decoding [5], [7], [25]. In our context, we will call
`them parallel concatenated convolutional codes (PCCC). A
`block diagram showing how they work is presented in Fig. 6.
`The behavior is similar to that of a PCBC, the main difference
`being the fact that now the interleaver of length N does not
`
`
`Fig. 7. Hyper-trellis for the PCCC.
`
`
contain an integer number of input words, since the input sequences are infinite in length.¹ The figure represents the case of a rate 1/3 PCCC obtained from two rate 1/2 CC's. Several generalizations are possible. The number of CC's can be more than 2, and their rates can be different. Also, the final rate of the PCCC can be increased by puncturing the redundant bit sequences at the encoders' outputs.
We will break the performance analysis of PCCC's into two steps, the first performing an exact analysis to obtain the upper bound to the bit error probability, and the second showing how to obtain an accurate approximation which drastically reduces the complexity of the analysis.
`
A. Exact Analysis
Consider a PCCC formed by an interleaver of length $N$ and two convolutional CC's $C_1$ and $C_2$ whose trellises have $m_1$ and $m_2$ states, respectively.
To examine the full dynamics of the PCCC, we must consider a hyper-trellis with $m_1 \cdot m_2$ states, like the one depicted in Fig. 7.
The state $S_{ij}$ of the hyper-trellis corresponds to the pair of states $s_{1i}$ and $s_{2j}$ of the first and second constituent codes, respectively. Each branch $S_{ij} \to S_{lm}$ in the hyper-trellis represents all paths which start from the pair of states $s_{1i}$, $s_{2j}$ and reach the pair $s_{1l}$, $s_{2m}$ in $N$ steps (see Fig. 8).
Thus when embedded in a PCCC using an interleaver of length $N$, the constituent convolutional codes' contributions to the final codeword (or code sequence) derive from $N$-truncated versions of their input information sequences, or, equivalently, from trellis paths of length $N$.
Let $A_{ij,lm}(W, Z)$ be the label of the branch $S_{ij} \to S_{lm}$ of the hyper-trellis. It represents the IRWEF of the equivalent parallel concatenated block code obtained by enumerating the weights of all $N$-truncated sequences of the PCCC joining the hyper-states $S_{ij}$ and $S_{lm}$. Once we know all these labels, the performance of the PCCC can be obtained through the standard transfer function bound approach [22] applied to the hyper-trellis.
¹This observation, and the computing algorithms explained in the following, refer to the case of continuous transmission and decoding. When the PCCC is used as a block code with trellis termination, the same procedure described for PCBC can be applied.
`
Fig. 8. Associations of states and paths of the trellises of constituent codes to states and paths of the PCCC hyper-trellis.
`
To derive the branch labels of the hyper-trellis we can use the same procedure as that applied in Section III to parallel concatenated block codes, as we have seen that each label is indeed the IRWEF of a particular equivalent block code with information word length equal to $N$.²
We start with the conditional weight enumerating functions $A^{C_k}_{s,n}(w, Z)$³ of the equivalent block codes associated with the constituent convolutional codes. These functions enumerate all possible paths connecting the state $s$ with the state $n$ in $N$ steps for the $k$th constituent encoder, $k = 1, 2$, and can be obtained from the transfer functions of the CC's as described in the Appendix. From them, we compute the conditional weight enumerating functions $A_{ij,lm}(w, Z)$ of the equivalent parallel concatenated block codes. Owing to the properties of the uniform interleaver, they are simply the normalized product of $A^{C_1}_{i,l}(w, Z)$ with $A^{C_2}_{j,m}(w, Z)$

$$A_{ij,lm}(w, Z) = \frac{A^{C_1}_{i,l}(w, Z)\, A^{C_2}_{j,m}(w, Z)}{\binom{N}{w}}.$$

Then, we obtain the IRWEF from the corresponding conditional weight enumerating functions through (2) and, finally, use the transfer function approach to get an upper bound to the bit error probability.
`An example will clarify the whole procedure.
Example 6: Consider the PCCC obtained by linking two identical 2-state recursive convolutional constituent codes with an interleaver of length $N = 4$. The resulting encoder structure is depicted in Fig. 9.
First, we need to derive, using the algorithm described in the Appendix, the four conditional WEF's $A^{C}_{ij}(w, Z)$ that enumerate all the possible paths connecting in four steps the states of the constituent encoder; they are reported in Table III.
`
²Actually, only the label $A_{00,00}(W, Z)$ describes a linear code containing the all-"0" word; the other labels refer to cosets of this code. This has no effect on the analysis.
³For clarity, we slightly changed the notation of the conditional weight enumerating function by inserting the conditioning weight $w$ within the parentheses.
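The conditional WEF's of Table III can also be generated by enumerating the $2^4$ input words directly. The sketch below assumes the 2-state recursive constituent encoder of Fig. 9 has generators $(1, 1/(1+D))$, i.e., parity $p_t = u_t \oplus p_{t-1}$ with the previous parity bit as the state; this is an assumption about the encoder, made because it is consistent with the entries of Table III. It also forms one hyper-trellis branch label via the normalized product over the uniform interleaver:

```python
from itertools import product
from collections import Counter
from fractions import Fraction
from math import comb

N = 4  # interleaver length / truncation depth

# Conditional WEFs A_ij(w, Z) of the assumed 2-state recursive encoder:
# parity p_t = u_t XOR p_{t-1}; the state is the previous parity bit.
A = {}  # A[(i, j)][(w, z)] = number of length-N paths i -> j with input weight w, parity weight z
for i in (0, 1):
    for u in product((0, 1), repeat=N):
        s, z = i, 0
        for bit in u:
            s ^= bit   # next state = current parity bit
            z += s     # accumulate parity weight
        A.setdefault((i, s), Counter())[(sum(u), z)] += 1

# Paths 0 -> 0 with input weight 2: 3Z + 2Z^2 + Z^3 (cf. Table III)
assert A[(0, 0)][(2, 1)] == 3 and A[(0, 0)][(2, 2)] == 2 and A[(0, 0)][(2, 3)] == 1

# Hyper-trellis branch label for S_00 -> S_00: normalized product of the two
# constituent enumerators over the uniform interleaver of length N.
label = Counter()
for (w, z1), n1 in A[(0, 0)].items():
    for (w2, z2), n2 in A[(0, 0)].items():
        if w2 == w:
            label[(w, z1 + z2)] += Fraction(n1 * n2, comb(N, w))

# w = 2 term: (9Z^2 + 12Z^3 + 10Z^4 + 4Z^5 + Z^6) / 6 (cf. Table IV)
assert label[(2, 2)] == Fraction(9, 6) and label[(2, 3)] == Fraction(12, 6)
```

Repeating the product over all pairs of constituent transitions fills in the complete set of hyper-trellis branch labels.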
`
`
TABLE III
CONDITIONAL WEIGHT ENUMERATING FUNCTIONS ENUMERATING ALL POSSIBLE PATHS CONNECTING STATE $s_i$ WITH STATE $s_j$ IN FOUR STEPS FOR THE CONSTITUENT ENCODER OF EXAMPLE 6

$ij$ | $w=0$  | $w=1$                  | $w=2$               | $w=3$          | $w=4$
00   | $1$    |                        | $3Z + 2Z^2 + Z^3$   |                | $Z^2$
01   |        | $Z + Z^2 + Z^3 + Z^4$  |                     | $2Z^2 + 2Z^3$  |
10   |        | $1 + Z + Z^2 + Z^3$    |                     | $2Z + 2Z^2$    |
11   | $Z^4$  |                        | $Z + 2Z^2 + 3Z^3$   |                | $Z^2$

TABLE IV
CONDITIONAL WEIGHT ENUMERATING FUNCTIONS LABELING THE HYPER-TRELLIS DESCRIBING THE DYNAMICS OF THE PCCC OF EXAMPLE 6

$ij,lm$ | $w=0$  | $w=1$                                         | $w=2$                                      | $w=3$                       | $w=4$
00,00   | $1$    |                                               | $\frac{9Z^2+12Z^3+10Z^4+4Z^5+Z^6}{6}$      |                             | $Z^4$
00,01   | X      | X                                             | X                                          | X                           | X
00,10   | X      | X                                             | X                                          | X                           | X
00,11   |        | $\frac{Z^2+2Z^3+3Z^4+4Z^5+3Z^6+2Z^7+Z^8}{4}$  |                                            | $\frac{4Z^4+8Z^5+4Z^6}{4}$  |
01,00   | X      | X                                             | X                                          | X                           | X
01,01   | $Z^4$  |                                               | $\frac{3Z^2+8Z^3+14Z^4+8Z^5+3Z^6}{6}$      |                             | $Z^4$
01,10   |        | $\frac{Z+2Z^2+3Z^3+4Z^4+3Z^5+2Z^6+Z^7}{4}$    |                                            | $\frac{4Z^3+8Z^4+4Z^5}{4}$  |
01,11   | X      | X                                             | X                                          | X                           | X
10,00   | X      | X                                             | X                                          | X                           | X
10,01   |        | $\frac{Z+2Z^2+3Z^3+4Z^4+3Z^5+2Z^6+Z^7}{4}$    |                                            | $\frac{4Z^3+8Z^4+4Z^5}{4}$  |
10,10   | $Z^4$  |                                               | $\frac{3Z^2+8Z^3+14Z^4+8Z^5+3Z^6}{6}$      |                             | $Z^4$
10,11   | X      | X                                             | X                                          | X                           | X
11,00   |        | $\frac{1+2Z+3Z^2+4Z^3+3Z^4+2Z^5+Z^6}{4}$      |                                            | $\frac{4Z^2+8Z^3+4Z^4}{4}$  |
11,01   | X      | X                                             | X                                          | X                           | X
11,10   | X      | X                                             | X                                          | X                           | X
11,11   | $Z^8$  |                                               | $\frac{Z^2+4Z^3+10Z^4+12Z^5+9Z^6}{6}$      |                             | $Z^4$

(X denotes branch labels that are identically zero: the two constituent encoders see input words of the same weight, so they must make transitions of the same input-weight parity.)
`Thus the ave