`
THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL AND COMPUTING
`
`September 23 - 25, 1998
`
`Allerton House, Monticello, Illinois
`Sponsored by the
`Coordinated Science Laboratory and the
`Department of Electrical and Computer Engineering of the
`University of Illinois at Urbana-Champaign
`
`
`
`PROCEEDINGS
`
THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL, AND COMPUTING
`
Tamer Başar
Bruce Hajek
Conference Co-Chairs
`
`Conference held
September 23 - 25, 1998
`Allerton House
`Monticello, Illinois
`
`Sponsored by
`The Coordinated Science Laboratory
The Department of Electrical and Computer Engineering
UNIVERSITY OF ILLINOIS
Urbana-Champaign
`
`
`
`
Coding Theorems for "Turbo-Like" Codes*
`
`Dariush Divsalar, Hui Jin, and Robert J. McEliece
`Jet Propulsion Laboratory and California Institute of Technology
`Pasadena, California USA
E-mail: dariush@shannon.jpl.nasa.gov, {hui, rjm}@systems.caltech.edu
`
`Abstract.
`
`In this paper we discuss AWGN coding theorems for ensembles of coding systems which
`are built from fixed convolutional codes interconnected with random interleavers. We
`
call these systems "turbo-like" codes, and they include as special cases both the classical
turbo codes [1,2,3] and the serial concatenation of interleaved convolutional codes [4].
We offer a general conjecture about the behavior of the ensemble (maximum-likelihood
decoder) word error probability as the word length approaches infinity. We prove this
`conjecture for a simple class of rate 1/q serially concatenated codes where the outer
`code is a q—fold repetition code and the inner code is a rate 1 convolutional code with
`transfer function 1 / (1 + D). We believe this represents the first rigorous proof of a
`coding theorem for turbo-like codes.
`
`1. Introduction.
`
The 1993 discovery of turbo codes by Berrou, Glavieux, and Thitimajshima [1] has
revolutionized the field of error-correcting codes. In brief, turbo codes have enough
randomness to achieve reliable communication at data rates near capacity, yet enough
structure to allow practical encoding and decoding algorithms. This paper is an attempt
to illuminate the first of these two attributes, i.e., the "near Shannon limit" capabilities
of turbo-like codes on the AWGN channel.
`
Our specific goal is to prove AWGN coding theorems for a class of generalized con-
catenated convolutional coding systems with interleavers, which we call "turbo-like"
`codes. This class includes both parallel concatenated convolutional codes (classical
`turbo codes) [1, 2, 3] and serial concatenated convolutional codes [4] as special cases.
`Beginning with a code structure of this type, with fixed component codes and inter-
`connection topology, we attempt to show that as the block length approaches infinity,
`the ensemble (over all possible interleavers) maximum likelihood error probability ap-
proaches zero if E_b/N_0 exceeds some threshold. Our proof technique is to derive an
explicit expression for the ensemble input-output weight enumerator (IOWE) and then
to use this expression, in combination with either the classical union bound or the recent
"improved" union bound of Viterbi and Viterbi [9], to show that the maximum-likelihood
word error probability approaches zero as N \to \infty. Unfortunately the difficulty of
the first step, i.e., the computation of the ensemble IOWE, has kept us from full success,
except for some very simple coding systems, which we call repeat and accumulate codes.
Still, we are optimistic that this technique will yield coding theorems
`for a much wider class of interleaved concatenated codes. In any case, it is satisfying to
`have rigorously proved coding theorems for even a restricted class of turbo-like codes.
`Here is an outline of the paper. In Section 2 we quickly review the classical union
`bound on maximum—likelihood word error probability for block codes on the AWGN
`
* Dariush Divsalar's work, and a portion of Robert McEliece's work, was performed
at JPL under contract with NASA. The remainder of McEliece's work, and Hui Jin's
work, was performed at Caltech and supported by NSF grant no. NCR-9505975,
AFOSR grant no. F49620-97-1-0313, and a grant from Qualcomm.
`
`
`
`
channel, which is seen to depend on the code's weight enumerator. In Section 3 we
define the class of "turbo-like" codes, and give a formula for the average input-output
`weight enumerator for such a code. In Section 4 we state a conjecture (the interleaver
`gain exponent conjecture) about the ML decoder performance of turbo-like codes. In
`Section 5, we define a special class of turbo-like codes, the repeat-and-accumulate codes,
`and prove the IGE conjecture for them. Finally, in Section 6 we present performance
curves for some RA codes, using an iterative, turbo-like decoding algorithm. This
`performance is seen to be remarkably good, despite the simplicity of the codes and the
`suboptimality of the decoding algorithm.
`
`2. Union Bounds on the Performance of Block Codes.
`
In this section we will review the classical union bound on the maximum-likelihood
word error probability for block codes.
Consider a binary linear (n, k) block code C with code rate r = k/n. The (output)
weight enumerator for C is the sequence of numbers A_0, ..., A_n, where A_h denotes
the number of codewords in C with (output) weight h. The input-output weight
enumerator (IOWE) for C is the array of numbers A_{w,h}, w = 0, 1, ..., k, h = 0, 1, ..., n:
A_{w,h} denotes the number of codewords in C with input weight w and output weight h.
The union bound on the word error probability P_W of the code C over a memoryless
binary-input channel, using maximum likelihood decoding, has the well-known form
`
(2.1)    P_W \le \sum_{h=1}^{n} A_h z^h

(2.2)        = \sum_{h=1}^{n} \Bigl( \sum_{w=1}^{k} A_{w,h} \Bigr) z^h.
`
In (2.1) and (2.2), the function z^h represents an upper bound on the pairwise error
probability for two codewords separated by Hamming (output) distance h. For the
AWGN channel, z = e^{-r E_b/N_0}, where E_b/N_0 is the signal-to-noise ratio per bit.
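The bound (2.1)-(2.2) is straightforward to evaluate numerically once a weight enumerator is in hand. A minimal sketch (the function name is ours, and the (3,1) repetition code serves only as a stand-in example):

```python
import math

def union_bound(A, r, ebno_db):
    """Classical union bound (2.1): P_W <= sum_{h>=1} A_h * z^h,
    with z = exp(-r * Eb/N0) on the AWGN channel."""
    ebno = 10 ** (ebno_db / 10)        # Eb/N0, dB -> linear
    z = math.exp(-r * ebno)            # pairwise-error parameter
    return sum(A_h * z ** h for h, A_h in A.items() if h >= 1)

# Example: the (3,1) repetition code has weight enumerator A_0 = A_3 = 1.
p = union_bound({0: 1, 3: 1}, r=1/3, ebno_db=5.0)
assert 0 < p < 1
```

As expected, the bound tightens monotonically as the signal-to-noise ratio grows, since z decreases with E_b/N_0.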
`
`3. The Class of “Turbo-Like” Codes.
`
In this section, we consider a general class of concatenated coding systems of the type
depicted in Figure 1, with q encoders (circles) and q - 1 interleavers (boxes). The
ith code C_i is an (n_i, N_i) linear block code, and the ith encoder is preceded by an
interleaver (permuter) P_i of size N_i, except C_1, which is not preceded by an interleaver
but rather is connected directly to the input. The overall structure must have no loops,
i.e., it must be a graph-theoretic tree. We call a code of this type a "turbo-like" code.
Define s_q = {1, 2, ..., q} and subsets of s_q by s_I = {i \in s_q : C_i connected to the input},
s_O = {i \in s_q : C_i connected to the output}, and its complement \bar{s}_O. The overall
system depicted in Figure 1 is then an encoder for an (n, N) block code with n = \sum_{i \in s_O} n_i.
If we know the IOWEs A^{(i)}_{w,h} for the constituent codes C_i, we can calculate
the average IOWE \bar{A}_{w,h} for the overall system (averaged over the set of all possible
interleavers), using the uniform interleaver technique [4]. (A uniform interleaver of size N
is defined as a probabilistic device that maps a given input word of weight w into all
\binom{N}{w} distinct permutations of it with equal probability p = 1/\binom{N}{w}.) The result is
`
(3.1)    \bar{A}_{w,h} = \sum_{h_i,\, i \in \bar{s}_O} \;\; \sum_{\substack{h_i,\, i \in s_O \\ \sum_{i \in s_O} h_i = h}} \;\; \prod_{i=2}^{q} \frac{A^{(i)}_{w_i,h_i}}{\binom{N_i}{w_i}}
`
`
`
`
In (3.1) we have w_i = w if i \in s_I, and w_j = h_i if C_j is preceded by C_i (see Figure 2).
We do not give a proof of formula (3.1), but it is intuitively plausible if we note that
the term A^{(i)}_{w_i,h_i} / \binom{N_i}{w_i} is the probability that a random input word to C_i of
weight w_i will produce an output word of weight h_i.
For example, for the (n_2 + n_3 + n_4, N) encoder of Figure 1, the formula (3.1) becomes

\bar{A}_{w,h} = \sum_{\substack{h_1, h_2, h_3, h_4 \\ h_2 + h_3 + h_4 = h}} \frac{A^{(1)}_{w,h_1}\, A^{(2)}_{w_2,h_2}\, A^{(3)}_{w_3,h_3}\, A^{(4)}_{w_4,h_4}}{\binom{N_2}{w_2} \binom{N_3}{w_3} \binom{N_4}{w_4}}
            = \sum_{\substack{h_1, h_2, h_3, h_4 \\ h_2 + h_3 + h_4 = h}} \frac{A^{(1)}_{w,h_1}\, A^{(2)}_{w,h_2}\, A^{(3)}_{h_1,h_3}\, A^{(4)}_{h_1,h_4}}{\binom{N}{w} \binom{n_1}{h_1} \binom{n_1}{h_1}}.
`
Figure 1. A "turbo-like" code with s_I = {1, 2}, s_O = {2, 3, 4}, \bar{s}_O = {1}.
`
`
Figure 2. C_i (an (n_i, N_i) encoder) is connected to C_j
(an (n_j, N_j) encoder) by an interleaver of size N_j. We
have the "boundary conditions" N_j = n_i and w_j = h_i.
`
`4. The Interleaving Gain Exponent Conjecture.
`
In this section we will consider systems of the form depicted in Figure 1, in which
the individual encoders are truncated convolutional encoders, and study the behavior
of the average ML decoder error probability as the input block length N approaches
`
`
`
`
infinity. If \bar{A}^{(N)}_{w,h} denotes the IOWE when the input block has length N, we introduce
the following notation for the union bound (2.2) for systems of this type:
(4.1)    P_{UB}^{(N)} \overset{def}{=} \sum_{h=1}^{n} \Bigl( \sum_{w=1}^{N} \bar{A}^{(N)}_{w,h} \Bigr) z^h.
`
Next we define, for each fixed w \ge 1 and h \ge 1,

(4.2)    \alpha(w, h) = \limsup_{N \to \infty} \log_N \bar{A}^{(N)}_{w,h}.

It follows from this definition that if w and h are fixed,

\bar{A}^{(N)}_{w,h} = O(N^{\alpha(w,h) + \epsilon})    as N \to \infty,

for any \epsilon > 0. Thus if we define
`
(4.3)    \beta_M = \max_{w \ge 1,\, h \ge 1} \alpha(w, h),

it follows that for all w and h,

\bar{A}^{(N)}_{w,h} = O(N^{\beta_M + \epsilon})    as N \to \infty,
`
for any \epsilon > 0. The parameter \beta_M, which we shall call the interleaving gain exponent
(IGE), was first introduced in [2] and [3] for parallel concatenation and later in [4] for
serial concatenation. Extensive numerical simulations, and theoretical considerations
that are not fully rigorous, lead to the following conjecture about the behavior of the
union bound for systems of the type shown in Figure 1.
`
The IGE Conjecture. There exists a positive number \gamma_0, which depends on the q
component convolutional codes and the tree structure of the overall system, but not
on N, such that for any fixed E_b/N_0 > \gamma_0, as the block length N becomes large,

(4.4)    P_W^{(N)} = O(N^{\beta_M}).
`
Eq. (4.4) implies that if \beta_M < 0, then for a given E_b/N_0 > \gamma_0 the word error
probability of the concatenated code decreases to zero as the input block size is increased.
This is summarized by saying that there is word error probability interleaving gain.¹
In [7], we discuss the calculation of \alpha(w, h) and \beta_M for a concatenated system of
the type depicted in Figure 1, using analytical tools introduced in [3] and [4]. For
example, for the parallel concatenation of q codes, with q - 1 interleavers, we have

\beta_M \ge 2 - q,

with equality if and only if each of the component codes is recursive. For a "classical"
turbo code with q = 2, we have \beta_M = 0, so there is no word error probability
interleaving gain. This suggests that the word error probability for classical turbo codes
will not improve with input block size, which is in agreement with simulations.
`
¹ There is a similar conjecture for the bit error probability, which we do not discuss in
this paper. Suffice it to say that the interleaving gain exponent for bit error probability
is \beta_M - 1.
`
`
`
`
As another example, consider the serial concatenation of two convolutional codes.
If the inner code is recursive, then

\beta_M \le -\lfloor (d^{O}_{free} + 1)/2 \rfloor + 1,

where d^{O}_{free} is the minimum distance of the outer code. Therefore, for serially
concatenated codes, if d^{O}_{free} \ge 3 there is interleaving gain for word error probability.
(If the inner code is nonrecursive, \beta_M \ge 0 and there is no interleaving gain.)
`
`5. A Class of Simple Turbo-Like Codes.
`
`In this section we will introduce a class of turbo-like codes which are simple enough
`so that we can prove the IGE conjecture. We call these codes repeat and accumulate
`(RA) codes. The general idea is shown in Figure 3. An information block of length
`N is repeated q times, scrambled by an interleaver of size qN, and then encoded by
`a rate 1 accumulator. The accumulator can be viewed as a truncated rate-1 recursive
`
convolutional encoder with transfer function 1/(1 + D), but we prefer to think of it as
a block code whose input block [x_1, ..., x_n] and output block [y_1, ..., y_n] are related
by the formula
`
(5.1)    y_1 = x_1
         y_2 = x_1 + x_2
         y_3 = x_1 + x_2 + x_3
         ...
         y_n = x_1 + x_2 + x_3 + ... + x_n.
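In code, (5.1) is nothing more than a running mod-2 sum; a minimal sketch (the function name is ours):

```python
def accumulate(x):
    """Rate-1 accumulator of (5.1): y_i = x_1 + ... + x_i (mod 2),
    i.e. a truncated 1/(1+D) recursive convolutional encoder."""
    y, s = [], 0
    for bit in x:
        s ^= bit           # running mod-2 sum
        y.append(s)
    return y

assert accumulate([1, 0, 1, 1]) == [1, 1, 0, 1]
```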
`
`
`
`
`Figure 3. Encoder for a (qN, N) repeat and accumulate
`code. The numbers above the input-output lines
`indicate the length of the corresponding block, and
`those below the lines indicate the weight of the block.
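The encoder of Figure 3 then amounts to repeat, permute, accumulate. A minimal sketch (the function name and parameter choices are ours; the bit-level repetition order is immaterial, since any fixed ordering is absorbed by the random interleaver):

```python
import random

def ra_encode(u, q, perm):
    """(qN, N) repeat-and-accumulate encoder of Figure 3."""
    rep = [b for b in u for _ in range(q)]    # q-fold repetition, length qN
    x = [rep[i] for i in perm]                # interleaver (permuter) of size qN
    y, s = [], 0
    for bit in x:                             # rate-1 accumulator 1/(1+D)
        s ^= bit
        y.append(s)
    return y

N, q = 4, 3
perm = random.sample(range(q * N), q * N)     # one interleaver from the ensemble
codeword = ra_encode([1, 0, 1, 0], q, perm)
assert len(codeword) == q * N
```

Note that the weight of the repetition stage's output is always q times the input weight, exactly as the [WEIGHT] annotations of Figure 3 indicate.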
`
`To apply the union bound from Section 2 to the class of RA codes, we need the
`input-output weight enumerators for both the (qn,n) repetition code, and the (n,n)
`accumulator code. The outer repetition code is trivial: if the input block has length n,
`we have
`
(5.2)    A^{(n)}_{w,h} = \begin{cases} 0 & \text{if } h \ne qw \\ \binom{n}{w} & \text{if } h = qw. \end{cases}
`
`
`
`The inner accumulator code is less trivial, but it is possible to show that (again assuming
`the input block has length n):
`
(5.3)    A^{(n)}_{w,h} = \binom{n-h}{\lfloor w/2 \rfloor} \binom{h-1}{\lceil w/2 \rceil - 1}.
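Formula (5.3) is easy to confirm by brute force for small n, enumerating all 2^n input words of the accumulator; a sketch:

```python
from itertools import product
from math import comb

def acc_iowe(n):
    """Exhaustive IOWE of the (n, n) accumulator: counts of
    (input weight w, output weight h) over all 2^n inputs."""
    counts = {}
    for x in product([0, 1], repeat=n):
        y, s = [], 0
        for bit in x:              # y_i = x_1 + ... + x_i (mod 2)
            s ^= bit
            y.append(s)
        key = (sum(x), sum(y))
        counts[key] = counts.get(key, 0) + 1
    return counts

n = 6
counts = acc_iowe(n)
for w in range(1, n + 1):
    for h in range(1, n + 1):
        # (5.3): A_{w,h} = C(n-h, floor(w/2)) * C(h-1, ceil(w/2) - 1)
        predicted = comb(n - h, w // 2) * comb(h - 1, (w + 1) // 2 - 1)
        assert counts.get((w, h), 0) == predicted
```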
`
`It follows then from the general formula (3.1), that for the (qN, N) RA code represented
`by Figure 3, the ensemble IOWE is
`
(5.4)    \bar{A}_{w,h} = \sum_{h_1} \frac{A^{(1)}_{w,h_1}\, A^{(2)}_{h_1,h}}{\binom{qN}{h_1}} = \frac{\binom{N}{w} \binom{h-1}{\lceil qw/2 \rceil - 1} \binom{qN-h}{\lfloor qw/2 \rfloor}}{\binom{qN}{qw}}.
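For a toy case, (5.4) can also be checked against a direct average over every interleaver. The sketch below does this exhaustively for q = 3, N = 2 (code length 6, so only 6! = 720 permutations and 4 messages), using exact rational arithmetic:

```python
from itertools import permutations, product
from fractions import Fraction
from math import comb, factorial

q, N = 3, 2
n = q * N

def ra_codeword(u, perm):
    """Repeat q times, permute, accumulate (the Figure 3 encoder)."""
    rep = [b for b in u for _ in range(q)]
    y, s = [], 0
    for i in perm:
        s ^= rep[i]
        y.append(s)
    return y

# Exact IOWE averaged over all (qN)! interleavers.
counts = {}
for perm in permutations(range(n)):
    for u in product([0, 1], repeat=N):
        w, h = sum(u), sum(ra_codeword(u, perm))
        counts[(w, h)] = counts.get((w, h), 0) + Fraction(1, factorial(n))

for w in range(1, N + 1):
    for h in range(1, n + 1):
        # (5.4): C(N,w) C(h-1, ceil(qw/2)-1) C(qN-h, floor(qw/2)) / C(qN, qw)
        predicted = Fraction(
            comb(N, w) * comb(h - 1, (q * w + 1) // 2 - 1) * comb(n - h, (q * w) // 2),
            comb(n, q * w))
        assert counts.get((w, h), 0) == predicted
```

The agreement is exact, as it must be: over all permutations, a fixed weight-qw repetition output is mapped uniformly onto the \binom{qN}{qw} weight-qw words.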
`
From (5.4) it is easy to compute the parameters \alpha(w, h) and \beta_M in (4.2) and (4.3).
The result is

(5.5)    \alpha(w, h) = -\lceil (q-2)w/2 \rceil,

(5.6)    \beta_M = -\lceil (q-2)/2 \rceil.
It follows from (5.6) that an RA code can have word error probability interleaving gain
only if q \ge 3.
We are now prepared to use the union bound to prove the IGE conjecture for RA
codes. In order to simplify the exposition as much as possible, we will assume for the
rest of this section that q = 4, the extension to arbitrary q \ge 3 being straightforward
but rather lengthy. For q = 4, (5.6) becomes \beta_M = -1, so the IGE conjecture is
P_{UB}^{(N)} = O(N^{-1}) for E_b/N_0 > \gamma_0 in this instance.
`The union bound (2.2) for the ensemble of q = 4 RA codes is, because of (5.4),
`
(5.7)    P_{UB} = \sum_{h=2}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} \frac{\binom{N}{w} \binom{h-1}{2w-1} \binom{4N-h}{2w}}{\binom{4N}{4w}} z^h.
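Since all the binomials in (5.7) are exact integers, the bound can be summed directly; a sketch that also illustrates the O(1/N) decay above threshold (the E_b/N_0 value and block lengths here are illustrative choices of ours):

```python
import math
from math import comb

def ra_union_bound(N, ebno_db, q=4):
    """Union bound (5.7) for the ensemble of rate-1/q RA codes."""
    z = math.exp(-(1 / q) * 10 ** (ebno_db / 10))     # z = e^{-r Eb/N0}, r = 1/q
    total = 0.0
    for h in range(2, q * N + 1):
        for w in range(1, min(N, h // 2) + 1):        # terms vanish past w = h/2
            a = (comb(N, w) * comb(h - 1, (q * w + 1) // 2 - 1)
                 * comb(q * N - h, (q * w) // 2))
            total += a / comb(q * N, q * w) * z ** h
    return total

# Above the ~1.928 dB threshold, the bound shrinks roughly like 1/N.
bounds = [ra_union_bound(N, 5.0) for N in (16, 32, 64)]
assert bounds[0] > bounds[1] > bounds[2]
```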
`
Denote the (w, h)th term in the sum (5.7) by T_N(w, h):

T_N(w, h) \overset{def}{=} \frac{\binom{N}{w} \binom{4N-h}{2w} \binom{h-1}{2w-1}}{\binom{4N}{4w}} z^h.
`
Using standard techniques (e.g. [8, Appendix A]), it is possible to show that for all
(w, h),

(5.8)    T_N(w, h) \le D \cdot 2^{4N F(x,y) + h \log_2 z},

where D is a constant, x = w/4N, y = h/4N, and

F(x, y) = -\frac{3}{4} H_2(4x) + (1-y) H_2\left(\frac{2x}{1-y}\right) + y H_2\left(\frac{2x}{y}\right),
`
`
`
`
and H_2(x) = -x \log_2(x) - (1-x) \log_2(1-x) is the binary entropy function. The
maximum of the function F(x, y)/y in the range 0 \le 2x \le y \le 1 - 2x occurs at
(x, y) = (0.100, 0.371) and is 0.562281, so that if \log_2 z < -0.562281, the exponent in
(5.8) will be negative.
`
Let us therefore assume that \log_2 z < -0.562281, which is equivalent to E_b/N_0 =
-(1/r) \ln z = -4 \ln z > 4 \ln 2 \cdot 0.562281 = 1.559, i.e., 1.928 dB. If E is defined by
E = -\log_2 z - 0.562281, it follows from (5.8) that for all w and h,

(5.9)    T_N(w, h) \le D \cdot 2^{-hE}.
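The threshold arithmetic above is easily double-checked:

```python
import math

# log2 z < -0.562281 with z = e^{-(E_b/N_0)/4}  (rate r = 1/4)
# is equivalent to E_b/N_0 > 4 ln(2) * 0.562281.
threshold = 4 * math.log(2) * 0.562281                   # linear Eb/N0
assert abs(threshold - 1.559) < 1e-3
assert abs(10 * math.log10(threshold) - 1.928) < 1e-3    # in dB
```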
`
What (5.9) tells us is that if E_b/N_0 > 1.928 dB, most of the terms in the union bound
(5.7) will tend to zero rapidly as N \to \infty. The next step in the proof is to break the
sum in (5.7) into two parts, corresponding to those terms for which (5.9) is helpful,
and those for which it is not. To this end, define
`
h_N \overset{def}{=} \frac{3}{E} \log_2 N,
`
`and write
`
P_{UB} = \sum_{h=2}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w, h)

       = \sum_{h=2}^{h_N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w, h) + \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w, h)

       = S_1 + S_2.
`
It is easy to verify that when N is large enough, \bar{A}_{w+1,h} / \bar{A}_{w,h} < 1 for h \le h_N
and w \le h/2 \le h_N/2, which shows that \bar{A}_{w,h} is a decreasing function of w for large N.
Thus the sum S_1 can be overbounded as follows (we omit some details):

S_1 = \sum_{h=2}^{h_N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w, h)

    = \sum_{h=2}^{h_N} T_N(1, h) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} T_N(w, h)

    = O(N^{-1}) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} T_N(w, h)

    \le O(N^{-1}) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} O(h^3/N^2) z^h

    = O(N^{-1}) + O(h_N^5/N^2)

    = O(N^{-1}).
`
`
`
`
For the sum S_2, we bound each term T_N(w, h) by (5.9):

S_2 = \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w, h)

    \le \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} D \cdot 2^{-hE}

    \le \frac{D}{2} \sum_{h=h_N+1}^{4N} h \cdot 2^{-hE}

    \le D' \cdot 2^{-E h_N} (h_N + 1)

    = O(N^{-3} \log_2 N)

    = O(N^{-2}),

where D' is another constant.
`
We have therefore shown that for the ensemble of q = 4 RA codes, if E_b/N_0 > 1.928 dB,

(5.10)    P_{UB} = S_1 + S_2 = O(N^{-1}) + O(N^{-2}) = O(N^{-1}),

which, as we saw above, is the IGE conjecture in this case.
Although the union bound gives a proof of the IGE conjecture for RA codes, the
resulting value of \gamma_0 is by no means the best possible. Indeed, if we use the recent
Viterbi-Viterbi improved union bound [9] to bound the sum S_2, we can lower the value
of \gamma_0 considerably, e.g. for q = 4 from 1.928 dB to 0.313 dB. In Figure 4 and Table 1
we display our numerical results on RA codes. There we compare the "cutoff threshold"
\gamma_0 for RA codes with q in the range 3 \le q \le 8, using both the classical union bound
and the Viterbi-Viterbi improved union bound, to the cutoff threshold for the ensemble
of all codes (i.e., "random codes") of a fixed rate. We believe that these values of
\gamma_0 can be reduced still further, for example by using the bound of [6] instead of the
Viterbi-Viterbi bound.
`
q                                 3       4       5       6       7       8
---------------------------------------------------------------------------
RA Codes (Union Bound)         2.200   1.928   1.798   1.721   1.670   1.631
Random Codes (Union Bound)     2.031   1.853   1.775   1.694   1.651   1.620
RA Codes (Viterbi Bound)       1.112   0.313  -0.125  -0.402  -0.592  -0.731
Random Codes (Viterbi Bound)   0.214  -0.224  -0.486  -0.662  -0.789  -0.885
Binary Shannon Limit          -0.495  -0.794  -0.963  -1.071  -1.150  -1.210

Table 1. Numerical data (cutoff thresholds, in dB) gleaned from Figure 4.
`
`6. Performance of RA Codes with Iterative Decoding.
`
The results of this paper show that the performance of RA codes with maximum-likelihood
decoding is very good. However, the complexity of ML decoding of RA
`
`
`
`
[Plot: cutoff threshold (dB) versus code rate R, with curves for the union bound and
Viterbi bound on random codes and on RA codes, and the binary-input Shannon limit.]
`
Figure 4. Comparing the RA code "cutoff threshold" to
the cutoff rate of random codes using both the classical
union bound and the Viterbi-Viterbi improved union bound.
`
codes, like that of all turbo-like codes, is prohibitively large. But an important feature
of turbo-like codes is the availability of a simple iterative, message-passing decoding
algorithm that approximates ML decoding. We wrote a computer program to implement
this "turbo-like" decoding for RA codes with q = 3 (rate 1/3) and q = 4 (rate 1/4),
and the results are shown in Figure 5. We see there, for example, that the empirical
cutoff threshold for RA codes for q = 3 appears to be less than 1 dB, compared to the
upper bound of 1.112 dB found in Table 1.
`
References.

1. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting
   coding and decoding: turbo codes," Proc. 1993 IEEE International Conference on
   Communications, Geneva, Switzerland (May 1993), pp. 1064-1070.
2. S. Benedetto and G. Montorsi, "Unveiling turbo codes: some results on parallel
   concatenated coding schemes," IEEE Trans. on Inf. Theory, vol. 42, no. 2 (March
   1996), pp. 409-428.
3. S. Benedetto and G. Montorsi, "Design of parallel concatenated convolutional
   codes," IEEE Trans. on Communications, vol. 44, no. 5 (May 1996), pp. 591-600.
4. S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of
   interleaved codes: performance analysis, design, and iterative decoding," IEEE
   Trans. on Information Theory, vol. 44, no. 3 (May 1998), pp. 909-926.
`