`
THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL AND COMPUTING
`
September 23 - 25, 1998
`
`Allerton House, Monticello, Illinois
`Sponsored by the
`Coordinated Science Laboratory and the
`Department of Electrical and Computer Engineering of the
`University of Illinois at Urbana-Champaign
`
Apple 1203
`
`
`
`PROCEEDINGS
`
`THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL, AND COMPUTING
`
`Tamer Basar
`Bruce Hajek
Conference Co-Chairs
`
`Conference held
September 23 - 25, 1998
`Allerton House
`Monticello, Illinois
`
`Sponsored by
`The Coordinated Science Laboratory
The Department of Electrical and Computer Engineering
UNIVERSITY OF ILLINOIS
Urbana-Champaign
`
`Hughes, Exh. 1011, p. 2
`
`
`
`
Coding Theorems for "Turbo-Like" Codes*
`
`Dariush Divsalar, Hui Jin, and Robert J. McEliece
`Jet Propulsion Laboratory and California Institute of Technology
`Pasadena, California USA
E-mail: dariush@shannon.jpl.nasa.gov,
{hui, rjm}@systems.caltech.edu
`
`Abstract.
`
In this paper we discuss AWGN coding theorems for ensembles of coding systems which
are built from fixed convolutional codes interconnected with random interleavers. We
call these systems "turbo-like" codes, and they include as special cases both the classical
turbo codes [1,2,3] and the serial concatenation of interleaved convolutional codes [4].
We offer a general conjecture about the behavior of the ensemble (maximum-likelihood
decoder) word error probability as the word length approaches infinity. We prove this
conjecture for a simple class of rate 1/q serially concatenated codes where the outer
code is a q-fold repetition code and the inner code is a rate 1 convolutional code with
transfer function 1/(1 + D). We believe this represents the first rigorous proof of a
coding theorem for turbo-like codes.
`
`1. Introduction.
`
The 1993 discovery of turbo codes by Berrou, Glavieux, and Thitimajshima [1] has
revolutionized the field of error-correcting codes. In brief, turbo codes have enough
randomness to achieve reliable communication at data rates near capacity, yet enough
structure to allow practical encoding and decoding algorithms. This paper is an attempt
to illuminate the first of these two attributes, i.e., the "near Shannon limit" capabilities
of turbo-like codes on the AWGN channel.
`
Our specific goal is to prove AWGN coding theorems for a class of generalized
concatenated convolutional coding systems with interleavers, which we call "turbo-like"
codes. This class includes both parallel concatenated convolutional codes (classical
turbo codes) [1, 2, 3] and serial concatenated convolutional codes [4] as special cases.
Beginning with a code structure of this type, with fixed component codes and inter-
connection topology, we attempt to show that as the block length approaches infinity,
the ensemble (over all possible interleavers) maximum-likelihood error probability ap-
proaches zero if $E_b/N_0$ exceeds some threshold. Our proof technique is to derive an
explicit expression for the ensemble input-output weight enumerator (IOWE) and then
to use this expression, in combination with either the classical union bound or the
recent "improved" union bound of Viterbi and Viterbi [9], to show that the maximum-
likelihood word error probability approaches zero as $N \to \infty$. Unfortunately the diffi-
culty of the first step, i.e., the computation of the ensemble IOWE, has kept us from
full success, except for some very simple coding systems, which we call repeat and ac-
cumulate codes. Still, we are optimistic that this technique will yield coding theorems
for a much wider class of interleaved concatenated codes. In any case, it is satisfying to
have rigorously proved coding theorems for even a restricted class of turbo-like codes.
`Here is an outline of the paper. In Section 2 we quickly review the classical union
`bound on maximum—likelihood word error probability for block codes on the AWGN
`
* Dariush Divsalar's work, and a portion of Robert McEliece's work, was performed
at JPL under contract with NASA. The remainder of McEliece's work, and Hui Jin's
work, was performed at Caltech and supported by NSF grant no. NCR-9505975,
AFOSR grant no. F49620-97-1-0313, and a grant from Qualcomm.
`
`201
`
`Hughes, Exh. 1011, p. 3
`
`
`
channel, which is seen to depend on the code's weight enumerator. In Section 3 we
define the class of "turbo-like" codes, and give a formula for the average input-output
weight enumerator for such a code. In Section 4 we state a conjecture (the interleaver
gain exponent conjecture) about the ML decoder performance of turbo-like codes. In
Section 5, we define a special class of turbo-like codes, the repeat-and-accumulate codes,
and prove the IGE conjecture for them. Finally, in Section 6 we present performance
curves for some RA codes, using an iterative, turbo-like, decoding algorithm. This
performance is seen to be remarkably good, despite the simplicity of the codes and the
suboptimality of the decoding algorithm.
`
`2. Union Bounds on the Performance of Block Codes.
`
In this section we will review the classical union bound on the maximum-likelihood
word error probability for block codes.

Consider a binary linear (n, k) block code C with code rate $r = k/n$. The (output)
weight enumerator for C is the sequence of numbers $A_0, \ldots, A_n$, where $A_h$ de-
notes the number of codewords in C with (output) weight h. The input-output weight
enumerator (IOWE) for C is the array of numbers $A_{w,h}$, $w = 0, 1, \ldots, k$, $h = 0, 1, \ldots, n$:
$A_{w,h}$ denotes the number of codewords in C with input weight w and output weight h.
The union bound on the word error probability $P_W$ of the code C over a memoryless
binary-input channel, using maximum-likelihood decoding, has the well-known form
(2.1)   $P_W \le \sum_{h=1}^{n} A_h z^h$

(2.2)   $\phantom{P_W} = \sum_{h=1}^{n} \left( \sum_{w=1}^{k} A_{w,h} \right) z^h.$
`
In (2.1) and (2.2), the function $z^h$ represents an upper bound on the pairwise error
probability for two codewords separated by Hamming (output) distance h. For AWGN
channels, $z = e^{-rE_b/N_0}$, where $E_b/N_0$ is the signal-to-noise ratio per bit.
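The bound (2.1) is easy to evaluate numerically once a weight enumerator is known. As an illustrative sketch (the (7,4) Hamming code and its weight enumerator are a standard textbook example, not taken from this paper):

```python
import math

def union_bound(weight_enum, rate, ebno_db):
    """Classical union bound (2.1): P_W <= sum_{h>=1} A_h * z^h,
    with z = exp(-r * Eb/N0) for the AWGN channel."""
    ebno = 10 ** (ebno_db / 10)      # Eb/N0 in linear units
    z = math.exp(-rate * ebno)       # pairwise-error-probability bound parameter
    return sum(a * z ** h for h, a in weight_enum.items() if h >= 1)

# Weight enumerator of the (7,4) Hamming code: A_0=1, A_3=7, A_4=7, A_7=1.
hamming = {0: 1, 3: 7, 4: 7, 7: 1}
bound_6db = union_bound(hamming, rate=4 / 7, ebno_db=6.0)
```

As expected, the bound shrinks as the signal-to-noise ratio per bit grows, since $z < 1$ decreases with $E_b/N_0$.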
`
`3. The Class of “Turbo-Like” Codes.
`
In this section, we consider a general class of concatenated coding systems of the type
depicted in Figure 1, with q encoders (circles) and q - 1 interleavers (boxes). The
ith code $C_i$ is an $(n_i, N_i)$ linear block code, and the ith encoder is preceded by an
interleaver (permuter) $P_i$ of size $N_i$, except $C_1$, which is not preceded by an interleaver
but rather is connected to the input. The overall structure must have no loops, i.e., it
must be a graph-theoretic tree. We call a code of this type a "turbo-like" code.

Define $S_q = \{1, 2, \ldots, q\}$ and subsets of $S_q$ by $S_I = \{i \in S_q : C_i$ connected to input$\}$,
$S_O = \{i \in S_q : C_i$ connected to output$\}$, and its complement $\bar{S}_O$. The overall system
depicted in Figure 1 is then an encoder for an (n, N) block code with $n = \sum_{i \in S_O} n_i$.

If we know the IOWEs $A^{(i)}_{w_i,h_i}$ for the constituent codes $C_i$, we can calculate
the average IOWE $\bar{A}_{w,h}$ for the overall system (averaged over the set of all possible
interleavers), using the uniform interleaver technique [2]. (A uniform interleaver is
defined as a probabilistic device that maps a given input word of weight w into all
$\binom{N}{w}$ distinct permutations of it with equal probability $p = 1/\binom{N}{w}$.) The result is
`
(3.1)   $\bar{A}_{w,h} = \sum_{h_i,\, i \in \bar{S}_O} \;\sum_{\substack{h_i,\, i \in S_O \\ \sum_{i \in S_O} h_i = h}} A^{(1)}_{w,h_1} \prod_{i=2}^{q} \frac{A^{(i)}_{w_i,h_i}}{\binom{N_i}{w_i}}.$
`
`202
`
`Hughes, Exh. 1011, p. 4
`
`
`
In (3.1) we have $w_i = w$ if $i \in S_I$, and $w_i = h_j$ if $C_i$ is preceded by $C_j$ (see Figure 2).
We do not give a proof of formula (3.1), but it is intuitively plausible if we note that
the term $A^{(i)}_{w_i,h_i}/\binom{N_i}{w_i}$ is the probability that a random input word to $C_i$ of weight $w_i$
will produce an output word of weight $h_i$.
For example, for the $(n_2 + n_3 + n_4, N)$ encoder of Figure 1, the formula (3.1) becomes

$\bar{A}_{w,h} = \sum_{\substack{h_1,h_2,h_3,h_4 \\ h_2+h_3+h_4 = h}} A^{(1)}_{w,h_1}\, \frac{A^{(2)}_{w_2,h_2}}{\binom{N_2}{w_2}}\, \frac{A^{(3)}_{w_3,h_3}}{\binom{N_3}{w_3}}\, \frac{A^{(4)}_{w_4,h_4}}{\binom{N_4}{w_4}} = \sum_{\substack{h_1,h_2,h_3,h_4 \\ h_2+h_3+h_4 = h}} A^{(1)}_{w,h_1}\, \frac{A^{(2)}_{w,h_2}}{\binom{N_2}{w}}\, \frac{A^{(3)}_{h_1,h_3}}{\binom{N_3}{h_1}}\, \frac{A^{(4)}_{h_1,h_4}}{\binom{N_4}{h_1}}.$
`
Figure 1. A "turbo-like" code with
$S_I = \{1,2\}$, $S_O = \{2,3,4\}$, $\bar{S}_O = \{1\}$.
`
`
Figure 2. $C_i$ (an $(n_i, N_i)$ encoder) is connected to $C_j$
(an $(n_j, N_j)$ encoder) by an interleaver of size $N_j$. We
have the "boundary conditions" $N_j = n_i$ and $w_j = h_i$.
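For a small serial pair, formula (3.1) can be checked directly against an average over every possible interleaver. The sketch below is our own illustration (not from the paper): the outer code is a 3-fold repetition code on N = 2 bits and the inner code is a running mod-2 sum, so the only nonzero outer weight transition is w to h1 = qw, and the averaged IOWE reduces to one term of (3.1).

```python
import itertools
import math
from collections import Counter
from fractions import Fraction

def accumulate(bits):
    """Running mod-2 sum: y_i = x_1 + ... + x_i (mod 2)."""
    out, acc = [], 0
    for b in bits:
        acc ^= b
        out.append(acc)
    return out

def iowe(encoder, k):
    """Exact IOWE of an encoder with k input bits, by exhaustive enumeration."""
    A = Counter()
    for u in itertools.product([0, 1], repeat=k):
        A[(sum(u), sum(encoder(list(u))))] += 1
    return A

N, q = 2, 3          # a tiny (6, 2) serial example
n = q * N

# Formula (3.1) specialized to this pair: the repetition code sends weight w
# to weight h1 = q*w, so Abar_{w,h} = C(N,w) * A_inner[(q*w, h)] / C(n, q*w).
A_inner = iowe(accumulate, n)
formula = Counter()
for w in range(N + 1):
    for h in range(n + 1):
        formula[(w, h)] = Fraction(math.comb(N, w) * A_inner[(q * w, h)],
                                   math.comb(n, q * w))

# Direct average over ALL n! interleavers (feasible only for tiny n).
brute = Counter()
for perm in itertools.permutations(range(n)):
    for u in itertools.product([0, 1], repeat=N):
        rep = [b for b in u for _ in range(q)]     # q-fold repetition
        x = [rep[p] for p in perm]                 # interleave
        brute[(sum(u), sum(accumulate(x)))] += Fraction(1, math.factorial(n))
```

The two tables agree exactly, because permuting a fixed weight-$h_1$ word by a uniformly random permutation yields a uniform distribution over all $\binom{n}{h_1}$ words of that weight, which is precisely the uniform interleaver assumption.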
`
`4. The Interleaving Gain Exponent Conjecture.
`
`In this section we will consider systems of the form depicted in Figure 1, in which
`the individual encoders are truncated convolutional encoders, and study the behavior
`of the average ML decoder error probability as the input block length N approaches
`
`203
`
`Hughes, Exh. 1011, p. 5
`
`
`
infinity. If $\bar{A}^{(N)}_{w,h}$ denotes the IOWE when the input block has length N, we introduce
the following notation for the union bound (2.2) for systems of this type:

(4.1)   $P_W^{UB} \triangleq \sum_{h=1}^{n} \left( \sum_{w=1}^{N} \bar{A}^{(N)}_{w,h} \right) z^h.$
`
Next we define, for each fixed $w \ge 1$ and $h \ge 1$,

(4.2)   $\alpha(w,h) = \limsup_{N \to \infty} \log_N \bar{A}^{(N)}_{w,h}.$

It follows from this definition that if w and h are fixed,

$\bar{A}^{(N)}_{w,h} z^h = O(N^{\alpha(w,h)+\epsilon})$   as $N \to \infty$,

for any $\epsilon > 0$. Thus if we define

(4.3)   $\beta_M = \max_{w \ge 1,\, h \ge 1} \alpha(w,h),$

it follows that for all w and h,

$\bar{A}^{(N)}_{w,h} z^h = O(N^{\beta_M+\epsilon})$   as $N \to \infty$,

for any $\epsilon > 0$. The parameter $\beta_M$, which we shall call the interleaving gain exponent
(IGE), was first introduced in [2] and [3] for parallel concatenation and later in [4] for
serial concatenation. Extensive numerical simulations, and theoretical considerations
that are not fully rigorous, lead to the following conjecture about the behavior of the
union bound for systems of the type shown in Figure 1.
`
The IGE Conjecture. There exists a positive number $\gamma_0$, which depends on the q
component convolutional codes and the tree structure of the overall system, but not
on N, such that for any fixed $E_b/N_0 > \gamma_0$, as the block length N becomes large,

(4.4)   $P_W^{UB} = O(N^{\beta_M}).$
`
Eq. (4.4) implies that if $\beta_M < 0$, then for a given $E_b/N_0 > \gamma_0$ the word error prob-
ability of the concatenated code decreases to zero as the input block size is increased.
This is summarized by saying that there is word error probability interleaving gain.¹

In [7], we discuss the calculation of $\alpha(w,h)$ and $\beta_M$ for a concatenated system of
the type depicted in Figure 1, using analytical tools introduced in [3] and [4]. For
example, for the parallel concatenation of q codes, with q - 1 interleavers, we have

$\beta_M \ge 2 - q,$

with equality if and only if each of the component codes is recursive. For a "classical"
turbo code with q = 2, we have $\beta_M = 0$, so there is no word error probability inter-
leaving gain. This suggests that the word error probability for classic turbo codes will
not improve with input block size, which is in agreement with simulations.
`
¹ There is a similar conjecture for the bit error probability which we do not discuss in
this paper. Suffice it to say that the interleaving gain exponent for bit error probability
is $\beta_M - 1$.
`
`204
`
`Hughes, Exh. 1011, p. 6
`
`
`
As another example, consider the serial concatenation of two convolutional codes.
If the inner code is recursive, then

$\beta_M \le -\left\lfloor \frac{d_f^{o}+1}{2} \right\rfloor + 1,$

where $d_f^{o}$ is the minimum distance of the outer code. Therefore, for serially concate-
nated codes, if $d_f^{o} \ge 3$ there is interleaving gain for word error probability. (If the inner
code is nonrecursive, $\beta_M \ge 0$ and there is no interleaving gain.)
`
`5. A Class of Simple Turbo-Like Codes.
`
In this section we will introduce a class of turbo-like codes which are simple enough
that we can prove the IGE conjecture. We call these codes repeat and accumulate
(RA) codes. The general idea is shown in Figure 3. An information block of length
N is repeated q times, scrambled by an interleaver of size qN, and then encoded by
a rate 1 accumulator. The accumulator can be viewed as a truncated rate-1 recursive
convolutional encoder with transfer function 1/(1 + D), but we prefer to think of it as
a block code whose input block $[x_1, \ldots, x_n]$ and output block $[y_1, \ldots, y_n]$ are related
by the formula

(5.1)   $y_1 = x_1, \quad y_2 = x_1 + x_2, \quad y_3 = x_1 + x_2 + x_3, \quad \ldots, \quad y_n = x_1 + x_2 + x_3 + \cdots + x_n,$

where the additions are modulo 2.
`
`
`
[Figure 3: block diagram of the encoder — a length-N input block passes through a rate-1/q repetition code, a qN × qN permutation matrix, and the accumulator.]

Figure 3. Encoder for a (qN, N) repeat and accumulate
code. The numbers above the input-output lines
indicate the length of the corresponding block, and
those below the lines indicate the weight of the block.
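As a concrete sketch of the encoder of Figure 3 (the function names and the randomly drawn interleaver are our own illustration, not from the paper):

```python
import random

def ra_encode(info_bits, q, perm):
    """Repeat-accumulate encoding: repeat each bit q times, apply the
    size-qN interleaver perm, then the running mod-2 sum of (5.1)."""
    rep = [b for b in info_bits for _ in range(q)]   # rate-1/q repetition
    x = [rep[p] for p in perm]                       # interleaver (permutation)
    out, acc = [], 0
    for b in x:                                      # y_i = x_1 + ... + x_i mod 2
        acc ^= b
        out.append(acc)
    return out

# A (12, 3) RA code: q = 4, N = 3, with one randomly drawn interleaver.
N, q = 3, 4
perm = random.Random(0).sample(range(q * N), q * N)
codeword = ra_encode([1, 0, 1], q, perm)
```

Since repetition, permutation, and accumulation are all linear over GF(2), the resulting (qN, N) code is linear, which is what lets the IOWE machinery of Sections 2-4 apply.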
`
To apply the union bound from Section 2 to the class of RA codes, we need the
input-output weight enumerators for both the (qn, n) repetition code and the (n, n)
accumulator code. The outer repetition code is trivial: if the input block has length n,
we have

(5.2)   $A^{(o)}_{w,h} = \begin{cases} \binom{n}{w} & \text{if } h = qw \\ 0 & \text{if } h \ne qw. \end{cases}$
`
`.‘
`3
`
`'5‘
`-1
`i._:,
`
`
`
`,m._E)f;!fi
`
The inner accumulator code is less trivial, but it is possible to show that (again assuming
the input block has length n):

(5.3)   $A^{(i)}_{w,h} = \binom{n-h}{\lfloor w/2 \rfloor} \binom{h-1}{\lceil w/2 \rceil - 1}.$

It follows then from the general formula (3.1) that for the (qN, N) RA code represented
by Figure 3, the ensemble IOWE is

(5.4)   $\bar{A}_{w,h} = \sum_{h_1=0}^{qN} \frac{A^{(o)}_{w,h_1}\, A^{(i)}_{h_1,h}}{\binom{qN}{h_1}} = \frac{\binom{N}{w} \binom{qN-h}{\lfloor qw/2 \rfloor} \binom{h-1}{\lceil qw/2 \rceil - 1}}{\binom{qN}{qw}}.$
From (5.4) it is easy to compute the parameters $\alpha(w,h)$ and $\beta_M$ in (4.2) and (4.3).
The result is

(5.5)   $\alpha(w,h) = -\left\lceil \frac{(q-2)w}{2} \right\rceil,$

(5.6)   $\beta_M = -\left\lceil \frac{q-2}{2} \right\rceil.$
`
It follows from (5.6) that an RA code can have word error probability interleaving gain
only if $q \ge 3$.

We are now prepared to use the union bound to prove the IGE conjecture for RA
codes. In order to simplify the exposition as much as possible, we will assume for the
rest of this section that q = 4, the extension to arbitrary $q \ge 3$ being straightforward
but rather lengthy. For q = 4, (5.6) becomes $\beta_M = -1$, so the IGE conjecture is
$P_W^{UB} = O(N^{-1})$ for $E_b/N_0 > \gamma_0$ in this instance.
The union bound (2.2) for the ensemble of q = 4 RA codes is, because of (5.4),

(5.7)   $P^{UB} = \sum_{h=2}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} \frac{\binom{N}{w} \binom{4N-h}{2w} \binom{h-1}{2w-1}}{\binom{4N}{4w}}\, z^h.$

Denote the (w, h)th term in the sum (5.7) by $T_N(w, h)$:

$T_N(w,h) = \frac{\binom{N}{w} \binom{4N-h}{2w} \binom{h-1}{2w-1}}{\binom{4N}{4w}}\, z^h.$
`
Using standard techniques (e.g., [8, Appendix A]), it is possible to show that for all
(w, h),

(5.8)   $T_N(w,h) \le D\, 2^{h[F(x,y)/y + \log_2 z]},$

where $D = 4\sqrt{5/\pi}$ is a constant, $x = w/4N$, $y = h/4N$, and

$F(x,y) = -\tfrac{3}{4} H_2(4x) + (1-y)\, H_2\!\left(\tfrac{2x}{1-y}\right) + y\, H_2\!\left(\tfrac{2x}{y}\right)$
`
`206
`
`Hughes, Exh. 1011, p. 8
`
`
`
with $H_2(x) = -x \log_2(x) - (1-x)\log_2(1-x)$ the binary entropy function. The
maximum of the function $F(x,y)/y$ in the range $0 \le 2x \le y \le 1-2x$ occurs at $(x, y) =$
$(0.100, 0.371)$ and is 0.562281, so that if $\log_2 z < -0.562281$, the exponent in (5.8) will
be negative.

Let us therefore assume that $\log_2 z < -0.562281$, which is equivalent to $E_b/N_0 =$
$-(1/r)\ln z = -4\ln z > 4 \ln 2 \cdot 0.562281 = 1.559$, i.e., 1.928 dB. If E is defined to be
$E = -(\log_2 z + 0.562281)$, it follows from (5.8) that for all w and h,

(5.9)   $T_N(w,h) \le D\, 2^{-hE}.$
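The constants quoted above are easy to reproduce numerically. This grid search (our own sanity check; the grid resolution is arbitrary) maximizes $F(x,y)/y$ over the stated region and converts the result into the $E_b/N_0$ threshold in dB:

```python
import math

def H2(p):
    """Binary entropy in bits, with H2(0) = H2(1) = 0."""
    if p <= 0 or p >= 1:
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def F(x, y):
    """The exponent function appearing in (5.8), for q = 4 RA codes."""
    return (-0.75 * H2(4 * x)
            + (1 - y) * H2(2 * x / (1 - y))
            + y * H2(2 * x / y))

# Grid search for the maximum of F(x, y)/y over 0 < 2x <= y <= 1 - 2x.
best = -1.0
steps = 400
for i in range(1, steps):
    x = 0.25 * i / steps                 # so that 0 < 4x < 1
    y_lo, y_hi = 2 * x, 1 - 2 * x
    if y_lo >= y_hi:
        continue
    for j in range(steps + 1):
        y = y_lo + (y_hi - y_lo) * j / steps
        best = max(best, F(x, y) / y)

# Threshold: log2(z) < -best, i.e., Eb/N0 > 4 ln(2) * best (since r = 1/4).
gamma0_db = 10 * math.log10(4 * math.log(2) * best)
```

The grid maximum lands close to the quoted 0.562281, and the derived threshold close to 1.928 dB.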
`
What (5.9) tells us is that if $E_b/N_0 > 1.928$ dB, most of the terms in the union bound
(5.7) will tend to zero rapidly as $N \to \infty$. The next step in the proof is to break the
sum in (5.7) into two parts, corresponding to those terms for which (5.9) is helpful,
and those for which it is not. To this end, define

$h_N \triangleq \frac{3}{E} \log_2 N,$

and write

$P^{UB} = \sum_{h=2}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h) = \sum_{h=2}^{h_N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h) + \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h) = S_1 + S_2.$
`
It is easy to verify that when N is large enough, $\bar{A}_{w+1,h}/\bar{A}_{w,h} < 1$ for $h \le h_N$ and
$w \le h/2 \le h_N/2$, which shows that $\bar{A}_{w,h}$ is a decreasing function of w for large N. Thus
the sum $S_1$ can be overbounded as follows (we omit some details):

$S_1 = \sum_{h=2}^{h_N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)$

$\phantom{S_1} = \sum_{h=2}^{h_N} T_N(1,h) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} T_N(w,h)$

$\phantom{S_1} = O(N^{-1}) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} \bar{A}_{w,h}\, z^h$

$\phantom{S_1} \le O(N^{-1}) + \sum_{h=2}^{h_N} \sum_{w=2}^{\lfloor h/2 \rfloor} O(h^3/N^2)\, z^h$

$\phantom{S_1} = O(N^{-1}) + O(h_N^5/N^2)$

$\phantom{S_1} = O(N^{-1}).$
`
`207
`
`Hughes, Exh. 1011, p. 9
`
`
`
For the sum $S_2$, we bound each term $T_N(w, h)$ by (5.9):

$S_2 = \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)$

$\phantom{S_2} \le \sum_{h=h_N+1}^{4N} \sum_{w=1}^{\lfloor h/2 \rfloor} D\, 2^{-hE}$

$\phantom{S_2} \le \frac{D}{2} \sum_{h=h_N+1}^{4N} h\, 2^{-hE}$

$\phantom{S_2} \le D\, \frac{(h_N+1)\, 2^{-Eh_N}}{(1-2^{-E})^2}$

$\phantom{S_2} = O(N^{-3}\log_2 N)$

$\phantom{S_2} = O(N^{-2}).$
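The penultimate inequality is a standard geometric-series tail estimate, $\sum_{h \ge m} h x^h = x^m\,(m(1-x)+x)/(1-x)^2 \le (m+1)\,x^m/(1-x)^2$ for $0 < x < 1$, applied with $x = 2^{-E}$. A quick numerical sanity check of that estimate (the values x = 0.7, m = 10 are arbitrary, chosen only for illustration):

```python
def tail_partial(x, m, terms):
    """Partial sum of sum_{h=m}^{m+terms} h * x**h."""
    return sum(h * x ** h for h in range(m, m + terms + 1))

x, m = 0.7, 10
# Exact closed form of the infinite tail sum_{h>=m} h * x**h ...
closed_form = x ** m * (m * (1 - x) + x) / (1 - x) ** 2
# ... and the simpler upper bound used for S_2.
upper = (m + 1) * x ** m / (1 - x) ** 2
approx = tail_partial(x, m, 2000)
```

Since $2^{-Eh_N} = N^{-3}$ by the choice of $h_N$, the estimate yields the $O(N^{-3}\log_2 N)$ rate claimed above.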
`
We have therefore shown that for the ensemble of q = 4 RA codes, if $E_b/N_0 >$
1.928 dB,

(5.10)   $P_W^{UB} = S_1 + S_2 = O(N^{-1}) + O(N^{-2}) = O(N^{-1}),$

which, as we saw above, is the IGE conjecture in this case.
Although the union bound gives a proof of the IGE conjecture for RA codes, the
resulting value of $\gamma_0$ is by no means the best possible. Indeed, if we use the recent
Viterbi-Viterbi improved union bound [9] to bound the sum $S_2$, we can lower the value
of $\gamma_0$ considerably, e.g., for q = 4 from 1.928 dB to 0.313 dB. In Figure 4 and Table 1 we
display our numerical results on RA codes. There we compare the "cutoff threshold"
$\gamma_0$ for RA codes with q in the range $3 \le q \le 8$, using both the classical union bound
and the Viterbi-Viterbi improved union bound, to the cutoff threshold for the ensemble
of all codes (i.e., "random codes") of a fixed rate. We believe that these values of
$\gamma_0$ can be reduced still further, for example by using the bound of [6] instead of the
Viterbi-Viterbi bound.
`
q                                  3       4       5       6       7       8

RA Codes (Union Bound)         2.200   1.928   1.798   1.721   1.670   1.631
Random Codes (Union Bound)     2.031   1.853   1.775   1.694   1.651   1.620
RA Codes (Viterbi Bound)       1.112   0.313  -0.125  -0.402  -0.592  -0.731
Random Codes (Viterbi Bound)   0.214  -0.224  -0.486  -0.662  -0.789  -0.885
Binary Shannon Limit          -0.495  -0.794  -0.963  -1.071  -1.150  -1.210

Table 1. Numerical data (thresholds in dB) gleaned from Figure 4.
`
`6. Performance of RA Codes with Iterative Decoding.
`
The results of this paper show that the performance of RA codes with maximum-
likelihood decoding is very good. However, the complexity of ML decoding of RA
`
`208
`
`Hughes, Exh. 1011, p. 10
`
`
`
[Figure 4: plot of cutoff threshold (dB) versus code rate R (0 to 0.9), with curves for the union bound and the Viterbi bound on random codes, the union bound and the Viterbi bound on RA codes, and the binary-input Shannon limit.]
`
`Figure 4. Comparing the RA code “cutoff threshold” to
`the cutoff rate of random codes using both the classical
`union bound and the Viterbi-Viterbi improved union bound.
`
codes, like that of all turbo-like codes, is prohibitively large. But an important feature
of turbo-like codes is the availability of a simple iterative, message-passing decoding
algorithm that approximates ML decoding. We wrote a computer program to imple-
ment this "turbo-like" decoding for RA codes with q = 3 (rate 1/3) and q = 4 (rate
1/4), and the results are shown in Figure 5. We see in Figure 4, for example, that
the empirical cutoff threshold for RA codes for q = 3 appears to be less than 1 dB,
compared to the upper bound of 1.112 dB found in Table 1.
`
References.

1. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-
correcting coding and decoding: turbo codes," Proc. 1993 IEEE International
Conference on Communications, Geneva, Switzerland (May 1993), pp. 1064-1070.
2. S. Benedetto and G. Montorsi, "Unveiling turbo codes: some results on parallel
concatenated coding schemes," IEEE Trans. on Inf. Theory, vol. 42, no. 2 (March
1996), pp. 409-428.
3. S. Benedetto and G. Montorsi, "Design of parallel concatenated convolutional
codes," IEEE Transactions on Communications, vol. 44, no. 5 (May 1996), pp. 591-
600.
4. S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation
of interleaved codes: performance analysis, design, and iterative decoding," IEEE
Trans. on Information Theory, vol. 44, no. 3 (May 1998), pp. 909-926.
`
`209
`
`Hughes, Exh. 1011, p. 11