
Proceedings

THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL AND COMPUTING

September 23 - 25, 1998

Allerton House, Monticello, Illinois

Sponsored by the
Coordinated Science Laboratory and the
Department of Electrical and Computer Engineering of the
University of Illinois at Urbana-Champaign
PROCEEDINGS

THIRTY-SIXTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL, AND COMPUTING

Tamer Basar
Bruce Hajek
Conference Co-Chairs

Conference held
September 23 - 25, 1998
Allerton House
Monticello, Illinois

Sponsored by
The Coordinated Science Laboratory
The Department of Electrical and Computer Engineering
UNIVERSITY OF ILLINOIS
Urbana-Champaign
Coding Theorems for "Turbo-Like" Codes*

Dariush Divsalar, Hui Jin, and Robert J. McEliece
Jet Propulsion Laboratory and California Institute of Technology
Pasadena, California, USA

E-mail: dariush@shannon.jpl.nasa.gov, {hui, rjm}@systems.caltech.edu
Abstract.

In this paper we discuss AWGN coding theorems for ensembles of coding systems which are built from fixed convolutional codes interconnected with random interleavers. We call these systems "turbo-like" codes, and they include as special cases both the classical turbo codes [1, 2, 3] and the serial concatenation of interleaved convolutional codes. We offer a general conjecture about the behavior of the ensemble (maximum-likelihood decoder) word error probability as the word length approaches infinity. We prove this conjecture for a simple class of rate 1/q serially concatenated codes where the outer code is a q-fold repetition code and the inner code is a rate 1 convolutional code with transfer function 1/(1+D). We believe this represents the first rigorous proof of a coding theorem for turbo-like codes.
1. Introduction.

The 1993 discovery of turbo codes by Berrou, Glavieux, and Thitimajshima [1] has revolutionized the field of error-correcting codes. In brief, turbo codes have enough randomness to achieve reliable communication at data rates near capacity, yet enough structure to allow practical encoding and decoding algorithms. This paper is an attempt to illuminate the first of these two attributes, i.e., the "near Shannon limit" capabilities of turbo-like codes on the AWGN channel.

Our specific goal is to prove AWGN coding theorems for a class of generalized concatenated convolutional coding systems with interleavers, which we call "turbo-like" codes. This class includes both parallel concatenated convolutional codes (classical turbo codes) [1, 2, 3] and serial concatenated convolutional codes [4] as special cases. Beginning with a code structure of this type, with fixed component codes and interconnection topology, we attempt to show that as the block length approaches infinity, the ensemble (over all possible interleavers) maximum-likelihood error probability approaches zero if $E_b/N_0$ exceeds some threshold. Our proof technique is to derive an explicit expression for the ensemble input-output weight enumerator (IOWE) and then to use this expression, in combination with either the classical union bound or the recent "improved" union bound of Viterbi and Viterbi [9], to show that the maximum-likelihood word error probability approaches zero as $N \to \infty$. Unfortunately the difficulty of the first step, i.e., the computation of the ensemble IOWE, has kept us from full success, except for some very simple coding systems, which we call repeat and accumulate codes. Still, we are optimistic that this technique will yield coding theorems for a much wider class of interleaved concatenated codes. In any case, it is satisfying to have rigorously proved coding theorems for even a restricted class of turbo-like codes.
Here is an outline of the paper. In Section 2 we quickly review the classical union bound on maximum-likelihood word error probability for block codes on the AWGN channel, which is seen to depend on the code's weight enumerator.
* Dariush Divsalar's work, and a portion of Robert McEliece's work, was performed at JPL under contract with NASA. The remainder of McEliece's work, and Hui Jin's work, was performed at Caltech and supported by NSF grant no. NCR-9505975, AFOSR grant no. 5F49620-97-1-0313, and a grant from Qualcomm.
In Section 3 we define the class of "turbo-like" codes, and give a formula for the average input-output weight enumerator for such a code. In Section 4 we state a conjecture (the interleaving gain exponent conjecture) about the ML decoder performance of turbo-like codes. In Section 5 we define a special class of turbo-like codes, the repeat-and-accumulate codes, and prove the IGE conjecture for them. Finally, in Section 6 we present performance curves for some RA codes, using an iterative, turbo-like decoding algorithm. This performance is seen to be remarkably good, despite the simplicity of the codes and the suboptimality of the decoding algorithm.
2. Union Bounds on the Performance of Block Codes.

In this section we will review the classical union bound on the maximum-likelihood word error probability for block codes.

Consider a binary linear $(n, k)$ block code $C$ with code rate $r = k/n$. The (output) weight enumerator (WE) for $C$ is the sequence of numbers $A_0, \ldots, A_n$, where $A_h$ denotes the number of codewords in $C$ with (output) weight $h$. The input-output weight enumerator (IOWE) for $C$ is the array of numbers $A_{w,h}$, $w = 0, 1, \ldots, k$, $h = 0, 1, \ldots, n$: $A_{w,h}$ denotes the number of codewords in $C$ with input weight $w$ and output weight $h$. The union bound on the word error probability $P_W$ of the code $C$ over a memoryless binary-input channel, using maximum-likelihood decoding, has the well-known form
(2.1)    $P_W \;\le\; \sum_{h=1}^{n} A_h\, z^h$

(2.2)    $\phantom{P_W} \;=\; \sum_{h=1}^{n} \Bigl( \sum_{w=1}^{k} A_{w,h} \Bigr) z^h.$

In (2.1) and (2.2), the function $z^h$ represents an upper bound on the pairwise error probability for two codewords separated by Hamming (output) distance $h$. For AWGN channels, $z = e^{-rE_b/N_0}$, where $E_b/N_0$ is the signal-to-noise ratio per bit.
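To make the bound concrete, here is a small illustrative sketch (an editorial addition, not code from the paper) that brute-forces the WE/IOWE of a toy linear code from a generator matrix and then evaluates (2.1)-(2.2) with $z = e^{-rE_b/N_0}$. The (7,4) Hamming generator matrix and the function names are hypothetical choices made only for the example.

import math
from itertools import product

G = [  # systematic generator matrix [I | P] of a (7,4) Hamming code
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
]
k, n = len(G), len(G[0])

def encode(msg):
    # multiply the message row vector by G over GF(2)
    return [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]

iowe = {}  # (input weight w, output weight h) -> A_{w,h}
for msg in product([0, 1], repeat=k):
    w, h = sum(msg), sum(encode(msg))
    iowe[(w, h)] = iowe.get((w, h), 0) + 1

def union_bound(EbN0_dB, r=k / n):
    z = math.exp(-r * 10 ** (EbN0_dB / 10))   # z = e^{-r Eb/N0}
    return sum(A * z ** h for (w, h), A in iowe.items() if h >= 1)

print(union_bound(3.0))   # upper bound on the ML word error probability at 3 dB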
3. The Class of "Turbo-Like" Codes.

In this section we consider a general class of concatenated coding systems of the type depicted in Figure 1, with $q$ encoders (circles) and $q-1$ interleavers (boxes). The $i$th code $C_i$ is an $(n_i, N_i)$ linear block code, and the $i$th encoder is preceded by an interleaver (permuter) $P_i$ of size $N_i$, except $C_1$, which is not preceded by an interleaver but rather is connected directly to the input. The overall structure must have no loops, i.e., it must be a graph-theoretic tree. We call a code of this type a "turbo-like" code.

Define $s_q = \{1, 2, \ldots, q\}$ and subsets of $s_q$ by $s_I = \{i \in s_q : C_i \text{ is connected to the input}\}$ and $s_O = \{i \in s_q : C_i \text{ is connected to the output}\}$, with complement $\bar{s}_O$. The overall system depicted in Figure 1 is then an encoder for an $(n, N)$ block code with $n = \sum_{i \in s_O} n_i$.

If we know the IOWEs $A^{(i)}_{w,h}$ for the constituent codes $C_i$, we can calculate the average IOWE $A_{w,h}$ for the overall system (averaged over the set of all possible interleavers), using the uniform interleaver technique. (A uniform interleaver of size $N$ is defined as a probabilistic device that maps a given input word of weight $w$ into all $\binom{N}{w}$ distinct permutations of it with equal probability $1/\binom{N}{w}$.) The result is
(3.1)    $A_{w,h} \;=\; \sum_{\substack{h_i,\; i \in \bar{s}_O;\ h_j,\; j \in s_O \\ \sum_{j \in s_O} h_j = h}} A^{(1)}_{w,h_1} \prod_{i=2}^{q} \frac{A^{(i)}_{w_i, h_i}}{\binom{N_i}{w_i}}.$
In (3.1) we have $w_i = w$ if $i \in s_I$, and $w_i = h_j$ if $C_i$ is preceded by $C_j$ (see Figure 2). We do not give a proof of formula (3.1), but it is intuitively plausible if we note that the term $A^{(i)}_{w_i,h_i}/\binom{N_i}{w_i}$ is the probability that a random input word to $C_i$ of weight $w_i$ will produce an output word of weight $h_i$.
For example, for the $(n_2+n_3+n_4, N)$ encoder of Figure 1, formula (3.1) becomes

$A_{w,h} \;=\; \sum_{\substack{h_1, h_2, h_3, h_4 \\ (h_2+h_3+h_4 = h)}} A^{(1)}_{w,h_1}\, \frac{A^{(2)}_{w,h_2}}{\binom{N_2}{w}}\, \frac{A^{(3)}_{h_1,h_3}}{\binom{N_3}{h_1}}\, \frac{A^{(4)}_{h_1,h_4}}{\binom{N_4}{h_1}}
\;=\; \sum_{\substack{h_1, h_2, h_3, h_4 \\ (h_2+h_3+h_4 = h)}} A^{(1)}_{w,h_1}\, \frac{A^{(2)}_{w,h_2}}{\binom{N}{w}}\, \frac{A^{(3)}_{h_1,h_3}}{\binom{n_1}{h_1}}\, \frac{A^{(4)}_{h_1,h_4}}{\binom{n_1}{h_1}}.$

[Figure 1: block diagram of a four-encoder "turbo-like" code. The length-$N$ input feeds $C_1$ directly and $C_2$ through an interleaver; the output of $C_1$ feeds $C_3$ and $C_4$ through interleavers; the outputs of $C_2$, $C_3$, $C_4$ form the codeword.]

Figure 1. A "turbo-like" code with $s_I = \{1,2\}$, $s_O = \{2,3,4\}$, $\bar{s}_O = \{1\}$.

[Figure 2: two constituent encoders joined by an interleaver.]

Figure 2. $C_i$ (an $(n_i, N_i)$ encoder) is connected to $C_j$ (an $(n_j, N_j)$ encoder) by an interleaver of size $N_j$. We have the "boundary conditions" $N_j = n_i$ and $w_j = h_i$.
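As an illustration of how (3.1) is used in practice (an editorial sketch, not code from the paper), the simplest case is a purely serial chain: an outer code, a uniform interleaver whose size equals the inner code's input length, and an inner code. The function name and the toy enumerators below are assumptions made for the example.

from math import comb

def serial_average_iowe(iowe_outer, iowe_inner, N_inner):
    """Average IOWE of outer -> uniform interleaver -> inner, per (3.1):
    A_{w,h} = sum_{h1} A^(o)_{w,h1} * A^(i)_{h1,h} / C(N_inner, h1)."""
    avg = {}
    for (w, h1), Ao in iowe_outer.items():
        for (g1, h), Ai in iowe_inner.items():
            if g1 == h1:
                avg[(w, h)] = avg.get((w, h), 0.0) + Ao * Ai / comb(N_inner, h1)
    return avg

# Toy check: a (2,1) repetition code followed by a 2-bit accumulator.
iowe_rep = {(0, 0): 1, (1, 2): 1}
iowe_acc = {(0, 0): 1, (1, 1): 1, (1, 2): 1, (2, 1): 1}
print(serial_average_iowe(iowe_rep, iowe_acc, N_inner=2))
# -> {(0, 0): 1.0, (1, 1): 1.0}: a weight-1 input always yields a weight-1 codeword here.

The repeat-and-accumulate ensemble of Section 5 is exactly this computation, with the enumerators (5.2) and (5.3) as the outer and inner inputs.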
4. The Interleaving Gain Exponent Conjecture.

In this section we will consider systems of the form depicted in Figure 1, in which the individual encoders are truncated convolutional encoders, and study the behavior of the average ML decoder error probability as the input block length $N$ approaches infinity. If $A^N_{w,h}$ denotes the IOWE when the input block has length $N$, we introduce the following notation for the union bound (2.2) for systems of this type:

(4.1)    $P_W^{UB} \;\stackrel{\rm def}{=}\; \sum_{h=1}^{n} \Bigl( \sum_{w=1}^{N} A^N_{w,h} \Bigr) z^h.$
Next we define, for each fixed $w \ge 1$ and $h \ge 1$,

(4.2)    $\alpha(w,h) \;=\; \limsup_{N \to \infty}\, \log_N A^N_{w,h}.$

It follows from this definition that if $w$ and $h$ are fixed,

$A^N_{w,h}\, z^h = O\bigl(N^{\alpha(w,h)+\epsilon}\bigr)$ as $N \to \infty$,

for any $\epsilon > 0$. Thus if we define

(4.3)    $\beta_M \;=\; \max_{w,h}\, \alpha(w,h),$

it follows that for all $w$ and $h$,

$A^N_{w,h}\, z^h = O\bigl(N^{\beta_M+\epsilon}\bigr)$ as $N \to \infty$,

for any $\epsilon > 0$. The parameter $\beta_M$, which we shall call the interleaving gain exponent (IGE), was first introduced in [2] and [3] for parallel concatenation and later in [4] for serial concatenation. Extensive numerical simulations, and theoretical considerations that are not fully rigorous, lead to the following conjecture about the behavior of the union bound for systems of the type shown in Figure 1.
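Definitions (4.2) and (4.3) are easy to probe numerically for any family whose enumerator is available in closed form (an editorial sketch, not part of the paper; the function names are hypothetical). For instance, the $q$-fold repetition code alone has $A^N_{w,qw} = \binom{N}{w} \sim N^w/w!$, so the crude estimate $\log_N A^N_{w,h}$ should creep up toward $\alpha(w, qw) = w$:

from math import comb, log

def alpha_estimate(iowe_of_N, w, h, N=10**6):
    # crude stand-in for (4.2): log_N A^N_{w,h} evaluated at a single large N
    A = iowe_of_N(N, w, h)
    return log(A) / log(N) if A > 0 else float("-inf")

q = 4
def repetition_iowe(N, w, h):
    # (qN, N) repetition code: the only nonzero entries are at h = q*w
    return comb(N, w) if h == q * w else 0

for w in (1, 2, 3):
    print(w, alpha_estimate(repetition_iowe, w, q * w))   # slowly approaches w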
The IGE Conjecture. There exists a positive number $\gamma_0$, which depends on the $q$ component convolutional codes and the tree structure of the overall system, but not on $N$, such that for any fixed $E_b/N_0 > \gamma_0$, as the block length $N$ becomes large,

(4.4)    $P_W^{UB} = O\bigl(N^{\beta_M}\bigr).$

Eq. (4.4) implies that if $\beta_M < 0$, then for a given $E_b/N_0 > \gamma_0$ the word error probability of the concatenated code decreases to zero as the input block size is increased. This is summarized by saying that there is word error probability interleaving gain.(1)

In [7] we discuss the calculation of $\alpha(w,h)$ and $\beta_M$ for a concatenated system of the type depicted in Figure 1, using previously introduced analytical tools. For example, for the parallel concatenation of $q$ codes, with $q-1$ interleavers, we have

$\beta_M \;\le\; -q + 2,$

with equality if and only if each of the component codes is recursive. For a "classical" turbo code with $q = 2$, we have $\beta_M = 0$, so there is no word error probability interleaving gain. This suggests that the word error probability for classical turbo codes will not improve with input block size, which is in agreement with simulations.

(1) There is a similar conjecture for the bit error probability, which we do not discuss in this paper. Suffice it to say that the interleaving gain exponent for bit error probability is $\beta_M - 1$.
As another example, consider the serial concatenation of two convolutional codes. If the inner code is recursive, then

$\beta_M \;\le\; -\Bigl\lfloor \frac{d_m^{o} + 1}{2} \Bigr\rfloor + 1,$

where $d_m^{o}$ is the minimum distance of the outer code. Therefore, for serial concatenated codes, if $d_m^{o} \ge 3$ there is interleaving gain for word error probability. (If the inner code is nonrecursive, $\beta_M \ge 0$ and there is no interleaving gain.)
5. A Class of Simple Turbo-Like Codes.

In this section we will introduce a class of turbo-like codes which are simple enough so that we can prove the IGE conjecture. We call these codes repeat and accumulate (RA) codes. The general idea is shown in Figure 3. An information block of length $N$ is repeated $q$ times, scrambled by an interleaver of size $qN$, and then encoded by a rate 1 accumulator. The accumulator can be viewed as a truncated rate-1 recursive convolutional encoder with transfer function $1/(1+D)$, but we prefer to think of it as a block code whose input block $[x_1, \ldots, x_n]$ and output block $[y_1, \ldots, y_n]$ are related by the formula

(5.1)    $y_1 = x_1,\quad y_2 = x_1 + x_2,\quad y_3 = x_1 + x_2 + x_3,\quad \ldots,\quad y_n = x_1 + x_2 + \cdots + x_n.$
[Figure 3: block diagram. A length-$N$, weight-$w$ input enters a rate $1/q$ repetition block (output length $qN$, weight $qw$), then a $qN \times qN$ permutation matrix, then the $1/(1+D)$ accumulator.]

Figure 3. Encoder for a $(qN, N)$ repeat and accumulate code. The numbers above the input-output lines indicate the length of the corresponding block, and those below the lines indicate the weight of the block.
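The encoder of Figure 3 takes only a few lines of code. The following is an editorial sketch (not the authors' software); the fixed pseudorandom permutation and the function name ra_encode are assumptions made for the example.

import random

def ra_encode(info_bits, q, perm=None, seed=0):
    """Figure 3 encoder: repeat each bit q times, permute the qN bits,
    then pass them through the 1/(1+D) accumulator (a running mod-2 sum)."""
    N = len(info_bits)
    repeated = [b for b in info_bits for _ in range(q)]    # rate 1/q repetition
    if perm is None:
        rng = random.Random(seed)
        perm = list(range(q * N))
        rng.shuffle(perm)                                  # the qN x qN permutation
    permuted = [repeated[perm[i]] for i in range(q * N)]
    out, acc = [], 0
    for x in permuted:                                     # y_i = x_1 + ... + x_i (mod 2)
        acc ^= x
        out.append(acc)
    return out, perm

codeword, perm = ra_encode([1, 0, 1, 1], q=4)
print(len(codeword))   # 16 = qN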
To apply the union bound from Section 2 to the class of RA codes, we need the input-output weight enumerators for both the $(qn, n)$ repetition code and the $(n, n)$ accumulator code. The outer repetition code is trivial: if the input block has length $n$, we have

(5.2)    $A^{(o)}_{w,h} = \begin{cases} \binom{n}{w} & \text{if } h = qw, \\ 0 & \text{if } h \ne qw. \end{cases}$
The inner accumulator code is less trivial, but it is possible to show that (again assuming the input block has length $n$):

(5.3)    $A^{(i)}_{w,h} = \binom{n-h}{\lfloor w/2 \rfloor} \binom{h-1}{\lceil w/2 \rceil - 1}.$

It follows then from the general formula (3.1) that for the $(qN, N)$ RA code represented by Figure 3, the ensemble IOWE is

(5.4)    $A^N_{w,h} \;=\; \sum_{h_1=0}^{qN} \frac{A^{(o)}_{w,h_1}\, A^{(i)}_{h_1,h}}{\binom{qN}{h_1}} \;=\; \frac{\binom{N}{w}\, \binom{qN-h}{\lfloor qw/2 \rfloor}\, \binom{h-1}{\lceil qw/2 \rceil - 1}}{\binom{qN}{qw}}.$
From (5.4) it is easy to compute the parameters $\alpha(w,h)$ and $\beta_M$ in (4.2) and (4.3). The result is

(5.5)    $\alpha(w,h) = -\Bigl\lceil \frac{(q-2)w}{2} \Bigr\rceil,$

(5.6)    $\beta_M = -\Bigl\lceil \frac{q-2}{2} \Bigr\rceil.$

It follows from (5.6) that an RA code can have word error probability interleaving gain only if $q \ge 3$.
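For completeness, here is the exponent count behind (5.5) and (5.6), a short reasoning step added in this edit rather than taken verbatim from the paper. For fixed $w$ and $h$, as $N \to \infty$,

$\binom{N}{w} = \Theta(N^w), \qquad \binom{qN-h}{\lfloor qw/2 \rfloor} = \Theta\bigl(N^{\lfloor qw/2 \rfloor}\bigr), \qquad \binom{h-1}{\lceil qw/2 \rceil - 1} = \Theta(1), \qquad \binom{qN}{qw} = \Theta(N^{qw}),$

so (5.4) gives

$\alpha(w,h) \;=\; w + \Bigl\lfloor \frac{qw}{2} \Bigr\rfloor - qw \;=\; -\Bigl\lceil \frac{(q-2)w}{2} \Bigr\rceil,$

which is nonincreasing in $w$; the maximum over $w \ge 1$ is attained at $w = 1$, giving $\beta_M = -\lceil (q-2)/2 \rceil$, negative exactly when $q \ge 3$.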
We are now prepared to use the union bound to prove the IGE conjecture for RA codes. In order to simplify the exposition as much as possible, we will assume for the rest of this section that $q = 4$, the extension to arbitrary $q \ge 3$ being straightforward but rather lengthy. For $q = 4$, (5.6) becomes $\beta_M = -1$, so the IGE conjecture is $P_W^{UB} = O(N^{-1})$ for $E_b/N_0 > \gamma_0$ in this instance.
The union bound (2.2) for the ensemble of $q = 4$ RA codes is, because of (5.4),

(5.7)    $P_W^{UB} \;=\; \sum_{h=2}^{4N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} \frac{\binom{N}{w}\, \binom{4N-h}{2w}\, \binom{h-1}{2w-1}}{\binom{4N}{4w}}\, z^h.$

Denote the $(w,h)$th term in the sum (5.7) by $T_N(w,h)$:

$T_N(w,h) \;\stackrel{\rm def}{=}\; A^N_{w,h}\, z^h \;=\; \frac{\binom{N}{w}\, \binom{4N-h}{2w}\, \binom{h-1}{2w-1}}{\binom{4N}{4w}}\, z^h.$
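As a quick numerical sanity check on the claim about to be proved (an editorial sketch, not part of the paper), (5.7) can be evaluated exactly with integer binomial coefficients. At an $E_b/N_0$ above the 1.928 dB threshold derived below, the product $N \cdot P_W^{UB}$ should remain roughly bounded as $N$ grows, consistent with $O(N^{-1})$ decay. The function name and the 2.5 dB operating point are assumptions of the example.

from math import comb, exp

def ra4_union_bound(N, EbN0_dB):
    # Evaluate (5.7) for the q = 4 RA ensemble; z = exp(-r*Eb/N0) with r = 1/4.
    z = exp(-(10 ** (EbN0_dB / 10)) / 4.0)
    total = 0.0
    for h in range(2, 4 * N + 1):
        zh = z ** h
        for w in range(1, h // 2 + 1):
            total += comb(N, w) * comb(4 * N - h, 2 * w) * comb(h - 1, 2 * w - 1) \
                     / comb(4 * N, 4 * w) * zh
    return total

for N in (32, 64, 128):
    print(N, N * ra4_union_bound(N, 2.5))   # roughly constant => O(1/N) behaviour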
Using standard techniques (e.g., [8, Appendix A]), it is possible to show that for all $(w, h)$:

(5.8)    $T_N(w,h) \;\le\; D\, 2^{\,h\,[F(x,y) \,-\, \log_2 z^{-1}]},$

where $D = 4/\sqrt{\pi}$ is a constant, $x = w/4N$, $y = h/4N$,

$F(x,y) \;=\; \frac{1}{y}\Bigl[\, (1-y)\, H_2\!\Bigl(\frac{2x}{1-y}\Bigr) + y\, H_2\!\Bigl(\frac{2x}{y}\Bigr) - \frac{3}{4}\, H_2(4x) \Bigr],$
and $H_2(x) = -x \log_2(x) - (1-x)\log_2(1-x)$ is the binary entropy function. The maximum of the function $F(x,y)$ in the range $0 \le 2x \le y \le 1 - 2x$ occurs at $(x,y) = (0.100, 0.371)$ and is $0.562281$, so that if $\log_2 z < -0.562281$, the exponent in (5.8) will be negative.

Let us therefore assume that $\log_2 z < -0.562281$, which is equivalent to $E_b/N_0 = -(1/r)\ln z = -4\ln z > 4 \cdot \ln 2 \cdot 0.562281 = 1.559 = 1.928$ dB. If $\epsilon$ is defined to be $\epsilon = -\log_2 z - 0.562281$, it follows from (5.8) that for all $w$ and $h$,
(5.9)    $T_N(w,h) \;\le\; D\, 2^{-h\epsilon}.$

What (5.9) tells us is that if $E_b/N_0 > 1.928$ dB, most of the terms in the union bound (5.7) will tend to zero rapidly as $N \to \infty$. The next step in the proof is to break the sum in (5.7) into two parts, corresponding to those terms for which (5.9) is helpful, and those for which it is not. To this end, define

$h_N \;\stackrel{\rm def}{=}\; \frac{3}{\epsilon}\, \log_2 N,$

and write
$P_W^{UB} \;=\; \sum_{h=2}^{4N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)
\;=\; \sum_{h=2}^{h_N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h) \;+\; \sum_{h=h_N+1}^{4N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)
\;=\; S_1 + S_2.$
It is easy to verify that when $N$ is large enough, $A^N_{w+1,h}/A^N_{w,h} < 1$ for $h \le h_N$ and $w \le h/2 \le h_N/2$, which shows that $A^N_{w,h}$ is a decreasing function of $w$ for large $N$. Thus the sum $S_1$ can be overbounded as follows (we omit some details):

$S_1 \;=\; \sum_{h=2}^{h_N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)
\;=\; \sum_{h=2}^{h_N} T_N(1,h) \;+\; \sum_{h=2}^{h_N}\ \sum_{w=2}^{\lfloor h/2 \rfloor} T_N(w,h)
\;=\; O(N^{-1}) \;+\; \sum_{h=2}^{h_N}\ \sum_{w=2}^{\lfloor h/2 \rfloor} T_N(w,h)
\;\le\; O(N^{-1}) \;+\; \sum_{h=2}^{h_N}\ \sum_{w=2}^{\lfloor h/2 \rfloor} A^N_{2,h}\, z^h
\;=\; O(N^{-1}) \;+\; \sum_{h=2}^{h_N}\ \sum_{w=2}^{\lfloor h/2 \rfloor} O(h^3/N^2)\, z^h
\;=\; O(N^{-1}) \;+\; O(h_N^5/N^2)
\;=\; O(N^{-1}).$
For the sum $S_2$, we bound each term $T_N(w,h)$ by (5.9):

$S_2 \;=\; \sum_{h=h_N+1}^{4N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} T_N(w,h)
\;\le\; \sum_{h=h_N+1}^{4N}\ \sum_{w=1}^{\lfloor h/2 \rfloor} D\, 2^{-h\epsilon}
\;\le\; \frac{D}{2} \sum_{h=h_N+1}^{4N} h\, 2^{-h\epsilon}
\;=\; O\bigl(h_N\, 2^{-\epsilon h_N}\bigr)
\;=\; O\bigl(N^{-3} \log_2 N\bigr)
\;=\; O(N^{-2}).$
We have therefore shown that for the ensemble of $q = 4$ RA codes, if $E_b/N_0 > 1.928$ dB,

(5.10)    $P_W^{UB} \;=\; S_1 + S_2 \;=\; O(N^{-1}) + O(N^{-2}) \;=\; O(N^{-1}),$

which, as we saw above, is the IGE conjecture in this case.

Although the union bound gives a proof of the IGE conjecture for RA codes, the resulting value of $\gamma_0$ is by no means the best possible. Indeed, if we use the recent Viterbi-Viterbi improved union bound [9] to bound the sum $S_2$, we can lower the value of $\gamma_0$ considerably, e.g., for $q = 4$ from 1.928 dB to 0.313 dB. In Figure 4 and Table 1 we display our numerical results on RA codes. There we compare the "cutoff threshold" $\gamma_0$ for RA codes with $q$ in the range $3 \le q \le 8$, using both the classical union bound and the Viterbi-Viterbi improved union bound, to the cutoff threshold for the ensemble of all codes (i.e., "random codes") of a fixed rate. We believe that these values of $\gamma_0$ can be reduced still further, for example by using a still sharper bound in place of the Viterbi-Viterbi bound.
q                               3        4        5        6        7        8
RA Codes (Union Bound)          2.200    1.928    1.798    1.721    1.670    1.631
Random Codes (Union Bound)      2.031    1.853    1.775    1.694    1.651    1.620
RA Codes (Viterbi Bound)        1.112    0.313   -0.125   -0.402   -0.592   -0.731
Random Codes (Viterbi Bound)    0.214   -0.224   -0.486   -0.662   -0.789   -0.885
Binary Shannon Limit           -0.495   -0.794   -0.963   -1.071   -1.150   -1.210

Table 1. Numerical data gleaned from Figure 4. (Entries are E_b/N_0 thresholds in dB; column q corresponds to code rate 1/q.)
6. Performance of RA Codes with Iterative Decoding.

The results of this paper show that the performance of RA codes with maximum-likelihood decoding is very good. However, the complexity of ML decoding of RA codes, like that of all turbo-like codes, is prohibitively large.
[Figure 4: cutoff threshold (dB) plotted against code rate R; curves shown for the union bound and the Viterbi bound applied to random codes, the union bound and the Viterbi bound applied to RA codes, and the binary-input Shannon limit.]

Figure 4. Comparing the RA code "cutoff threshold" to the cutoff threshold of random codes, using both the classical union bound and the Viterbi-Viterbi improved union bound.

But an important feature of turbo-like codes is the availability of a simple iterative, message-passing decoding algorithm that approximates ML decoding. We wrote a computer program to implement this "turbo-like" decoding for RA codes with q = 3 (rate 1/3) and q = 4 (rate 1/4), and the results are shown in Figure 5. We see, for example, that the empirical cutoff threshold for RA codes with q = 3 appears to be less than 1 dB, compared to the upper bound of 1.112 dB found in Table 1.
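The iterative decoder itself is not spelled out in the paper; the following is an editorial sketch of one standard way to realize such message-passing decoding for an RA code (the function names, the LLR clipping constant, and the commented usage are assumptions of the example, not the authors' program). It runs a forward-backward sweep over the two-state 1/(1+D) trellis in the log-likelihood-ratio domain and exchanges extrinsic information with the repetition (equality) nodes.

import math

def boxplus(a, b):
    # LLR of the XOR of two independent binary variables with LLRs a and b
    t = math.tanh(a / 2.0) * math.tanh(b / 2.0)
    t = max(min(t, 1 - 1e-12), -1 + 1e-12)       # clip so atanh stays finite
    return 2.0 * math.atanh(t)

def ra_iterative_decode(llr_ch, q, perm, iters=20):
    """Approximate ML decoding of an RA code.  llr_ch[i] is the channel LLR of
    accumulator output y_i (positive favours bit 0); perm is the encoder permutation."""
    n = len(llr_ch)
    N = n // q
    la = [0.0] * n                                # a priori LLRs of accumulator inputs x_i
    BIG = 50.0                                    # represents the known initial state y_0 = 0
    for _ in range(iters):
        F = [BIG] + [0.0] * n                     # forward pass over the accumulator chain
        for i in range(1, n + 1):
            F[i] = boxplus(F[i - 1], la[i - 1]) + llr_ch[i - 1]
        B = [0.0] * (n + 1)                       # backward pass
        for i in range(n - 1, -1, -1):
            B[i] = boxplus(B[i + 1] + llr_ch[i], la[i])
        ext = [boxplus(F[i], B[i + 1] + llr_ch[i]) for i in range(n)]   # extrinsic on x_i
        tot = [0.0] * N                           # repetition nodes gather their q replicas
        for i in range(n):
            tot[perm[i] // q] += ext[i]
        for i in range(n):
            la[i] = tot[perm[i] // q] - ext[i]    # extrinsic message back to the accumulator
    return [1 if t < 0 else 0 for t in tot]

# Hypothetical usage with the ra_encode sketch from Section 5, over a BPSK/AWGN channel:
#   y, perm = ra_encode(bits, q)
#   sigma = math.sqrt(1 / (2 * (1 / q) * 10 ** (EbN0_dB / 10)))
#   rx = [(1 - 2 * b) + random.gauss(0, sigma) for b in y]
#   est = ra_iterative_decode([2 * r / sigma ** 2 for r in rx], q, perm)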
References.

1. C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: turbo codes," Proc. 1993 IEEE International Conference on Communications, Geneva, Switzerland (May 1993), pp. 1064-1070.

2. S. Benedetto and G. Montorsi, "Unveiling turbo codes: some results on parallel concatenated coding schemes," IEEE Trans. on Information Theory, vol. 42, no. 2 (March 1996), pp. 409-428.

3. S. Benedetto and G. Montorsi, "Design of parallel concatenated convolutional codes," IEEE Trans. on Communications, vol. 44, no. 5 (May 1996), pp. 591-600.

4. S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," IEEE Trans. on Information Theory, vol. 44, no. 3 (May 1998), pp. 909-926.