PROCEEDINGS

THIRTY-SEVENTH ANNUAL ALLERTON CONFERENCE
ON COMMUNICATION, CONTROL, AND COMPUTING
`
`
`Bruce Hajek
`R.S. Sreenivas
`Conference Co-Chairs
`
`
Conference held
September 22, September 23, and September 24, 1999
Allerton House
Monticello, Illinois
`
Sponsored by
The Coordinated Science Laboratory
and
The Department of Electrical and Computer Engineering
of the
UNIVERSITY OF ILLINOIS
at
Urbana-Champaign
`
`
`Hughes, Exh. 1057, p. 1
`
`
`
`
The Serial Concatenation of Rate-1 Codes
Through Uniform Random Interleavers *
Henry D. Pfister and Paul H. Siegel
Signal Transmission and Recording Group
Department of Electrical and Computer Engineering
University of California, San Diego
La Jolla, California 92093-0407
{hpfister, psiegel}@ucsd.edu
`
`Abstract
`
Until the Repeat Accumulate codes of Divsalar, et al. [4], few people would have
guessed that simple rate-1 codes could play a crucial role in the construction of "good"
codes. In this paper, we will construct "good" linear block codes at any rate r < 1 by
serially concatenating an arbitrary outer code of rate r with a large number of rate-1 inner
codes through uniform random interleavers. We derive the average output weight enumerator
for this ensemble in the limit as the number of inner codes goes to infinity. Using a
probabilistic upper bound on the minimum distance, we prove that long codes from this
ensemble will achieve the Gilbert-Varshamov bound with high probability. Finally, by
numerically evaluating the probabilistic upper bound, we observe that it is typically achieved
with a small number of inner codes.
`
1 Introduction
`
The introduction of turbo codes by Berrou, Glavieux, and Thitimajshima [3] is remarkable
because it combined simple components together to set a new standard for error-correcting
codes. Since then, iterative "turbo" decoding has made it practical to consider a whole new
world of concatenated codes, while the use of "random" interleavers and recursive convolutional
encoders has given us a starting point for choosing new code structures. Many of these
concatenated code structures fit into a class that Divsalar, Jin, and McEliece call "turbo-like"
codes [4]. This class includes their Repeat Accumulate (RA) codes, which consist only of a
repetition code, an interleaver, and an accumulator. Still, they prove that, for sufficiently low rates
and any fixed E_b/N_0 greater than a threshold, these codes have vanishing word error probability
as the block length goes to infinity. This shows that powerful error-correcting codes may be
constructed from extremely simple components.
In this paper we consider the serial concatenation of an arbitrary outer code of rate r < 1
with m identical rate-1 inner codes where, following the convention of the turbo coding literature,
we use the term serial concatenation to mean serial concatenation through a "random" interleaver.
Any real system must, of course, choose a particular interleaver. Our analysis, however,
will make use of the uniform random interleaver (URI) [2], which is equivalent to averaging
over all possible interleavers. We analyze this system using a probabilistic bound on the minimum
distance and show that, in the limit as the number of inner codes m goes to infinity, the
minimum distance is bounded by an expression that resembles the Gilbert Bound (GB) [5].
*This work was supported in part by the National Science Foundation (NSF) under grant NCR-9612802 and
`by the National Storage Industry Consortium (NSIC).
`
`
`
`
Figure 1: Our system consists of any rate r < 1 code followed by m rate-1 codes.
`
Our work is largely motivated by [4] and by the results of Oberg and Siegel [10]. Both
papers consider the effect of a simple rate-1 "Accumulate" code in a serially concatenated
system. In [4] a coding theorem is proved for RA codes, while in [10] the "Accumulate" code
is analyzed as a precoder for the dicode magnetic recording channel. Benedetto, et al. also
investigated the design and performance of Double Serially Concatenated Codes in [1].
If the outer code consists of multiple independent copies of a short block code and the
inner code is a cascade of m interleaved "Accumulate" codes, we will refer to these codes
as Generalized Repeated Accumulated (GRA^m) codes. McEliece has analyzed the maximum
likelihood decoding performance of these codes for m = 1 [9], and we focus on the minimum
distance of these codes for m >= 1.
The outline of the paper is as follows. In Section 2 we review the weight enumerator (WE)
of linear block codes and the union bound on the probability of error for maximum likelihood
decoding. We also review the average weight enumerator for the serial concatenation of two
linear block codes through a URI, and relate serial concatenation to matrix multiplication using
a normalized form of each code's input output weight enumerator (IOWE). In Section 3 we
introduce our system, shown in Figure 1, and we compute its average output WE. In Section
4 we derive a probabilistic bound on the minimum distance of any code, taken from a random
ensemble, in terms of the ensemble's average WE. Applying this bound to the WE from Section
3 gives an expression very similar to the GB, and examining the bound as the block length
goes to infinity produces the Gilbert-Varshamov Bound (GVB). In Section 5 we numerically
evaluate our bound on minimum distance for various GRA^m codes and observe that 3 or 4
"Accumulate" codes seem to be sufficient to achieve the bound corresponding to asymptotically
large m. Finally, in Section 6 we discuss some conclusions and directions for future work.
`2 Weight Enumerators and Serial Concatenation
2.1 The Union Bound
In this section, we review the weight enumerator of a linear block code and the union bound on
error probability for maximum likelihood decoding. The IOWE A_{w,h} of an (n, k) block code
is the number of codewords with input weight w and output weight h, and the WE A_h is the
number of codewords with output weight h and any input weight. Using these definitions, the
probability of word error is upper bounded by

    P_W \le \sum_{h=1}^{n} \sum_{w=1}^{k} A_{w,h} z^h,

and the probability of bit error is upper bounded by

    P_B \le \sum_{h=1}^{n} \sum_{w=1}^{k} \frac{w}{k} A_{w,h} z^h.

The term z^h represents an upper bound on the pairwise error probability, between any two
codewords differing in h positions, for the channel of interest. The constant z is defined for
many memoryless channels [7, Section 5.3], and for the AWGN channel it is z = e^{-(k/n)(E_b/N_0)}.
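The word-error bound above is straightforward to evaluate numerically. The following sketch is our own illustration (the helper name `union_bound_word` and the (3, 2) single parity check toy code are assumptions, not examples from the paper):

```python
import math

def union_bound_word(we, k, n, ebno):
    """Union bound P_W <= sum_h A_h z^h with z = exp(-(k/n) Eb/N0) (AWGN).

    we: dict mapping output weight h (h >= 1) to the number of codewords A_h.
    ebno: Eb/N0 as a linear ratio (not dB).
    """
    z = math.exp(-(k / n) * ebno)
    return sum(a_h * z ** h for h, a_h in we.items())

# Toy example (assumed, not from the paper): the (3, 2) single parity
# check code has three codewords of weight 2, so its WE is A_2 = 3.
bound = union_bound_word({2: 3}, k=2, n=3, ebno=4.0)
```

For a fixed code the bound is a polynomial in z, so it decays exponentially in E_b/N_0 at the rate set by the minimum distance.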
`
`
`
`
`2.2 Serial Concatenation through a Uniform Interleaver
`In this secrion, we review the serial concatenation ol codes through a uniform random inter-
`leaver. The introduction oi URI in the analysis of turbo codes by Benedetto and Nlontorsi
`[2] has made the analysis of complex concatenated coding systems relatirely straightforward,
`und ,sing the uRI for anaiysis is equivalenl to averaging over all possible.interleavers' The
`importanl property of the dzu is that the distribution of outpur sequences is a function only
`of tn" *"ig1,t iistilbution of input sequences. More precisely, an input sequence of rveight '--
`producesa-llpossibleoutputSequencesofweightt,'eachwithequalprobability..
`'- -
`ion.la", uny (n, ,k) blo.t coa" with IOWE 4-,6 preceded by a URi. 1&'e will refer to such a
`code as a uniform)y interlea;-ed code (IJIC). The probabillty of the combined system mapping
`an input ."qu.n." of weighr u to an output sequence of iveight h is
`Pr (w - h) : 4Yf
`(I)
We can now consider an (n, k) block code formed by first encoding with an (n_1, k) outer
code with IOWE A^{(o)}_{w,h_1}, then permuting the output bits with a URI, and finally encoding again
with an (n, n_1) inner code with IOWE A^{(i)}_{h_1,h}. The average number of codewords with input
weight w and output weight h is then given by

    A_{w,h} = \sum_{h_1=0}^{n_1} A^{(o)}_{w,h_1} \Pr(h_1 \to h) = \sum_{h_1=0}^{n_1} A^{(o)}_{w,h_1} \frac{A^{(i)}_{h_1,h}}{\binom{n_1}{h_1}}.   (2)
`
The average IOWE for the serial concatenation of two codes may also be written as the
matrix product of the IOWE for the outer code and a normalized version of the IOWE for the
inner code. Let us define, for any code, the input output weight transition probability (IOWTP)
P_{w,h} as the probability that a uniform random input sequence of weight w is mapped to an
output sequence of weight h. From (1), we can see that

    P_{w,h} = \frac{A_{w,h}}{\binom{k}{w}}.   (3)

Substituting (3) into (2), we have

    A_{w,h} = \sum_{h_1=0}^{n_1} A^{(o)}_{w,h_1} P^{(i)}_{h_1,h} = \left[ A^{(o)} P^{(i)} \right]_{w,h},

where A^{(o)} is the matrix representation of the outer code IOWE and P^{(i)} is the matrix representation
of the inner code IOWTP. By inductively applying this to multiple inner code IOWTP
matrices, one can see that matrix multiplication computes the overall IOWE for an arbitrary
number of serial concatenations. It is also clear from (3) that IOWTP matrices are stochastic
(i.e. all rows sum to 1).
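The normalization in (3) and the matrix-product view of (2) can be checked numerically. Below is a small sketch in our own code (the helper name `iowtp` is ours), using the n = 3 "Accumulate" IOWE worked out in Section 2.3:

```python
import numpy as np
from math import comb

def iowtp(iowe, k):
    """Normalize an IOWE matrix A[w, h] (w = 0..k) into the IOWTP matrix
    P[w, h] = A[w, h] / C(k, w) of equation (3)."""
    A = np.asarray(iowe, dtype=float)
    return A / np.array([comb(k, w) for w in range(k + 1)])[:, None]

# IOWE of the n = 3 "Accumulate" code from Section 2.3 (input length k = 3).
A_acc = np.array([[1, 0, 0, 0],
                  [0, 1, 1, 1],
                  [0, 2, 1, 0],
                  [0, 0, 1, 0]], dtype=float)
P_acc = iowtp(A_acc, k=3)

# Serial concatenation through a URI is matrix multiplication: the average
# IOWE of outer -> URI -> inner is A_outer @ P_inner.
A_two = A_acc @ P_acc   # "Accumulate" followed by an interleaved copy
```

Because `P_acc` is stochastic, the row sums of `A_two` equal those of `A_acc`: concatenation redistributes codewords over output weights without changing their number.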
2.3 A Simple Example
In this section, we will compute the IOWE and IOWTP of the rate-1 "Accumulate" code [4].
The "Accumulate" code is a block code formed by truncating the simplest recursive convolutional
code possible, having generator matrix G(D) = 1/(1 \oplus D), after n symbols. The
`
`
`
`
Input Sequence | Input Weight | Output Sequence | Output Weight
     000       |      0       |      000        |      0
     001       |      1       |      001        |      1
     010       |      1       |      011        |      2
     100       |      1       |      111        |      3
     011       |      2       |      010        |      1
     101       |      2       |      110        |      2
     110       |      2       |      100        |      1
     111       |      3       |      101        |      2

Table 1: Input-output sequences and weight mappings for the n = 3 "Accumulate" code.
`
generator matrix for this block code is an n x n matrix with all 1's in the upper triangle and all
0's elsewhere. In this example, we will look at the case n = 3. The generator matrix is

    G = [ 1 1 1 ]
        [ 0 1 1 ]
        [ 0 0 1 ]

Using Table 1, we see that a uniform random input of weight 1 maps to output weights 1, 2,
and 3 with equal probability, and cannot be mapped to output weight 0. So the w = 1 row of
the IOWTP matrix is [ 0  1/3  1/3  1/3 ]. Filling in the rest of the entries, we give both the
IOWE A_{w,h} and the associated IOWTP P_{w,h} in matrix form:

    A = [ 1 0 0 0 ]        P = [ 1   0    0    0  ]
        [ 0 1 1 1 ]            [ 0  1/3  1/3  1/3 ]
        [ 0 2 1 0 ]            [ 0  2/3  1/3   0  ]
        [ 0 0 1 0 ]            [ 0   0    1    0  ]
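These matrices can be verified by brute-force enumeration of all 2^n inputs. A sketch in our own code, using the truncated-accumulator description y_i = x_1 + ... + x_i (mod 2) from above:

```python
import numpy as np
from itertools import product

def accumulate_iowe(n):
    """Build the IOWE A[w, h] of the length-n "Accumulate" code by direct
    enumeration: the i-th output is the running XOR of the first i inputs."""
    A = np.zeros((n + 1, n + 1))
    for x in product([0, 1], repeat=n):
        acc, y = 0, []
        for bit in x:
            acc ^= bit
            y.append(acc)
        A[sum(x), sum(y)] += 1     # tally (input weight, output weight)
    return A

A3 = accumulate_iowe(3)   # reproduces the matrix A displayed above
```

Enumeration is only practical for small n, but it is a useful cross-check on closed-form IOWE expressions.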
3 Multiple Rate-1 Serial Concatenations
3.1 The Input Output Weight Enumerator
In this section, we will consider a code formed by encoding m + 1 times. The first (outer)
encoder is for an (n, k) block code with IOWE A^{(o)}_{w,h}. The next m (inner) encoders are for
identical rate-1 UICs of block length n with IOWE A^{(i)}_{w,h}. If we let P be the IOWTP matrix
associated with A^{(i)}_{w,h}, then we can write the average IOWE A_{w,h} for this code as

    A_{w,h} = \sum_{h_1=0}^{n} A^{(o)}_{w,h_1} [P^m]_{h_1,h}.   (4)

The linearity of the code guarantees that the matrix P will be block diagonal with at least
two blocks because inputs of weight 0 will always be mapped to outputs of weight 0 and inputs
of weight greater than 0 will always be mapped to outputs of weight greater than 0. So let the
first block be the 1 x 1 submatrix associated with w = h = 0, and let the second block Q be the
n x n submatrix formed by deleting the first row and column of P. Writing P^m as the product
of block diagonal matrices, we see that

    P^m = [ 1   0  ]
          [ 0  Q^m ]
3.2 Stationary Distributions and Markov Chains
In this section, we will discuss the stationary distributions of a Markov Chain (MC) and how
they relate to the stationary weight distributions of a rate-1 UIC. This discussion is based on
`
`
`
`
the observation that if P is a finite dimensional stochastic matrix, then there is an associated
MC with state transition matrix P. Applying this to the IOWTP matrix of any UIC, we see that
all UICs have an associated MC.
A MC, with state transition matrix P, has a stationary distribution \pi = [\pi_0, ..., \pi_n] if
\pi P = \pi and \sum_i \pi_i = 1. Accordingly, a rate-1 UIC has a stationary weight distribution \pi if \pi
is a stationary distribution of the code's associated MC. Recall that a MC is irreducible if there
is a path from any state to any other state with a finite number of steps. Using these definitions,
we can draw upon some well-known results from the theory of non-negative matrices and MCs
[11].
THEOREM 1. An irreducible Markov Chain has a unique stationary distribution.
`
We define a rate-1 UIC to be irreducible if the Q submatrix of its IOWTP matrix P can be
associated with an irreducible MC. Similarly, this implies that there is a path from any weight to
any other weight in a finite number of encodings. We now apply Theorem 1 to the Q submatrix
of an irreducible rate-1 UIC. Since the matrix Q does not include inputs and outputs of weight
0, we must assume \pi_0 = 0 to make the following stationary weight distribution unique.
PROPOSITION 1. The unique stationary weight distribution \pi = [\pi_0, ..., \pi_n] of an irreducible
rate-1 UIC with \pi_0 = 0 is

    \pi_h = \frac{\binom{n}{h}}{2^n - 1},   for 1 \le h \le n.

The example from Section 2.3 is irreducible, and applying Proposition 1 gives \pi = [0, 3/7, 3/7, 1/7]; indeed,

    [ 0  3/7  3/7  1/7 ] [ 1   0    0    0  ]
                         [ 0  1/3  1/3  1/3 ]  =  [ 0  3/7  3/7  1/7 ]  =  \pi.
                         [ 0  2/3  1/3   0  ]
                         [ 0   0    1    0  ]
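This stationarity check is easy to automate; the sketch below (our own code) verifies that the claimed \pi is a left eigenvector of the Q submatrix with eigenvalue 1:

```python
import numpy as np
from math import comb

n = 3
# Q submatrix of the Section 2.3 IOWTP (nonzero weights 1..3 only).
Q = np.array([[1/3, 1/3, 1/3],
              [2/3, 1/3, 0.0],
              [0.0, 1.0, 0.0]])

# Proposition 1: pi_h = C(n, h) / (2^n - 1) for 1 <= h <= n.
pi = np.array([comb(n, h) for h in range(1, n + 1)]) / (2 ** n - 1)
residual = pi @ Q - pi   # should vanish if pi is stationary
```

The same two lines work for any small rate-1 UIC once its Q matrix is in hand.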
An irreducible MC is primitive if its state transition matrix has a unique eigenvalue of
maximum modulus. Accordingly, we define an irreducible rate-1 UIC to be primitive if the
MC associated with the Q submatrix of the IOWTP matrix is primitive. The following theorem
from the theory of MCs [11] will allow us to examine the asymptotic behavior of (4) as m goes
to infinity.
`
THEOREM 2. A primitive Markov Chain with state transition matrix P and unique stationary
distribution \pi satisfies the limit

    \lim_{m \to \infty} P^m = \mathbf{1} \pi,

where \mathbf{1} is the all-ones column vector, so every row of the limit matrix equals \pi.
The example from Section 2.3 is also primitive, and applying Theorem 2 gives

    \lim_{m \to \infty} [ 1   0    0    0  ]^m     [ 1   0    0    0  ]
                        [ 0  1/3  1/3  1/3 ]    =  [ 0  3/7  3/7  1/7 ]
                        [ 0  2/3  1/3   0  ]       [ 0  3/7  3/7  1/7 ]
                        [ 0   0    1    0  ]       [ 0  3/7  3/7  1/7 ]
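This limit can be illustrated by raising the IOWTP matrix to a large power; the exponent 50 below is an arbitrary stand-in for "large m" (our own sketch):

```python
import numpy as np

# IOWTP of the n = 3 "Accumulate" UIC from Section 2.3.
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [0.0, 2/3, 1/3, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

Pm = np.linalg.matrix_power(P, 50)   # approximates the m -> infinity limit
```

Convergence is geometric in the magnitude of the subdominant eigenvalue of Q, so even modest powers are already indistinguishable from the limit at double precision.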
`
`
`
`
3.3 Asymptotic Behavior for Many Concatenations
In this section, we use (4) and Theorem 2 to compute the average WE of any rate r < 1 outer
code serially concatenated with m primitive rate-1 UICs, in the limit as m goes to infinity.
The intriguing part of this result is that the WE is independent of the particular outer code and
primitive rate-1 UIC chosen. We note that this is essentially a new construction for a uniform
random ensemble of linear codes.
THEOREM 3. Consider a rate-1 code formed by serially concatenating m primitive rate-1
UICs. For non-zero input weights and in the limit as m goes to infinity, the output weight
distribution is independent of the input weight distribution and is

    \pi_h = \frac{\binom{n}{h}}{2^n - 1}.

COROLLARY. The ensemble averaged WE for non-zero output weights for any rate r < 1 code
serially concatenated with m primitive UICs, in the limit as m goes to infinity, is

    \bar{A}_h = \lim_{m \to \infty} \sum_{w=1}^{k} \sum_{h_1=1}^{n} A^{(o)}_{w,h_1} [P^m]_{h_1,h} = \left( \sum_{w=1}^{k} \sum_{h_1=1}^{n} A^{(o)}_{w,h_1} \right) \pi_h = (2^{rn} - 1) \pi_h.   (5)
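The Corollary can be sanity-checked at small block length. The sketch below uses our own toy choice of a (3, 2) single parity check outer code (not an example from the paper) and compares the matrix-power computation of (4) against the closed form (2^{rn} - 1) C(n,h)/(2^n - 1):

```python
import numpy as np
from math import comb

n, k = 3, 2
# IOWE of the (3, 2) single parity check outer code (rows w = 0..2):
# 000 -> 000; weight-1 inputs -> weight-2 outputs; 11 -> 110.
A_outer = np.array([[1, 0, 0, 0],
                    [0, 0, 2, 0],
                    [0, 0, 1, 0]], dtype=float)

# IOWTP of the n = 3 "Accumulate" inner UIC (Section 2.3).
P = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1/3, 1/3, 1/3],
              [0.0, 2/3, 1/3, 0.0],
              [0.0, 0.0, 1.0, 0.0]])

# Average WE after m inner codes: sum over input weights of A_outer @ P^m.
A_h = (A_outer @ np.linalg.matrix_power(P, 40)).sum(axis=0)

# Corollary (5): for h > 0 the limit is (2^k - 1) C(n, h) / (2^n - 1).
A_limit = (2 ** k - 1) * np.array([comb(n, h) for h in range(n + 1)]) / (2 ** n - 1)
```

At m = 40 the two agree to many digits, consistent with the geometric convergence noted above.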
4 Bounds on the Minimum Distance
4.1 A General Bound on the Distribution of d_min from \bar{A}_h
In this section, we derive an upper bound on the probability that a randomly chosen code from
some ensemble has d_min < d. This upper bound can be computed using only the average WE
of the ensemble. A similar bound was used by Gallager to bound the minimum distance of
Low Density Parity Check Codes [6].
THEOREM 4. The probability that a code, randomly chosen from an ensemble with average
WE \bar{A}_h, has d_min < d is bounded by

    \Pr(d_min < d) \le \sum_{h=1}^{d-1} \bar{A}_h.
`
Proof. Let an ensemble of linear codes with average WE \bar{A}_h be defined by a set of WEs
{A^{(1)}_h, A^{(2)}_h, ..., A^{(M)}_h}, each chosen with equal probability. Further, let d^{(i)}_min be the minimum
distance of the code associated with A^{(i)}_h. We can upper bound the probability of choosing a
code with d_min < d from this ensemble. First we define an indicator function

    1(condition) = { 0 if condition false
                   { 1 if condition true

and we note that for all non-negative integers x,

    1(x > 0) \le x.   (6)
`
`
`
`
First counting the number of codes with d_min < d, and then substituting an equivalent condition
in the indicator function, we have

    \Pr(d_min < d) = \frac{1}{M} \sum_{i=1}^{M} 1(d^{(i)}_min < d) = \frac{1}{M} \sum_{i=1}^{M} 1\left( \sum_{h=1}^{d-1} A^{(i)}_h > 0 \right).

Upper bounding the indicator function with (6) and then summing over i gives

    \Pr(d_min < d) \le \frac{1}{M} \sum_{i=1}^{M} \sum_{h=1}^{d-1} A^{(i)}_h = \sum_{h=1}^{d-1} \bar{A}_h.   \Box
`
`4.2 An Application of the Bound
We now apply Theorem 4 to the WE in (5). This leads to a statement that, for a given block
length n and rate r, upper bounds the probability of picking a code with minimum distance less
than some threshold. Let d^*(n, r, \epsilon) be the largest d which satisfies

    \sum_{h=1}^{d-1} \frac{2^{rn} - 1}{2^n - 1} \binom{n}{h} \le \epsilon

for block length n, rate r, and 0 < \epsilon < 1. For this d^*, we can rearrange terms to get

    \sum_{h=1}^{d^*-1} \binom{n}{h} \le \epsilon \frac{2^n - 1}{2^{rn} - 1}.

Changing the lower limit of the sum and rearranging, we have

    \sum_{h=0}^{d^*-1} \binom{n}{h} \le \epsilon \frac{2^n - 1}{2^{rn} - 1} + 1.   (7)

Substituting for the expression of the WE \bar{A}_h given in (5), we have

    \sum_{h=1}^{d^*-1} \bar{A}_h \le \epsilon,

which, when combined with Theorem 4, implies that

    \Pr(d_min < d^*) \le \epsilon.   (8)

So with probability at least 1 - \epsilon, a randomly chosen code from this ensemble will have minimum
distance at least d^*(n, r, \epsilon).
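The threshold d^*(n, r, \epsilon) is simple to compute directly. A sketch in our own code (the function name and the n = 100 example are our choices; it assumes rn is an integer):

```python
from math import comb

def d_star(n, r, eps):
    """Largest d such that sum_{h=1}^{d-1} (2^{rn} - 1)/(2^n - 1) C(n, h) <= eps."""
    k = round(r * n)                       # assumes the rate gives integer k = rn
    scale = (2 ** k - 1) / (2 ** n - 1)
    total, d = 0.0, 1
    # Grow d while the partial sum stays below eps; the empty sum (d = 1)
    # always qualifies.
    while d <= n and total + scale * comb(n, d) <= eps:
        total += scale * comb(n, d)
        d += 1
    return d

d = d_star(n=100, r=0.5, eps=0.5)   # eps = 1/2, as used later in Section 5
```

Because the binomial terms grow rapidly near the threshold, the returned d is insensitive to small changes in eps.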
The Gilbert Bound (GB) for binary codes [5] says that there exists at least one code with
block length n, rate r, and minimum distance d if

    \sum_{h=0}^{d-1} \binom{n}{h} < 2^{(1-r)n}.   (9)
`
`
`
`
[Figure: p parallel copies of the short block encoder followed by the interleaved 1/(1+D) "Accumulate" encoders.]

Figure 2: Encoder for a GRA^m code with the block size indicated at each stage.
`
If we substitute \epsilon = (2^{(1-r)n} - 1)(2^{rn} - 1)/(2^n - 1) into (7), then we have an expression
identical to (9). Since this \epsilon is strictly less than one, it follows from (8) that there exists at least
one code in our ensemble with d_min \ge d. So we have qualitatively the same result as the GB.
The Gilbert-Varshamov Bound (GVB) takes its name from the GB and from a related bound
due to Varshamov [5]. The GVB is the form of both bounds in the limit as n goes to infinity,
and it says that there is a code with rate r and normalized minimum distance \delta = d_min/n if

    H(\delta) < 1 - r,   (10)

where H(x) = -x \log_2 x - (1 - x) \log_2(1 - x) is the binary entropy function.
If we let

    \delta^*(r, \epsilon) = \lim_{n \to \infty} d^*(n, r, \epsilon)/n

and examine (7) in the limit as n goes to infinity, we find that our bound says something even
stronger than the GVB. In fact, we find that \delta^*(r, \epsilon) is equal to the largest \delta that satisfies (10)
for any \epsilon > 0. This implies that in the limit as n goes to infinity, almost all of our codes will
have a normalized minimum distance \delta satisfying (10). This makes our codes "good" in the
sense that, for a fixed rate as the block length goes to infinity, almost all of the codes in our
ensemble have a normalized minimum distance that is bounded away from zero. It should be
noted that this behavior is well-known for long random codes.
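The normalized distance in (10) can be computed by inverting the binary entropy function on (0, 1/2), where it is increasing; a bisection sketch (our own code):

```python
from math import log2

def binary_entropy(x):
    """H(x) = -x log2(x) - (1 - x) log2(1 - x) for 0 < x < 1."""
    return -x * log2(x) - (1 - x) * log2(1 - x)

def gvb_delta(r, tol=1e-12):
    """Largest delta in (0, 1/2) with H(delta) <= 1 - r, by bisection."""
    lo, hi = 1e-15, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) <= 1 - r:
            lo = mid          # H(mid) still below the target: move right
        else:
            hi = mid
    return lo

delta_half = gvb_delta(0.5)   # GVB distance ratio at rate 1/2
```

At rate 1/2 this gives the familiar GVB ratio of about 0.11.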
`
5 Generalized Repeated Accumulated (GRA^m) Codes
In this section, we describe GRA^m codes and apply Theorem 4 to some specific examples.
GRA^m codes are formed by the serial concatenation of a simple outer code, which consists of p
independent copies of a short (n, k) block code, and m interleaved rate-1 "Accumulate" codes.
The encoder for GRA^m codes is shown in Figure 2. The performance of long GRA^1 codes with
maximum likelihood decoding was reported in [9], but cases with m > 1 were not considered.
So we give results pertaining to the minimum distance of GRA^m codes using a few examples.
In order to apply Theorem 4 to a specific ensemble, we must compute its average WE and
choose an \epsilon. For the following results, we computed the average WEs numerically and chose
\epsilon = 1/2. This means that at least half of the codes in our ensemble have a minimum distance at
least as large as the values shown in Figure 3. For the short block codes, we chose: a repeat by
2 (R2), a repeat by 4 (R4), a rate 7/8 single parity check (P8), and the (8, 4) extended Hamming
code (H8).
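As a miniature version of this computation (our own toy setup: p = 4 copies of the (2, 1) repetition code R2 at block length n = 8, not the block lengths used in the paper), the following sketch builds the average WE of a GRA^m code by matrix powers and evaluates the Theorem 4 threshold with \epsilon = 1/2:

```python
import numpy as np
from math import comb
from itertools import product

def accumulate_iowtp(n):
    """IOWTP of the length-n "Accumulate" code by enumerating all 2^n inputs
    (fine for small n)."""
    A = np.zeros((n + 1, n + 1))
    for x in product([0, 1], repeat=n):
        acc, h = 0, 0
        for bit in x:
            acc ^= bit
            h += acc
        A[sum(x), h] += 1
    return A / np.array([comb(n, w) for w in range(n + 1)])[:, None]

def gra_average_we(p, m):
    """Average WE of a GRA^m code whose outer code is p copies of the (2, 1)
    repetition code (R2); overall block length n = 2p."""
    n = 2 * p
    A_outer = np.zeros((p + 1, n + 1))
    for w in range(p + 1):
        A_outer[w, 2 * w] = comb(p, w)      # repeat-by-2 doubles the weight
    P = accumulate_iowtp(n)
    return (A_outer @ np.linalg.matrix_power(P, m)).sum(axis=0)

def min_dist_bound(A_h, eps=0.5):
    """Largest d with sum_{h=1}^{d-1} A_h <= eps (Theorem 4: Pr(d_min < d) <= eps)."""
    total, d = 0.0, 1
    while d <= len(A_h) - 1 and total + A_h[d] <= eps:
        total += A_h[d]
        d += 1
    return d

A_h = gra_average_we(p=4, m=3)            # n = 8, three "Accumulate" stages
d = min_dist_bound(A_h)
```

Scaling this to the block lengths in Figure 3 only requires a closed-form or recursive IOWTP for the "Accumulate" code in place of the brute-force enumeration.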
It is important to note that, at a fixed rate, a "good" code is defined by a minimum distance
which grows linearly with the block length. When examining these results, we will focus on
whether or not the minimum distance appears to be growing linearly with block length and
how close it is to the GB. For m = 1, it is known that the typical minimum distance grows as
O(n^{(d_o - 2)/d_o}), where d_o is the minimum distance of the repeated outer code [8]. Examining
Figure 3 for m = 1, we see that the minimum distance grows slowly for R4 and H8 and
not at all for R2 and P8. While for m = 2, the minimum distance growth of R4, H8, and R2
`
`
`
`
[Figure: four panels, (a) m = 1, (b) m = 2, (c) m = 3, (d) m = 4, plotting the bound on minimum distance versus block length for Repeat by 4, Repeat by 2, Rate 7/8 Parity, and Hamming (8,4), together with the Gilbert Bound at R = 1/4, R = 1/2, and R = 7/8.]

Figure 3: Probabilistic lower bound on the minimum distance of various GRA^m codes.
`
appears distinctly linear. It is difficult to determine the growth rate of P8 with m = 2 from
these results. At m = 3, all of the codes appear to have a minimum distance growing linearly
with block length and the rates are very close to the GB. Finally, with m = 4, the bound on
minimum distance and the GB are almost indistinguishable. These results are very encouraging
and suggest that, over a wide range of rates, only a few "Accumulate" codes are sufficient to
approach the GB on minimum distance.
`
`6 Conclusions and Future Work
In this paper, we began by showing the relationship between serial concatenation through a uniform
random interleaver and matrix multiplication of input output weight transition probability
(IOWTP) matrices. We then introduced an ensemble of codes consisting of any rate r < 1 outer
code followed by an infinite number of rate-1 primitive uniformly interleaved codes, and computed
the ensemble's average weight enumerator. This was done by introducing a correspondence
between IOWTP matrices and Markov Chains (MCs), and drawing on some well-known
limit theorems from MC theory. Next, we derived a probabilistic bound on the minimum distance
of codes from this ensemble and noted that this bound is almost identical to both the
finite block length Gilbert Bound (GB) and the infinite block length Gilbert-Varshamov Bound
`
`
`
(GVB). This implies that the ensemble of codes is "good" because, for long block lengths and
fixed rate, almost all of the codes in our ensemble have a normalized minimum distance meeting
the GVB. Finally, by evaluating our bound on minimum distance for specific outer codes
and a small number of "Accumulate" codes, we observed that a small number of inner codes
may be sufficient to approach the bound for an infinite number.
We are currently evaluating the iterative decoding of GRA^m codes and working to prove a
coding theorem similar to [4] for these codes.
Acknowledgment. The authors would like to express their gratitude to M. Oberg for posing
the initial question that led to this research and for helpful discussions along the way.
`
`References
`
[1] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara. Analysis, design, and iterative decoding
of double serially concatenated codes with interleavers. IEEE Journal on Selected
Areas in Commun., 16(2):231-244, Feb. 1998.

[2] S. Benedetto and G. Montorsi. Unveiling turbo codes: Some results on parallel concatenated
coding schemes. IEEE Trans. Inform. Theory, 42(2):409-428, March 1996.

[3] C. Berrou, A. Glavieux, and P. Thitimajshima. Near Shannon limit error-correcting coding
and decoding: Turbo-codes. In Proc. IEEE Int. Conf. Commun., volume 2, pages
1064-1070, Geneva, Switzerland, May 1993.

[4] D. Divsalar, H. Jin, and R. J. McEliece. Coding theorems for "turbo-like" codes. In
Proc. 36th Annual Allerton Conf. on Commun., Control, and Comp., pages 201-210,
Monticello, IL, USA, Sept. 1998.

[5] T. Ericson. Bounds on the size of a code, pages 45-69. Number 128 in Lecture Notes in
Control and Information Sciences. Springer-Verlag, Berlin, Heidelberg, Germany, 1989.

[6] R. G. Gallager. Low-density parity-check codes. Research Monograph 21, The MIT
Press, Cambridge, MA, 1963.

[7] R. G. Gallager. Information Theory and Reliable Communication. John Wiley and Sons,
New York, NY, 1968.

[8] N. Kahale and R. Urbanke. On the minimum distance of parallel and serially concatenated
codes. In Proc. IEEE Int. Symposium on Inform. Theory, page 31, Cambridge, MA, Aug.
1998.

[9] R. J. McEliece. Repeat accumulate codes [a class of turbo-like codes that we can analyze].
IMA Summer Program: Codes, Systems, and Graphical Models, Aug. 1999.
http://www.ima.umn.edu/talks/workshops/at92-73.99/mceliece/mceliece.html.

[10] M. Oberg and P. H. Siegel. Performance analysis of the turbo-equalized dicode partial-response
channel. In Proc. 36th Annual Allerton Conf. on Commun., Control, and Comp.,
pages 230-239, Monticello, IL, USA, Sept. 1998.

[11] E. Seneta. Non-Negative Matrices: An Introduction to Theory and Applications. Wiley,
New York, NY, 2nd edition, 1973.
`
`
`