on Communication, Control and Computing 1999, Allerton House, Illinois.

Irregular Turbocodes

Brendan J. Frey
Computer Science, University of Waterloo
Electrical and Computer Engineering, University of Illinois at Urbana
http://www.cs.uwaterloo.ca/~frey

David J. C. MacKay
Department of Physics, Cavendish Laboratories
Cambridge University
http://wol.ra.phy.cam.ac.uk/mackay

Abstract

Recently, several groups have increased the coding gain of iteratively decoded
Gallager codes (low density parity check codes) by varying the number of parity
check equations in which each codeword bit participates. In regular turbocodes,
each "systematic bit" participates in exactly 2 trellis sections. We construct
irregular turbocodes with systematic bits that participate in varying numbers of
trellis sections. These codes can be decoded by the iterative application of the
sum-product algorithm (a low-complexity, more general form of the turbodecoding
algorithm). By making the original rate 1/2 turbocode of Berrou et al. slightly
irregular, we obtain a coding gain of 0.15 dB at a block length of N = 131,072,
bringing the irregular turbocode within 0.3 dB of capacity. Just like regular
turbocodes, irregular turbocodes are linear-time encodable.

1 Introduction

Recent work on irregular Gallager codes (low density parity check codes) has shown that
by making the codeword bits participate in varying numbers of parity check equations,
significant coding gains can be achieved [1-3]. Although Gallager codes have been shown
to perform better than turbocodes at BERs below 10^-5 [4]^1, until recently Gallager codes
performed over 0.5 dB worse than turbocodes for BERs greater than 10^-5. However,
in [3], Richardson et al. found irregular Gallager codes that perform 0.16 dB better than
the original turbocode at BERs greater than 10^-5 [5] for a block length of N ~ 131,072.

^1 Gallager codes do not exhibit decoding errors, only decoding failures, at long block lengths with
N > 0,000.
Hughes, Exh. 1012, p. 1

In this paper, we show that by tweaking a turbocode so that it is irregular, we obtain a
coding gain of 0.15 dB for a block length of N = 131,072. For example, an N = 131,072
irregular turbocode achieves Eb/N0 = 0.48 dB at BER = 10^-4, a performance similar to
the best irregular Gallager code published to date [3]. By further optimizing the degree
profile, the permuter and the trellis polynomials, we expect to find even better irregular
turbocodes. Like their regular cousins, irregular turbocodes exhibit a BER flattening
due to low-weight codewords.

2 Irregular turbocodes

In Fig. 1, we show how to view a turbocode so that it can be made irregular. The first
picture shows the set of systematic bits (middle row of discs) being fed directly into
one convolutional code (the chain at the top) and being permuted before being fed into
another convolutional code (the chain at the bottom). For a rate 1/2 turbocode, each
constituent convolutional code should be rate 2/3 (which may, for example, be obtained
by puncturing a lower-rate convolutional code).
Since the order of the systematic bits is irrelevant, we may also introduce a permuter
before the upper convolutional code, as shown in the second picture. In the third picture,
we have simply drawn the two permuters and convolutional codes side by side.
For long turbocodes, the values of the initial state and the final state of the convolutional
chains do not significantly influence performance (e.g., see [6]). So, as shown in
the fourth picture, we can view a turbocode as a code that copies the systematic bits,
permutes both sets of these bits, and then feeds them into a convolutional code. We refer
to this turbocode as "regular", since each systematic bit is copied exactly once.
The final picture illustrates one way the above turbocode can be made irregular.
Some of the systematic bits are "tied" together, in effect causing some systematic bits to
be replicated more than once. Notice that to keep the rate of the overall code fixed at
1/2, some extra parity bits must be punctured.
More generally, an irregular turbocode has the form shown in Fig. 2, which is a type of
"trellis-constrained code" as described in [7]. We specify a degree profile, f_d in [0, 1],
d in {1, 2, ..., D}, where f_d is the fraction of codeword bits that have degree d and D is the
maximum degree. Each codeword bit with degree d is repeated d times before being fed
into the permuter. Several classes of permuter lead to linear-time encodable codes. In
particular, if the bits in the convolutional code are partitioned into "systematic bits"
and "parity bits", then by connecting each parity bit to a degree 1 codeword bit, we can
encode in linear time.
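The repetition-then-permutation structure can be sketched in a few lines of Python. This is a minimal illustration of the Rep-d/permuter stage only, not the construction used in the experiments: the function name, the pseudo-random permuter, and the rounding of the degree profile onto a finite block are all our own assumptions.

```python
import random

def expand_by_degree_profile(bits, profile, seed=0):
    """Repeat each codeword bit d times according to a degree profile
    {d: f_d}, where f_d is the fraction of bits with degree d, then
    permute the expanded sequence before it enters the convolutional
    code.  The pseudo-random permuter and the rounding scheme are
    illustrative choices, not those of the paper."""
    n = len(bits)
    degrees = []
    for d, f in sorted(profile.items()):
        degrees += [d] * round(f * n)   # roughly f_d * n bits of degree d
    degrees = (degrees + [1] * n)[:n]   # pad/trim so every bit gets a degree
    # Repetition stage: bit i appears degrees[i] times before the permuter.
    expanded = [b for b, d in zip(bits, degrees) for _ in range(d)]
    random.Random(seed).shuffle(expanded)   # the permuter
    return expanded, degrees
```

For example, a profile with f_1 = 0.75 and f_3 = 0.25 applied to 16 bits assigns 12 bits degree 1 and 4 bits degree 3, so the permuter output carries 12 + 4*3 = 24 bits.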
The average codeword bit degree is

    d̄ = sum_{d=1}^{D} d * f_d.                                  (1)

The overall rate R of an irregular turbocode is related to the rate R' of the convolutional
code and the average degree d̄ by

    d̄ (1 - R') = 1 - R.                                          (2)

So, if the average degree is increased, the rate of the convolutional code must also be
increased to keep the overall rate constant. This can be done by puncturing the
convolutional code or by designing a new, higher rate convolutional code.
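The arithmetic in (1) and (2) is easy to check numerically. In the sketch below (the helper names are ours), a regular rate-1/2 turbocode viewed as a trellis-constrained code has half its codeword bits at degree 2 (the systematic bits) and half at degree 1 (the parity bits), giving d̄ = 1.5 and hence, by (2), the rate-2/3 constituent convolutional code mentioned earlier in this section.

```python
def avg_degree(profile):
    # Equation (1): d-bar = sum over d of d * f_d
    return sum(d * f for d, f in profile.items())

def constituent_rate(profile, R):
    # Equation (2): d-bar * (1 - R') = 1 - R  =>  R' = 1 - (1 - R) / d-bar
    return 1.0 - (1.0 - R) / avg_degree(profile)

# Regular rate-1/2 turbocode: f_1 = f_2 = 1/2.
print(avg_degree({1: 0.5, 2: 0.5}))            # 1.5
print(constituent_rate({1: 0.5, 2: 0.5}, 0.5)) # 0.6666666666666667, i.e. 2/3
```

Raising the average degree pushes the required R' toward 1, which is why extra puncturing (or a new, higher rate convolutional code) is needed as bits are tied together.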

[Figure 1: five views of a turbocode, showing the systematic bits, permuters, and constituent convolutional codes rearranged step by step so that the code can be made irregular.]


[Figure 2: the general form of an irregular turbocode. Fractions f1, f2, f3, ..., fD of the codeword bits pass through repetition blocks ("Rep 2", "Rep 3", ..., "Rep D") and a permuter before entering the convolutional code.]


[Figure: two plots, one against the fraction of degree 10 bits (0 to 0.1), the other against the degree (0 to 20) of the elite bits making up 5% of the codeword bits.]


[Figure: BER (10^-1 down to 10^-5) versus Eb/No (0 to 1 dB).]

6 Conclusions

We have shown that by making the original, regular turbocode irregular, a coding gain
of 0.15 dB is obtained, bringing the irregular turbocode within 0.3 dB of capacity at a
BER of 10^-4. This irregular turbocode performs in the same regime as the best known
irregular Gallager code.
We emphasize that we obtained these results by tweaking the regular turbocode
originally introduced by Berrou et al. We believe we will be able to improve these
performance curves significantly, both by exploring the polynomials used in the convolutional
code and by adjusting the degree profile and the permuter structure. (One way to speed
up the search is to extend the method of "density evolution" [3] to models with state.)
In particular, we are investigating ways to select the permuter and the polynomials to
eliminate low-weight codewords, thus reducing the flattening effect [13-15]. We are also
studying ways of constraining the degree 1 "parity" bits (i.e., increasing their degree) to
eliminate low-weight codewords.

References

[1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved
low-density parity-check codes using irregular graphs and belief propagation," in
Proceedings of IEEE International Symposium on Information Theory, 1998.

[2] D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of constructions of
irregular Gallager codes," IEEE Transactions on Communications, vol. 47, October
1999.

[3] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of provably good low density
parity check codes." Submitted to IEEE Transactions on Information Theory,
July 1999.

[4] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices,"
IEEE Transactions on Information Theory, vol. 45, pp. 399-431, March 1999.

[5] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding:
Turbo-codes," IEEE Transactions on Communications, vol. 44, pp. 1261-1271, October
1996.

[6] B. J. Frey, Graphical Models for Machine Learning and Digital Communication.
Cambridge, MA: MIT Press, 1998. See http://www.cs.utoronto.ca/~frey.

[7] B. J. Frey and D. J. C. MacKay, "Trellis-constrained codes," in Proceedings of the
35th Allerton Conference on Communication, Control and Computing 1997, 1998.
Available at http://www.cs.utoronto.ca/~frey.

[8] N. Wiberg, H.-A. Loeliger, and R. Kotter, "Codes and iterative decoding on general
graphs," European Transactions on Telecommunications, vol. 6, pp. 513-525,
September/October 1995.

[9] F. R. Kschischang and B. J. Frey, "Iterative decoding of compound codes by probability
propagation in graphical models," IEEE Journal on Selected Areas in Communications,
vol. 16, pp. 219-230, February 1998.

`[10] R. J . .tvicEliece, D. J. C. :VIacKay) and J. F. Cheng) "Turbo-decoding a.s an instance
`of Pearl's ~belief propagation' algorithm,') IEEE Jo'Urnal on Selected A Teas in Com(cid:173)
`m'Unications, vol. 16, February 1998.
`
`[11] L. R Bahl, .J. Cocke, F . .Jelinek, and .J. H.aviv, "Optimal decoding of linear codes for
`minimi7.ing symbol error rate," IEEE Transactions on Information Theory, vol. 20,
`pp. 284-287, March 1974.
`
`[12] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite
`state markov chains," Annals of Mathematical Statistics, vol. 37, pp. 1559-1563,
`1966.
`
`[13] .tv1. Oberg and F. H. Siegel, "Application of distance spectrum analysis to turbocode
`performance improvement," in Proceedings of the 35th Allerton ConfeTence on Com(cid:173)
`nwnicalionJ Conlml and Comp'Uling 1997) 1998.
`
`[14] D. Divsalar and F. Pollan:t., "Turbo-eodes for PCS applications," in Proceedings of
`the International Conference on Communications, pp .. )4-59, 199i:i.
`
`[15] D. Divsalar and R. J. IvicEliece, ~'Effective free distance of turbocodes," ElectTorvics
`Letter-s) ~vol. 32, pp. 445-446) February 1996.