Allerton Conference on Communication, Control and Computing 1999, Allerton House, Illinois.

Irregular Turbocodes

Brendan J. Frey
Computer Science, University of Waterloo
Electrical and Computer Engineering, University of Illinois at Urbana
http://www.cs.uwaterloo.ca/~frey

David J. C. MacKay
Department of Physics, Cavendish Laboratory
Cambridge University
http://wol.ra.phy.cam.ac.uk/mackay
`
Abstract

Recently, several groups have increased the coding gain of iteratively decoded Gallager codes (low density parity check codes) by varying the number of parity check equations in which each codeword bit participates. In regular turbocodes, each "systematic bit" participates in exactly 2 trellis sections. We construct irregular turbocodes with systematic bits that participate in varying numbers of trellis sections. These codes can be decoded by the iterative application of the sum-product algorithm (a low-complexity, more general form of the turbodecoding algorithm). By making the original rate 1/2 turbocode of Berrou et al. slightly irregular, we obtain a coding gain of 0.15 dB at a block length of N = 131,072, bringing the irregular turbocode within 0.3 dB of capacity. Just like regular turbocodes, irregular turbocodes are linear-time encodable.
`
1 Introduction
`
Recent work on irregular Gallager codes (low density parity check codes) has shown that by making the codeword bits participate in varying numbers of parity check equations, significant coding gains can be achieved [1-3]. Although Gallager codes have been shown to perform better than turbocodes at BERs below 10^-5 [4],¹ until recently Gallager codes performed over 0.5 dB worse than turbocodes for BERs greater than 10^-5. However, in [3], Richardson et al. found irregular Gallager codes that perform 0.16 dB better than the original turbocode [5] at BERs greater than 10^-5, for a block length of N ≈ 131,072.

¹ Gallager codes do not exhibit decoding errors, only decoding failures, at long block lengths with N > 5,000.
`
`
`
`
In this paper, we show that by tweaking a turbocode so that it is irregular, we obtain a coding gain of 0.15 dB for a block length of N = 131,072. For example, an N = 131,072 irregular turbocode achieves Eb/N0 = 0.48 dB at BER = 10^-4, a performance similar to that of the best irregular Gallager code published to date [3]. By further optimizing the degree profile, the permuter and the trellis polynomials, we expect to find even better irregular turbocodes. Like their regular cousins, irregular turbocodes exhibit a BER flattening due to low-weight codewords.
`
2 Irregular turbocodes
`
In Fig. 1, we show how to view a turbocode so that it can be made irregular. The first picture shows the set of systematic bits (middle row of discs) being fed directly into one convolutional code (the chain at the top) and being permuted before being fed into another convolutional code (the chain at the bottom). For a rate 1/2 turbocode, each constituent convolutional code should be rate 2/3 (which may, for example, be obtained by puncturing a lower-rate convolutional code).
`
Since the order of the systematic bits is irrelevant, we may also introduce a permuter before the upper convolutional code, as shown in the second picture. In the third picture, we have simply drawn the two permuters and convolutional codes side by side.
`
For long turbocodes, the values of the initial state and the final state of the convolutional chains do not significantly influence performance (e.g., see [6]). So, as shown in the fourth picture, we can view a turbocode as a code that copies the systematic bits, permutes both sets of these bits, and then feeds them into a convolutional code. We refer to this turbocode as "regular", since each systematic bit is copied exactly once.
`
The final picture illustrates one way the above turbocode can be made irregular. Some of the systematic bits are "tied" together, in effect causing some systematic bits to be replicated more than once. Notice that to keep the rate of the overall code fixed at 1/2, some extra parity bits must be punctured.
`
More generally, an irregular turbocode has the form shown in Fig. 2, which is a type of "trellis-constrained code" as described in [7]. We specify a degree profile, f_d ∈ [0,1], d ∈ {1, 2, ..., D}, where f_d is the fraction of codeword bits that have degree d and D is the maximum degree. Each codeword bit with degree d is repeated d times before being fed into the permuter. Several classes of permuter lead to linear-time encodable codes. In particular, if the bits in the convolutional code are partitioned into "systematic bits" and "parity bits", then by connecting each parity bit to a degree 1 codeword bit, we can encode in linear time.
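To make the repetition step concrete, here is a minimal sketch (the helper name is ours, and a seeded random permutation stands in for whichever permuter class is actually chosen) that assigns degrees from a profile, repeats each codeword bit accordingly, and permutes the resulting stream before it would enter the convolutional code:

```python
import random

def repeat_and_permute(bits, profile, seed=0):
    """Assign degrees to codeword bits from a degree profile
    {d: f_d}, repeat each bit d times, and permute the result
    (the stream that would feed the convolutional code)."""
    n = len(bits)
    degrees = []
    for d in sorted(profile):
        degrees += [d] * round(profile[d] * n)
    degrees = (degrees + [1] * n)[:n]  # pad/trim to exactly n entries
    repeated = [b for b, d in zip(bits, degrees) for _ in range(d)]
    random.Random(seed).shuffle(repeated)  # stand-in for the permuter
    return repeated, degrees

# Illustrative profile: 95% of codeword bits at degree 2,
# and an "elite" 5% at degree 10.
bits = [0, 1] * 50
stream, degrees = repeat_and_permute(bits, {2: 0.95, 10: 0.05})
avg_degree = sum(degrees) / len(degrees)  # 0.95*2 + 0.05*10 = 2.4
```

With a linear-time encodable permuter class, the only extra cost over a regular turbocode is the repetition itself, so encoding remains linear in the block length.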
`
The average codeword bit degree is

    d̄ = Σ_{d=1}^{D} d · f_d.    (1)

The overall rate R of an irregular turbocode is related to the rate R' of the convolutional code and the average degree d̄ by

    d̄ (1 - R') = 1 - R.    (2)
`
So, if the average degree is increased, the rate of the convolutional code must also be increased to keep the overall rate constant. This can be done by puncturing the convolutional code or by designing a new, higher-rate convolutional code.
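Solving the rate relation d̄(1 - R') = 1 - R above for R' gives R' = 1 - (1 - R)/d̄. A small numeric check (the function name is ours): for the regular rate-1/2 turbocode, systematic bits have degree 2 and parity bits degree 1, so d̄ = 1.5 and the convolutional code must have rate 2/3, matching the constituent rate given earlier.

```python
def conv_code_rate(avg_degree, overall_rate):
    """Solve d_bar * (1 - R') = 1 - R for the convolutional code
    rate R', given the average codeword bit degree d_bar and the
    overall code rate R."""
    return 1.0 - (1.0 - overall_rate) / avg_degree

# Regular rate-1/2 turbocode: systematic bits degree 2,
# parity bits degree 1, so the average degree is 1.5.
r_regular = conv_code_rate(1.5, 0.5)    # 2/3

# Raising the average degree forces a higher-rate
# (more heavily punctured) convolutional code.
r_irregular = conv_code_rate(2.0, 0.5)  # 3/4
```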
`
`
`
[Figure 1: A turbocode redrawn in five stages so that it can be made irregular; the stages show the systematic bits passing through permuters into convolutional codes.]

[Figure 2: The general form of an irregular turbocode. Fractions f_1, f_2, f_3, ..., f_D of the codeword bits pass through repetition blocks (Rep 2, Rep 3, ..., Rep D) and a permuter before entering the convolutional code.]
`
[Figure 3: (a) BER versus the fraction f_e of degree 10 bits, for fractions 0 to 0.1; (b) BER versus the degree of the elite bits making up 5% of the codeword bits, for degrees 0 to 20.]
`
`
`
[Figure 4 plot: BER (10^-1 down to 10^-5) versus Eb/No (dB) from 0 to 1.]
`
Figure 4: Performance of the original block length N = 131,072 turbocode (dashed line) and one of its irregular cousins (solid line). These results are for irregular turbocodes obtained by tweaking the original turbocode; we are currently searching for optimal degree profiles, permuters and trellis polynomials.
`
experiment, we simulated enough blocks to obtain a relatively small confidence interval. The results are shown in Fig. 3a, which indicates that for degree 10 elite bits, the best fraction is roughly 0.05. Next, we kept the fraction of elite bits fixed at f_e = 0.05 and varied the degree of the elite bits. The results are shown in Fig. 3b, which indicates that for a fraction of 0.05, the best degree is roughly 10.

These results show that for e = 10, f_e = 0.05 is a good fraction and that for f_e = 0.05, e = 10 is a good degree. However, the values of e and f_e that give good profiles are probably correlated, so we are currently extending our search.
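The search described above alternates one-dimensional sweeps: the fraction f_e is optimized with the degree fixed, then the degree with the fraction fixed. A minimal coordinate-search sketch (simulate_ber is a hypothetical stand-in; in practice each evaluation would be a Monte Carlo decoding run over many blocks):

```python
def coordinate_search(simulate_ber, degrees, fractions, e0=10, f0=0.05, sweeps=2):
    """Alternate one-dimensional searches over the elite-bit
    degree e and the elite-bit fraction f_e.

    simulate_ber(e, f) estimates the BER of the code built with
    elite degree e and fraction f; lower is better."""
    e, f = e0, f0
    for _ in range(sweeps):
        f = min(fractions, key=lambda x: simulate_ber(e, x))
        e = min(degrees, key=lambda x: simulate_ber(x, f))
    return e, f

# Toy stand-in whose minimum sits at e = 10, f_e = 0.05.
toy_ber = lambda e, f: (e - 10) ** 2 + (f - 0.05) ** 2
best_e, best_f = coordinate_search(toy_ber,
                                   degrees=range(2, 21),
                                   fractions=[i / 100 for i in range(1, 11)])
```

Because the two coordinates are probably correlated, a joint (two-dimensional) search over (e, f_e) could find profiles that the alternating sweeps miss.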
`
5 Results

Fig. 4 shows the simulated BER-Eb/N0 curves for the original block length N = 131,072 regular turbocode (dashed line) and its irregular cousin (solid line), using profile e = 10, f_e = 0.05.
`
The irregular turbocode clearly performs better than the regular turbocode for BER > 10^-5. At BER = 10^-4, the N = 131,072 irregular turbocode is 0.3 dB from capacity, a 0.15 dB improvement over the regular turbocode.
`
For high Eb/N0, most of the errors for the irregular turbocode were due to low-weight codewords. According to preliminary results, the distribution of error weights appears to indicate that the flattening effect for the particular N = 131,072 irregular turbocode we constructed occurs at a higher BER than it does for the regular turbocode. However, the flattening effect is highly sensitive to the technique used to construct the permuter (we drew it at random) and to the design of the convolutional code (we just further punctured the convolutional code used in the original turbocode). We are currently experimenting with techniques for lowering the level of the flattening effect (e.g., see [13]).
`
`
`
6 Conclusions

We have shown that by making the original, regular turbocode irregular, a coding gain of 0.15 dB is obtained, bringing the irregular turbocode within 0.3 dB of capacity at a BER of 10^-4. This irregular turbocode performs in the same regime as the best known irregular Gallager code.

We emphasize that we obtained these results by tweaking the regular turbocode originally introduced by Berrou et al. We believe we will be able to improve these performance curves significantly, both by exploring the polynomials used in the convolutional code and by adjusting the degree profile and the permuter structure. (One way to speed up the search is to extend the method of "density evolution" [3] to models with state.) In particular, we are investigating ways to select the permuter and the polynomials to eliminate low-weight codewords, thus reducing the flattening effect [13-15]. We are also studying ways of constraining the degree 1 "parity" bits (i.e., increasing their degree) to eliminate low-weight codewords.
`
References

[1] M. G. Luby, M. Mitzenmacher, M. A. Shokrollahi, and D. A. Spielman, "Improved low-density parity-check codes using irregular graphs and belief propagation," in Proceedings of IEEE International Symposium on Information Theory, 1998.

[2] D. J. C. MacKay, S. T. Wilson, and M. C. Davey, "Comparison of constructions of irregular Gallager codes," IEEE Transactions on Communications, vol. 47, October 1999.

[3] T. Richardson, A. Shokrollahi, and R. Urbanke, "Design of provably good low density parity check codes." Submitted to IEEE Transactions on Information Theory, July 1999.

[4] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, pp. 399-431, March 1999.

[5] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Transactions on Communications, vol. 44, pp. 1261-1271, October 1996.

[6] B. J. Frey, Graphical Models for Machine Learning and Digital Communication. Cambridge MA: MIT Press, 1998. See http://www.cs.utoronto.ca/~frey.

[7] B. J. Frey and D. J. C. MacKay, "Trellis-constrained codes," in Proceedings of the 35th Allerton Conference on Communication, Control and Computing 1997, 1998. Available at http://www.cs.utoronto.ca/~frey.

[8] N. Wiberg, H.-A. Loeliger, and R. Kotter, "Codes and iterative decoding on general graphs," European Transactions on Telecommunications, vol. 6, pp. 513-525, September/October 1995.

[9] F. R. Kschischang and B. J. Frey, "Iterative decoding of compound codes by probability propagation in graphical models," IEEE Journal on Selected Areas in Communications, vol. 16, pp. 219-230, February 1998.

[10] R. J. McEliece, D. J. C. MacKay, and J. F. Cheng, "Turbo-decoding as an instance of Pearl's 'belief propagation' algorithm," IEEE Journal on Selected Areas in Communications, vol. 16, February 1998.

[11] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate," IEEE Transactions on Information Theory, vol. 20, pp. 284-287, March 1974.

[12] L. E. Baum and T. Petrie, "Statistical inference for probabilistic functions of finite state Markov chains," Annals of Mathematical Statistics, vol. 37, pp. 1559-1563, 1966.

[13] M. Öberg and P. H. Siegel, "Application of distance spectrum analysis to turbocode performance improvement," in Proceedings of the 35th Allerton Conference on Communication, Control and Computing 1997, 1998.

[14] D. Divsalar and F. Pollara, "Turbo-codes for PCS applications," in Proceedings of the International Conference on Communications, pp. 54-59, 1995.

[15] D. Divsalar and R. J. McEliece, "Effective free distance of turbocodes," Electronics Letters, vol. 32, pp. 445-446, February 1996.