`in Mathematics
`and its Applications
`
`Volume 123
`
`Series Editor
`Willard Miller, Jr.
`
`Springer
`New York
`Berlin
`Heidelberg
`Barcelona
`Hong Kong
London
`Milan
`Paris
`Singapore
`Tokyo
`
`Hughes, Ex. 1058, p. 1
`
`
`
`Brian Marcus
IBM Almaden Research Center, K65-802
`650 Harry Rd.
`San Jose, CA 95120
`USA
`e-mail: marcus@almaden.ibm.com
`
`Joachim Rosenthal
`Department of Mathematics
`University of Notre Dame
Notre Dame, IN 46556-5683
`USA
e-mail: rosen@nd.edu
`
Library of Congress Cataloging-in-Publication Data
Codes, systems, and graphical models / editors, Brian Marcus, Joachim Rosenthal.
p. cm. -- (The IMA volumes in mathematics and its applications ; 123)
Includes bibliographical references and index.
ISBN 0-387-95173-3
1. Coding theory--Congresses. 2. System theory--Congresses. 3. Symbolic dynamics--Congresses. I. Marcus, Brian, 1949- II. Rosenthal, Joachim, 1961- III. IMA Workshop on Codes, Systems, and Graphical Models (1999) IV. Series.
QA268 .C63 2001
003'.54--dc21    00-052274

Printed on acid-free paper.
`
`© 2001 Springer-Verlag New York, Inc.
All rights reserved. This work may not be translated or copied in whole or in part without the
written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York,
NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use
in connection with any form of information storage and retrieval, electronic adaptation, computer
software, or by similar or dissimilar methodology now known or hereafter developed is forbidden.
The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the
former are not especially identified, is not to be taken as a sign that such names, as understood by
the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.
Authorization to photocopy items for internal or personal use, or the internal or personal use of
specific clients, is granted by Springer-Verlag New York, Inc., provided that the appropriate fee is
paid directly to Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923, USA (Telephone:
(508) 750-8400), stating the ISBN number, the title of the book, and the first and last page
numbers of each article copied. The copyright owner's consent does not include copying for general
distribution, promotion, new works, or resale. In these cases, specific written permission must first
be obtained from the publisher.
`
`Production managed by A. Orrantia; manufacturing supervised by Joe Quatela.
`Camera-ready copy prepared by the IMA.
`Printed and bound by Sheridan Books. Inc., Ann Arbor, MI.
`Printed in the United States of America.
9 8 7 6 5 4 3 2 1
ISBN 0-387-95173-3

SPIN 10789096
Springer-Verlag New York Berlin Heidelberg
A member of BertelsmannSpringer Science+Business Media GmbH
`
`
`
`
`CONTENTS
`
`Foreword ............................................................. v
`
`Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . vii
`
`PART 1. OVERVIEWS
`
`An introduction to the analysis of iterative coding
`systems ............................................................... 1
Tom Richardson and Rüdiger Urbanke
`
`Connections between linear systems and convolutional
`codes ................................................................ 39
`Joachim Rosenthal
`
`Multi-dimensional symbolic dynamical systems ....................... 67
`Klaus Schmidt
`
`PART 2. CODES ON GRAPHS
`
`Linear-congruence constructions of low-density
`parity-check codes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
`J. Bond, S. Hui, and H. Schmidt
`
`On the effective weights of pseudocodewords for
`codes defined on graphs with cycles ................................. 101
`G. David Forney, Jr., Ralf Koetter,
`Frank R. Kschischang, and Alex Reznik
`
`Evaluation of Gallager codes for short block
`length and high rate applications. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
`David J.C. MacKay and Matthew C. Davey
`
`Two small Gallager codes ........................................... 131
`David J.C. MacKay and Matthew C. Davey
`
`Mildly non-linear codes ............................................. 135
`Alan Parks
`
`Capacity-achieving sequences . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
`M.A. Shokrollahi
`
`
`
`
`
`
`Hypertrellis: A generalization of trellis and factor
`graph ............................................................... 167
`Wai Ho Mow
`
`PART 3: DECODING TECHNIQUES
`
`BSC thresholds for code ensembles based on
"typical pairs" decoding ............................................ 195
`Srinivas Aji, Hui Jin, Aamod Khandekar,
`David J.C. MacKay, and Robert J. McEliece
`
`Properties of the tailbiting BCJR decoder ....................... · · .. 211
`John B. Anderson and Kemal E. Tepe
`
`Iterative decoding of tail-biting trellises and
`connections with symbolic dynamics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 239
`G. David Forney, Jr., Frank R. Kschischang,
`Brian Marcus, and Selim Tuncel
`
`Algorithms for decoding and interpolation. . . . . . . . . . . . . . . . . . . . . . . . . . . 265
Margreet Kuijper
`
`An algebraic description of iterative decoding schemes ............... 283
`Elke Offer and Emina Soljanin
`
`Recursive construction of Grobner bases for the
`solution of polynomial congruences... . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 299
`Henry O'Keeffe and Patrick Fitzpatrick
`
`On iterative decoding of cycle codes of graphs ....................... 311
Gilles Zémor
`
`PART 4. CONVOLUTIONAL CODES AND CODES OVER RINGS
`
`Convolutional codes over finite Abelian groups:
`Some basic results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 327
`Fabio Fagnani and Sandro Zampieri
`
`Symbolic dynamics and convolutional codes ......................... 347
`Bruce Kitchens
`
Linear codes and their duals over artinian rings . . . . . . . . . . . . . . . . . . . . . 361
Thomas Mittelholzer
`
`
`
`
`
`Unit memory convolutional codes with maximum
`distance ............................................................ 381
`Roxana Smarandache
`
`Basic properties of multidimensional convolutional
`codes ............................................................... 397
`Paul Weiner
`
`PART 5. SYMBOLIC DYNAMICS AND AUTOMATA THEORY
`
`Length distributions and regular sequences .......................... 415
Frédérique Bassino, Marie-Pierre Béal,
`and Dominique Perrin
`
`Handelman's theorem on polynomials with positive
`multiples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 439
`Valerio de Angelis and Selim Tuncel
`
`Topological dynamics of cellular automata ........................... 447
Petr Kůrka
`
`A spanning tree invariant for Markov shifts .......................... 487
`Douglas Lind and Selim Tuncel
`
`List of workshop participants . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 499
`
`
`
`
`-~~------·--------------------~~====
`
`Part 1. Overviews
`
`
`
`
`
`AN INTRODUCTION TO THE ANALYSIS OF ITERATIVE
`CODING SYSTEMS
`
TOM RICHARDSON* AND RÜDIGER URBANKE†
`
Abstract. This paper is a tutorial on recent advances in the analysis of iterative coding systems as exemplified by low-density parity-check codes and turbo codes.
The theory described herein is composed of various pieces. The main components are concentration of system performance over the ensemble of codes and inputs, the existence of threshold phenomena in decoding performance, and the computational and/or analytical determination of thresholds and its implications in system design.
We present and motivate the fundamental ideas and indicate some technical aspects, but proofs and many technical details have been omitted in deference to accessibility of the concepts. Low-density parity-check codes and parallel concatenated codes serve as contrasting examples and as vehicles for the development.
`
Key words. turbo codes, low-density parity-check codes, belief propagation, iterative decoding, stability condition, threshold, output-symmetric channels, Azuma's inequality, support tree.
`
AMS(MOS) subject classifications. 94B05.
`
1. Introduction. This paper is a tutorial on recent advances in the analysis and design of iterative coding systems as exemplified by low-density parity-check (LDPC) codes and turbo codes. We will outline a mathematical framework within which both of the above-mentioned coding systems may be analyzed. Certain aspects of the theory have important practical implications while other aspects are more academic. Provable statements concerning the asymptotic performance of these iterative coding systems can be made. Here, asymptotic refers to the length of the code. For codes of short length the theory we shall present does not yield accurate predictions of the performance. Nevertheless, the ordering of coding systems which is implied by the asymptotic case tends to hold even for fairly short lengths. The bit error probability is the natural measure of performance in the theory of iterative coding systems, but the theory also offers insights and guidance for various other criteria, e.g., the block error probability.
We will here briefly describe an example and formulate some claims that represent what we consider to be the apex of the theory. Let us consider the class of (3,6)-regular LDPC codes (see Section 2.1.1 for a definition) of length n for use over an additive white Gaussian noise channel, i.e., we transmit a codeword consisting of n bits x_i ∈ {±1} and receive n values y_i = x_i + z_i where the z_i are i.i.d. zero-mean Gaussian random
`
*Bell Labs, Lucent Technologies, Murray Hill, NJ 07974, USA; email: tjr@lucent.com.
†Swiss Federal Institute of Technology - Lausanne, LTHC-DSC, CH-1015 Lausanne, Switzerland; email: Rudiger.Urbanke@epfl.ch.
`
`
`
`
`2
`
`TOM RICHARDSON AND RUDIGER URBANKE
`
variables with variance σ^2. We choose the particular code, as determined by its associated particular graph, at random (see Section 2). We transmit one codeword over the channel and decode using the sum-product algorithm for ℓ iterations. The following statements are consequences of the theory:
1. The expected bit error rate approaches some number ε = ε(ℓ) as n tends to infinity, and this number is computable by a deterministic algorithm.
2. For any δ > 0, the probability that the actual fraction of bit errors lies outside the range (ε − δ, ε + δ) converges to zero exponentially fast in n.
3. There exists a maximum channel parameter σ* (in this case σ* ≈ 0.88), the threshold, such that lim_{ℓ→∞} ε(ℓ) = 0 if σ < σ* and lim_{ℓ→∞} ε(ℓ) > 0 if σ > σ*.
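The channel model of this example is easy to make concrete. The sketch below is our own illustration (the function name and all parameters are ours): it involves no code structure or decoder, only the transmission y_i = x_i + z_i, using the all-(+1) sequence, which corresponds to the all-zero codeword of any linear code under the usual 0 → +1 mapping.

```python
import random

def awgn_channel(codeword, sigma, rng=random):
    """Transmit BPSK symbols x_i in {+1, -1} over an AWGN channel:
    y_i = x_i + z_i with z_i i.i.d. zero-mean Gaussian, std dev sigma."""
    return [x + rng.gauss(0.0, sigma) for x in codeword]

random.seed(0)
n, sigma = 10_000, 0.88          # sigma near the (3,6) threshold cited above
x = [1] * n                      # all-zero codeword under the 0 -> +1 mapping
y = awgn_channel(x, sigma)

# Raw hard-decision error rate of the channel alone (around 0.13 here);
# the point of the coded system is to drive this to 0 via decoding.
raw_errors = sum(1 for yi in y if yi < 0)
print(f"raw channel error rate: {raw_errors / n:.3f}")
```

The decoder itself (the sum-product algorithm on the code's graph) is what statements 1-3 are about; this fragment only fixes the probabilistic setup they refer to.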
Each of these statements generalizes to some extent to a wide variety of codes, channels, and decoders. The existence of ε(ℓ) in statement 1 is very general, holding for all cases of interest. In general there may not be any efficient algorithm to compute this number ε(ℓ) but, fortunately, for the case of most interest, namely the sum-product algorithm, an efficient algorithm is known. Statement 2 holds in essentially all cases of interest and depends only on the asymptotics of the structure of the graphs which define the codes. Statement 3 depends on both the decoding algorithm used and the class of channels considered (AWGN). It depends on the fact that the channels are ordered by physical degradation (see Section 7) and that the asymptotic decoding performance respects this ordering.
Although the threshold, as introduced above in 3, is an asymptotic parameter of the codes, it has proven to have tremendous practical significance. It is only a slight exaggeration to assert that, comparing coding systems with the same rates, the system with the higher threshold will perform better for nearly all n. The larger n is, the more valid the assertion is. Even though the assertion is not entirely true for small n, one can still significantly improve designs by looking for system parameters that exhibit larger thresholds.
To design for a large threshold one needs to be able to determine it or at least accurately estimate it. Therefore an important facet of the theory of these coding systems deals with the calculation of the threshold.
In his Ph.D. thesis of 1961, Gallager [13] invented both LDPC codes and iterative decoding. With the notable exceptions of Zyablov and Pinsker [45] and Tanner [40] (see also Sourlas [39]), iterative coding systems were all but forgotten until the introduction of turbo codes by Berrou, Glavieux and Thitimajshima [4] (see also [19]). In the wake of the discovery of turbo codes, LDPC codes were rediscovered by MacKay and Neal [24], completing the cycle which had started some thirty years earlier.
For some simple cases Gallager was able to determine thresholds for the systems he considered. In the work of Luby et al. [22] the authors used threshold calculations for the binary erasure channel (BEC) and the binary
`
`
`
`
`symmetric channel (BSC) under hard decision decoding to optimize the
`parameters of irregular LDPC codes (Gallager had considered only regular
`LDPC codes) with respect to the threshold. By this approach they showed
`that very significant improvements in performance were possible. Indeed,
`they explicitly exhibited a sequence of LDPC code ensembles which, under
`iterative decoding, are capable of achieving the (Shannon) capacity of the
`BEC. To date, the BEC is the only non-trivial channel for which capacity
`achieving iterative coding schemes are explicitly known.
`Another important aspect of the work presented in [22] is the method
`of analysis. The approach differed from that taken by Gallager in certain
`key respects. The approach of Gallager allowed statements to be made
`concerning the asymptotics of certain special constructions of his codes.
The approach of Luby et al. allowed similar and, in some ways, stronger
`statements (such as the ones given in our example above) to be made
`concerning the asymptotics of random ensembles, i.e., randomly chosen
`codes from a given class. This placed the emphasis clearly on the threshold
`and away from particular constructions and opened the door to irregular
`codes for which constructions are generally very difficult to find.
In [30] the approach taken in [22] was generalized to cover a very broad class of channels and decoders. Also in [30], an algorithm was introduced for determining thresholds of LDPC codes for the same broad class of channels and the most powerful and important iterative decoder, the sum-product algorithm, also called belief propagation, which is the name for a generalization of the algorithm as independently developed in the AI community by Pearl [28].¹ In [29] the full power of these results was revealed by producing classes of LDPC codes that perform extremely close to the best possible as determined by the Shannon capacity formula. For the additive white Gaussian noise channel (AWGNC) the best code of rate one-half presented there has a threshold within 0.06 dB of capacity, and simulation results demonstrate an LDPC code of length 10^6 which achieves a bit error probability of 10^{-6}, less than 0.13 dB away from capacity. Recent improvements have demonstrated thresholds within 0.012 dB of capacity [5]. These results strongly indicate that LDPC codes can approach Shannon capacity. As pointed out above, only in the case of the BEC has such a result actually been proved [23]. Resolving the question for more general channels remains one of the most challenging open problems in the field.
In the case of turbo codes the same general theory applies, although there are some additional technical problems to be overcome. In the setting of turbo codes belief propagation corresponds to the use of the BCJR algorithm for the decoding of the component codes together with an exchange of extrinsic information. This is generally known as "turbo decoding".²
`
¹The recognition of the sum-product algorithm as an instance of belief propagation was made by Frey and Kschischang [12] and also by McEliece, Rodemich, and Cheng [26].
²The original incarnation of turbo decoding in [4] was not belief propagation; Robert-
`
`
`
`4
`
`TOM RICHARDSON AND RUDIGER URBANKE
`
Thus one can similarly explore turbo codes and generalizations of turbo codes from the perspective of threshold behavior. It is possible to describe an algorithm for computing thresholds for turbo codes, but such an algorithm appears computationally infeasible except in the simplest cases. Fortunately, it is possible to determine thresholds to any desired degree of accuracy using Monte-Carlo methods. The putative deterministic algorithm used to compute thresholds can be mimicked by random sampling. One can prove that certain ergodic properties of the computation guarantee convergence of the Monte-Carlo approach (assuming true randomness) to the answer that would have been determined by the exact algorithm. Moreover, all of the information used to optimize LDPC codes is available from the Monte-Carlo approach and, therefore, it is possible to optimize thresholds for various extensions of turbo codes. Generalizations of turbo codes appear to exhibit thresholds approaching Shannon capacity. The work described here appears in [31].
In a very precise sense, determining the threshold by the methods indicated above corresponds to modeling how decoding would proceed on an infinitely long code. One determines not merely the threshold, but the statistics of the entire decoding process. Decoding proceeds in discrete time by passing messages along edges in a graph. In the infinite limit one considers the distribution of the messages (pick an edge uniformly at random, what message is it carrying?). These distributions are parameterized in a suitable fashion and the algorithms mentioned above iteratively update the distributions in correspondence with iterations of the decoding process. The sequence of distributions so obtained and the method used to obtain them are referred to as density evolution. Clearly, density evolution is key to understanding the decoding behavior of such systems and the study of density evolution is a fundamental outgrowth of the general theory.
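Density evolution is simplest on the binary erasure channel, where the message "distribution" collapses to a single number: the probability x_ℓ that a message is an erasure at iteration ℓ, updated as x_{ℓ+1} = ε·λ(1 − ρ(1 − x_ℓ)) for erasure probability ε. The sketch below is our own illustration of this standard special case (function names are ours); the degree polynomials λ, ρ are defined in Section 2.1.1.

```python
def density_evolution_bec(eps, lam, rho, iters=200):
    """Density evolution on the BEC: track the erasure probability x of
    a randomly chosen edge message under the sum-product algorithm.
    One iteration maps x to eps * lam(1 - rho(1 - x))."""
    x = eps
    for _ in range(iters):
        x = eps * lam(1.0 - rho(1.0 - x))
    return x

# (3,6)-regular ensemble: lambda(x) = x^2, rho(x) = x^5.
lam = lambda x: x ** 2
rho = lambda x: x ** 5
print(density_evolution_bec(0.40, lam, rho))  # below the ~0.429 threshold: tends to 0
print(density_evolution_bec(0.45, lam, rho))  # above it: stalls at a positive fixed point
```

Running the recursion for erasure probabilities on either side of the (3,6) BEC threshold (approximately 0.429) exhibits exactly the threshold phenomenon of statement 3: the erasure fraction either converges to zero or is bounded away from it.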
Our purpose in this paper is to provide the reader with a vehicle for quickly grasping the key features, assumptions, and results of the general theory. The paper consists of further details and examples intended to be sufficient to equip the reader with a practical overview of the current state of knowledge as embodied in this theory. The paper is loosely organized along the lines of assumptions and conclusions: each section is devoted to stating assumptions required for some part of the theory, usually some examples, to stating what conclusions can be drawn, and to indicating what mathematical techniques are used to draw the conclusions. We have ordered material from most general to most specific. Rather than attempting to present the most general, all-encompassing form of the theory, we have tried to make the main ideas as accessible as possible.
`
2. Graphical representations of codes. The mathematical framework described in this paper applies to various code constructions based on graphs. To keep notation to a minimum, we will restrict our attention in
`
`son [33] refined the algorithm into a form equivalent to belief propagation.
`
`
`
`
`
this paper to the standard parallel concatenated code with two component codes and to LDPC codes. For a more detailed treatment of turbo codes, which also includes serially concatenated codes and generalized turbo codes, we refer the reader to [31]. The theory developed here also extends to much more general graphical representations such as those developed by, e.g., N. Wiberg, H.-A. Loeliger, and R. Kötter [44], Kschischang, Frey, and Loeliger [21], Kschischang and Frey [20], or Forney [11]. LDPC codes as well as
`turbo codes can be represented using quite simple graphs and the decoders
`of interest operate directly and locally on these graphs. (In the case of
`turbo codes one should bear in mind windowed decoding. For a description
`of windowed decoding see Section 5.2. The theory extends to standard, i.e.,
`non-windowed, turbo decoding by taking limits but the analogy between
`LDPC codes and turbo codes is closer in the case of windowed decoding.)
The theory addresses ensembles of codes and their associated ensembles of graphs. For block codes the idea of looking at ensembles of codes rather than individual codes is as old as coding theory itself and originated with Shannon. In the setting of turbo codes this idea was introduced by Benedetto and Montorsi [3] and it was motivated by the desire to bound the maximum likelihood performance of turbo codes.³ The ensembles are characterized by certain fixed parameters which are independent of the length of the code. The graphs associated to LDPC codes, for example, are parameterized by their degree sequences (λ, ρ) (see below for details). The graphs associated to turbo codes are parameterized by the polynomials determining the constituent codes, their interconnection structure, i.e., parallel vs. serial, and puncturing patterns. Given the size of the code n,
`these fixed parameters determine the number of the various node types in
`the graph. The ensemble of codes is defined in terms of the possible edges
`which complete the specification of the graph. In both of the above cases,
`the graphs are bipartite: One set of nodes, the variable nodes, corresponds
`to variables (bits) and the other set, the check nodes, corresponds to the
`linear constraints defining the code.
`In general, the fixed parameters and the length n determine the nodes
`of the graph and a collection of permissible edges. One then considers
`the set of all permissible edge assignments and places on them a suitable
`probability distribution. The theory addresses properties of the ensemble
`as n gets large.
`
2.1. Ensembles of codes and graphs. We shall now present more detailed definitions for code ensembles of LDPC codes and parallel concatenated codes. These examples, although important, are not exhaustive. Note that the local connectivity of the graphs remains bounded independent of the size of the graph. This is the critical property supporting the

³A considerable amount of additional research has been done in the direction of bounding the maximum likelihood performance of a code from information on its spectrum; see, e.g., [10, 42, 34, 35, 9].
`
`
`
`
`6
`
`TOM RICHARDSON AND RUDIGER URBANKE
`
`asymptotic analysis, i.e., the concentration theorem.
2.1.1. LDPC codes. As described above, low-density parity-check codes are well represented by bipartite graphs in which the variable nodes correspond to elements of the codeword and the check nodes correspond to the set of parity-check constraints satisfied by codewords of the code. Regular low-density parity-check codes are those for which all nodes of the same type have the same degree. Thus, a (3,6)-regular low-density parity-check code has a graphical representation in which all variable nodes have degree three and all check nodes have degree six. The bipartite graph determining such a code is shown in Fig. 1.
`
FIG. 1. A (3,6)-regular code of length 10 and rate one-half. There are 10 variable nodes v_1, ..., v_10 and 5 check nodes c_1, ..., c_5. For each check node c_i the sum (over GF(2)) of all adjacent variable nodes is equal to zero.
`
For an irregular low-density parity-check code the degrees of each set of nodes are chosen according to some distribution. Thus, an irregular low-density parity-check code might have a graphical representation in which half the variable nodes have degree three and half have degree four, while half the constraint nodes have degree six and half have degree eight. For a given length and a given degree sequence (finite distribution) we define an ensemble of codes by choosing the edges, i.e., the connections between variable and check nodes, randomly. More precisely, we enumerate the edges
`
`
`
`.
`
`-
`
`'~/
`
`' - - - -
`
`\
`
`--
`
`-
`-~-~
`
`..
`
`i'
`i'
`
`INTRODUCTION TO THE ANALYSIS OF ITERATIVE CODING SYSTEMS
`
`7
`
emanating from the variable nodes in some arbitrary order and proceed in the same way with the edges emanating from the check nodes. Assume that the number of edges is E. Then a code (a particular instance of this ensemble) can be identified with a permutation on E letters. Note that all elements in this ensemble are equiprobable. In practice the edges are never chosen entirely randomly since, e.g., certain potentially unfortunate events, such as double edges and very short loops, in the graph construction can be easily avoided.
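The permutation-based sampling of the ensemble can be sketched in a few lines. The fragment below is our own illustration (the function name is ours): each variable node contributes three edge "sockets," each check node six, and a uniform random permutation pairs the two socket lists. As noted in the text, no attempt is made here to avoid double edges or short loops.

```python
import random

def sample_36_regular_graph(n, rng=random):
    """Sample a bipartite graph from the (3,6)-regular ensemble of
    length n (n variable nodes, n/2 check nodes, E = 3n edges) by
    applying a uniform permutation to the check-node edge sockets."""
    assert n % 2 == 0
    var_sockets = [v for v in range(n) for _ in range(3)]       # 3 per variable node
    chk_sockets = [c for c in range(n // 2) for _ in range(6)]  # 6 per check node
    rng.shuffle(chk_sockets)        # a uniform permutation on E letters
    return list(zip(var_sockets, chk_sockets))

random.seed(1)
edges = sample_36_regular_graph(10)
print(len(edges))                   # E = 3n = 30 edges for the code of Fig. 1's size
```

Since every ordering of the shuffled sockets is equally likely, all graphs in the ensemble are equiprobable, exactly as stated above.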
Hence, for a given length n the ensemble of codes will be determined once the various fractions of variable and check node degrees have been specified. Although this specification could be done in various ways, the following notation introduced in [22] leads to particularly elegant statements of many of the most fundamental results. Let d_l and d_r denote the maximum variable node and check node degrees, respectively, and let λ(x) := Σ_{i=1}^{d_l} λ_i x^{i-1} and ρ(x) := Σ_{i=1}^{d_r} ρ_i x^{i-1} denote polynomials with non-negative coefficients such that λ(1) = ρ(1) = 1. More precisely, let the coefficients λ_i (ρ_i) represent the fraction of edges emanating from variable (check) nodes of degree i. Then, clearly, this degree sequence pair (λ, ρ) completely specifies the distribution of the node degrees. The alert reader may have noticed several curious points about this notation. First, we do not specify the fraction of nodes of various degrees but rather the fraction of edges that emanate from nodes of various degrees. Clearly, it is easy to convert back and forth between this edge perspective and a node perspective. E.g., assume that half the variable nodes have degree three and half have degree four and that there is a total of n nodes. Since every degree-three node has three edges emanating from it, whereas every degree-four node has four edges emanating from it, we see that there are in total 1/2 · 3n edges which emanate from degree-three nodes and in total 1/2 · 4n edges which emanate from degree-four nodes. Therefore λ_3 = (1/2 · 3)/(1/2 · 3 + 1/2 · 4) = 3/7 and λ_4 = (1/2 · 4)/(1/2 · 3 + 1/2 · 4) = 4/7, so that in this case λ(x) = (3/7)x^2 + (4/7)x^3. Second, the fraction of edges which emanate from a degree-i node is the coefficient of x^{i-1} rather than x^i as one might expect at first. The ultimate justification for this choice comes from the fact that, as we will see later, simple quantities like λ'(0) or ρ'(1) take on an operational meaning. A particularly striking example of the elegance of this notation is given by the stability condition which we will discuss in Section 8.3. This condition takes on the form λ'(0)ρ'(1) < g(σ) where g is a function of the channel parameter σ only.
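The node-to-edge-perspective conversion above is a one-line reweighting. The following sketch (the helper name is ours) reproduces the worked example exactly in rational arithmetic.

```python
from fractions import Fraction

def edge_perspective(node_fracs):
    """Convert node-perspective degree fractions {degree: fraction of
    nodes of that degree} to edge-perspective coefficients, i.e., the
    fraction of edges emanating from nodes of each degree, as in the
    (lambda, rho) notation of [22]."""
    total = sum(Fraction(d) * f for d, f in node_fracs.items())   # average degree
    return {d: Fraction(d) * f / total for d, f in node_fracs.items()}

# The example from the text: half the variable nodes of degree three,
# half of degree four.
lam = edge_perspective({3: Fraction(1, 2), 4: Fraction(1, 2)})
print(lam[3], lam[4])   # 3/7 4/7, i.e. lambda(x) = (3/7)x^2 + (4/7)x^3
```

The coefficients sum to one by construction, matching the normalization λ(1) = 1.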
`
2.1.2. Turbo codes. For every integer n we define an ensemble of standard parallel concatenated codes in the following manner. We first fix the two rational functions G_1(D) = f_1(D)/g_1(D) and G_2(D) = f_2(D)/g_2(D) which describe the recursive convolutional encoding functions. Although the general case does not pose any technical difficulties, in order to simplify notation, we will assume that all codewords of a convolutional encoder start in the
`
`
`
`8
`
`TOM RICHARDSON AND RUDIGER URBANKE
`
zero state but are not terminated. For x ∈ {±1}^n let γ_i(x), i = 1, 2, denote the corresponding encoding functions. Then for fixed component codes and a given permutation π on n letters the unpunctured codewords of a standard parallel concatenated code have the form (x, γ_1(x), γ_2(π(x))). Therefore, for fixed component codes and a fixed puncturing pattern there is a one-to-one correspondence between permutations on n letters and codes in the ensemble. We will assume a uniform probability distribution on the set of such permutations. This is the same ensemble considered in [3], but the present focus is on the analysis of the performance of turbo codes under iterative decoding rather than under maximum likelihood decoding.
The graphical representation of the code contains variable nodes, as in the LDPC code case, and check nodes, which, in this case, represent a large number of linear constraints on the bits associated to the variable nodes. A check node represents the linear constraints imposed by an entire constituent code. Equivalently, it represents the trellis which in turn represents the constituent code. Hence, for standard parallel concatenated codes with two component codes, there are only two check nodes.
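The codeword structure (x, γ_1(x), γ_2(π(x))) can be made concrete with a toy sketch. Everything below is our own illustration: the component encoder is a simple accumulator (transfer function 1/(1+D)) standing in for the unspecified G_i(D), bits are written as 0/1 rather than ±1, and all names are ours.

```python
import random

def accumulate(bits):
    """Toy recursive component encoder 1/(1+D) over GF(2), a stand-in
    for the encoding functions gamma_i of the text: the state starts at
    zero and, as in the text, the trellis is left unterminated."""
    out, state = [], 0
    for b in bits:
        state ^= b              # recursive feedback: s_t = s_{t-1} + b_t over GF(2)
        out.append(state)
    return out

def parallel_concatenated(x, perm):
    """Unpunctured codeword (x, gamma1(x), gamma2(pi(x)))."""
    return x + accumulate(x) + accumulate([x[i] for i in perm])

random.seed(2)
n = 8
x = [random.randint(0, 1) for _ in range(n)]
perm = list(range(n))
random.shuffle(perm)            # the interleaver: a uniform permutation pi
cw = parallel_concatenated(x, perm)
print(len(cw))                  # 3n = 24: systematic, rate one-third before puncturing
```

Since the only randomness in the construction is the permutation π, choosing π uniformly at random is exactly the choice of a random code from the ensemble.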
`
3. Decoding: Symmetries of the channel and the decoder. In this paper we will limit ourselves to the case of binary codes and transmission over memoryless channels, since in this setting all fundamental ideas can be represented with a minimum of notational overhead. The generalization of the theory to larger alphabets [7] or channels with memory [14, 16, 15, 18, 25] is quite straightforward and does not require any new fundamental concepts. As usual for this case, we will assume antipodal signalling, i.e., the channel input alphabet is equal to {±1}.
The decoding algorithms of interest operate directly on the graph, described in Section 2.1, that represents the code. The algorithms are localized and distributed: edges carry messages between nodes, and nodes process the incoming messages received via their adjacent edges in order to determine the outgoing messages. The algorithms proceed in discrete steps, each step consisting of a cycle of information passing followed by processing. Generally speaking, computation is memoryless so that, in a given step, processing depends only on the most recent information received from neighboring nodes. It is possible to analyze decoders with memory, but we will not consider such decoders here. Given the above setup we distinguish between two types of processing which may occur according to the dependency of the outgoing information. When the outgoing information along an edge depends only on information which has come in along other edges, then we say that the algorithm is a message-passing algorithm. The sum-product algorithm is the most important example of such an algorithm. The most important example of a non message-passing algorithm is the flipping algorithm. This is a very low complexity hard-decision decoder of LDPC codes in which bits at the variable nodes are 'flipped' in a given round depending on the number of unsatisfied and satisfied constraints they
`
`
`
`
`~-----------·------------------------------------
`
`
FIG. 2. A graphical representation of a standard parallel concatenated code analogous to the bipartite graph of a LDPC code. (The figure shows variable nodes for the systematic bits x, the parity bits γ_1(x), and the parity bits γ_2(π(x)).)
are connected to. Here, we say that a constraint is satisfied if and only if the modulo-two sum of its neighbor bits is 0. We note that the techniques used to analyze these two types of decoders are quite distinct and the nature of the statements which can be made tends to differ significantly. (Nevertheless, certain aspects of the theory outlined here, in particular the concentration results, carry over to many variants of flipping.) The state