`
`1711
`
`Design Methods for Irregular Repeat–Accumulate
`Codes
`
`Aline Roumy, Member, IEEE, Souad Guemghar, Student Member, IEEE, Giuseppe Caire, Senior Member, IEEE,
`and Sergio Verdú, Fellow, IEEE
`
`Abstract—We optimize the random-like ensemble of irregular
`repeat–accumulate (IRA) codes for binary-input symmetric
`channels in the large block-length limit. Our optimization tech-
`nique is based on approximating the evolution of the densities
`(DE) of the messages exchanged by the belief-propagation (BP)
`message-passing decoder by a one-dimensional dynamical system.
`In this way, the code ensemble optimization can be solved by
`linear programming. We propose four such DE approximation
`methods, and compare the performance of the obtained code
`ensembles over the binary-symmetric channel (BSC) and the
`binary-antipodal input additive white Gaussian noise channel
`(BIAWGNC). Our results clearly identify the best among the
`proposed methods and show that the IRA codes obtained by these
`methods are competitive with respect to the best known irregular
`low-density parity-check (LDPC) codes. In view of this and the
`very simple encoding structure of IRA codes, they emerge as
`attractive design choices.
`Index Terms—Belief propagation (BP), channel capacity, den-
`sity evolution, low-density parity-check (LDPC) codes, stability,
`threshold, turbo codes.
`
`I. INTRODUCTION
`
Since the discovery of turbo codes [1], there have been several notable inventions in the field of random-like codes: in particular, the rediscovery of low-density parity-check (LDPC) codes, originally proposed in [2], the introduction of irregular LDPCs [3], [4], and the introduction of repeat-accumulate (RA) codes [5].
`In [3], [4], irregular LDPCs were shown to asymptotically
`achieve the capacity of the binary erasure channel (BEC) under
`iterative message-passing decoding. Although the BEC is the
only channel for which such a result currently exists, irregular LDPC codes have been designed for other binary-input channels (e.g., the binary-symmetric channel (BSC), the binary-antipodal input additive white Gaussian noise channel (BIAWGNC) [6], and the binary-input intersymbol interference (ISI) channel [7]–[9]) and have been shown to achieve very good performance.
First attempts to optimize irregular LDPC codes with the density evolution (DE) technique ([10] for the BEC, [11] for other channels) compute the expected performance of a random-like code ensemble in the limit of infinite code block length.

Manuscript received October 22, 2002; revised April 1, 2004.
A. Roumy is with IRISA-INRIA, 35042 Rennes, France (e-mail: aline.roumy@irisa.fr).
S. Guemghar and G. Caire are with the Eurecom Institute, 06904 Sophia-Antipolis, France (e-mail: Souad.Guemghar@eurecom.fr; Giuseppe.Caire@eurecom.fr).
S. Verdú is with the Department of Electrical Engineering, Princeton University, Princeton, NJ 08544 USA (e-mail: verdu@princeton.edu).
Communicated by R. Urbanke, Associate Editor for Coding Techniques.
Digital Object Identifier 10.1109/TIT.2004.831778

In order
`to reduce the computational burden of ensemble optimization
`based on the DE, faster techniques have been proposed, based
`on the approximation of the DE by a one-dimensional dynam-
`ical system (recursion). These techniques are exact only for the
`BEC (for which DE is one-dimensional). The most popular tech-
`niques proposed so far are based on the Gaussian approximation
`(GA) of messages exchanged in the message-passing decoder.
The GA, combined with the symmetry condition of message densities, implies that the Gaussian density of the messages is expressed by a single parameter. Techniques differ in the parameter to be
`tracked and in the mapping functions defining the dynamical
`system [12]–[18].
The introduction of irregular LDPCs motivated other schemes such as irregular RA (IRA) codes [19], for which similar results exist (achievability of the BEC capacity), and irregular turbo codes [20]. IRA codes are, in fact, special subclasses of both irregular LDPCs and irregular turbo codes. In IRA codes, a fraction $f_i$ of information bits is repeated $i$ times, for $i = 2, \ldots, d$. The distribution $(f_2, \ldots, f_d)$ is referred to as the repetition profile, and it is kept as a degree of freedom in the optimization of the IRA ensemble. After the repetition stage, the resulting sequence is interleaved and input to a recursive finite-state machine (called accumulator) which outputs one bit for every $a$ input symbols, where $a$ is referred to as the grouping factor and is also a design parameter.
`IRA codes are an appealing choice because the encoder is
`extremely simple, their performance is quite competitive with
`that of turbo codes and LDPCs, and they can be decoded with a
`very-low-complexity iterative decoding scheme.
The only other work that has proposed a method to design IRA codes is [19], [21], where the design focuses on the choice of the grouping factor and the repetition profile. The recursive finite-state machine is the simplest one which gives full freedom to choose any rational number between $0$ and $1$ as the coding rate. We will also restrict our study to IRAs that use the same simple recursion of [19], although it might be expected that better codes can be obtained by including the finite-state machine as a degree of freedom in the overall ensemble optimization. The method used in [19] to choose
`the repetition profile was based on the infinite-block-length
`GA of message-passing decoding proposed in [14]. In this
`work, we propose and compare four low-complexity ensemble
`
`0018-9448/04$20.00 © 2004 IEEE
`
`
`IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 8, AUGUST 2004
`
Fig. 1. IRA encoder.
`
optimization methods. Our approach to design IRAs is based on several tools that have appeared recently: the EXtrinsic mutual Information Transfer (EXIT) function and its analytical properties [12], [22], [23], the reciprocal channel (duality) approximation [22], [24], and the nonstrict convexity of mutual information.
`The rest of the paper is organized as follows. Section II
`presents the systematic IRA encoder and its related decoder: the
`belief-propagation (BP) message-passing algorithm. Existing
`results on the analysis of the decoder (i.e., DE technique) are
`summarized and applied to the IRA code ensemble. This leads
`to a two-dimensional dynamical system whose state is defined
`on the space of symmetric distributions, for which we derive a
`local stability condition. In Section III, we propose a general
`framework in order to approximate the DE (defined on the
`space of distributions) by a standard dynamical system defined
`on the reals. We propose four low-complexity ensemble opti-
`mization methods as special cases of our general framework.
`These methods differ by the way the message densities and the
`BP transformations are approximated:
`
`1) GA, with reciprocal channel (duality) approximation;
`2) BEC approximation, with reciprocal channel approxima-
`tion;
`3) GA, with EXIT function of the inner decoder;
`4) BEC approximation, with EXIT function of the inner de-
`coder.
`
`All four methods lead to optimization problems solvable by
`linear programming. In Section IV, we show that the first pro-
`posed method yields a one-dimensional DE approximation with
`the same stability condition as the exact DE, whereas the exact
`stability condition must be added to the ensemble optimization
`as an explicit additional constraint for the second method. Then,
`we show that, in general, the GA methods are optimistic, in the
`sense that there is no guarantee that the optimized rate is below
`capacity. On the contrary, we show that for the BEC approxima-
`tion methods rates below capacity are guaranteed. In Section V,
`we compare our code optimization methods by evaluating their
`iterative decoding threshold (evaluated by the exact DE) over
`the BIAWGNC and the BSC.
`
`II. ENCODING, DECODING, AND DENSITY EVOLUTION
`
Fig. 1 shows the block diagram of a systematic IRA encoder. A block of information bits $(b_1, \ldots, b_k)$ is encoded by an (irregular) repetition code of rate $k/N$. Each bit $b_j$ is repeated $r_j$ times, where $(r_1, \ldots, r_k)$ is a sequence of integers such that $2 \le r_j \le d$ ($d$ is the maximum repetition factor) and $\sum_{j=1}^{k} r_j = N$. The block of repeated symbols is interleaved, and the resulting block $(x_{1,1}, \ldots, x_{1,N})$ is encoded by an accumulator, defined by the recursion

$$x_{2,j} = x_{2,j-1} + \sum_{i=1}^{a} x_{1,(j-1)a+i} \pmod{2} \qquad (1)$$

with initial condition $x_{2,0} = 0$, where $(x_{2,1}, \ldots, x_{2,N/a})$ is the accumulator output block corresponding to the input $(x_{1,1}, \ldots, x_{1,N})$, $a$ is a given integer (referred to as grouping factor), and we assume that $N/a$ is an integer. Finally, the codeword corresponding to the information block is given by $(b_1, \ldots, b_k, x_{2,1}, \ldots, x_{2,N/a})$.

Fig. 2. Tanner graph of an IRA code.
The transmission channel is memoryless, binary-input, and symmetric-output, i.e., its transition probability $p(y \mid x)$ satisfies

$$p(y \mid x = +1) = p(-y \mid x = -1) \qquad (2)$$

where $-y$ indicates a reflection of the output alphabet.1
`IRA codes are best represented by their Tanner graph [25]
`(see Fig. 2). In general, the Tanner graph of a linear code is a
`bipartite graph whose node set is partitioned into two subsets:
the bitnodes, corresponding to the coded symbols, and the checknodes, corresponding to the parity-check equations that codewords must satisfy. The graph has an edge between bitnode $v$ and checknode $c$ if the symbol corresponding to $v$ participates in the parity-check equation corresponding to $c$.
`Since the IRA encoder is systematic (see Fig. 1), it is useful to
further classify the bitnodes into two subclasses: the information bitnodes, corresponding to information bits, and the parity bitnodes, corresponding to the symbols output by the accumulator.
Those information bits that are repeated $i$ times are represented by bitnodes with degree $i$, as they participate in $i$ parity-check equations. Each checknode is connected to $a$ information bitnodes and to two parity bitnodes and represents one of the equations (1) (for a particular $j$). The connections between checknodes and information bitnodes are determined by the interleaver
`and are highly randomized. On the contrary, the connections be-
`tween checknodes and parity bitnodes are arranged in a regular
`
1If the output alphabet is the real line, then $-y$ coincides with ordinary reflection with respect to the origin. Generalizations to other alphabets are immediate.
`
`
`
`
zig-zag pattern since, according to (1), every pair of consecutive parity bits is involved in one parity-check equation,
A random IRA code ensemble with parameters $(\lambda_2, \ldots, \lambda_d; a)$ and (information) block length $k$ is formed by all graphs of the form of Fig. 2 with $k$ information bitnodes, grouping factor $a$, and a fraction $\lambda_i$ of edges connected to information bitnodes of degree $i$, for $i = 2, \ldots, d$. The sequence of nonnegative coefficients $(\lambda_2, \ldots, \lambda_d)$ such that $\sum_{i=2}^{d} \lambda_i = 1$ is referred to as the degree distribution of the ensemble. The probability distribution over the code ensemble is induced by the uniform probability over all interleavers (permutations) of $N$ elements, where $N$ is the number of edges.

The information bitnodes average degree is given by $\bar{d} = 1 / \sum_i \lambda_i / i$. The number of edges connecting information bitnodes to checknodes is $N = k\bar{d}$. The number of parity bitnodes is $m = k\bar{d}/a$. Finally, the code rate is given by

$$r = \frac{k}{k+m} = \frac{a \sum_i \lambda_i / i}{1 + a \sum_i \lambda_i / i}. \qquad (3)$$

Under the constraints $\lambda_i \ge 0$ and $\sum_i \lambda_i = 1$, we get $\bar{d} \ge 2$. Therefore, the highest rate with parameter $a$ set to $1$ is $1/3$. This motivates the use of $a > 1$ in order to get higher rates.
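Equation (3) is straightforward to evaluate; the helper below (our own naming) returns the ensemble rate from the edge-degree fractions and the grouping factor:

```python
def ira_rate(lam, a):
    """Code rate (3): r = a*S / (1 + a*S), with S = sum_i lambda_i / i.

    lam : dict mapping degree i -> edge fraction lambda_i (fractions sum to 1)
    a   : grouping factor
    """
    s = sum(frac / i for i, frac in lam.items())
    return a * s / (1.0 + a * s)
```

For $a = 1$ and the all-degree-2 profile this gives the maximum rate $1/3$ mentioned above; increasing $a$ raises the rate.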
`
`A. Belief Propagation Decoding of IRA Codes
`In this work, we consider BP message-passing decoding
`[26]–[28]. In message-passing decoding algorithms, the graph
`nodes receive messages from their neighbors, compute new
`messages, and forward them to their neighbors. The algorithm
`is defined by the code Tanner graph, by the set on which
`messages take on values, by the node computation rules, and
`by the node activation scheduling.
In BP decoding, messages take on values in the extended real line $[-\infty, +\infty]$. The BP decoder is initialized by setting all messages output by the checknodes equal to zero. Each bitnode is associated with the channel observation message (log-likelihood ratio)

$$u = \log \frac{p(y \mid x = +1)}{p(y \mid x = -1)} \qquad (4)$$

where $y$ is the channel output corresponding to the transmission of the code symbol $x$.
The BP node computation rules are given as follows. For a given node, we identify an adjacent edge as outgoing and all other adjacent edges as incoming. Consider a bitnode of degree $i$ and let $m_1, \ldots, m_{i-1}$ denote the messages received from the $i-1$ incoming edges and $u$ the associated channel observation message. The message $m$ passed along the outgoing edge is given by

$$m = u + \sum_{j=1}^{i-1} m_j. \qquad (5)$$

Consider a checknode of degree $i$ and let $m_1, \ldots, m_{i-1}$ denote the messages received from the $i-1$ incoming edges. The message $m$ passed along the outgoing edge is given by

$$m = \gamma^{-1}\!\left( \sum_{j=1}^{i-1} \gamma(m_j) \right) \qquad (6)$$

where the mapping $\gamma : [-\infty, +\infty] \to \{-1, +1\} \times [0, +\infty]$ is defined by [11]

$$\gamma(x) = \left( \operatorname{sign}(x),\; -\log \tanh \frac{|x|}{2} \right) \qquad (7)$$

and where the sign function is defined as [11]

$$\operatorname{sign}(x) = \begin{cases} +1 & \text{if } x > 0 \\ +1 \text{ with probability } 1/2 & \text{if } x = 0 \\ -1 \text{ with probability } 1/2 & \text{if } x = 0 \\ -1 & \text{if } x < 0. \end{cases}$$
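The bitnode rule (5) is a plain sum of LLRs, and the checknode rule (6)-(7) is equivalent to the familiar tanh rule $\tanh(m/2) = \prod_j \tanh(m_j/2)$. A minimal sketch (function names are ours):

```python
import math

def bitnode_out(u, incoming):
    """Bitnode rule (5): channel LLR plus all incoming check-to-bit messages."""
    return u + sum(incoming)

def checknode_out(incoming):
    """Checknode rule (6)-(7), written in the equivalent tanh form."""
    t = 1.0
    for m in incoming:
        t *= math.tanh(m / 2.0)
    t = max(min(t, 1.0 - 1e-15), -1.0 + 1e-15)  # keep atanh finite
    return 2.0 * math.atanh(t)
```

Note that the output reliability of a checknode never exceeds that of its least reliable input.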
`Since the code Tanner graph has cycles, different schedulings
`yield in general nonequivalent BP algorithms. In this work, we
`shall consider the following “classical” schedulings.
`
`• LDPC-like scheduling [19]. In this case, all bitnodes and
`all checknodes are activated alternately and in parallel.
`Every time a node is activated, it sends outgoing messages
`to all its neighbors. A decoding iteration (or “round” [31])
consists of the activation of all bitnodes and all checknodes.
`
`• Turbo-like scheduling. Following [29], a good de-
`coding scheduling consists of isolating large trellis-like
`subgraphs (or, more generally, normal realizations in
`Forney’s terminology) and applying locally the forward–
`backward Bahl–Cocke–Jelinek–Raviv (BCJR) algorithm
`[30] (that implements efficiently the BP algorithm on
`normal cycle-free graphs), as done for turbo codes [1].
A decoding iteration consists of activating all the information bitnodes in parallel (according to (5)) and of
`running the BCJR algorithm over the entire accumulator
`trellis. In particular, the checknodes do not send messages
`to the information bitnodes until the BCJR iteration is
`completed.
`
`Notice that for both of the above schedulings one decoder itera-
`tion corresponds to the activation of all information bitnodes in
`the graph exactly once.
`
`B. Density Evolution and Stability
`
`The bit-error rate (BER) performance of BP decoding aver-
aged over the IRA code ensemble and over the noise observations can be analyzed, for any finite number of iterations and in the limit of $k \to \infty$, by the DE technique [11]. The usefulness of the DE method stems from the Concentration Theorem [31], [10], which guarantees that, with high probability, the BER after $\ell$ iterations of the BP decoder applied to a randomly selected code in the ensemble and to a randomly generated channel noise sequence is close to the BER computed by DE, for sufficiently large block length.
Next, we formulate the DE for IRA codes and we study the stability condition of the fixed point corresponding to zero BER. As in [11, Sec. III-B], we introduce the space of distributions whose elements are nonnegative, nondecreasing, right-continuous functions with range in $[0,1]$ and domain the extended real line.
`
`
`
`1714
`
`IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 8, AUGUST 2004
`
It can be shown that, for a binary-input symmetric-output channel, the distributions of messages at any iteration of the DE satisfy the symmetry condition

$$\int f(-x)\, dF(x) = \int e^{-x} f(x)\, dF(x) \qquad (8)$$

for any function $f$ for which the integrals exist. If $F$ has density $p_F$, (8) is equivalent to

$$p_F(-x) = e^{-x}\, p_F(x). \qquad (9)$$

With some abuse of terminology, distributions satisfying (8) are said to be symmetric. The space of symmetric distributions will be denoted by $\mathcal{F}_{\rm sym}$.

The BER operator $\mathfrak{P} : \mathcal{F}_{\rm sym} \to [0, 1/2]$ is defined by

$$\mathfrak{P}(F) \triangleq \frac{1}{2}\left( F(0) + F^{-}(0) \right)$$

where $F^{-}$ is the left-continuous version of $F$. We introduce the "delta at zero" distribution, denoted by $\Delta_0$, for which $\mathfrak{P}(\Delta_0) = 1/2$, and the "delta at infinity" distribution, denoted by $\Delta_\infty$, for which $\mathfrak{P}(\Delta_\infty) = 0$.

The symmetry property (8) implies that a sequence of symmetric distributions $\{F_n\}$ converges to $\Delta_\infty$ if and only if $\lim_{n \to \infty} \mathfrak{P}(F_n) = 0$, where convergence of distributions is in the sense given in [11, Sec. III-F].

The DE for IRA code ensembles is given by the following proposition, whose derivation is omitted as it is completely analogous to the derivation of DE in [11] for irregular LDPC codes.

Proposition 1: Let $P_\ell$ (respectively, $\widetilde{P}_\ell$) denote the average distribution of messages passed from an information bitnode (respectively, parity bitnode) to a checknode, at iteration $\ell$. Let $Q_\ell$ (respectively, $\widetilde{Q}_\ell$) denote the average distribution of messages passed from a checknode to an information bitnode (respectively, parity bitnode), at iteration $\ell$. Under the cycle-free condition, these distributions satisfy the following recursion:

$$P_\ell = F \circledast \sum_{i=2}^{d} \lambda_i\, Q_{\ell-1}^{\circledast (i-1)} \qquad (10)$$

$$\widetilde{P}_\ell = F \circledast \widetilde{Q}_{\ell-1} \qquad (11)$$

$$Q_\ell = \Gamma^{-1}\!\left( \Gamma(P_\ell)^{\circledast (a-1)} \circledast \Gamma(\widetilde{P}_\ell)^{\circledast 2} \right) \qquad (12)$$

$$\widetilde{Q}_\ell = \Gamma^{-1}\!\left( \Gamma(P_\ell)^{\circledast a} \circledast \Gamma(\widetilde{P}_\ell) \right) \qquad (13)$$

for $\ell = 1, 2, \ldots$, with initial condition $Q_0 = \widetilde{Q}_0 = \Delta_0$, where $F$ denotes the distribution of the channel observation messages (4), $\circledast$ denotes convolution of distributions, defined by

$$(F_1 \circledast F_2)(z) = \int F_1(z - x)\, dF_2(x) \qquad (14)$$

$F^{\circledast i}$ denotes $i$-fold convolution, $\Gamma(F)$ is the distribution of $\gamma(z)$ (defined on $\{-1, +1\} \times [0, +\infty]$) when $z$ is distributed according to $F$, and $\Gamma^{-1}$ denotes the inverse mapping of $\Gamma$, i.e., $\Gamma^{-1}(G)$ is the distribution of $\gamma^{-1}(z)$ when $z$ is distributed according to $G$.

The DE recursion (10)–(13) is a two-dimensional nonlinear dynamical system with state space $\mathcal{F}_{\rm sym} \times \mathcal{F}_{\rm sym}$ (i.e., the state trajectories of (10)–(13) are sequences of pairs of symmetric distributions $(P_\ell, \widetilde{P}_\ell)$). For this system, the BER at iteration $\ell$ is given by $\mathfrak{P}(P_\ell)$.

It is easy to see that $(\Delta_\infty, \Delta_\infty)$ is a fixed point of (10)–(13). The local stability of this fixed point is given by the following result.

Theorem 1: The fixed point $(\Delta_\infty, \Delta_\infty)$ for the DE is locally stable if and only if

$$\lambda_2 < \frac{1 - \eta}{\eta \left[ (a - 1) + (a + 1)\,\eta \right]} \qquad (15)$$

where $\eta \triangleq \int e^{-x/2}\, dF(x)$.

Proof: See Appendix I.

Here necessity and sufficiency are used in the sense of [11]. By following steps analogous to [11], it can be shown that if (15) holds, then there exists $\xi > 0$ such that if for some $\ell_0$

$$\mathfrak{P}(P_{\ell_0}) < \xi$$

then $\mathfrak{P}(P_\ell)$ converges to zero as $\ell$ tends to infinity. On the contrary, if $\lambda_2$ is strictly larger than the right-hand side (RHS) of (15), then there exists $\xi > 0$ such that $\mathfrak{P}(P_\ell) > \xi$ for all $\ell$.

III. IRA ENSEMBLE OPTIMIZATION

In this section, we tackle the problem of optimizing the IRA code ensemble parameters for a broad class of binary-input symmetric-output channels.

A property of the DE given in Proposition 1 is that $\mathfrak{P}(P_\ell)$ is a nonincreasing nonnegative sequence; hence, the limit $\lim_{\ell \to \infty} \mathfrak{P}(P_\ell)$ exists. Consider a family of channels $\{C_\theta : \theta \ge 0\}$, where the channel parameter $\theta$ is, for example, an indicator of the noise level in the channel. Following [31], we say that the family is monotone with respect to the IRA code ensemble under BP decoding if, for any finite $\ell$,

$$\mathfrak{P}(P_\ell(\theta_1)) \le \mathfrak{P}(P_\ell(\theta_2)), \qquad \forall\, \theta_1 \le \theta_2$$

where $(P_\ell(\theta), \widetilde{P}_\ell(\theta))$ is the trajectory of DE applied to the channel $C_\theta$. Let $\mathrm{BER}(\theta) \triangleq \lim_{\ell \to \infty} \mathfrak{P}(P_\ell(\theta))$. The threshold $\theta^*$ of the ensemble over the monotone family is the worst case channel parameter for which the limiting BER is zero, i.e.,

$$\theta^* \triangleq \sup\left\{ \theta \ge 0 : \mathrm{BER}(\theta) = 0 \right\}. \qquad (16)$$

Thus, for every value of $\theta$, the optimal IRA ensemble parameters maximize the rate (3) subject to vanishing BER, i.e., are the solution of the optimization problem

maximize $\sum_i \lambda_i / i$
subject to $\lambda_i \ge 0$, $\sum_i \lambda_i = 1$, and to $\mathrm{BER}(\theta) = 0$. $\qquad$ (17)
`
`
`
`
the solution of which can be found by numerical techniques, as in [11]. However, the constraint $\mathrm{BER}(\theta) = 0$ is given directly in terms of the fixed point of the DE recursion, which makes the optimization very computationally intensive.
A variety of methods have been developed in order to simplify the code ensemble optimization [19], [24], [14], [32]. They consist of replacing the DE with a dynamical system defined over the reals (rather than over the space of distributions), whose trajectories and fixed points are related in some way to the trajectories and the fixed points of the DE. Essentially, all proposed approximated DE methods can be formalized as follows. Let $\phi : \mathcal{F}_{\rm sym} \to [0,1]$ and $\psi : [0,1] \to \mathcal{F}_{\rm sym}$ be mappings of the set of symmetric distributions to the real numbers and vice versa. Then, a dynamical system with state space $[0,1] \times [0,1]$ can be derived from (10)–(13) as

$$x_\ell = \phi\!\left( F \circledast \sum_{i=2}^{d} \lambda_i\, \psi(y_{\ell-1})^{\circledast (i-1)} \right) \qquad (18)$$

$$\tilde{x}_\ell = \phi\!\left( F \circledast \psi(\tilde{y}_{\ell-1}) \right) \qquad (19)$$

$$y_\ell = \phi\!\left( \Gamma^{-1}\!\left( \Gamma(\psi(x_\ell))^{\circledast (a-1)} \circledast \Gamma(\psi(\tilde{x}_\ell))^{\circledast 2} \right) \right) \qquad (20)$$

$$\tilde{y}_\ell = \phi\!\left( \Gamma^{-1}\!\left( \Gamma(\psi(x_\ell))^{\circledast a} \circledast \Gamma(\psi(\tilde{x}_\ell)) \right) \right) \qquad (21)$$

for $\ell = 1, 2, \ldots$, with initial condition $y_0 = \tilde{y}_0 = 0$, where $x_\ell$, $\tilde{x}_\ell$, $y_\ell$, and $\tilde{y}_\ell$ are the system state variables. By eliminating the intermediate variables $y_\ell$ and $\tilde{y}_\ell$, we can put (18)–(21) in the form

$$x_\ell = f(x_{\ell-1}, \tilde{x}_{\ell-1}), \qquad \tilde{x}_\ell = g(x_{\ell-1}, \tilde{x}_{\ell-1}). \qquad (22)$$

For all DE approximations considered in this work, the mappings $\phi$ and $\psi$ and the functions $f$ and $g$ satisfy the following desirable properties.

1) $\phi(\Delta_0) = 0$ and $\phi(\Delta_\infty) = 1$.
2) $\psi(0) = \Delta_0$ and $\psi(1) = \Delta_\infty$.
3) $f$ and $g$ are defined on $[0,1] \times [0,1]$ and have range in $[0,1]$.
4) $f$ and $g$ are nondecreasing in both arguments.
5) $f(1,1) = 1$ and $g(1,1) = 1$, i.e., $(1,1)$ is a fixed point of the recursion (22). Moreover, this fixed point corresponds to the zero-BER fixed point $(\Delta_\infty, \Delta_\infty)$ of the exact DE.
6) If $x < 1$, the function $g(x, \tilde{x}) - \tilde{x}$ is strictly decreasing in $\tilde{x}$. Therefore, the equation

$$\tilde{x} = g(x, \tilde{x})$$

has a unique solution in $[0,1]$ for all $x \in [0,1)$. This solution will be denoted by $\tilde{x}(x)$.

It follows that all fixed points of (22) must satisfy

$$x = f(x, \tilde{x}(x)) \qquad (23)$$

and that, in order to avoid fixed points other than $x = 1$, (23) must not have solutions in the interval $[0,1)$, i.e., it must satisfy

$$f(x, \tilde{x}(x)) > x, \qquad \forall\, x \in [0,1). \qquad (24)$$
`
`Fig. 3. EXIT model.
`
Notice that, in general, (24) is neither a necessary nor a sufficient condition for the uniqueness of the zero-BER fixed point of the exact DE. However, if the quality of the DE approximation is good, this provides a heuristic for the code ensemble optimization. By replacing the constraint $\mathrm{BER}(\theta) = 0$ by (24) in (17), we obtain the approximated IRA ensemble optimization method as

maximize $\sum_i \lambda_i / i$
subject to $\lambda_i \ge 0$, $\sum_i \lambda_i = 1$, and to (24). $\qquad$ (25)
`
Approximations of the DE recursion differ essentially in the choice of the mappings $\phi$ and $\psi$, and in the way the intermediate distributions $Q_\ell$ and $\widetilde{Q}_\ell$ and the channel message distribution $F$ are approximated. Next, we illustrate the approximation methods considered in this work.
`
`A. EXIT Functions
`Several recent works show that DE can be accurately de-
`scribed in terms of the evolution of the mutual information be-
`tween the variables associated with the bitnodes and their mes-
`sages (see [12], [33]–[35], [13], [23], [18]).
`The key idea in order to approximate DE by mutual infor-
`mation evolution is to describe each computation node in BP
`decoding by a mutual information transfer function. For histor-
`ical reasons, this function is usually referred to as the EXtrinsic
`mutual Information Transfer (EXIT) function.
EXIT functions are generally defined as follows. Consider the model of Fig. 3, where the box represents a generalized computation node of the BP algorithm (i.e., it might contain a subgraph formed by several nodes and edges, and might depend on some other random variables such as channel observations, not shown in Fig. 3). Let $m_1, \ldots, m_q$ denote the input messages, assumed independent and identically distributed (i.i.d.) $\sim F_{\rm in}$, and let $m_{\rm out}$ denote the output message. Let $X_j$ denote the binary code symbol associated with message $m_j$, for $j = 1, \ldots, q$, and let $X$ denote the binary code symbol associated with message $m_{\rm out}$. Since all symbols are binary, we can think of $m_j$ and $m_{\rm out}$ as the outputs of binary-input symmetric-output channels with inputs $X_j$ and $X$ and transition probabilities

$$p(m_j \mid X_j) \qquad (26)$$

$$p(m_{\rm out} \mid X) \qquad (27)$$

respectively.

The channel (26) models the a priori information that the node receives about the symbols $X_j$, and the channel (27) models the extrinsic information [1] that the node generates about the symbol $X$.
`
`
`
`1716
`
`IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 8, AUGUST 2004
`
We define the binary-input symmetric-output capacity functional $C : \mathcal{F}_{\rm sym} \to [0,1]$, such that

$$C(F) \triangleq 1 - \int \log_2\!\left( 1 + e^{-x} \right) dF(x). \qquad (28)$$

Namely, $C$ maps any symmetric distribution $F$ into the capacity2 of the binary-input symmetric-output channel whose LLR distribution is $F$. Then, we let

$$I_A = C(F_{\rm in}) \quad \text{and} \quad I_E = C(F_{\rm out})$$

denote the capacities of the channels (26) and (27), respectively. The EXIT function of the node of Fig. 3 is the set of pairs $(I_A, I_E)$, for all $I_A \in [0,1]$ and for some (arbitrary) choice of the input distribution $F_{\rm in}$ such that $C(F_{\rm in}) = I_A$. Notice that the EXIT function of a node is not uniquely defined, since it depends on the choice of $F_{\rm in}$. In general, different choices yield different transfer functions.

2Recall that the capacity of a binary-input symmetric-output memoryless channel is achieved by uniform i.i.d. inputs.

The approximations of the DE considered in this work are based on EXIT functions, and track the evolution of the mutual information between the messages output by the bitnodes and the associated code symbols.

Remark. Two properties of binary-input symmetric-output channels: Before concluding this section, we take a brief detour in order to point out two properties of binary-input symmetric-output channels. Consider a binary-input symmetric-output channel with transition probability $p(y \mid x)$, where the distribution of the channel LLR is not necessarily symmetric (in the sense of (8)). Its capacity can be written as

$$C = \int p(y \mid +1) \log_2 \frac{2\, p(y \mid +1)}{p(y \mid +1) + p(y \mid -1)}\, dy. \qquad (29)$$

By concatenating the LLR transformation $y \mapsto u = \log \frac{p(y \mid +1)}{p(y \mid -1)}$ to the channel output, we obtain a new binary-input symmetric-output channel whose LLR distribution $F$ is symmetric. Moreover, since $u$ is a sufficient statistic for $x$, the original channel has the same capacity as the new channel, given by $C(F)$. Therefore, by defining appropriately the channel output, the capacity of any binary-input symmetric-output channel can always be put in the form (28).

Another interesting property is the following.

Proposition 2: The mutual information functional is not strictly convex on the set of binary-input symmetric-output channels with transition probability $p(y \mid x)$.

Proof: See Appendix II.

B. Method 1

The first approximation of the DE considered in this work assumes that the distributions at any iteration are Gaussian. A Gaussian distribution satisfies the symmetry condition (9) if and only if its variance is equal to twice the absolute value of its mean. We introduce the shorthand notation $\mathcal{N}_\mu$ to denote the symmetric Gaussian distribution (or density, depending on the context) with mean $\mu \ge 0$, i.e., $\mathcal{N}_\mu = N(\mu, 2\mu)$.

We let the mapping $\phi$ be equal to the capacity functional $C$ defined in (28), and for all $t \in [0,1]$ we define the mapping $\psi$ to be

$$\psi(t) = \mathcal{N}_{J^{-1}(t)} \qquad (30)$$

where

$$J(\mu) \triangleq C(\mathcal{N}_\mu). \qquad (31)$$

Namely, $\psi$ maps $t$ into the symmetric Gaussian distribution $\mathcal{N}_\mu$ such that the BIAWGNC with LLR distribution $\mathcal{N}_\mu$ has capacity $t$.

Fig. 4. Reciprocal channel approximation.

The first key approximation in Method 1 is

$$Q_\ell \approx \mathcal{N}_{\mu_\ell}, \qquad \widetilde{Q}_\ell \approx \mathcal{N}_{\tilde{\mu}_\ell} \qquad (32)$$

for some $\mu_\ell, \tilde{\mu}_\ell \ge 0$. In order to compute $\mu_\ell$ and $\tilde{\mu}_\ell$, we make use of the reciprocal channel approximation [24], also called the approximate duality property of EXIT functions in [22]. This states that the EXIT function of a checknode is accurately approximated by the EXIT function of a bitnode with the same degree after the change of variables $I_A \to 1 - I_A$ and $I_E \to 1 - I_E$ (see Fig. 4). Using approximate duality, we replace the checknode by a bitnode and change the capacities $x_\ell, \tilde{x}_\ell$ into $1 - x_\ell, 1 - \tilde{x}_\ell$.

Since for a bitnode the output message is the sum of the input messages (see (5)), and since the input distributions are Gaussian, also the output distribution is Gaussian, with mean

$$\mu_\ell = (a - 1)\, J^{-1}(1 - x_\ell) + 2\, J^{-1}(1 - \tilde{x}_\ell)$$

for messages sent to information bitnodes and

$$\tilde{\mu}_\ell = a\, J^{-1}(1 - x_\ell) + J^{-1}(1 - \tilde{x}_\ell)$$

for messages sent to parity bitnodes. Finally, $y_\ell$ and $\tilde{y}_\ell$ are given by

$$y_\ell = 1 - J(\mu_\ell), \qquad \tilde{y}_\ell = 1 - J(\tilde{\mu}_\ell). \qquad (33)$$

The second key approximation in Method 1 is to replace $F$ with a discrete (symmetric) distribution $\hat{F}$ such that

$$\hat{F} = \sum_j f_j\, \delta_{u_j}, \qquad \sum_j f_j = 1, \qquad C(\hat{F}) \approx C(F) \qquad (34)$$

where $\{u_j\}$ is a finite set of mass points. With this assumption, from the definition (28) of the operator $C$, and since [11]: a) the convolution of symmetric distributions is symmetric, and b) the convex combination of symmetric distributions is symmetric, it is immediate to write (18) and (19) as

$$x_\ell = \sum_j f_j \sum_{i=2}^{d} \lambda_i\, C\!\left( \delta_{u_j} \circledast \mathcal{N}_{(i-1) J^{-1}(y_{\ell-1})} \right), \qquad \tilde{x}_\ell = \sum_j f_j\, C\!\left( \delta_{u_j} \circledast \mathcal{N}_{J^{-1}(\tilde{y}_{\ell-1})} \right). \qquad (35)$$

The desired DE approximation in the form (22) is obtained (implicitly) by combining (33) and (35). Notice that (35) is linear in the repetition profile $(\lambda_2, \ldots, \lambda_d)$, and the optimization problem (25) can therefore be solved by linear programming.
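Because the constraint is linear in $(\lambda_2, \ldots, \lambda_d)$, condition (24) evaluated on a grid of points becomes a set of linear inequalities, and (25) is a standard LP. The sketch below only assembles the LP data in the usual "minimize $c \cdot \lambda$ subject to $A_{ub}\lambda \le b_{ub}$" convention (the function name and the tunnel margin `delta` are ours); any LP solver can consume it:

```python
def lp_data(h, degrees, xs, delta=1e-4):
    """Assemble the LP for (25): maximize sum_i lambda_i / i subject to
    lambda_i >= 0, sum_i lambda_i = 1, and, for each grid point x,
    sum_i lambda_i * h(i, x) >= x + delta (open decoding tunnel).

    h(i, x) stands for the degree-i term of the approximated DE map."""
    c = [-1.0 / i for i in degrees]                   # maximize -> minimize negative
    A_ub = [[-h(i, x) for i in degrees] for x in xs]  # -sum_i h*lam <= -(x+delta)
    b_ub = [-(x + delta) for x in xs]
    A_eq, b_eq = [[1.0] * len(degrees)], [1.0]
    return c, A_ub, b_ub, A_eq, b_eq
```

The nonnegativity constraints $\lambda_i \ge 0$ are the default variable bounds in most LP solvers.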
`
Example 1. Discrete-output channels: In general, when the channel output is discrete, the approximation (34) holds exactly. For example, for the BSC with transition probability $p$ we have

$$F = (1 - p)\, \delta_{\log \frac{1-p}{p}} + p\, \delta_{-\log \frac{1-p}{p}}. \qquad (36)$$

Example 2: The BIAWGNC defined by $y = x + \nu$, where $\nu \sim N(0, \sigma^2)$, is a channel such that $F = \mathcal{N}_{2/\sigma^2}$. In this case, since convolving symmetric Gaussian distributions yields a symmetric Gaussian distribution whose mean is the sum of the means, the discretization approximation (34) is not necessary and we have

$$F \circledast \psi(y_{\ell-1})^{\circledast (i-1)} = \mathcal{N}_{2/\sigma^2 + (i-1) J^{-1}(y_{\ell-1})}. \qquad (37)$$

By applying the operator $C$ and using (31), we obtain the DE approximation for the BIAWGNC as

$$x_\ell = \sum_{i=2}^{d} \lambda_i\, J\!\left( \frac{2}{\sigma^2} + (i - 1) J^{-1}(y_{\ell-1}) \right), \qquad \tilde{x}_\ell = J\!\left( \frac{2}{\sigma^2} + J^{-1}(\tilde{y}_{\ell-1}) \right). \qquad (38)$$
`
C. Method 2

The second approximation of the DE considered in this work assumes that the distributions of messages at any iteration consist of two mass points, one at zero and the other at $+\infty$. For such distributions, we introduce the shorthand notation

$$B_t \triangleq t\, \Delta_\infty + (1 - t)\, \Delta_0, \qquad t \in [0,1].$$

We let the mapping $\phi$ be equal to the capacity functional $C$ defined in (28) and the mapping $\psi$ be

$$\psi(t) = B_t. \qquad (39)$$

With these mappings, (20) and (21) can be put in the form

$$y_\ell = x_\ell^{\,a-1}\, \tilde{x}_\ell^{\,2}, \qquad \tilde{y}_\ell = x_\ell^{\,a}\, \tilde{x}_\ell \qquad (40)$$

where we used the fact that, as can easily be seen from the definitions of $\gamma$ and $\Gamma$ in (46)–(48),

$$\Gamma^{-1}\!\left( \Gamma(B_s) \circledast \Gamma(B_t) \right) = B_{st}.$$

Notice that, while in Method 1 we assumed $Q_\ell$ and $\widetilde{Q}_\ell$ to be symmetric Gaussian (see (32)), here (40) holds exactly.

As a consequence of these mappings, the communication channel of the parity bits, with distribution $F$, is replaced by a BEC with erasure probability $1 - C(F)$. Furthermore, for any symmetric $F$ and $t \in [0,1]$ we have

$$C(F \circledast B_t) = 1 - \left(1 - C(F)\right)(1 - t).$$

From this result, it is immediate to obtain the approximated DE recursion as

$$x_\ell = 1 - \left(1 - C(F)\right) \sum_{i=2}^{d} \lambda_i \left(1 - y_{\ell-1}\right)^{i-1}, \qquad \tilde{x}_\ell = 1 - \left(1 - C(F)\right)\left(1 - \tilde{y}_{\ell-1}\right). \qquad (41)$$

Notice that (41), together with (40), is the standard (exact) DE for the IRA ensemble over a BEC (see [19]) with the same capacity as the actual binary-input symmetric-output channel, given by $C(F)$. We point out here that this method, consisting of replacing the actual channel with a BEC of equal capacity and optimizing the code ensemble for the BEC, was proposed in [24] for the optimization of LDPC ensembles. Interestingly, this method follows as a special case of our general approach for DE approximation, for a particular choice of the mappings $\phi$ and $\psi$.

In this case, the fixed-point equation corresponding to (23) is obtained in closed form: eliminating $\tilde{x}$ via $\tilde{x}(x) = \frac{C(F)}{1 - (1 - C(F))\, x^{a}}$ gives

$$x = 1 - \left(1 - C(F)\right) \sum_{i=2}^{d} \lambda_i \left( 1 - x^{a-1}\, \tilde{x}(x)^2 \right)^{i-1} \qquad (42)$$

for all $x \in [0,1)$ (for details, see [19]).
`
`
`
`1718
`
`IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 50, NO. 8, AUGUST 2004
`
Fig. 5. Turbo-like IRA decoder.

D. Methods 3 and 4

Methods 1 and 2 yield (almost) closed-form DE approximations at the price of some approximations of the message distributions and, above all, of the checknode output distributions $Q_\ell$ and $\widetilde{Q}_\ell$.

In much of the current literature on random-like code ensemble optimization, the EXIT function of a decoding block is obtained by Monte Carlo simulation, by generating i.i.d. input messages, estimating the distribution of the output messages, and computing a one-dimensional quantity [12]–[18]. Following this approach, we shall consider the IRA decoder with turbo-like scheduling (see Fig. 5) and obtain the EXIT functions of the inner and outer decoders.

The inner (accumulator) and outer (repetition) decoders are characterized by an EXIT function as defined in Section III-A, for some guess of the (symmetric) input message distribution $F_{\rm in}$. In general, the EXIT function of the decoders can be obtained as follows.

1) Let the channel observation messages be i.i.d., $\sim F$.
2) Assume the decoder input messages are i.i.d., $\sim F_{\rm in}$ with $C(F_{\rm in}) = I_A$.
3) Obtain, either in closed form or by Monte Carlo simulation, the corresponding marginal distribution $F_{\rm out}$ of the decoder output messages.
4) Let $(I_A, I_E)$, with $I_E = C(F_{\rm out})$, be a point on the EXIT function curve.

Our Methods 3 and 4 consist of applying the above approach under the assumptions $F_{\rm in} = \mathcal{N}_{J^{-1}(I_A)}$ and $F_{\rm in} = B_{I_A}$ (the two-mass-point distribution of Method 2), respectively.

Let the resulting EXIT functions of the inner and outer decoders be denoted by $I_E = \chi_{\rm in}(I_A)$ and $I_E = \chi_{\rm out}(I_A)$, respectively, and let $z_\ell$ denote the mutual information between the messages at the output of the outer decoder (repetition code) and the corresponding symbols (information bitnodes). The resulting approximated DE is given by

$$z_\ell = \chi_{\rm out}\!\left( \chi_{\rm in}(z_{\ell-1}) \right). \qquad (43)$$

The corresponding fixed-point equation is given by $z = \chi_{\rm out}(\chi_{\rm in}(z))$, and the condition for the uniqueness of the fixed point at $z = 1$, corresponding to (24), is $\chi_{\rm out}(\chi_{\rm in}(z)) > z$ for all $z \in [0,1)$. The resulting IRA optimization methods are obtained by using this condition in (25).

While for the inner decoder (accumulator) we are forced to resort to Monte Carlo simulation, it is interesting to notice that, due to the simplicity of the repetition code, for both Methods 3 and 4 the EXIT function of the outer decoder $\chi_{\rm out}$ can be obtained in closed form. For Method 3, by discretizing the channel observation distribution as in (34), we have3

$$\chi_{\rm out}(I_A) = \sum_j f_j \sum_{i=2}^{d} \lambda_i\, C\!\left( \delta_{u_j} \circledast \mathcal{N}_{(i-1) J^{-1}(I_A)} \right). \qquad (44)$$

For Method 4 we have

$$\chi_{\rm out}(I_A) = 1 - \left(1 - C(F)\right) \sum_{i=2}^{d} \lambda_i \left(1 - I_A\right)^{i-1}. \qquad (45)$$
`
`IV. PROPERTIES OF THE APPROXIMATED DE
`
`In this section, we show some properties of the approximated
`DE derived in Section III.
`
`A. Stability Condition
Consider the DE approximation of Method 1. As indicated in Section III-B, $(1,1)$ is a fixed point of the system (33)–(35). We have the following result.

Theorem 2: The fixed point at $(1,1)$ of the system (33)–(35) is stable if and only if the fixed point $(\Delta_\infty, \Delta_\infty)$ of the exact DE (10)–(13) is stable.

Proof: See Appendix III.
`
For other DE approximations, stability does not generally imply stability of the corresponding exact DE. Consider the DE approximation of Method 2: $(1,1)$ is a fixed point of the system (41). We have the following result.

Proposition 3: The local stability condition of the approximated DE with Method 2 is less stringent than that of the exact DE.

Proof: See Appendix IV.
`
`If an approximated DE has a less stringent stability condition,
`then the exact stability condition must be added to the ensemble
`optimization as an explicit additional constraint. It should be
noticed that the DE approximations used in [24], [14], [19] require the additional stability constraint. For example, some of the codes presented in [19] for the BIAWGNC are not stable. Therefore, the BER for an arbitrarily large number of iterations does not vanish.
`
`B. Fixed-Points, Coding Rate, and Channel Capacity
`An interesting property of optimization Methods 2 and 4 is
`that