
Efficient Encoding of Low-Density Parity-Check Codes∗

Tom Richardson† and Rüdiger Urbanke‡

March 6, 2001

Abstract

Low-density parity-check codes can be considered serious competitors to turbo codes in terms of performance and complexity, and they are based on a similar philosophy: constrained random code ensembles and iterative decoding algorithms. In this paper we consider the encoding problem for low-density parity-check codes. More generally, we consider the encoding problem for codes specified by sparse parity-check matrices. We show how to exploit the sparseness of the parity-check matrix to obtain efficient encoders. For the (3, 6)-regular low-density parity-check code, for example, the complexity of encoding is essentially quadratic in the block length. However, we show that the associated coefficient can be made quite small, so that encoding codes even of length n ≈ 100,000 is still quite practical. More importantly, we will show that "optimized" codes actually admit linear time encoding.

Keywords: turbo codes, parity-check, decoding, encoding, sparse matrices, random graphs, binary erasure channel.

1 Introduction

Low-density parity-check (LDPC) codes were originally invented and investigated by Gallager [1]. The crucial innovation was Gallager's introduction of iterative decoding algorithms (or message-passing decoders), which he showed to be capable of achieving a significant fraction of channel capacity at low complexity. Except for the papers by Zyablov and Pinsker [2], Margulis [3] and Tanner [4], the field then lay dormant for the next 30 years. Interest in LDPC codes was rekindled in the wake of the discovery of turbo codes, and LDPC codes were independently rediscovered by both MacKay and Neal [5] and Wiberg [6].¹ The past few years have brought many new developments in this area. First, in several papers Luby, Mitzenmacher, Shokrollahi, Spielman and Stemann introduced new tools for the investigation of message-passing decoders for the binary erasure channel (BEC) and the binary symmetric channel (BSC) (under hard-decision message-passing decoding) [9, 10], and they extended Gallager's definition of LDPC codes to include irregular codes; see also [5]. The same authors also exhibited sequences of codes which, asymptotically in the block length, provably achieve capacity on a BEC. It was then shown in [11] that similar analytic tools can be used to study the asymptotic behavior of a very broad class of message-passing algorithms for a wide class of channels, and it was demonstrated in [12] that LDPC codes can come extremely close to capacity on many channels.

∗This work was performed while both authors were at Bell Labs.
†Flarion Technologies, Bedminster, NJ 07921, USA, email: richardson@flarion.com
‡EPFL, LTHC-DSC, CH-1015 Lausanne, email: Rudiger.Urbanke@epfl.ch
¹Similar concepts have also appeared in the physics literature [7, 8].

In many ways LDPC codes can be considered serious competitors to turbo codes. In particular, LDPC codes exhibit an asymptotically better performance than turbo codes and they admit a wide range of trade-offs between performance and decoding complexity. One major criticism concerning LDPC codes has been their apparent high encoding complexity. Whereas turbo codes can be encoded in linear time, a straightforward encoder implementation for an LDPC code has complexity quadratic in the block length. Several authors have addressed this issue:

1. It was suggested in [13] and [9] to use cascaded rather than bipartite graphs. By choosing the number of stages and the relative size of each stage carefully, one can construct codes which are encodable and decodable in linear time. One drawback of this approach lies in the fact that each stage (which acts like a subcode) has a length which is in general considerably smaller than the length of the overall code. This results, in general, in a performance loss compared to a standard LDPC code with the same overall length.

2. In [14] it was suggested to force the parity-check matrix to have (almost) lower triangular form, i.e., the ensemble of codes is restricted not only by the degree constraints but also by the constraint that the parity-check matrix have lower triangular shape. This restriction guarantees a linear time encoding complexity but, in general, it also results in some loss of performance.

It is the aim of this paper to show that, even without cascade constructions or restrictions on the shape of the parity-check matrix, the encoding complexity is quite manageable in most cases and provably linear in many cases. More precisely, for a (3, 6)-regular code of length n the encoding complexity seems indeed to be of order n², but the actual number of operations required is no more than 0.0172n² + O(n), and, because of the extremely small constant factor, even large block lengths admit practically feasible encoders. We will also show that "optimized" irregular codes have a linear encoding complexity and that the required amount of preprocessing is of order at most n^{3/2}.

The proof of these facts is achieved in several stages. We first show in Section 2 that the encoding complexity is upper bounded by n + g², where g, the gap, measures in some way, to be made precise shortly, the "distance" of the given parity-check matrix to a lower triangular matrix. In Section 3 we then discuss several greedy algorithms to triangulate matrices and we show that for these algorithms, when applied to elements of a given ensemble, the gap concentrates around its expected value with high probability. As mentioned above, for the (3, 6)-regular code the best greedy algorithm which we discuss results in an expected gap of 0.017n. Finally, in Section 4 we prove that for all known "optimized" codes the expected gap is actually of order less than √n, resulting in the promised linear encoding complexity. In practice the gap is usually a small constant. The √n bound can be improved, but doing so would require a significantly more complex presentation.

We finish this section with a brief review of some basic notation and properties concerning LDPC codes. For a more thorough discussion we refer the reader to [1, 11, 12].

Low-density parity-check codes are linear codes. Hence, they can be expressed as the null space of a parity-check matrix H, i.e., x is a codeword if and only if

    Hxᵀ = 0ᵀ.

The modifier "low-density" applies to H: the matrix H should be sparse. For example, if H has dimension n/2 × n, where n is even, then we might require H to have 3 ones per column and 6 ones per row. Conditioned on these constraints, we choose H at random as discussed in more detail below. We refer to the associated code as a (3, 6)-regular LDPC code. The sparseness of H enables efficient (sub-optimal) decoding, while the randomness ensures (in the probabilistic sense) a good code [1].

Example 1. [Parity-Check Matrix of (3, 6)-Regular Code of Length 12] The following matrix H will serve as an example:

        ⎛ 1 1 1 0 0 1 1 0 0 0 1 0 ⎞
        ⎜ 1 1 1 1 1 0 0 0 0 0 0 1 ⎟
    H = ⎜ 0 0 0 0 0 1 1 1 0 1 1 1 ⎟    (1)
        ⎜ 1 0 0 1 0 0 0 1 1 1 0 1 ⎟
        ⎜ 0 1 0 1 1 0 1 1 1 0 0 0 ⎟
        ⎝ 0 0 1 0 1 1 0 0 1 1 1 0 ⎠
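To make the example concrete, here is a minimal Python/NumPy sketch (illustrative only; the helper name is ours) that transcribes H from (1), checks (3, 6)-regularity, and tests membership in the null space over GF(2):

    import numpy as np

    # The parity-check matrix H of Example 1, transcribed from (1).
    H = np.array([
        [1,1,1,0,0,1,1,0,0,0,1,0],
        [1,1,1,1,1,0,0,0,0,0,0,1],
        [0,0,0,0,0,1,1,1,0,1,1,1],
        [1,0,0,1,0,0,0,1,1,1,0,1],
        [0,1,0,1,1,0,1,1,1,0,0,0],
        [0,0,1,0,1,1,0,0,1,1,1,0]], dtype=np.uint8)

    # (3, 6)-regular: every column has 3 ones and every row has 6 ones.
    assert (H.sum(axis=0) == 3).all() and (H.sum(axis=1) == 6).all()

    def is_codeword(H, x):
        """x is a codeword iff H xᵀ = 0ᵀ over GF(2)."""
        return not ((H @ x) % 2).any()

    # The all-zero word belongs to every linear code.
    assert is_codeword(H, np.zeros(12, dtype=np.uint8))
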

In the theory of LDPC codes it is customary and useful not to focus on particular codes but to consider ensembles of codes. These ensembles are usually defined in terms of ensembles of bipartite graphs [13, 15]. E.g., the bipartite graph which represents the code defined in Example 1 is shown in Fig. 1. The left set of nodes represents the variables whereas the right set of nodes represents the constraints. An ensemble of bipartite graphs is defined in terms of a pair of degree distributions. A degree distribution γ(x) = Σ_i γ_i x^{i−1} is simply a polynomial with non-negative real coefficients satisfying γ(1) = 1. Typically, γ_i denotes the fraction of edges in a graph which are incident to a node (variable or constraint node as the case may be) of degree i. In the sequel, we will use the shorthand ∫γ to denote

    ∫₀¹ γ(x) dx = Σ_{i≥1} γ_i/i.

This quantity gives the inverse of the average node degree. Associated to a degree distribution pair (λ, ρ) is the rate r(λ, ρ), defined as

    r(λ, ρ) := 1 − ∫ρ/∫λ.    (2)

E.g., for the degree distribution pair (x², x⁵), which corresponds to the regular (3, 6) Gallager code, the rate is 1/2.

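In code, the quantities ∫γ and r(λ, ρ) of (2) can be computed exactly with rational arithmetic; a minimal sketch (helper names ours):

    from fractions import Fraction

    def integrate(gamma):
        """∫₀¹ γ(x) dx = Σ_i γ_i / i for γ(x) = Σ_i γ_i x^(i−1);
        `gamma` maps a degree i to its edge fraction γ_i."""
        return sum(Fraction(g) / i for i, g in gamma.items())

    def rate(lam, rho):
        """r(λ, ρ) := 1 − ∫ρ / ∫λ, equation (2)."""
        return 1 - integrate(rho) / integrate(lam)

    # (λ, ρ) = (x², x⁵), i.e. λ_3 = ρ_6 = 1: the (3, 6)-regular ensemble.
    assert rate({3: 1}, {6: 1}) == Fraction(1, 2)
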
Given a pair (λ, ρ) of degree distributions and a natural number n, we define an ensemble of bipartite graphs Cn(λ, ρ) in the following way. All graphs in the ensemble Cn(λ, ρ) will have left nodes which are associated to λ and right nodes which are associated to ρ. More precisely, assume that λ(x) := Σ_{i≥1} λ_i x^{i−1} and ρ(x) := Σ_{i≥1} ρ_i x^{i−1}. We can convert these degree distributions into node perspective by defining λ̃_i := λ_i/(i∫λ) and ρ̃_i := ρ_i/(i∫ρ). Each graph in Cn(λ, ρ) has nλ̃_i left nodes of degree i and (1 − r(λ, ρ))nρ̃_i right nodes of degree i. The order of these nodes is arbitrary but fixed. Here, to simplify notation, we assume that (λ, ρ) and n are chosen in such a way that all these quantities are integers. A node of degree i has i sockets from which the i edges emanate, and these sockets are ordered. So in total there are

    s := Σ_{i≥1} i·nλ̃_i = n/∫λ = (1 − r(λ, ρ))·n/∫ρ = Σ_{i≥1} i·(1 − r(λ, ρ))nρ̃_i

ordered sockets on the left as well as on the right. Let σ be a permutation on [s] := {1, · · · , s}. We can associate a graph to such a permutation by connecting the i-th socket on the left to the σ(i)-th socket on the right. Letting σ run over the set of permutations on [s] generates a set of graphs. Endowed with the uniform probability distribution this is the ensemble Cn(λ, ρ). Therefore, if in the future we choose a graph at random from the ensemble Cn(λ, ρ) then the underlying probability distribution is the uniform one.

It remains to associate a code to every element of Cn(λ, ρ). We will do so by associating a parity-check matrix to each graph. At first glance it seems natural to define the parity-check matrix associated to a given element in Cn(λ, ρ) as the {0, 1}-matrix which has a non-zero entry at row i and column j iff the i-th right node is connected to the j-th left node. Unfortunately, the possible presence of multiple edges between pairs of nodes requires a more careful definition. Since the encoding is done over the field GF(2), we define the parity-check matrix H as that {0, 1}-matrix which has a non-zero entry at row i and column j iff the i-th right node is connected to the j-th left node an odd number of times. As we will see, the encoding is accomplished in two steps: a preprocessing step, which is an offline calculation performed once only for the given code, and the actual encoding step, which is the only data-dependent part. For the preprocessing step it is more natural to work with matrices which contain the multiplicities of edges, and therefore we define the extended parity-check matrix Ĥ as that matrix which has an entry d at row i and column j iff the i-th right node is connected to the j-th left node by d edges. Clearly, H is equal to Ĥ modulo 2. In the sequel we will also refer to these two matrices as the adjacency matrix and the extended adjacency matrix of the bipartite graph. Since for every graph there is an associated code, we will use these two terms interchangeably; so we will, e.g., refer to codes as elements of Cn(λ, ρ).
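The socket construction can be sampled directly: the sketch below (a minimal illustration; names ours) pairs left and right sockets through a uniformly random permutation σ and then forms Ĥ and H = Ĥ mod 2:

    import random
    import numpy as np

    def sample_code(left_degrees, right_degrees):
        """Sample from the ensemble: connect the i-th left socket to the
        σ(i)-th right socket for a uniform random permutation σ on [s],
        then form the extended adjacency matrix Ĥ (edge multiplicities)
        and the parity-check matrix H = Ĥ mod 2."""
        left = [v for v, d in enumerate(left_degrees) for _ in range(d)]
        right = [c for c, d in enumerate(right_degrees) for _ in range(d)]
        assert len(left) == len(right)      # s sockets on each side
        random.shuffle(right)               # a uniform permutation σ
        H_ext = np.zeros((len(right_degrees), len(left_degrees)), dtype=int)
        for v, c in zip(left, right):
            H_ext[c, v] += 1
        return H_ext % 2, H_ext

    # (3, 6)-regular ensemble with n = 12: s = 36 sockets on each side.
    H, H_ext = sample_code([3] * 12, [6] * 6)
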
Figure 1: Graphical representation of a (3, 6)-regular LDPC code of length 12. The left nodes represent the variable nodes whereas the right nodes represent the check nodes.

Most often LDPC codes are used in conjunction with message-passing decoders. Recall that there is a received message associated to each variable node which is the result of passing the corresponding bit of the codeword through the given channel. The decoding algorithm proceeds in rounds. At each round a message is sent from each variable node to each neighboring check node, indicating some estimate of the associated bit's value. In turn, each check node collects its incoming messages and, based on this information, sends messages back to the incident variable nodes. Care must be taken to send out only extrinsic information, i.e., the outgoing message along a given edge must not depend on the incoming message along the same edge. As we will see, the preprocessing step for the encoding is closely related to the message-passing decoder for the BEC. We will therefore review this particular decoder in more detail.

Assume we are given a code in Cn(λ, ρ) and assume that we use this code to transmit over a BEC with an erasure probability of α. An asymptotic expected fraction α of the variable nodes will be erasures and the remaining fraction (1 − α) will be known. We first formulate the iterative decoder not as a message-passing decoder but in a language which is more suitable for our current purpose, see [9]:

Decoder for the Binary Erasure Channel:

0. [Initialization]

1. [Stop or Extend] If there is no known variable node and no check node of degree one then output the (partial) codeword and stop. Otherwise, all known variable nodes and all their adjacent edges are deleted.

2. [Declare Variables as Known] Any variable node which is connected to a degree one check node is declared to be known. Goto 1.

This decoder can equivalently be formulated as a message-passing decoder. Messages are from the set {0, 1}, with a 0 indicating that the corresponding bit has not been determined yet (along the given edge). We will call a 0 message an erasure message. At a variable node the outgoing message along an edge e is the erasure message if the received message associated to this node is an erasure and if all incoming messages (excluding the incoming message along edge e) are erasure messages; otherwise the outgoing message is a 1. At a check node the outgoing message along an edge e is the erasure message if at least one of the incoming messages (excluding the incoming message along edge e) is the erasure message, and a 1 otherwise. If we declare that an originally erased variable node becomes known as soon as it has at least one incoming message which is not an erasure, then one can check that at any time the set of known variable nodes is indeed identical under both descriptions.
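The graph-based formulation translates almost line by line into code. The following minimal sketch assumes a simple graph (no multiple edges) and additionally propagates bit values, so that a check of residual degree one determines its last unknown variable:

    def bec_decode(n_checks, edges, received):
        """Peeling decoder for the BEC as described above. `edges` is a
        list of (check, variable) pairs of a simple bipartite graph;
        `received[v]` is 0, 1, or None for an erasure. Returns the
        (partially) decoded word."""
        value = list(received)
        nbrs = [[] for _ in range(n_checks)]  # residual check neighborhoods
        parity = [0] * n_checks               # parity absorbed from known bits
        for c, v in edges:
            nbrs[c].append(v)
        known = [v for v, b in enumerate(value) if b is not None]
        while known:
            # Step 1: delete known variables and their adjacent edges.
            for c in range(n_checks):
                for v in [u for u in nbrs[c] if u in known]:
                    nbrs[c].remove(v)
                    parity[c] ^= value[v]
            # Step 2: a degree-one check declares its last variable known.
            known = []
            for c in range(n_checks):
                if len(nbrs[c]) == 1 and value[nbrs[c][0]] is None:
                    v = nbrs[c][0]
                    value[v] = parity[c]
                    known.append(v)
        return value
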

It was shown in [16] that (asymptotically in n) the expected fraction α_ℓ of erasure messages after the ℓ-th decoding round is given by

    α_ℓ = α·λ(1 − ρ(1 − α_{ℓ−1})),    (3)

where α_0 = α. Let α*(λ, ρ), called the threshold of the degree distribution pair, be defined as

    α*(λ, ρ) := sup{0 ≤ α ≤ 1 : α_ℓ(α) → 0 as ℓ → ∞, where α_ℓ(α) := α·λ(1 − ρ(1 − α_{ℓ−1})), α_0 = α}.    (4)

Note first that the function f(x, y) := y·λ(1 − ρ(1 − x)) is increasing in both its arguments for x, y ∈ [0, 1]. It follows by finite induction that if α_ℓ(α) → 0 as ℓ → ∞ then α_ℓ(α′) → 0 for any α′ ≤ α. If we choose α < α*(λ, ρ), then the expected fraction of erasure messages converges to zero. Consequently, the decoder will be successful with high probability in this case. If, on the other hand, we choose α > α*(λ, ρ), then with high probability the decoding process will not succeed. We will see shortly that, correctly interpreted, this decoding procedure constitutes the basis for all preprocessing algorithms that we consider in this paper.
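Recursion (3) and definition (4) are easy to explore numerically. The sketch below estimates α* by bisection; since the plain iteration converges slowly near the threshold, the estimate is accurate only to a few decimals:

    def evolve(alpha, lam, rho, iters=2000):
        """Iterate (3): α_ℓ = α·λ(1 − ρ(1 − α_{ℓ−1})), starting from α_0 = α.
        `lam`/`rho` map a degree i to the coefficient of x^(i−1)."""
        ev = lambda p, x: sum(c * x ** (i - 1) for i, c in p.items())
        a = alpha
        for _ in range(iters):
            a = alpha * ev(lam, 1 - ev(rho, 1 - a))
        return a

    def threshold(lam, rho, tol=1e-7):
        """Bisection estimate of α*(λ, ρ) from (4)."""
        lo, hi = 0.0, 1.0
        while hi - lo > tol:
            mid = (lo + hi) / 2
            if evolve(mid, lam, rho) < 1e-6:
                lo = mid    # erasure fraction dies out: below threshold
            else:
                hi = mid
        return (lo + hi) / 2

    print(threshold({3: 1}, {6: 1}))  # ≈ 0.429 for (λ, ρ) = (x², x⁵); cf. Example 2
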
Example 2. [(3, 6)-regular code] Let (λ(x), ρ(x)) = (x², x⁵). Then r(x², x⁵) = 1/2. The exact threshold α*(x², x⁵) was determined in [17] and can be expressed as follows: let σ be the unique root in (0, 1) of the quartic

    9σ⁴ − σ³ − σ² − σ − 1 = 0

(an explicit expression for this root in radicals is given in [17]). Then

    α*(x², x⁵) = (1 − σ)/(1 − σ⁵)² ≈ 0.42944.

2 Efficient Encoders Based on Approximate Lower Triangulations

In this section we shall develop an algorithm for constructing efficient encoders for LDPC codes. The efficiency of the encoder arises from the sparseness of the parity-check matrix H, and the algorithm can be applied to any (sparse) H. Although our example is binary, the algorithm applies generally to matrices H whose entries belong to a field F. We assume throughout that the rows of H are linearly independent. If the rows are linearly dependent, then the algorithm which constructs the encoder will detect the dependency and either one can choose a different matrix H or one can eliminate the redundant rows from H in the encoding process.

Assume we are given an m × n parity-check matrix H over F. By definition the associated code consists of the set of n-tuples x over F such that

    Hxᵀ = 0ᵀ.

Probably the most straightforward way of constructing an encoder for such a code is the following. By means of Gaussian elimination bring H into an equivalent lower triangular form as shown in Fig. 2.

Figure 2: An equivalent parity-check matrix in lower triangular form (the last m columns form a lower triangular block with ones on the diagonal and zeros above it).

Split the vector x into a systematic part s, s ∈ F^{n−m}, and a parity part p, p ∈ F^m, such that x = (s, p). Construct a systematic encoder as follows: (i) Fill s with the (n − m) desired information symbols. (ii) Determine the m parity-check symbols using back-substitution. More precisely, for l ∈ [m] calculate

    p_l = Σ_{j=1}^{n−m} H_{l,j} s_j + Σ_{j=1}^{l−1} H_{l,(n−m)+j} p_j.

What is the complexity of such an encoding scheme? Bringing the matrix H into the desired form requires O(n³) operations of preprocessing. The actual encoding then requires O(n²) operations since, in general, after the preprocessing the matrix will no longer be sparse. More precisely, we expect that we need about n²·r(1−r)/2 XOR operations to accomplish this encoding, where r is the rate of the code.
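One way to make both the O(n³) preprocessing and the ≈ n²r(1−r)/2 encoding cost concrete is to reduce H all the way to the form [P | I], a special case of the lower triangular form of Fig. 2 in which back-substitution degenerates to a dense matrix–vector product. A sketch over GF(2) (this is the straightforward baseline, not the scheme proposed below):

    import numpy as np

    def systematize(H):
        """Gauss–Jordan elimination over GF(2), with column swaps: returns
        (P, perm) such that [P | I] is an equivalent parity-check matrix on
        the permuted columns. O(n³) preprocessing; raises if rows are
        linearly dependent. H is assumed to be an integer 0/1 array."""
        H = H.copy() % 2
        m, n = H.shape
        perm = np.arange(n)
        for l in range(m):
            d = n - m + l              # target column for the pivot of row l
            nz = np.flatnonzero(H[l])  # earlier pivot columns are 0 in row l
            if nz.size == 0:
                raise ValueError("rows of H are linearly dependent")
            j = nz[0]
            H[:, [j, d]] = H[:, [d, j]]
            perm[[j, d]] = perm[[d, j]]
            for r in range(m):         # clear the rest of column d
                if r != l and H[r, d]:
                    H[r] ^= H[l]
        return H[:, :n - m], perm

    def encode(P, s):
        """p = P sᵀ over GF(2): about one XOR per one in P, i.e. roughly
        n²·r(1−r)/2 operations for a random matrix of rate r."""
        return (P @ s) % 2

A codeword in the original column order is then recovered via x[perm] = np.concatenate([s, encode(P, s)]); the number of ones in P, about m(n − m)/2 for a random matrix, is the promised n²r(1 − r)/2.
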
Given that the original parity-check matrix H is sparse, one might wonder if encoding can be accomplished in O(n). As we will show, for codes which allow transmission at rates close to capacity, linear time encoding is indeed possible. And for those codes for which our encoding scheme still leads to quadratic encoding complexity, the constant factor in front of the n² term is typically very small, so that the encoding complexity stays manageable up to very large block lengths.

Our proposed encoder is motivated by the above example. Assume that by performing row and column permutations only we can bring the parity-check matrix into the form indicated in Fig. 3. We say that H is in approximate lower triangular form. Note that since this transformation was accomplished solely by permutations, the matrix is still sparse. More precisely, assume that we bring the matrix in the form

    H = ⎛ A  B  T ⎞ ,    (5)
        ⎝ C  D  E ⎠

where A is (m − g) × (n − m), B is (m − g) × g, T is (m − g) × (m − g), C is g × (n − m), D is g × g and finally E is g × (m − g).

Figure 3: The parity-check matrix in approximate lower triangular form.

Further, all these matrices are sparse² and T is lower triangular with ones along the diagonal. Multiplying this matrix from the left by

    ⎛ I      0 ⎞ ,    (6)
    ⎝ −ET⁻¹  I ⎠

we get

    ⎛ A            B            T ⎞ .    (7)
    ⎝ −ET⁻¹A + C   −ET⁻¹B + D   0 ⎠

Let x = (s, p1, p2), where s denotes the systematic part, p1 and p2 combined denote the parity part, p1 has length g and p2 has length (m − g). The defining equation Hxᵀ = 0ᵀ splits naturally into two equations, namely

    Asᵀ + Bp1ᵀ + Tp2ᵀ = 0ᵀ,    (8)

and

    (−ET⁻¹A + C)sᵀ + (−ET⁻¹B + D)p1ᵀ = 0ᵀ.    (9)

²More precisely, each matrix contains at most O(n) elements.

    Operation                 Comment                                   Complexity
    Asᵀ                       Multiplication by sparse matrix           O(n)
    T⁻¹[Asᵀ]                  T⁻¹[Asᵀ] = yᵀ ⇔ [Asᵀ] = Tyᵀ              O(n)
    −E[T⁻¹Asᵀ]                Multiplication by sparse matrix           O(n)
    Csᵀ                       Multiplication by sparse matrix           O(n)
    [−ET⁻¹Asᵀ] + [Csᵀ]        Addition                                  O(n)
    −φ⁻¹[−ET⁻¹Asᵀ + Csᵀ]      Multiplication by dense g × g matrix      O(g²)

Table 1: Efficient computation of p1ᵀ = −φ⁻¹(−ET⁻¹A + C)sᵀ.

    Operation             Comment                                        Complexity
    Asᵀ                   Multiplication by sparse matrix                O(n)
    Bp1ᵀ                  Multiplication by sparse matrix                O(n)
    [Asᵀ] + [Bp1ᵀ]        Addition                                       O(n)
    −T⁻¹[Asᵀ + Bp1ᵀ]      −T⁻¹[Asᵀ + Bp1ᵀ] = yᵀ ⇔ [Asᵀ + Bp1ᵀ] = −Tyᵀ   O(n)

Table 2: Efficient computation of p2ᵀ = −T⁻¹(Asᵀ + Bp1ᵀ).

Define φ := −ET⁻¹B + D and assume for the moment that φ is non-singular. We will discuss the general case shortly. Then from equation (9) we conclude that

    p1ᵀ = −φ⁻¹(−ET⁻¹A + C)sᵀ.

Hence, once the g × (n − m) matrix −φ⁻¹(−ET⁻¹A + C) has been precomputed, the determination of p1 can be accomplished in complexity O(g × (n − m)) simply by performing a multiplication with this (generically dense) matrix. This complexity can be further reduced as shown in Table 1. Rather than precomputing −φ⁻¹(−ET⁻¹A + C) and then multiplying with sᵀ, we can determine p1 by breaking the computation into several smaller steps, each of which is efficiently computable. To this end, we first determine Asᵀ, which has complexity O(n) since A is sparse. Next, we multiply the result by T⁻¹. Since T⁻¹[Asᵀ] = yᵀ is equivalent to the system [Asᵀ] = Tyᵀ, this can also be accomplished in O(n) by back-substitution, since T is lower triangular and also sparse. The remaining steps are fairly straightforward. It follows that the overall complexity of determining p1 is O(n + g²). In a similar manner, noting from equation (8) that p2ᵀ = −T⁻¹(Asᵀ + Bp1ᵀ), we can accomplish the determination of p2 in complexity O(n), as shown step by step in Table 2.
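Tables 1 and 2 translate directly into code. The sketch below uses dense NumPy arrays for readability, so the stated O(n) costs are those a sparse representation would achieve; φ⁻¹ is assumed to have been precomputed in the preprocessing step (see the sketch after Table 3):

    import numpy as np

    def solve_lower_unit(T, b):
        """yᵀ = T⁻¹bᵀ over GF(2) by back-substitution; T is lower
        triangular with unit diagonal, so with sparse storage this is O(n)."""
        y = np.zeros_like(b)
        for l in range(len(b)):
            y[l] = (b[l] + T[l, :l] @ y[:l]) % 2
        return y

    def encode_alt(A, B, T, C, D, E, phi_inv, s):
        """The encoding steps of Tables 1 and 2, over GF(2) (where the
        minus signs are immaterial). Total cost O(n + g²)."""
        As = (A @ s) % 2                              # sparse multiply, O(n)
        y = solve_lower_unit(T, As)                   # T⁻¹[Asᵀ], O(n)
        p1 = (phi_inv @ ((E @ y + C @ s) % 2)) % 2    # dense g × g, O(g²)
        p2 = solve_lower_unit(T, (As + B @ p1) % 2)   # Table 2, O(n)
        return p1, p2
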
A summary of the proposed encoding procedure is given in Table 3. It entails two steps: a preprocessing step and the actual encoding step. In the preprocessing step we first perform row and column permutations to bring the parity-check matrix into approximate lower triangular form with as small a gap g as possible. We will see in subsequent sections how this can be accomplished efficiently. We also need to check whether φ := −ET⁻¹B + D is non-singular. Rather than pre-multiplying by the matrix

    ⎛ I      0 ⎞ ,
    ⎝ −ET⁻¹  I ⎠

this task can be accomplished efficiently by Gaussian elimination. If, after clearing the matrix E, the resulting matrix φ is seen to be singular, we can simply perform further column permutations to remove this singularity. This is always possible when H is not rank deficient, as assumed. The actual encoding then entails the steps listed in Tables 1 and 2.

Preprocessing. Input: non-singular parity-check matrix H. Output: an equivalent parity-check matrix of the form (A B T; C D E) such that −ET⁻¹B + D is non-singular.

1. [Triangulation] Perform row and column permutations to bring the parity-check matrix H into approximate lower triangular form

       H = ⎛ A  B  T ⎞
           ⎝ C  D  E ⎠

   with as small a gap g as possible. We will see in subsequent sections how this can be accomplished efficiently.

2. [Check Rank] Use Gaussian elimination to effectively perform the pre-multiplication

       ⎛ I      0 ⎞ ⎛ A  B  T ⎞   ⎛ A            B            T ⎞
       ⎝ −ET⁻¹  I ⎠ ⎝ C  D  E ⎠ = ⎝ −ET⁻¹A + C   −ET⁻¹B + D   0 ⎠ ,    (10)

   in order to check that −ET⁻¹B + D is non-singular, performing further column permutations if necessary to ensure this property. (Singularity of H can be detected at this point.)

Encoding. Input: parity-check matrix of the form (A B T; C D E) such that −ET⁻¹B + D is non-singular and a vector s ∈ F^{n−m}. Output: the vector x = (s, p1, p2), p1 ∈ F^g, p2 ∈ F^{m−g}, such that Hxᵀ = 0ᵀ.

1. Determine p1 as shown in Table 1.

2. Determine p2 as shown in Table 2.

Table 3: Summary of the proposed encoding procedure. It entails two steps: a preprocessing step and the actual encoding step.
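The only dense computation in the [Check Rank] step involves the g × g matrix φ, so a small GF(2) Gauss–Jordan inverter suffices; it can also supply the precomputed φ⁻¹ assumed in the encoding sketch above. (On singularity, the preprocessing would perform the column swap discussed in the text; that repair is only indicated in a comment here.)

    import numpy as np

    def gf2_inv(M):
        """Invert a square 0/1 matrix over GF(2) by Gauss–Jordan.
        Raises if M is singular, in which case the preprocessing would
        swap a column of [A; C] into the [B; D] block and try again."""
        g = M.shape[0]
        W = np.concatenate([M % 2, np.eye(g, dtype=int)], axis=1).astype(int)
        for c in range(g):
            piv = next((r for r in range(c, g) if W[r, c]), None)
            if piv is None:
                raise ValueError("phi is singular over GF(2)")
            W[[c, piv]] = W[[piv, c]]
            for r in range(g):
                if r != c and W[r, c]:
                    W[r] ^= W[c]
        return W[:, g:]

    phi = np.array([[1, 0], [1, 1]])
    assert (gf2_inv(phi) == phi).all()   # this φ is its own GF(2) inverse
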
We will now demonstrate this procedure by means of our running example.

Example 3. [Parity-Check Matrix of (3, 6)-Regular Code of Length 12] For this example, if we simply reorder the columns such that, according to the original order, we have the ordering 1, 2, 3, 4, 5, 6, 7, 10, 11, 12, 8, 9, then we put the parity-check matrix into an approximate lower triangular form with g = 2:

    ⎛ A  B  T ⎞   ⎛ 1 1 1 0 0 1  1 0  1 0 0 0 ⎞
    ⎝ C  D  E ⎠ = ⎜ 1 1 1 1 1 0  0 0  0 1 0 0 ⎟
                  ⎜ 0 0 0 0 0 1  1 1  1 1 1 0 ⎟    (11)
                  ⎜ 1 0 0 1 0 0  0 1  0 1 1 1 ⎟
                  ⎜ 0 1 0 1 1 0  1 0  0 0 1 1 ⎟
                  ⎝ 0 0 1 0 1 1  0 1  1 0 0 1 ⎠

We now use Gaussian elimination to clear E. This results in

    ⎛ 1 1 1 0 0 1  1 0  1 0 0 0 ⎞
    ⎜ 1 1 1 1 1 0  0 0  0 1 0 0 ⎟
    ⎜ 0 0 0 0 0 1  1 1  1 1 1 0 ⎟
    ⎜ 1 0 0 1 0 0  0 1  0 1 1 1 ⎟
    ⎜ 0 0 1 1 0 0  1 1  0 0 0 0 ⎟
    ⎝ 1 0 1 1 1 0  1 1  0 0 0 0 ⎠ .

We see that

    φ := −ET⁻¹B + D = ⎛ 1 1 ⎞
                      ⎝ 1 1 ⎠

is singular. This singularity can be removed if we exchange, e.g., column 5 with column 8, which gives

    φ = ⎛ 1 0 ⎞ .
        ⎝ 1 1 ⎠

In terms of the original order the final column order is then 1, 2, 3, 4, 10, 6, 7, 5, 11, 12, 8, 9 and the resulting equivalent parity-check matrix is

    ⎛ A  B  T ⎞   ⎛ 1 1 1 0 0 1  1 0  1 0 0 0 ⎞
    ⎝ C  D  E ⎠ = ⎜ 1 1 1 1 0 0  0 1  0 1 0 0 ⎟
                  ⎜ 0 0 0 0 1 1  1 0  1 1 1 0 ⎟    (12)
                  ⎜ 1 0 0 1 1 0  0 0  0 1 1 1 ⎟
                  ⎜ 0 1 0 1 0 0  1 1  0 0 1 1 ⎟
                  ⎝ 0 0 1 0 1 1  0 1  1 0 0 1 ⎠

Assume now we choose s = (1, 0, 0, 0, 0, 0). To determine p1 we follow the steps listed in Table 1. We get Asᵀ = (1, 1, 0, 1)ᵀ, T⁻¹[Asᵀ] = (1, 1, 0, 0)ᵀ, −E[T⁻¹Asᵀ] = (0, 1)ᵀ, Csᵀ = (0, 0)ᵀ, [−ET⁻¹Asᵀ] + [Csᵀ] = (0, 1)ᵀ, and φ⁻¹[−ET⁻¹Asᵀ + Csᵀ] = (0, 1)ᵀ = p1ᵀ. In a similar manner we execute the steps listed in Table 2 to determine p2. We get Bp1ᵀ = (0, 1, 0, 0)ᵀ, [Asᵀ] + [Bp1ᵀ] = (1, 0, 0, 1)ᵀ, and T⁻¹[Asᵀ + Bp1ᵀ] = (1, 0, 1, 0)ᵀ = p2ᵀ. Therefore the codeword is equal to x = (s, p1, p2) = (1, 0, 0, 0, 0, 0, 0, 1, 1, 0, 1, 0). A quick check verifies that Hxᵀ = 0ᵀ, as required.
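Replaying the running example through the encoding sketch given after Tables 1 and 2 (blocks transcribed from (12); recall that here φ⁻¹ = φ over GF(2)):

    import numpy as np
    # encode_alt as defined in the sketch following Tables 1 and 2.

    A = np.array([[1,1,1,0,0,1], [1,1,1,1,0,0],
                  [0,0,0,0,1,1], [1,0,0,1,1,0]], dtype=np.uint8)
    B = np.array([[1,0], [0,1], [1,0], [0,0]], dtype=np.uint8)
    T = np.array([[1,0,0,0], [0,1,0,0], [1,1,1,0], [0,1,1,1]], dtype=np.uint8)
    C = np.array([[0,1,0,1,0,0], [0,0,1,0,1,1]], dtype=np.uint8)
    D = np.array([[1,1], [0,1]], dtype=np.uint8)
    E = np.array([[0,0,1,1], [1,0,0,1]], dtype=np.uint8)
    phi_inv = np.array([[1,0], [1,1]], dtype=np.uint8)  # φ⁻¹ = φ

    s = np.array([1,0,0,0,0,0], dtype=np.uint8)
    p1, p2 = encode_alt(A, B, T, C, D, E, phi_inv, s)
    assert (p1 == [0, 1]).all() and (p2 == [1, 0, 1, 0]).all()
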
3 Approximate Lower-Triangulation Via Greedy Algorithms

We saw in the previous section that the encoding complexity is of order n + g², where g is the gap of the approximate triangulation. Hence, for a given parity-check matrix we are interested in finding an approximate lower triangulation with as small a gap as possible. Given that we are interested in large block lengths, there is little hope of finding the optimal row and column permutation which results in the minimum gap. So we will limit ourselves to greedy algorithms. As discussed in the previous section, the following greedy algorithms work on the extended adjacency matrices since these are, except for the ordering of the sockets, in one-to-one correspondence with the underlying graphs.

To describe the algorithms we first need to extend some of our previous definitions. Recall that for a given pair (μ, ν) of degree distributions we associate to it two important parameters. The first parameter r(μ, ν) is the rate of the degree distribution pair and is defined in (2). Note that

    1 − r(μ, ν) = 1/(1 − r(ν, μ)),    (13)

which follows since, by (2), 1 − r(μ, ν) = ∫ν/∫μ. The second parameter α*(μ, ν) is called the threshold of the degree distribution pair and is defined in (4). If r(μ, ν) ≥ 0, as we have tacitly assumed so far, then we can think of (μ, ν) as the degree distribution pair of an ensemble of LDPC codes of rate r(μ, ν). Further, as discussed in the introduction, in this case it was shown in [9] that α*(μ, ν) is the threshold of this ensemble when transmitting over the BEC assuming a belief propagation decoder. In general, r(μ, ν) may be negative and, hence, the degree distribution pair does not correspond to an ensemble of LDPC codes. Nevertheless, the definitions are still meaningful.

Example 4. Let (μ(x), ν(x)) = (x⁵, x²). In this case we have r(x⁵, x²) = −1 and, using the techniques described in [17], the threshold can be determined to be α*(x⁵, x²) = 3¹⁸/(2¹⁷5⁵) ≈ 0.945851.

In a similar way, the definition of the ensemble Cl(μ, ν) as well as the association of (extended) adjacency matrices to elements of Cl(μ, ν) carry over to the case r(μ, ν) ≤ 0. Assume now that, for a given ensemble Cl(μ, ν), we create a new ensemble by simply exchanging the roles of left and right nodes. This new ensemble is equivalent to the ensemble C_{(1−r(μ,ν))l}(ν, μ) = C_{l/(1−r(ν,μ))}(ν, μ), where we have used (13). For the associated (extended) adjacency matrices this simply amounts to transposition.

Assume we are given a matrix A of dimension (1 − r)l × l with elements in ℕ, where r is some real-valued parameter with r ≤ 1. We will say that a row and a column are connected if the corresponding entry in A is non-zero. Furthermore, we will say that a row (column) has degree i if its row (column) sum equals i. Assume now that we want to bring A into approximate lower triangular form. The class of greedy algorithms that we will consider is based on the following simple procedure. Given the (1 − r)l × l matrix A and a fixed integer k, k ≤ (1 − r)l, permute, if possible, the rows and columns in such a way that the first row has its last non-zero entry at position (l − k + 1). If this first step was successful then fix the first row and permute, if possible, the remaining rows and all columns in such a way that the second row has its last non-zero entry at position (l − k + 2). In general, assuming that the first i − 1 steps were successful, permute at the i-th step, if possible, the last (1 − r)l − (i − 1) rows and all columns in such a way that the i-th row has its last non-zero entry at position (l − k + i). If this procedure does not terminate before the k-th step then we have accomplished an approximate lower triangulation of the matrix A. We will say that A is in approximate lower triangular form with row gap (1 − r)l − k and column gap l − k, as shown in Fig. 4.

Figure 4: Approximate lower triangulation of the matrix A with row gap (1 − r)l − k and column gap l − k achieved by a greedy algorithm.
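One natural instantiation of this procedure, in the spirit of the erasure decoder reviewed in the introduction, repeatedly places a residual degree-one row on the diagonal and, when no such row exists, gives up a column. The sketch below is only meant to convey the mechanics; the greedy algorithms analyzed in this section differ precisely in how the sacrificed columns are selected:

    import random

    def greedy_triangulate(H):
        """Greedy approximate triangulation sketch. While some residual
        row has degree one, fix it and its remaining column as the next
        diagonal element; otherwise give up one residual column, as the
        erasure decoder would declare a variable known. Returns the
        diagonal (row, column) pairs and the sacrificed columns; the row
        gap is m − len(diag)."""
        m, n = len(H), len(H[0])
        free_rows, free_cols = set(range(m)), set(range(n))
        diag, sacrificed = [], []
        while free_rows and free_cols:
            r = next((r for r in free_rows
                      if sum(H[r][c] for c in free_cols) == 1), None)
            if r is None:
                c = random.choice(sorted(free_cols))
                free_cols.remove(c)
                sacrificed.append(c)
            else:
                c = next(c for c in free_cols if H[r][c])
                diag.append((r, c))
                free_rows.remove(r)
                free_cols.remove(c)
        return diag, sacrificed

Listing the selected rows in selection order, with their columns as the last len(diag) columns, yields the form of Fig. 4: each selected row is non-zero only in its own diagonal column, earlier diagonal columns, and sacrificed columns.
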
3.1 Greedy Algorithm A

We will now give a precise description of the greedy algorithm A. The core of the algorithm is the diagonal extension step.

Diagonal Extension Step. Assume we are given a matrix A and a subset of the columns which are classified as known. In all cases of interest to us either none of these known columns are connected to rows of degree one or all of them are. Assume the latter case. Let c1, . . . , ck denote the known columns and let r1, . . . , rk be degree one rows such that ci is connected to ri.³ Reorder, if necessary, the rows and columns of A such that r1, . . . , rk form the leading k rows of A and such that c1, . . . , ck form the leading k columns of A, as shown in Fig. 5, where Ã denotes the sub-matrix of A which results from deleting the rows and columns indexed by r1, . . . , rk and c1, . . . , ck. Note that after this reordering the top-left k × k sub-matrix of A has diagonal form and that the top k rows of A have only this one non-zero entry.

³ri may not be determined uniquely.

Figure 5: Given the matrix A, let c1, . . . , ck denote those columns which are connected to rows of degree one and let r1, . . . , rk be degree one rows such that ci is connected to ri. Reorder the rows and columns in such a way that r1, . . . , rk form the first k rows and such that c1, . . . , ck form the first k columns. Note that the top-left k × k sub-matrix has diagonal form and that the first k rows have only this one non-zero entry.

By a diagonal extension step we will mean the following: As input we are given the matrix A and a set of known columns. The algorithm performs some row and column permutations and specifies a residual matrix Ã. More precisely, to implement the algorithm one does the following. If none of the known columns are connected to rows of degree one then perform a column permutation so that all the known columns form the leading columns of the matrix A. Furthermore, delete these known columns from the original matrix and declare the resulting matrix to be Ã. If, on the other hand, all known columns are connected to rows of degree one then perform a row and column permutation to bring A into the form depicted in Fig. 5. Furthermore, delete the known columns c1, . . . , ck and the rows r1, . . . , rk from the original matrix and declare the resulting matrix to be Ã.
