Development of Rate-Compatible Structured LDPC CODEC
Algorithms and Hardware IP

Project Final Report

School of Electrical and Computer Engineering
Georgia Institute of Technology
December 2006

Jaehong Kim
Demijan Kline
Woonhaing Hur
Dr. Aditya Ramamoorthy
Dr. Sunghwan Kim
Dr. Steven W. McLaughlin
Table of Contents

List of Abbreviations

CHAPTER I: Introduction

CHAPTER II: Background Research
2.1 Low-Density Parity-Check Codes
2.2 Iterative Decoding Algorithm
2.3 Efficient Encoding Method
2.4 Irregular Repeat Accumulate Codes
2.5 Rate-Compatible Puncturing Algorithm

CHAPTER III: Efficiently-Encodable Rate-Compatible Codes
3.1 New Class of Irregular LDPC Codes
3.2 Low-Rate Code Design
3.3 Efficient Encoder Implementation
3.4 Simulation Results
3.5 Conclusions

CHAPTER IV: Rate-Compatible LDPC Codes for Incremental Redundancy Hybrid ARQ Systems
4.1 Incremental Redundancy Hybrid ARQ Systems
4.2 System Model
4.3 Simulation Results
4.4 Conclusions
CHAPTER V: Wire-tap Channel Application
5.1 Encoding Algorithm for the Wire-tap Channel

CHAPTER VI: Optimization of Degree Distributions

References

List of Abbreviations

ACK      Acknowledgement
APP      A Posteriori Probability
ARQ      Automatic Repeat reQuest
AWGN     Additive White Gaussian Noise
BEC      Binary Erasure Channel
BER      Bit Error Rate
BP       Belief Propagation
BPSK     Binary Phase Shift Keying
BSC      Binary Symmetric Channel
CRC      Cyclic Redundancy Check
CSI      Channel State Information
E2RC     Efficiently-Encodable Rate-Compatible
eIRA     extended IRA
FEC      Forward Error Correction
FER      Frame Error Rate
HARQ     Hybrid Automatic Repeat reQuest
IC       Integrated Circuit
IR       Incremental Redundancy
IRA      Irregular Repeat Accumulate
LDPC     Low-Density Parity-Check
LLR      Log Likelihood Ratio
MAP      Maximum A Posteriori
MMSE     Minimum Mean Square Error
NACK     Negative Acknowledgement
PEG      Progressive Edge Growth
QC       Quasi-Cyclic
QPSK     Quadrature Phase Shift Keying
RCPC     Rate-Compatible Punctured Codes
V-BLAST  Vertical Bell Labs Layered Space-Time
VLSI     Very Large Scale Integration
WER      Word Error Rate

CHAPTER I

INTRODUCTION

Low-density parity-check (LDPC) codes, introduced by Gallager [1], were forgotten for several decades in spite of their excellent properties, since the implementation of these codes seemed to be impossible at that time. These codes were rediscovered in the middle of the 1990s [2] and were shown to approach the Shannon limit to within 0.0045 dB [3]. LDPC codes are now considered good candidates for the next-generation forward error correction (FEC) technique in high-throughput wireless and recording applications. Their excellent performance and iterative decoder make them appropriate for technologies such as DVB-S2, IEEE 802.16e [4], and IEEE 802.11n [5], [6].

While semiconductor technology has progressed to an extent where the implementation of LDPC codes has become possible, many practical issues still remain. First and foremost, there is a need to reduce complexity without sacrificing performance. Second, for applications such as wireless LANs, the system throughput depends upon the channel conditions, and hence the code needs to have the ability to operate at different rates. Third, while the LDPC decoder can operate in linear time, it may be hard to perform low-complexity encoding of these codes. In particular, the class of irregular LDPC codes introduced by Richardson et al. [7] may have high memory and processing requirements, especially at short block lengths. While the encoding time can be reduced substantially at long block lengths using the techniques presented in [8], those techniques may be hard to apply at short block lengths. The other option is to resort to quasi-cyclic (QC) LDPC or algebraic constructions that can be encoded by shift registers [9].

Irregular repeat-accumulate (IRA) codes were introduced by Jin et al. [10]. These codes have a linear-time encoder, and their performance is almost as good as that of irregular LDPC codes. This class of codes was extended, as extended IRA (eIRA) codes, by Yang et al. [11], who demonstrated high-rate codes with very low error floors.

A popular technique for achieving rate adaptation in a system is rate-compatible puncturing. A rate-compatible punctured code (RCPC) is well suited to incremental redundancy (IR) hybrid automatic repeat request (HARQ) systems, since the parity bit set of a higher-rate code is a subset of the parity bit set of a lower-rate code [12]. The RCPC scheme has the further advantage that the same encoder and decoder are used while operating at different rates. The number of parity bits that the transmitter sends depends on the rate requirement. At the decoder end, parity bits that are not transmitted are treated as erasures. Thus, puncturing provides a low-complexity solution to the rate-adaptation problem.

Motivated by these observations, this report first proposes a puncturing algorithm for LDPC codes with short block lengths. Based on the puncturing algorithm, a new class of codes is proposed that can be efficiently encoded as well as punctured in a rate-compatible fashion. The proposed LDPC codes will be shown to have a linear-time encoder and good performance under puncturing for a wide range of rates. Finally, we verify that the proposed codes show good throughput performance when they are applied to IR-HARQ systems over time-varying channels.

CHAPTER II

BACKGROUND RESEARCH

Channel coding is an essential technique to cope with errors occurring in the channels of communication systems and storage systems. Channel coding has flourished in two branches. Channel errors can be corrected with forward error correction (FEC) codes. On the other hand, a receiver may request retransmission of the previous data if it fails to recover them, which is called automatic repeat request (ARQ). FEC codes can be classified into block codes, such as cyclic codes and LDPC codes, and tree codes, such as convolutional codes and Turbo codes. In this chapter, we briefly explain the block codes, to which LDPC codes belong.

Let us consider linear block codes over the binary field $F_2 = (\{0,1\}, +, \times)$. Let $F_2^N$ be the $N$-dimensional vector space over $F_2$. Then, an $(N, K)$ linear block code $C$ is defined as a $K$-dimensional subspace of $F_2^N$, where $K$ is the data word length and $N$ is the codeword length. Since $C$ is a subspace of dimension $K$, there are $K$ linearly independent vectors $g_0, g_1, \ldots, g_{K-1}$ which span $C$. Let $m = [m_0, m_1, \ldots, m_{K-1}]$ be the data word and $c = [c_0, c_1, \ldots, c_{N-1}]$ be the corresponding codeword in the code $C$. The mapping $m \rightarrow c$ is thus naturally written as $c = m_0 g_0 + m_1 g_1 + \cdots + m_{K-1} g_{K-1}$. This relationship can be represented in the matrix form $c = mG$, where $G$ is a $K \times N$ matrix

$$G = \begin{bmatrix} g_0 \\ g_1 \\ \vdots \\ g_{K-1} \end{bmatrix}.$$

We call the matrix $G$ the generator matrix for $C$. In fact, $C$ is the row space of $G$. The encoding process can be viewed as an injective mapping that maps vectors from the $K$-dimensional vector space into vectors from the $N$-dimensional vector space. The ratio

$$R = \frac{K}{N}$$

is called the code rate.

On the other hand, the null space $C^{\perp}$ of $C$ has dimension $N - K$ and is spanned by $N - K$ linearly independent vectors $h_0, h_1, \ldots, h_{N-K-1}$. Since each $h_i \in C^{\perp}$, we have for any $c \in C$ that

$$h_i \cdot c = 0, \quad \forall i.$$

This relationship can be represented in the matrix form as $H c^T = 0$, where the matrix $H$ is the so-called parity-check matrix defined as

$$H = \begin{bmatrix} h_0 \\ h_1 \\ \vdots \\ h_{N-K-1} \end{bmatrix}.$$

A low-density parity-check code is so called because the parity-check matrix $H$ has a low density of 1s. We address the details of LDPC codes in the following section.

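To make these definitions concrete, here is a minimal Python sketch, added to this text as an illustration only; it uses a small (7, 4) example code rather than any code designed later in this report. It encodes a data word with a generator matrix and verifies the parity-check condition over $F_2$.

```python
import numpy as np

# Small (7, 4) example code: G = [I | P] and H = [P^T | I], all arithmetic mod 2.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]], dtype=np.uint8)

m = np.array([1, 0, 1, 1], dtype=np.uint8)   # data word of length K = 4
c = (m @ G) % 2                              # codeword c = mG of length N = 7
syndrome = (H @ c) % 2                       # H c^T = 0 for every valid codeword

print("codeword :", c)
print("syndrome :", syndrome)                # all zeros
print("code rate:", G.shape[0] / G.shape[1]) # R = K / N
```
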
2.1 Low-Density Parity-Check Codes

Every LDPC code is uniquely specified by its parity-check matrix $H$ or, equivalently, by means of the Tanner graph [13], as illustrated in Figure 2.1. The Tanner graph consists of two types of nodes, variable nodes and check nodes, which are connected by edges. Since there can be no direct connection between any two nodes of the same type, the Tanner graph is said to be bipartite. Consider an LDPC code defined by its corresponding Tanner graph. Each variable node, depicted by a circle, represents one bit of a codeword, and every check node, depicted by a square, represents one parity-check equation.

$$H = \begin{bmatrix}
1 & 0 & 1 & 0 & 1 & 0 & 1 \\
1 & 1 & 0 & 0 & 0 & 1 & 0 \\
0 & 1 & 1 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 1 & 1 & 1
\end{bmatrix}$$

[Figure 2.1: A parity-check matrix and its Tanner graph; the thick lines in the graph indicate a cycle of length 4.]

Since we are considering a codeword length of $N$ and a data word length of $K$, the Tanner graph contains $N$ variable nodes and $M$ check nodes, where $M = N - K$. Let us denote the parity-check matrix by $H = (h_{ij})$. Then, the $i$-th check node is connected to the $j$-th variable node if and only if $h_{ij} = 1$. For example, the 1 in column f and row D of the parity-check matrix in Figure 2.1 corresponds to an edge connecting variable node f and check node D in the Tanner graph. If there are $d$ edges emanating from a node, variable or check, we say that the node has degree $d$. In Figure 2.1, variable node f has degree 2 and check node D has degree 4. Tanner graphs can also serve as a nice visualization tool for a variety of issues concerning LDPC codes.

Definition 2.1: A cycle of length $l$ in a Tanner graph is a path comprised of $l$ edges that begins and ends at the same node, whereby every edge is traversed only once. The length of a cycle is the number of edges in that path. Usually, LDPC codes contain many cycles of different lengths in their Tanner graph.

Definition 2.2: The girth of a Tanner graph is the minimum cycle length of the graph.

The girth has great importance for the code's performance. Since Tanner graphs are bipartite, the smallest possible girth is 4, as shown by the thick lines in Figure 2.1. However, it is desirable to avoid short cycles in designing LDPC codes, since such cycles can cause poor performance.
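The Tanner graph bookkeeping is easy to reproduce programmatically. The sketch below is an added illustration: it builds the adjacency lists for the parity-check matrix of Figure 2.1, prints the node degrees discussed above, and detects length-4 cycles by looking for two check nodes that share two variable nodes.

```python
import numpy as np
from itertools import combinations

# Parity-check matrix of Figure 2.1 (rows = check nodes A-D, columns = variable nodes a-g).
H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [1, 1, 0, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)

check_nbrs = {j: set(np.flatnonzero(H[j])) for j in range(H.shape[0])}
var_nbrs   = {i: set(np.flatnonzero(H[:, i])) for i in range(H.shape[1])}

print("check node degrees   :", [len(check_nbrs[j]) for j in sorted(check_nbrs)])
print("variable node degrees:", [len(var_nbrs[i]) for i in sorted(var_nbrs)])

# A length-4 cycle exists iff two check nodes share at least two variable nodes.
for j1, j2 in combinations(range(H.shape[0]), 2):
    shared = check_nbrs[j1] & check_nbrs[j2]
    if len(shared) >= 2:
        print(f"length-4 cycle through checks {j1},{j2} and variables {sorted(shared)}")
```
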
An ensemble of LDPC codes is defined by two generating polynomials of the degree distributions, called a degree distribution pair, for the variable and check nodes. That is,

$$\lambda(x) = \sum_{i=2}^{d_v} \lambda_i x^{i-1}, \qquad \rho(x) = \sum_{i=2}^{d_c} \rho_i x^{i-1},$$

where $\lambda_i$ is the fraction of edges emanating from variable nodes of degree $i$, $\rho_i$ is the fraction of edges emanating from check nodes of degree $i$, and $d_v$ and $d_c$ denote the maximum variable node and check node degrees, respectively.

As a special case, when each of $\lambda(x)$ and $\rho(x)$ is a monomial, an LDPC code is said to be regular. In fact, an LDPC code defined by a parity-check matrix that contains the same number of 1s in each column ($d_v$) and the same number of 1s in each row ($d_c$) is said to be a $(d_v, d_c)$ regular LDPC code. The total number of 1s in $H$ is equal to $M d_c$, and also to $N d_v$. Hence, the code rate $R$ of a regular LDPC code can be expressed as

$$R = \frac{N - M}{N} = 1 - \frac{M}{N} = 1 - \frac{d_v}{d_c}.$$

It is shown in [1] that the regular LDPC codes with the best performance have $d_v = 3$.

In the general case, where the number of 1s per column or row is not constant in the parity-check matrix $H$, an LDPC code is said to be irregular. Assume that there are $N$ variable nodes and $M$ check nodes. Then, the number of variable nodes of degree $i$ is

$$N\,\frac{\lambda_i / i}{\int_0^1 \lambda(x)\,dx},$$

and the total number of edges in the Tanner graph, counted from the variable node side, is

$$E = \frac{N}{\int_0^1 \lambda(x)\,dx}.$$

Likewise, the total number of edges counted from the check node side is

$$E = \frac{M}{\int_0^1 \rho(x)\,dx}.$$

Thus, the code rate of an irregular LDPC code can be obtained as

$$R = 1 - \frac{M}{N} = 1 - \frac{\int_0^1 \rho(x)\,dx}{\int_0^1 \lambda(x)\,dx}.$$

Sometimes it is convenient to have the variable and check node distributions from the node perspective. That is, the fractions of variable and check nodes of each degree are

$$\tilde{\lambda}_i = \frac{\lambda_i / i}{\int_0^1 \lambda(x)\,dx}, \qquad \tilde{\rho}_i = \frac{\rho_i / i}{\int_0^1 \rho(x)\,dx},$$

where $\tilde{\lambda}_i$ and $\tilde{\rho}_i$ are the fractions of variable and check nodes with degree $i$, respectively. Irregular LDPC codes are more flexible in their design because of the relaxed constraints, and they have proved to perform much better than regular LDPC codes [7].

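As a quick sanity check of the rate formula, the following added sketch evaluates the integrals for a hypothetical degree pair, $\lambda(x) = 0.4x + 0.6x^2$ and $\rho(x) = x^5$, chosen only for illustration (it is not a degree pair designed in this report).

```python
# Design rate of an irregular LDPC ensemble from its edge-perspective degree pair:
#   R = 1 - (integral of rho over [0,1]) / (integral of lambda over [0,1]),
# where the integral of sum_i a_i x^(i-1) over [0,1] is sum_i a_i / i.

def poly_integral(coeffs):
    """coeffs[i] = fraction of edges attached to degree-i nodes."""
    return sum(a / i for i, a in coeffs.items())

lam = {2: 0.4, 3: 0.6}   # lambda(x) = 0.4 x + 0.6 x^2
rho = {6: 1.0}           # rho(x) = x^5

R = 1.0 - poly_integral(rho) / poly_integral(lam)
print("design rate R =", R)   # 1 - (1/6) / (0.4/2 + 0.6/3) ~ 0.583
```
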
2.2 Iterative Decoding Algorithm

This section summarizes the iterative decoding of LDPC codes based on the belief propagation (BP) algorithm. The decoding problem consists of finding the most likely vector $c$ such that $H c^T = 0$, where $H$ is a parity-check matrix defining an LDPC code. We consider binary phase shift keying (BPSK) modulated input data over the additive white Gaussian noise (AWGN) channel. Let the $N$-tuple vector $x = [x_1, \ldots, x_N]$ be the BPSK modulated codeword at the transmitter, where $x_i \in \{-1, +1\}$. The codeword bits are modulated according to

$$x_i = (-1)^{c_i} = 1 - 2c_i,$$

where $c_i$ is the $i$-th bit of the codeword $c = [c_1, \ldots, c_N]$. Then, the received vector at the receiver can be expressed as

$$y = x + n,$$

where $y = [y_1, \ldots, y_N]$, $y_i \in \mathbb{R}$, and the noise vector $n = [n_1, \ldots, n_N]$ is composed of $N$ independent additive noise samples $n_i$ drawn from a zero-mean Gaussian distribution with standard deviation $\sigma$.

Much like the optimal maximum a posteriori (MAP) symbol-by-symbol decoding of trellis codes, we try to compute the a posteriori probability (APP) that $c_i$ equals 1, given the received sequence $y$ and the fact that $c$ must satisfy some constraints. Without loss of generality, we focus on decoding the $i$-th bit of the codeword. First, let us consider the following APP ratio:

$$\frac{\Pr[c_i = 0 \mid y, S_i]}{\Pr[c_i = 1 \mid y, S_i]},$$

where $S_i$ is the event that the bits in $c$ satisfy the $d_v$ parity-check constraints involving $c_i$. If $c_i$ is 0, the remaining $(d_c - 1)$ bits in a given parity-check equation involving $c_i$ must contain an even number of 1s for $S_i$ to occur. On the other hand, if $c_i$ is 1, each parity-check constraint involving $c_i$ must contain an odd number of 1s. The following lemma will be helpful for further analysis.

Lemma 2.1 [1]: Consider a sequence of $m$ independent bits $a = [a_1, \ldots, a_m]$ with $\Pr[a_k = 1] = P_k$. The probability that $a$ contains an even number of 1s is

$$\frac{1}{2} + \frac{1}{2}\prod_{k=1}^{m}\left(1 - 2P_k\right).$$

Proof: We prove this by induction. If a sequence of $m$ independent bits $a = [a_1, \ldots, a_m]$ contains an even number of 1s, the modulo-2 sum of all bits in $a$, designated as $A_m$, is 0. For $m = 2$, we have

$$\Pr[A_2 = 0] = \Pr[a_1 + a_2 = 0] = P_1 P_2 + (1 - P_1)(1 - P_2) = \frac{1}{2} + \frac{1}{2}(1 - 2P_1)(1 - 2P_2).$$

Assume that the equation holds for $m = L - 1$:

$$\Pr[A_{L-1} = 0] = \frac{1}{2} + \frac{1}{2}\prod_{k=1}^{L-1}\left(1 - 2P_k\right).$$

Then, for $m = L$, we get

$$\Pr[A_L = 0] = \Pr[A_{L-1} + a_L = 0] = \frac{1}{2} + \frac{1}{2}\left(1 - 2\Pr[A_{L-1} = 1]\right)\left(1 - 2P_L\right) = \frac{1}{2} + \frac{1}{2}\left(1 - 2\left(1 - \Pr[A_{L-1} = 0]\right)\right)\left(1 - 2P_L\right) = \frac{1}{2} + \frac{1}{2}\prod_{k=1}^{L}\left(1 - 2P_k\right).$$
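Lemma 2.1 is easy to confirm numerically. The added sketch below compares the closed form against a brute-force enumeration over all bit patterns.

```python
import itertools
import random

def even_parity_prob_closed_form(P):
    """Lemma 2.1: probability that independent bits with Pr[a_k = 1] = P[k]
    contain an even number of 1s."""
    prod = 1.0
    for p in P:
        prod *= (1.0 - 2.0 * p)
    return 0.5 + 0.5 * prod

def even_parity_prob_enumeration(P):
    """Sum the probabilities of all even-weight bit patterns directly."""
    total = 0.0
    for bits in itertools.product([0, 1], repeat=len(P)):
        if sum(bits) % 2 == 0:
            prob = 1.0
            for b, p in zip(bits, P):
                prob *= p if b else (1.0 - p)
            total += prob
    return total

random.seed(0)
P = [random.random() for _ in range(5)]
print(even_parity_prob_closed_form(P))
print(even_parity_prob_enumeration(P))   # the two values agree
```
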
From Lemma 2.1, we are ready to obtain the APP ratio for $c_i$.

Theorem 2.1 [1]: Assume that the received samples in the received vector $y$ are statistically independent. Let $S_i$ be the event that the bits in $c$ satisfy the $d_v$ parity-check constraints involving $c_i$. Then, the APP ratio for $c_i$ given $y$ and $S_i$ is

$$\frac{\Pr[c_i = 0 \mid y, S_i]}{\Pr[c_i = 1 \mid y, S_i]} = \frac{\Pr[c_i = 0 \mid y_i]}{\Pr[c_i = 1 \mid y_i]} \cdot \prod_{j}\frac{1 + \prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)}{1 - \prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)},$$

where $c_{k,j}$ and $y_{k,j}$ are the $k$-th bit (other than $c_i$) in the $j$-th parity-check equation involving $c_i$ and the received sample corresponding to $c_{k,j}$, respectively.

Proof: By applying Bayes' rule, we have

$$\frac{\Pr[c_i = 0 \mid y, S_i]}{\Pr[c_i = 1 \mid y, S_i]} = \frac{\Pr[c_i = 0 \mid y_i]}{\Pr[c_i = 1 \mid y_i]} \cdot \frac{\Pr[S_i \mid c_i = 0, y]/\Pr[S_i]}{\Pr[S_i \mid c_i = 1, y]/\Pr[S_i]} = \frac{\Pr[c_i = 0 \mid y_i]}{\Pr[c_i = 1 \mid y_i]} \cdot \frac{\Pr[S_i \mid c_i = 0, y]}{\Pr[S_i \mid c_i = 1, y]}.$$

From Lemma 2.1, the probability of an even number of 1s in the other $d_c - 1$ bits of the $j$-th parity-check equation is $\tfrac{1}{2} + \tfrac{1}{2}\prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)$, and the probability of an odd number of 1s is $\tfrac{1}{2} - \tfrac{1}{2}\prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)$. Since $y$ is assumed to be statistically independent, the probability that all $d_v$ parity-check constraints are satisfied is the product of all such probabilities:

$$\Pr[S_i \mid c_i = 0, y] = \prod_{j}\left[\frac{1}{2} + \frac{1}{2}\prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)\right],$$

$$\Pr[S_i \mid c_i = 1, y] = \prod_{j}\left[\frac{1}{2} - \frac{1}{2}\prod_{k}\left(1 - 2\Pr[c_{k,j} = 1 \mid y_{k,j}]\right)\right].$$

Taking the ratio of these two expressions gives the product term in the theorem.

The computation of the APP ratio as given by the formula in Theorem 2.1 is complex. Gallager instead provided an iterative algorithm, which is exactly the BP-based decoding approach used nowadays. Before we give the iterative decoding algorithm, we will need the following result.

Lemma 2.2: Suppose $y_i = x_i + n_i$, where $n_i \sim \mathcal{N}(0, \sigma^2)$ and $\Pr[x_i = +1] = \Pr[x_i = -1] = 1/2$. Then

$$\Pr[x_i = x \mid y_i] = \frac{1}{1 + e^{-2 y_i x/\sigma^2}}.$$

Proof:

$$\Pr[x_i = x \mid y_i] = \frac{p(y_i \mid x_i = x)\Pr[x_i = x]}{p(y_i)} = \frac{e^{-(y_i - x)^2/2\sigma^2}}{e^{-(y_i - 1)^2/2\sigma^2} + e^{-(y_i + 1)^2/2\sigma^2}} = \frac{1}{1 + e^{-2 y_i x/\sigma^2}}.$$

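Lemma 2.2 supplies the probabilities used to initialize the decoder. The short sketch below is an added illustration: it evaluates the closed form and, as a cross-check, the same posterior computed directly from the two Gaussian likelihoods, assuming BPSK over AWGN with equal priors as in the text.

```python
import math

def posterior_closed_form(y, sigma, x=+1):
    """Lemma 2.2: Pr[x_i = x | y_i] for BPSK (+/-1) over AWGN, equal priors."""
    return 1.0 / (1.0 + math.exp(-2.0 * y * x / sigma**2))

def posterior_from_likelihoods(y, sigma, x=+1):
    """Same posterior via Bayes' rule with the two Gaussian likelihoods."""
    def gauss(y, mean):
        return math.exp(-(y - mean) ** 2 / (2.0 * sigma**2))
    return gauss(y, x) / (gauss(y, +1) + gauss(y, -1))

y, sigma = 0.3, 0.8
print(posterior_closed_form(y, sigma))
print(posterior_from_likelihoods(y, sigma))   # the two values agree
```
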
With these results, we formulate an iterative decoding algorithm for LDPC codes, which is known as the message passing algorithm. The information is iteratively exchanged between the neighboring nodes in the Tanner graph by passing messages along the edges. Each message can be associated with the codeword bit corresponding to the variable node incident to the edge carrying the message. A message sent from either a check or a variable node along an adjacent edge should not depend on the message previously received along that edge.

A message from variable node $i$ to check node $j$ in the $l$-th iteration, carrying the probability that the value of the $i$-th bit is $k$, is denoted by $q_{ij}^{(l)}(k)$. On the other hand, a message from check node $j$ to variable node $i$ in the $l$-th iteration, carrying the probability that the value of the $i$-th bit is $k$, is denoted by $r_{ji}^{(l)}(k)$. In the following, $V_j$ denotes the set of variable nodes connected to check node $j$, and $C_i$ denotes the set of check nodes connected to variable node $i$.

Initially, the variable nodes only have information about the channel output values of their corresponding bits. Since no additional information from the neighboring check nodes is available, they send the following messages along the adjacent edges:

$$q_{ij}^{(0)}(0) = \Pr[c_i = 0 \mid y_i] = \frac{1}{1 + e^{-2y_i/\sigma^2}}, \qquad q_{ij}^{(0)}(1) = \Pr[c_i = 1 \mid y_i] = \frac{1}{1 + e^{2y_i/\sigma^2}}.$$

Subsequently, the messages are iteratively exchanged between check nodes and variable nodes. In Figure 2.2, we see a check node connected to $d_c$ variable nodes. In each iteration, the check node receives messages from its neighboring variable nodes, processes that information, and passes the updated messages back to the neighboring variable nodes:

$$r_{ji}^{(l)}(0) = \frac{1}{2} + \frac{1}{2}\prod_{i' \in V_j \setminus i}\left(1 - 2\,q_{i'j}^{(l-1)}(1)\right), \qquad r_{ji}^{(l)}(1) = 1 - r_{ji}^{(l)}(0).$$

[Figure 2.2: Check node message update.]

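The check node rule above is simply Lemma 2.1 applied to the incoming bit probabilities. A minimal sketch of that update is added here for illustration; the argument q_in holds the Pr[bit = 1] messages from the other variable nodes attached to the check.

```python
def check_node_update(q_in):
    """Probability-domain check node update.

    q_in: list of q_{i'j}(1) values from all neighboring variable nodes
          other than the destination variable node i.
    Returns (r_ji(0), r_ji(1)).
    """
    prod = 1.0
    for q1 in q_in:
        prod *= (1.0 - 2.0 * q1)
    r0 = 0.5 + 0.5 * prod      # probability of an even number of 1s (Lemma 2.1)
    return r0, 1.0 - r0

print(check_node_update([0.1, 0.2, 0.3]))
```
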
`

`F u r t h e r m o r e, t he m e s s a ge p a s s i ng
`
`f r om t he v a r i a b le n o de
`
`is as s h o wn
`
`in F i g u re 2 . 3.
`
`S i m i l a r l y,
`
`e a ch
`
`v a r i a b le n o de
`
`c o l l e c ts m e s s a g es
`
`f r om
`
`i ts n e i g h b o r i ng
`
`c h e ck
`
`n o d e s,
`
`A f t er r e c e i v i ng t he c h e ck n o de m e s s a g e s, a v a r i a b le n o de / c a l c u l a t es t he p r o b a b i l i t y,
`
`c a l c u l a t es t he p r o b a b i l i ty
`
`t h at i ts c o r r e s p o n d i ng b it is I a nd s e n ds it to t he n e i g h b o r i ng
`
`P r [ c, = 1| y, S:] by
`
`t a k i ng
`
`i n to
`
`c o n s i d e r a t i on a ll
`
`i n c o m i ng
`
`c h e ck n o de m e s s a g e s.
`
`If
`
`c h e ck n o d e s:
`
`H c = 0, w h e re
`
`1-^(0
`
`i-gHP * R( w />
`
`if P r [ c, =l|y,S,]>-i
`
`[ 0,
`
`o t h e r w i s e,
`
`w h e r e by t he m e s s a ge r e c e i v ed
`
`f r om t he c h e ck n o de
`
`j w as l e ft o u t, s i n ce t he u p d a t ed
`
`or
`
`if t he m a x i m um
`
`n u m b er of
`
`i t e r a t i o ns h as b e en
`
`r e a c h e d,
`
`t he a l g o r i t hm
`
`s t o p s;
`
`m e s s a ge h as to d e p e nd s o l e ly on e x t r i n s ic
`
`i n f o r m a t i o n. T h e n, we e a s i ly g et
`
`o t h e r w i s e, a n ew i t e r a t i on is s t a r t e d.
`
`^(o)^r(o)-n^(°).
`keC, j
`C(0=<C0)TK'(i)
`keC.'J
`
`1
`
`2
`
`•
`
`•
`
`•
`
`\dv
`
`q/>(k)
`
`F i g u re 2 .3
`
`V a r i a b le n o de m e s s a ge u p d a t e.
`
`15
`
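A matching sketch of the variable node side is added below for illustration. The extrinsic message to a given check node excludes that check's own message, the tentative decision uses all incoming messages, and K_ij is simply the normalization that makes the two outputs sum to one.

```python
def variable_node_update(P1, r_in, exclude):
    """Probability-domain variable node update.

    P1      : channel probability Pr[c_i = 1 | y_i]
    r_in    : dict {check node j: (r_ji(0), r_ji(1))} for all neighboring checks
    exclude : destination check node, left out for extrinsic information
    Returns (q_ij(0), q_ij(1)).
    """
    q0, q1 = 1.0 - P1, P1
    for j, (r0, r1) in r_in.items():
        if j == exclude:
            continue
        q0 *= r0
        q1 *= r1
    k = 1.0 / (q0 + q1)        # normalization constant K_ij
    return k * q0, k * q1

def hard_decision(P1, r_in):
    """Tentative bit decision using all incoming check messages."""
    q0, q1 = 1.0 - P1, P1
    for r0, r1 in r_in.values():
        q0 *= r0
        q1 *= r1
    return 1 if q1 > q0 else 0

r_in = {0: (0.6, 0.4), 1: (0.7, 0.3), 2: (0.45, 0.55)}
print(variable_node_update(0.2, r_in, exclude=1))
print(hard_decision(0.2, r_in))
```
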
This algorithm will converge to the true maximum APP with a growing number of iterations only if the messages are statistically independent, which is the case only if the graph corresponding to $H$ is cycle-free. However, the graphs of practical codes will never be completely cycle-free. Therefore, the algorithm will give us an approximate solution for the APP, which fortunately still yields a remarkable performance.

Up to this point, the decoder analysis has been treated in the probability domain. In the algorithm, we can notice a substantial number of multiplications, which tend to become numerically unstable and are harder to implement in hardware compared to additions. To simplify those equations, we introduce the following notation:

$$L(c_i) \triangleq \log\frac{\Pr[c_i = 0 \mid y_i]}{\Pr[c_i = 1 \mid y_i]},$$

called the log-likelihood ratio (LLR). The probability distribution of a binary random variable is uniquely specified by $L(c_i)$. Its sign indicates the most likely value for $c_i$, while its magnitude is a measure of certainty in that decision. Also, let us define $L(q_{ij}) \triangleq \log\dfrac{q_{ij}(0)}{q_{ij}(1)}$
and $L(r_{ji}) \triangleq \log\dfrac{r_{ji}(0)}{r_{ji}(1)}$. Then, the initialization step becomes

$$L\left(q_{ij}^{(0)}\right) = \log\frac{\Pr[x_i = +1 \mid y_i]}{\Pr[x_i = -1 \mid y_i]} = \frac{2y_i}{\sigma^2},$$

where $\sigma$ denotes the standard deviation of the zero-mean white Gaussian noise. The constant of proportionality $2/\sigma^2$ is called the channel reliability.

Let us consider the following relationship:

$$\tanh\left(\frac{1}{2}\log\frac{p_0}{p_1}\right) = p_0 - p_1 = 1 - 2p_1.$$

Using this equation, we have

$$\tanh\left(\frac{1}{2}L(r_{ji})\right) = 1 - 2\,r_{ji}(1) = \prod_{i' \in V_j \setminus i}\left(1 - 2\,q_{i'j}(1)\right) = \prod_{i' \in V_j \setminus i}\tanh\left(\frac{1}{2}L(q_{i'j})\right).$$

Thus, the check node update equation can be written as

$$L(r_{ji}) = 2\tanh^{-1}\left(\prod_{i' \in V_j \setminus i}\tanh\left(\frac{1}{2}L(q_{i'j})\right)\right).$$

The problem with this expression is that we are still left with a product. We can remedy this by considering $L(q_{ij})$ as sign and magnitude separately. Let us rewrite $L(q_{ij})$ as

$$L(q_{ij}) = \alpha_{ij}\,\beta_{ij},$$

where $\alpha_{ij} \triangleq \mathrm{sign}\left(L(q_{ij})\right)$ and $\beta_{ij} \triangleq \left|L(q_{ij})\right|$. Then, the previous check node update can be rewritten as

$$L(r_{ji}) = \prod_{i' \in V_j \setminus i}\alpha_{i'j} \cdot \phi\left(\sum_{i' \in V_j \setminus i}\phi\left(\beta_{i'j}\right)\right),$$

where we have defined

$$\phi(x) \triangleq -\log\tanh\left(\frac{x}{2}\right) = \log\frac{e^x + 1}{e^x - 1}.$$

We have shown how the updated check node message can be calculated in the log domain.
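A minimal sketch of this sign-magnitude form of the check node update is added here for illustration; it exploits the fact that phi is its own inverse.

```python
import math

def phi(x):
    """phi(x) = -log(tanh(x/2)) = log((e^x + 1) / (e^x - 1)); phi(phi(x)) = x."""
    x = max(x, 1e-12)                      # guard against log(0) at x = 0
    return -math.log(math.tanh(x / 2.0))

def check_node_update_log(L_q_in):
    """Log-domain check node update from the extrinsic LLRs L(q_{i'j})."""
    sign = 1.0
    mag_sum = 0.0
    for L in L_q_in:
        sign *= 1.0 if L >= 0 else -1.0    # alpha_{i'j}
        mag_sum += phi(abs(L))             # phi(beta_{i'j})
    return sign * phi(mag_sum)             # L(r_ji)

print(check_node_update_log([1.2, -0.4, 2.5]))   # ~ -0.18, same as the tanh form
```
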
On the other hand, the formula for a variable node message update in the log domain can be easily derived as

$$L(q_{ij}) = L(c_i) + \sum_{j' \in C_i \setminus j} L(r_{j'i}).$$

The first term on the right-hand side represents the contribution from the $i$-th channel output, while the second term represents the messages received from the neighboring check nodes. After each iteration, the decoder also evaluates the total LLR value of each variable node,

$$L(Q_i) = L(c_i) + \sum_{j \in C_i} L(r_{ji}),$$

where all incoming check node messages are taken into account, and checks if all the parity-check constraints are fulfilled by verifying whether $H\hat{c}^T = 0$, where

$$\hat{c}_i = \begin{cases} 1, & \text{if } L(Q_i) < 0, \\ 0, & \text{otherwise.} \end{cases}$$

Again, if all parity-check constraints are fulfilled or if the maximum number of iterations has been reached, the decoder stops; otherwise, a new iteration is started.
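Putting the pieces together, the following compact decoder sketch is added for illustration only and is not the report's implementation. It uses the tanh form of the check node update, a dense NumPy parity-check matrix, and the 4 x 7 matrix of Figure 2.1 with toy noise values as input.

```python
import numpy as np

def decode_bp(H, y, sigma, max_iter=50):
    """Log-domain sum-product decoding of an LDPC code defined by H (BPSK over AWGN)."""
    M, N = H.shape
    Lc = 2.0 * y / sigma**2                      # channel LLRs (channel reliability 2/sigma^2)
    Lq = np.tile(Lc, (M, 1)) * H                 # variable-to-check messages L(q_ij)
    c_hat = (Lc < 0).astype(np.uint8)
    for _ in range(max_iter):
        # Check node update: L(r_ji) = 2 atanh( prod_{i' != i} tanh(L(q_i'j)/2) ).
        T = np.tanh(np.clip(Lq, -30, 30) / 2.0)
        T[H == 0] = 1.0
        T[np.abs(T) < 1e-12] = 1e-12             # avoid division by zero below
        row_prod = np.prod(T, axis=1, keepdims=True)
        ext = np.clip(row_prod / T, -0.999999, 0.999999)
        Lr = 2.0 * np.arctanh(ext) * H
        # Variable node update, total LLRs, and tentative hard decision.
        LQ = Lc + Lr.sum(axis=0)
        Lq = (LQ - Lr) * H                       # exclude the message from check j itself
        c_hat = (LQ < 0).astype(np.uint8)        # c_hat_i = 1 if L(Q_i) < 0
        if not np.any((H @ c_hat) % 2):          # stop when all checks are satisfied
            break
    return c_hat

H = np.array([[1, 0, 1, 0, 1, 0, 1],
              [1, 1, 0, 0, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1],
              [0, 0, 0, 1, 1, 1, 1]], dtype=np.uint8)
sigma = 0.8
rng = np.random.default_rng(1)
y = 1.0 + sigma * rng.standard_normal(7)         # all-zero codeword sent as x = +1
print(decode_bp(H, y, sigma))
```
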
2.3 Efficient Encoding Method

In general, the encoder for LDPC codes can be difficult to implement efficiently. Implementing an LDPC encoder in the conventional way using a generator matrix $G$ has a complexity quadratic in the block length. If the parity-check matrix $H$ is sparse, $G$ is usually dense, meaning that it contains a significant number of 1s and therefore requires many XOR operations to implement. To attack this problem, Richardson et al. propose an approach in [8] where they show how LDPC codes can be encoded with linear complexity if $H$ is brought to an approximate lower triangular form. Alternatively, encoding can be simplified via algebraic and combinatorial code construction methods. Such "structured" codes can be realized with simple encoders based on shift-register circuits. This section briefly introduces the efficient encoding method in [8].

Suppose a given parity-check matrix $H$ is $M \times N$, and the associated codeword $c$ is such that $H c^T = 0$. Let us denote the number of message symbols by $K = N - M$. The straightforward way of constructing an encoder for such a code is to change $H$ into an equivalent lower triangular form by Gaussian elimination, as shown in Figure 2.4. Let us denote the systematic part of the codeword $c$ as $m$ and the non-systematic part of the codeword as $p$, such that $c = (m \mid p)$. For the $K$ desired message symbols, the $M$ parity
symbols can be determined using backward substitution. That is,

$$p_l = \sum_{i=0}^{K-1} H_{l,i}\, m_i + \sum_{i=0}^{l-1} H_{l,K+i}\, p_i, \qquad 0 \le l < M.$$

However, the complexity of such an encoding scheme is huge. That is, converting the matrix $H$ into the desired form requires $O(N^3)$ operations, and the actual encoding requires $O(N^2)$ operations. Richardson and Urbanke proposed a low-complexity encoding method which requires $O(N)$ operations [8]. We can convert the given parity-check matrix to the form shown in Figure 2.5 by performing row and column permutations. Suppose we bring the matrix into the form

$$H = \begin{bmatrix} A & B & T \\ C & D & E \end{bmatrix},$$

where $A$ is $(M-g) \times K$, $B$ is $(M-g) \times g$, $T$ is $(M-g) \times (M-g)$, $C$ is $g \times K$, $D$ is $g \times g$, and $E$ is $g \times (M-g)$. As in Figure 2.5, $T$ is lower triangular with 1s along the diagonal, and all of these matrices are sparse. Let us consider the following matrix

$$\begin{bmatrix} I & 0 \\ -ET^{-1} & I \end{bmatrix}$$

and multiply this matrix to the left of $H$. Then, we have

$$\begin{bmatrix} A & B & T \\ -ET^{-1}A + C & -ET^{-1}B + D & 0 \end{bmatrix}.$$

[Figure 2.5: Approximate lower triangular form of the parity-check matrix.]

Let $c = (m, p_1, p_2)$, where $m$ denotes the systematic part of the codeword $c$, and the parity part splits into two parts, namely $p_1$ of length $g$ and $p_2$ of length $(M-g)$. From the equation $H c^T = 0$, we have the following two equations:

$$A m^T + B p_1^T + T p_2^T = 0,$$

$$\left(-ET^{-1}A + C\right) m^T + \left(-ET^{-1}B + D\right) p_1^T = 0.$$

Then, we can obtain $p_1$ and $p_2$ as follows:

$$p_1^T = -\phi^{-1}\left(-ET^{-1}A + C\right) m^T,$$

$$p_2^T = -T^{-1}\left(A m^T + B p_1^T\right),$$

where we define $\phi = -ET^{-1}B + D$ and assume for the moment that $\phi$ is nonsingular. Rather than precomputing $-\phi^{-1}\left(-ET^{-1}A + C\right)$ and then multiplying with $m^T$, we can reduce the complexity further by breaking the computation into several smaller steps. By doing so, we can accomplish the encoding step with complexity $O(N)$.

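A dense-matrix sketch of this two-stage computation is added below for illustration. It uses explicit GF(2) inverses for clarity, whereas a practical encoder would exploit the sparsity of the blocks and solve with T by back-substitution, as the text notes; over F_2 the minus signs in the formulas disappear.

```python
import numpy as np

def gf2_inv(A):
    """Inverse of a square binary matrix over F_2 via Gauss-Jordan elimination.
    Assumes the matrix is invertible (raises StopIteration otherwise)."""
    A = A.copy() % 2
    n = A.shape[0]
    M = np.concatenate([A, np.eye(n, dtype=np.uint8)], axis=1)
    for col in range(n):
        pivot = next(r for r in range(col, n) if M[r, col])
        M[[col, pivot]] = M[[pivot, col]]
        for r in range(n):
            if r != col and M[r, col]:
                M[r] ^= M[col]
    return M[:, n:]

def encode_rc(A, B, T, C, D, E, m):
    """Encode m given H in approximate lower triangular form [[A B T], [C D E]]."""
    Tinv = gf2_inv(T)
    phi = (E @ Tinv @ B + D) % 2                       # phi = -E T^{-1} B + D over F_2
    s = (E @ Tinv @ ((A @ m) % 2) + C @ m) % 2         # (-E T^{-1} A + C) m^T
    p1 = (gf2_inv(phi) @ s) % 2                        # p1^T = phi^{-1} (...) m^T
    p2 = (Tinv @ ((A @ m + B @ p1) % 2)) % 2           # p2^T = T^{-1} (A m^T + B p1^T)
    return np.concatenate([m, p1, p2])                 # codeword c = (m, p1, p2)
```
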
2.4 Irregular Repeat Accumulate Codes

Jin et al. introduced a promising class of codes, called Irregular Repeat Accumulate (IRA) codes, which has several strong points [10]. First, IRA codes can be encoded in linear time like Turbo codes. Second, their performance is superior to that of turbo codes of comparable complexity and as good as that of the best-known irregular LDPC codes. In addition, they have a simple structure; that is, the parity part of IRA codes has a bi-diagonal structure, illustrated in Figure 2.6, and their message part adopts an arbitrary permutation to keep the check node degrees concentrated.

The eIRA codes by Yang et al. extended the IRA codes [11]. The eIRA codes achieve good performance by assigning degree-2 nodes to nonsystematic bits and ensuring that the degree-2 nodes do not form a cycle amongst themselves. Furthermore, they avoid cycles of length 4 and make the systematic bits correspond to variable nodes of degree higher than two. They ensure efficient encoding by forming the parity in the bi-diagonal structure like IRA codes, as shown in Figure 2.6.

[Figure 2.6: Bi-diagonal structure in IRA (or eIRA) codes.]

Let $H_1$ and $H_2$ be the systematic part and the nonsystematic part of the parity-check matrix of eIRA codes,

$$H = \left[\, H_1 \mid H_2 \,\right].$$

The systematic generator matrix $G$ is given by

$$G = \left[\, I_k \mid P \,\right],$$

where $I_k$ denotes a $k \times k$ identity matrix and $P$ denotes the parity part. Since

$$G H^T = \left[\, I_k \mid P \,\right]\left[\, H_1 \mid H_2 \,\right]^T = H_1^T + P H_2^T = 0,$$

we can get $P = H_1^T H_2^{-T}$. Then, the systematic codeword is represented by

$$c = m G = m\left[\, I_k \mid P \,\right] = \left[\, m \mid m H_1^T H_2^{-T} \,\right].$$

We can implement a simple encoder as shown in Figure 2.7. The encoding complexity can be made low if the multiplication with $H_2^{-T}$ can be implemented efficiently. For eIRA codes, the multiplication with $H_2^{-T}$ can be implemented with a differential encoder whose transfer function is $\dfrac{1}{1 \oplus D}$.

[Figure 2.7: Encoder example of eIRA codes.]

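The bi-diagonal parity part is what makes the encoder of Figure 2.7 so simple: for H_2 with 1s on the main diagonal and the first subdiagonal, solving H_1 m^T + H_2 p^T = 0 reduces to an accumulator, i.e., the differential encoder 1/(1 xor D). The sketch below is an added illustration, with a randomly generated sparse H_1 standing in for a designed systematic part.

```python
import numpy as np

rng = np.random.default_rng(7)

K, M = 8, 6                                            # toy message and parity lengths
H1 = (rng.random((M, K)) < 0.3).astype(np.uint8)       # random sparse systematic part (stand-in)

# Bi-diagonal nonsystematic part H2: 1s on the diagonal and the first subdiagonal.
H2 = np.eye(M, dtype=np.uint8)
H2[np.arange(1, M), np.arange(M - 1)] = 1

def ira_encode(m):
    """Encode m with H = [H1 | H2]: solve H1 m^T + H2 p^T = 0 with an accumulator."""
    s = (H1 @ m) % 2                                   # per-check partial sums s = H1 m^T
    p = np.zeros(M, dtype=np.uint8)
    acc = 0
    for j in range(M):                                 # differential encoder 1/(1 xor D)
        acc ^= int(s[j])
        p[j] = acc
    return np.concatenate([m, p])

m = rng.integers(0, 2, size=K, dtype=np.uint8)
c = ira_encode(m)
H = np.concatenate([H1, H2], axis=1)
print("codeword:", c)
print("syndrome:", (H @ c) % 2)                        # all zeros confirms H c^T = 0
```
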
2.5 Rate-Compatible Puncturing Algorithm