`
`US007421032B2
`
(12) United States Patent
     Jin et al.

(10) Patent No.:      US 7,421,032 B2
(45) Date of Patent:  Sep. 2, 2008
`
`(54) SERIAL CONCATENATION OF
`INTERLEAVED CONVOLUTIONAL CODES
`FORMING TURBO-LIKE CODES
`
(75) Inventors: Hui Jin, Glen Gardner, NJ (US);
     Aamod Khandekar, Pasadena, CA (US);
     Robert J. McEliece, Pasadena, CA (US)
(73) Assignee: California Institute of Technology,
     Pasadena, CA (US)

( * ) Notice: Subject to any disclaimer, the term of this
     patent is extended or adjusted under 35
     U.S.C. 154(b) by 0 days.
`
(21) Appl. No.: 11/542,950

(22) Filed: Oct. 3, 2006

(65) Prior Publication Data

     US 2007/0025450 A1     Feb. 1, 2007
`
Related U.S. Application Data

(63) Continuation of application No. 09/861,102, filed on
     May 18, 2001, now Pat. No. 7,116,710, and a
     continuation-in-part of application No. 09/922,852,
     filed on Aug. 18, 2000, now Pat. No. 7,089,477.

(60) Provisional application No. 60/205,095, filed on
     May 18, 2000.
`
(51) Int. Cl.
     H04L 5/12            (2006.01)
(52) U.S. Cl. ................. 375/262; 375/265; 375/348;
          714/755; 714/786; 714/792; 341/52; 341/102
(58) Field of Classification Search ............. 375/259,
          375/262, 265, 285, 296, 341, 346, 348; 714/746,
          714/752, 755, 756, 786, 792, 794-796; 341/51,
          341/52, 56, 102, 103
     See application file for complete search history.
`
(56)                 References Cited

             U.S. PATENT DOCUMENTS

5,392,299 A      2/1995  Rhines et al.
5,530,707 A *    6/1996  Lin .......................... 714/792
5,751,739 A      5/1998  Seshadri et al.
5,802,115 A *    9/1998  Meyer ........................ 375/341
5,881,093 A      3/1999  Wang et al.
6,014,411 A      1/2000  Wang
6,023,783 A      2/2000  Divsalar et al.
6,031,874 A      2/2000  Chennakeshu et al.
6,032,284 A      2/2000  Bliss
6,044,116 A      3/2000  Wang
6,094,739 A *    7/2000  Miller et al. ................ 714/792
`
`(Continued)
`
`OTHER PUBLICATIONS
`
Appendix A.1, "Structure of Parity Check Matrices of Standardized
LDPC Codes," Digital Video Broadcasting (DVB) User guidelines for
the second generation system for Broadcasting, Interactive Services,
News Gathering and other broadband satellite applications (DVB-S2),
ETSI TR 102 376 V1.1.1 (Feb. 2005) Technical Report, p. 64.
`
`(Continued)
`
`Primary Examiner-Dac V. Ha
`(74) Attorney, Agent, or Firm-Fish & Richardson P.C.
`
`(57)
`
`ABSTRACT
`
`A serial concatenated coder includes an outer coder and an
`inner coder. The outer coder irregularly repeats bits in a data
`block according to a degree profile and scrambles the
`repeated bits. The scrambled and repeated bits are input to an
`inner coder, which has a rate substantially close to one.
`
`23 Claims, 5 Drawing Sheets
`
[Front-page figure: Tanner graph of an IRA code, showing variable
(information) nodes 302 with a fraction f_i of nodes of degree i,
check nodes 304 of degree a, and parity nodes 306.]
`
`
`
`
`U.S. PATENT DOCUMENTS
`
6,396,423 B1        5/2002  Laumen et al.
6,437,714 B1        8/2002  Kim et al.
6,859,906 B2 *      2/2005  Hammons et al. ........... 714/786
2001/0025358 A1     9/2001  Eidson et al.
`
`OTHER PUBLICATIONS
`
Benedetto et al., "A Soft-Input Soft-Output Maximum A Posteriori
(MAP) Module to Decode Parallel and Serial Concatenated Codes,"
The Telecommunications and Data Acquisition (TDA) Progress
Report 42-127 for NASA and California Institute of Technology Jet
Propulsion Laboratory, Joseph H. Yuen, Ed., pp. 1-20 (Nov. 15, 1996).
Benedetto et al., "Bandwidth efficient parallel concatenated coding
schemes," Electronics Letters 31(24): 2067-2069 (Nov. 23, 1995).
Benedetto et al., "Design of Serially Concatenated Interleaved
Codes," ICC 97, Montreal, Canada, pp. 710-714 (Jun. 1997).
Benedetto et al., "Parallel Concatenated Trellis Coded Modulation,"
ICC '96, IEEE, pp. 974-978 (Jun. 1996).
Benedetto et al., "Serial Concatenated Trellis Coded Modulation
with Iterative Decoding," Proceedings from the IEEE 1997 International
Symposium on Information Theory (ISIT), Ulm, Germany, p. 8,
Jun. 29-Jul. 4, 1997.
Benedetto et al., "Serial Concatenation of Interleaved Codes: Performance
Analysis, Design, and Iterative Decoding," The Telecommunications
and Data Acquisition (TDA) Progress Report 42-126 for NASA and
California Institute of Technology Jet Propulsion Laboratory,
Joseph H. Yuen, Ed., pp. 1-26 (Aug. 15, 1996).
Benedetto et al., "Serial Concatenation of interleaved codes: performance
analysis, design, and iterative decoding," Proceedings from the IEEE
1997 International Symposium on Information Theory (ISIT), Ulm,
Germany, p. 106, Jun. 29-Jul. 4, 1997.
Benedetto et al., "Soft-output decoding algorithms in iterative decoding
of turbo codes," The Telecommunications and Data Acquisition (TDA)
Progress Report 42-124 for NASA and California Institute of Technology
Jet Propulsion Laboratory, Joseph H. Yuen, Ed., pp. 63-87 (Feb. 15, 1996).
Benedetto, S. et al., "A Soft-Input Soft-Output APP Module for
Iterative Decoding of Concatenated Codes," IEEE Communications
Letters 1(1): 22-24 (Jan. 1997).
Berrou et al., "Near Shannon Limit Error-Correcting Coding and
Decoding: Turbo Codes," ICC, pp. 1064-1070 (1993).
Digital Video Broadcasting (DVB) User guidelines for the second
generation system for Broadcasting, Interactive Services, News
Gathering and other broadband satellite applications (DVB-S2), ETSI
TR 102 376 V1.1.1 (Feb. 2005) Technical Report, pp. 1-104 (Feb. 15, 2005).
Divsalar et al., "Coding Theorems for 'Turbo-Like' Codes," Proceedings
of the 36th Annual Allerton Conference on Communication, Control, and
Computing, Sep. 23-25, 1998, Allerton House, Monticello, Illinois,
pp. 201-210 (1998).
Divsalar et al., "Effective free distance of turbo codes," Electronics
Letters 32(5): 445-446 (Feb. 29, 1996).
Divsalar, D. et al., "Hybrid Concatenated Codes and Iterative Decoding,"
Proceedings from the IEEE 1997 International Symposium on Information
Theory (ISIT), Ulm, Germany, p. 10 (Jun. 29-Jul. 4, 1997).
Divsalar, D. et al., "Low-rate turbo codes for Deep Space Communications,"
Proceedings from the 1995 IEEE International Symposium on Information
Theory, Sep. 17-22, 1995, Whistler, British Columbia, Canada, p. 35.
Divsalar, D. et al., "Multiple Turbo Codes for Deep-Space Communications,"
The Telecommunications and Data Acquisition (TDA) Progress Report 42-121
for NASA and California Institute of Technology Jet Propulsion
Laboratory, Joseph H. Yuen, Ed., pp. 60-77 (May 15, 1995).
Divsalar, D. et al., "Multiple Turbo Codes," MILCOM 95, San Diego, CA,
pp. 279-285 (Nov. 5-6, 1995).
Divsalar, D. et al., "On the Design of Turbo Codes," The Telecommunications
and Data Acquisition (TDA) Progress Report 42-123 for NASA and California
Institute of Technology Jet Propulsion Laboratory, Joseph H. Yuen, Ed.,
pp. 99-131 (Nov. 15, 1995).
Divsalar, D. et al., "Serial Turbo Trellis Coded Modulation with
Rate-1 Inner Code," Proceedings from the IEEE 2000 International
Symposium on Information Theory (ISIT), Italy, pp. 1-14 (Jun. 2000).
Divsalar, D. et al., "Turbo Codes for PCS Applications," ICC 95, IEEE,
Seattle, WA, pp. 54-59 (Jun. 1995).
Jin et al., "Irregular Repeat-Accumulate Codes," 2nd International
Symposium on Turbo Codes & Related Topics, Sep. 4-7, 2000, Brest,
France, 25 slides (presented on Sep. 4, 2000).
Jin et al., "Irregular Repeat-Accumulate Codes," 2nd International
Symposium on Turbo Codes & Related Topics, Sep. 4-7, 2000, Brest,
France, pp. 1-8 (2000).
Richardson et al., "Design of capacity-approaching irregular low-density
parity-check codes," IEEE Trans. Inform. Theory 47: 619-637 (Feb. 2001).
Richardson, T. and R. Urbanke, "Efficient encoding of low-density
parity-check codes," IEEE Trans. Inform. Theory 47: 638-656 (Feb. 2001).
Wiberg et al., "Codes and Iterative Decoding on General Graphs,"
1995 Int'l Symposium on Information Theory, Sep. 1995, p. 468.
`* cited by examiner
`
`
`
`
U.S. Patent          Sep. 2, 2008          Sheet 1 of 5          US 7,421,032 B2

[FIG. 1: schematic diagram of a prior "turbo code" system, showing the
coders, interleaver, channel, and decoders described in the Background.]
`
`
`
U.S. Patent          Sep. 2, 2008          Sheet 2 of 5          US 7,421,032 B2

[FIG. 2: schematic diagram of a coder according to an embodiment, with an
outer coder, an interleaver, and an inner coder.]
`
`
`
U.S. Patent          Sep. 2, 2008          Sheet 3 of 5          US 7,421,032 B2

[FIG. 3: Tanner graph of an IRA code, showing information (variable) nodes
302, the fractions f2, f3, ..., fj of nodes of each degree, check nodes 304
of degree a, and parity nodes 306.]
`
`
`
U.S. Patent          Sep. 2, 2008          Sheet 4 of 5          US 7,421,032 B2

[FIG. 5A and FIG. 5B: messages passed between a variable node and a check
node 304 on the Tanner graph of FIG. 3.]
`
`
`
U.S. Patent          Sep. 2, 2008          Sheet 5 of 5          US 7,421,032 B2

[Drawing sheet: coder schematic diagrams.]
`
`
`
SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL
CODES FORMING TURBO-LIKE CODES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No.
09/861,102, filed May 18, 2001, now U.S. Pat. No. 7,116,710, which
claims the priority of U.S. provisional application Ser. No. 60/205,095,
filed May 18, 2000, and is a continuation-in-part of U.S. application
Ser. No. 09/922,852, filed Aug. 18, 2000, now U.S. Pat. No. 7,089,477.

GOVERNMENT LICENSE RIGHTS

The U.S. Government has a paid-up license in this invention and the
right in limited circumstances to require the patent owner to license
others on reasonable terms as provided for by the terms of Grant No.
CCR-9804793 awarded by the National Science Foundation.

BACKGROUND

Properties of a channel affect the amount of data that can be handled
by the channel. The so-called "Shannon limit" defines the theoretical
limit of the amount of data that a channel can carry.

Different techniques have been used to increase the data rate that can
be handled by a channel. "Near Shannon Limit Error-Correcting Coding
and Decoding: Turbo Codes," by Berrou et al., ICC, pp. 1064-1070 (1993),
described a new "turbo code" technique that has revolutionized the field
of error correcting codes. Turbo codes have sufficient randomness to
allow reliable communication over the channel at a high data rate near
capacity. However, they still retain sufficient structure to allow
practical encoding and decoding algorithms. Still, the technique for
encoding and decoding turbo codes can be relatively complex.

A standard turbo coder 100 is shown in FIG. 1. A block of k information
bits is input directly to a first coder 102. A k bit interleaver 106
also receives the k bits and interleaves them prior to applying them to
a second coder 104. The second coder produces an output that has more
bits than its input, that is, it is a coder with rate that is less than
1. The coders 102, 104 are typically recursive convolutional coders.

Three different items are sent over the channel 150: the original k
bits, first encoded bits 110, and second encoded bits 112. At the
decoding end, two decoders are used: a first constituent decoder 160 and
a second constituent decoder 162. Each receives both the original k
bits, and one of the encoded portions 110, 112. Each decoder sends
likelihood estimates of the decoded bits to the other decoders. The
estimates are used to decode the uncoded information bits as corrupted
by the noisy channel.

SUMMARY

A coding system according to an embodiment is configured to receive a
portion of a signal to be encoded, for example, a data block including a
fixed number of bits. The coding system includes an outer coder, which
repeats and scrambles bits in the data block. The data block is
apportioned into two or more sub-blocks, and bits in different
sub-blocks are repeated a different number of times according to a
selected degree profile. The outer coder may include a repeater with a
variable rate and an interleaver. Alternatively, the outer coder may be
a low-density generator matrix (LDGM) coder.

The repeated and scrambled bits are input to an inner coder that has a
rate substantially close to one. The inner coder may include one or more
accumulators that perform recursive modulo two addition operations on
the input bit stream.

The encoded data output from the inner coder may be transmitted on a
channel and decoded in linear time at a destination using iterative
decoding techniques. The decoding techniques may be based on a Tanner
graph representation of the code.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a prior "turbo code" system.

FIG. 2 is a schematic diagram of a coder according to an embodiment.

FIG. 3 is a Tanner graph for an irregular repeat and accumulate (IRA)
coder.

FIG. 4 is a schematic diagram of an IRA coder according to an
embodiment.

FIG. 5A illustrates a message from a variable node to a check node on
the Tanner graph of FIG. 3.

FIG. 5B illustrates a message from a check node to a variable node on
the Tanner graph of FIG. 3.

FIG. 6 is a schematic diagram of a coder according to an alternate
embodiment.

FIG. 7 is a schematic diagram of a coder according to another alternate
embodiment.

DETAILED DESCRIPTION

FIG. 2 illustrates a coder 200 according to an embodiment. The coder 200
may include an outer coder 202, an interleaver 204, and an inner coder
206. The coder may be used to format blocks of data for transmission,
introducing redundancy into the stream of data to protect the data from
loss due to transmission errors. The encoded data may then be decoded at
a destination in linear time at rates that may approach the channel
capacity.

The outer coder 202 receives the uncoded data. The data may be
partitioned into blocks of fixed size, say k bits. The outer coder may
be an (n,k) binary linear block coder, where n>k. The coder accepts as
input a block u of k data bits and produces an output block v of n data
bits. The mathematical relationship between u and v is v = T0u, where T0
is an n×k matrix, and the rate of the coder is k/n.

The rate of the coder may be irregular, that is, the value of T0 is not
constant, and may differ for sub-blocks of bits in the data block. In an
embodiment, the outer coder 202 is a repeater that repeats the k bits in
a block a number of times q to produce a block with n bits, where n=qk.
Since the repeater has an irregular output, different bits in the block
may be repeated a different number of times. For example, a fraction of
the bits in the block may be repeated two times, a fraction of bits may
be repeated three times, and the remainder of bits may be repeated four
times. These fractions define a degree sequence, or degree profile, of
the code.
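A minimal illustrative sketch of this irregular repetition, in Python
(the function name, the representation of the degree profile as
(fraction, repeat count) pairs, and the example values are assumptions
for this illustration, not part of the specification):

    # Illustrative sketch of the irregular repeater described above.
    # `profile` lists (fraction_of_bits, repeat_count) pairs, e.g. the example
    # in the text where fractions of the block are repeated 2, 3, or 4 times.

    def irregular_repeat(block, profile):
        """Repeat each bit of `block` according to the degree profile."""
        k = len(block)
        repeats = []
        for fraction, q in profile:
            repeats.extend([q] * round(fraction * k))
        repeats = repeats[:k]
        # Any rounding remainder falls back to the last listed repeat count.
        repeats += [profile[-1][1]] * (k - len(repeats))
        out = []
        for bit, q in zip(block, repeats):
            out.extend([bit] * q)   # bit i appears q_i times; n = sum of the q_i
        return out

    # Example: half of the bits repeated twice, half repeated three times.
    v = irregular_repeat([1, 0, 1, 1], [(0.5, 2), (0.5, 3)])
    assert len(v) == 2 * 2 + 2 * 3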
The inner coder 206 may be a linear rate-1 coder, which means that the
n-bit output block x can be written as x = Ti w, where Ti is a
nonsingular n×n matrix. The inner coder 210 can have a rate that is
close to 1, e.g., within 50%, more preferably 10%, and perhaps even more
preferably within 1% of 1.

In an embodiment, the inner coder 206 is an accumulator, which produces
outputs that are the modulo two (mod-2)
partial sums of its inputs. The accumulator may be a truncated rate-1
recursive convolutional coder with the transfer function 1/(1+D). Such
an accumulator may be considered a block coder whose input block
[x_1, ..., x_n] and output block [y_1, ..., y_n] are related by the
formula

    y_1 = x_1
    y_2 = x_1 ⊕ x_2
    y_3 = x_1 ⊕ x_2 ⊕ x_3
    ...
    y_n = x_1 ⊕ x_2 ⊕ ... ⊕ x_n

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition. An advantage
of this system is that only mod-2 addition is necessary for the
accumulator. The accumulator may be embodied using only XOR gates, which
may simplify the design.

The bits output from the outer coder 202 are scrambled before they are
input to the inner coder 206. This scrambling may be performed by the
interleaver 204, which performs a pseudo-random permutation of an input
block v, yielding an output block w having the same length as v.
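A minimal illustrative sketch of such an accumulator, in Python (the
function name is an assumption for this illustration; the loop simply
forms the running mod-2 partial sums, i.e., y_j = y_{j-1} XOR x_j):

    # Illustrative sketch: rate-1 accumulator (1/(1+D)) as a running XOR.
    # Each output bit is the mod-2 partial sum of all inputs seen so far.

    def accumulate(x):
        """Return [y_1, ..., y_n] with y_j = x_1 XOR x_2 XOR ... XOR x_j."""
        y = []
        acc = 0
        for bit in x:
            acc ^= bit          # recursive mod-2 addition: y_j = y_(j-1) XOR x_j
            y.append(acc)
        return y

    # Example: the all-ones input toggles the accumulator on every step.
    assert accumulate([1, 1, 1, 1]) == [1, 0, 1, 0]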
The serial concatenation of the interleaved irregular repeat code and
the accumulate code produces an irregular repeat and accumulate (IRA)
code. An IRA code is a linear code, and as such, may be represented as a
set of parity checks. The set of parity checks may be represented in a
bipartite graph, called the Tanner graph, of the code. FIG. 3 shows a
Tanner graph 300 of an IRA code with parameters (f_1, ..., f_j; a),
where f_i ≥ 0, Σ_i f_i = 1 and "a" is a positive integer. The Tanner
graph includes two kinds of nodes: variable nodes (open circles) and
check nodes (filled circles). There are k variable nodes 302 on the
left, called information nodes. There are r variable nodes 306 on the
right, called parity nodes. There are r = (Σ_i i·f_i)/a check nodes 304
connected between the information nodes and the parity nodes. Each
information node 302 is connected to a number of check nodes 304. The
fraction of information nodes connected to exactly i check nodes is f_i.
For example, in the Tanner graph 300, each of the f_2 information nodes
are connected to two check nodes, corresponding to a repeat of q=2, and
each of the f_3 information nodes are connected to three check nodes,
corresponding to q=3.

Each check node 304 is connected to exactly "a" information nodes 302.
In FIG. 3, a=3. These connections can be made in many ways, as indicated
by the arbitrary permutation of the ra edges joining information nodes
302 and check nodes 304 in permutation block 310. These connections
correspond to the scrambling performed by the interleaver 204.

In an alternate embodiment, the outer coder 202 may be a low-density
generator matrix (LDGM) coder that performs an irregular repeat of the k
bits in the block, as shown in FIG. 4. As the name implies, an LDGM code
has a sparse (low-density) generator matrix. The IRA code produced by
the coder 400 is a serial concatenation of the LDGM code and the
accumulator code. The interleaver 204 in FIG. 2 may be excluded due to
the randomness already present in the structure of the LDGM code.

If the permutation performed in permutation block 310 is fixed, the
Tanner graph represents a binary linear block code with k information
bits (u_1, ..., u_k) and r parity bits (x_1, ..., x_r), as follows. Each
of the information bits is associated with one of the information nodes
302, and each of the parity bits is associated with one of the parity
nodes 306. The value of a parity bit is determined uniquely by the
condition that the mod-2 sum of the values of the variable nodes
connected to each of the check nodes 304 is zero. To see this, set
x_0 = 0. Then if the values of the bits on the ra edges coming out of
the permutation box are (v_1, ..., v_ra), then we have the recursive
formula

    x_j = x_{j-1} + Σ_{i=1}^{a} v_{(j-1)a+i}

for j = 1, 2, ..., r. This is in effect the encoding algorithm.
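A minimal illustrative sketch of this encoding recursion, in Python (the
function name and argument layout are assumptions for this illustration;
v is the permuted sequence of irregular repeats and r = len(v)/a):

    # Illustrative sketch of the encoding recursion
    #     x_j = x_(j-1) + v_((j-1)a+1) + ... + v_((j-1)a+a)   (mod 2),
    # where v is the permuted sequence of irregular repeats and r = len(v) // a.

    def ira_parity_bits(v, a):
        """Return the r parity bits [x_1, ..., x_r] for edge values v of length r*a."""
        assert len(v) % a == 0, "the number of edges must be a multiple of a"
        x = []
        prev = 0                          # x_0 = 0
        for j in range(len(v) // a):
            group = v[j * a:(j + 1) * a]  # the a edges entering check node j+1
            prev = (prev + sum(group)) % 2
            x.append(prev)
        return x

    # Example with a = 2: six edge values produce three parity bits.
    assert ira_parity_bits([1, 0, 1, 1, 0, 0], a=2) == [1, 1, 1]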
`
Two types of IRA codes are represented in FIG. 3, a nonsystematic
version and a systematic version. The nonsystematic version is an (r,k)
code, in which the codeword corresponding to the information bits
(u_1, ..., u_k) is (x_1, ..., x_r). The systematic version is a (k+r, k)
code, in which the codeword is (u_1, ..., u_k, x_1, ..., x_r).

The rate of the nonsystematic code is

    R_nsys = a / Σ_i i·f_i

The rate of the systematic code is

    R_sys = a / (a + Σ_i i·f_i)

For example, regular repeat and accumulate (RA) codes can be considered
nonsystematic IRA codes with a=1 and exactly one f_i equal to 1, say
f_q = 1, and the rest zero, in which case R_nsys simplifies to R = 1/q.

The IRA code may be represented using an alternate notation. Let λ_i be
the fraction of edges between the information nodes 302 and the check
nodes 304 that are adjacent to an information node of degree i, and let
ρ_i be the fraction of such edges that are adjacent to a check node of
degree i+2 (i.e., one that is adjacent to i information nodes). These
edge fractions may be used to represent the IRA code rather than the
corresponding node fractions. Define λ(x) = Σ_i λ_i x^(i-1) and
ρ(x) = Σ_i ρ_i x^(i-1) to be the generating functions of these
sequences. The pair (λ, ρ) is called a degree distribution. For
L(x) = Σ_i f_i x^i,

    f_i = (λ_i / i) / Σ_j (λ_j / j)

and

    L(x) = ∫_0^x λ(t) dt / ∫_0^1 λ(t) dt
`
`
`
The rate of the systematic IRA code given by the degree distribution is
given by

    Rate = ( 1 + (Σ_j ρ_j / j) / (Σ_j λ_j / j) )^(-1)
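A minimal illustrative sketch that evaluates these rate expressions and
checks that the node-fraction and edge-fraction forms agree, in Python
(the degree profile used is made up for this illustration; function and
variable names are assumptions):

    # Illustrative check of the rate formulas for a systematic IRA code with
    # all check nodes adjacent to exactly "a" information nodes.

    def rates_from_node_fractions(f, a):
        """f maps degree i -> fraction f_i of information nodes; returns (R_sys, R_nsys)."""
        s = sum(i * fi for i, fi in f.items())      # Σ_i i·f_i
        return a / (a + s), a / s

    def rate_from_degree_distribution(lam, rho):
        """lam, rho map degree i -> edge fractions λ_i, ρ_i; systematic IRA rate."""
        return 1.0 / (1.0 + sum(r / i for i, r in rho.items())
                            / sum(l / i for i, l in lam.items()))

    # Made-up example: half the information nodes of degree 2, half of degree 3, a = 2.
    f, a = {2: 0.5, 3: 0.5}, 2
    lam = {i: i * fi / sum(j * fj for j, fj in f.items()) for i, fi in f.items()}
    rho = {a: 1.0}      # every check node is adjacent to exactly a information nodes
    R_sys, R_nsys = rates_from_node_fractions(f, a)
    assert abs(R_sys - rate_from_degree_distribution(lam, rho)) < 1e-12
    # Here R_sys = 2/(2 + 2.5) = 0.444... and R_nsys = 2/2.5 = 0.8.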
"Belief propagation" on the Tanner graph realization may be used to
decode IRA codes. Roughly speaking, the belief propagation decoding
technique allows the messages passed on an edge to represent posterior
densities on the bit associated with the variable node. A probability
density on a bit is a pair of non-negative real numbers p(0), p(1)
satisfying p(0)+p(1)=1, where p(0) denotes the probability of the bit
being 0, p(1) the probability of it being 1. Such a pair can be
represented by its log likelihood ratio, m = log(p(0)/p(1)). The
outgoing message from a variable node u to a check node v represents
information about u, and a message from a check node u to a variable
node v represents information about v, as shown in FIGS. 5A and 5B,
respectively.

The outgoing message from a node u to a node v depends on the incoming
messages from all neighbors w of u except v. If u is a variable message
node, this outgoing message is

    m(u→v) = Σ_{w≠v} m(w→u) + m_0(u)

where m_0(u) is the log-likelihood message associated with u. If u is a
check node, the corresponding formula is

    tanh( m(u→v) / 2 ) = Π_{w≠v} tanh( m(w→u) / 2 )

Before decoding, the messages m(w→u) and m(u→v) are initialized to be
zero, and m_0(u) is initialized to be the log-likelihood ratio based on
the channel received information. If the channel is memoryless, i.e.,
each channel output only relies on its input, and y is the output of the
channel code bit u, then m_0(u) = log( p(u=0|y) / p(u=1|y) ). After this
initialization, the decoding process may run in a fully parallel and
local manner. In each iteration, every variable/check node receives
messages from its neighbors, and sends back updated messages. Decoding
is terminated after a fixed number of iterations or upon detecting that
all the constraints are satisfied. Upon termination, the decoder outputs
a decoded sequence based on the messages m(u) = Σ_w m(w→u).
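A minimal illustrative sketch of these two update rules, in Python (the
function names and the representation of incoming messages as a
dictionary keyed by neighbor are assumptions for this illustration;
messages are log-likelihood ratios as defined above):

    import math

    # Illustrative sketch of the belief-propagation update rules.
    # `incoming` holds the messages m(w->u) from all neighbors w of node u,
    # keyed by neighbor; m0 is the channel log-likelihood ratio of the bit at u.

    def variable_to_check(incoming, m0, v):
        """m(u->v) = m0(u) + sum of m(w->u) over all neighbors w of u except v."""
        return m0 + sum(m for w, m in incoming.items() if w != v)

    def check_to_variable(incoming, v):
        """tanh(m(u->v)/2) = product of tanh(m(w->u)/2) over neighbors w != v."""
        prod = 1.0
        for w, m in incoming.items():
            if w != v:
                prod *= math.tanh(m / 2.0)
        # Clamp to keep atanh finite when the product saturates at +/- 1.
        prod = max(min(prod, 1.0 - 1e-12), -1.0 + 1e-12)
        return 2.0 * math.atanh(prod)

    # Example: a variable node with channel LLR 0.5 and two other neighbors.
    msg = variable_to_check({"c1": 1.2, "c2": -0.3, "c3": 0.8}, m0=0.5, v="c3")
    # msg == 0.5 + 1.2 - 0.3 == 1.4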
Thus, on various channels, iterative decoding only differs in the
initial messages m_0(u). For example, consider three memoryless channel
models: a binary erasure channel (BEC); a binary symmetric channel
(BSC); and an additive white Gaussian noise (AWGN) channel.

In the BEC, there are two inputs and three outputs. When 0 is
transmitted, the receiver can receive either 0 or an erasure E. An
erasure E output means that the receiver does not know how to demodulate
the output. Similarly, when 1 is transmitted, the receiver can receive
either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

    m_0(u) = +∞  if y = 0
    m_0(u) =  0  if y = E
    m_0(u) = -∞  if y = 1

In the BSC, there are two possible inputs (0, 1) and two possible
outputs (0, 1). The BSC is characterized by a set of conditional
probabilities relating all possible outputs to possible inputs. Thus,
for the BSC, y ∈ {0, 1}, and

    m_0(u) =  log( (1-p) / p )  if y = 0
    m_0(u) = -log( (1-p) / p )  if y = 1

In the AWGN, the discrete-time input symbols X take their values in a
finite alphabet while channel output symbols Y can take any values along
the real line. There is assumed to be no distortion or other effects
other than the addition of white Gaussian noise. In an AWGN with Binary
Phase Shift Keying (BPSK) signaling which maps 0 to the symbol with
amplitude √Es and 1 to the symbol with amplitude -√Es, output y ∈ R,
then

    m_0(u) = 4y√Es / N_0

where N_0/2 is the noise power spectral density.
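A minimal illustrative sketch of these initial messages, in Python (the
function name and the parameter names p, Es and N0 are assumptions for
this illustration):

    import math

    # Illustrative sketch: initial log-likelihood ratios
    # m_0(u) = log(p(u=0|y)/p(u=1|y)) for the three channel models above.

    def initial_llr(channel, y, p=None, Es=None, N0=None):
        if channel == "BEC":                      # y in {0, "E", 1}
            return {0: math.inf, "E": 0.0, 1: -math.inf}[y]
        if channel == "BSC":                      # p is the crossover probability
            llr = math.log((1.0 - p) / p)
            return llr if y == 0 else -llr
        if channel == "AWGN":                     # BPSK: 0 -> +sqrt(Es), 1 -> -sqrt(Es)
            return 4.0 * y * math.sqrt(Es) / N0   # N0/2 is the noise power spectral density
        raise ValueError("unknown channel model")

    # Examples.
    assert initial_llr("BEC", "E") == 0.0
    assert initial_llr("BSC", 1, p=0.1) == -math.log(9.0)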
`
`
`
The selection of a degree profile for use in a particular transmission
channel is a design parameter, which may be affected by various
attributes of the channel. The criteria for selecting a particular
degree profile may include, for example, the type of channel and the
data rate on the channel. For example, Table 1 shows degree profiles
that have been found to produce good results for an AWGN channel model.

                                TABLE 1

a = 2:  nonzero λ_i: 0.139025, 0.2221555, 0.638820
        Rate = 0.333364, σGA = 1.1840, σ* = 1.1981,
        (Eb/N0)* = 0.190 dB, S.L. = -0.4953 dB

a = 3:  nonzero λ_i: 0.078194, 0.128085, 0.160813, 0.036178,
        0.108828, 0.487902
        Rate = 0.333223, σGA = 1.2415, σ* = 1.2607,
        (Eb/N0)* = -0.250 dB, S.L. = -0.4958 dB

a = 4:  nonzero λ_i: 0.054485, 0.104315, 0.126755, 0.229816,
        0.016484, 0.450302, 0.017842
        Rate = 0.333218, σGA = 1.2615, σ* = 1.2780,
        (Eb/N0)* = -0.371 dB, S.L. = -0.4958 dB

(The nonzero λ_i are drawn from indices i between 2 and 28; σGA denotes
the Gaussian approximation noise threshold and σ* the sum-product
decoding threshold.)

Table 1 shows degree profiles yielding codes of rate approximately 1/3
for the AWGN channel and with a = 2, 3, 4. For each sequence, the
Gaussian approximation noise threshold, the actual sum-product decoding
threshold and the corresponding energy per bit (Eb)-noise power (N0)
ratio in dB are given. Also listed is the Shannon limit (S.L.).

As the parameter "a" is increased, the performance improves. For
example, for a = 4, the best code found has an iterative decoding
threshold of Eb/N0 = -0.371 dB, which is only 0.12 dB above the Shannon
limit.

The accumulator component of the coder may be replaced by a "double
accumulator" 600 as shown in FIG. 6. The double accumulator can be
viewed as a truncated rate-1 convolutional coder with transfer function
1/(1+D+D²).

Alternatively, a pair of accumulators may be added, as shown in FIG. 7.
There are three component codes: the "outer" code 700, the "middle" code
702, and the "inner" code 704. The outer code is an irregular repetition
code, and the middle and inner codes are both accumulators.
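A minimal illustrative sketch of the double accumulator, in Python (the
function name is an assumption for this illustration; the recursion
y_j = x_j XOR y_{j-1} XOR y_{j-2} implements the transfer function
1/(1+D+D²)):

    # Illustrative sketch of the "double accumulator": a truncated rate-1
    # convolutional coder with transfer function 1/(1 + D + D^2), i.e.
    #     y_j = x_j XOR y_(j-1) XOR y_(j-2)   (with y_0 = y_(-1) = 0).

    def double_accumulate(x):
        y = []
        y1 = y2 = 0                    # y_(j-1) and y_(j-2), initialized to zero
        for bit in x:
            out = bit ^ y1 ^ y2
            y.append(out)
            y2, y1 = y1, out
        return y

    # Example: a single 1 followed by zeros yields the periodic pattern 1, 1, 0, ...
    assert double_accumulate([1, 0, 0, 0, 0, 0]) == [1, 1, 0, 1, 1, 0]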
IRA codes may be implemented in a variety of channels, including
memoryless channels, such as the BEC, BSC, and AWGN, as well as channels
having non-binary input, non-symmetric and fading channels, and/or
channels with memory.

A number of embodiments have been described. Nevertheless, it will be
understood that various modifications may be made without departing from
the spirit and scope of the invention. Accordingly, other embodiments
are within the scope of the following claims.

The invention claimed is:

1. A method comprising:
receiving a collection of message bits having a first sequence in a
source data stream;
generating a sequence of parity bits, wherein each parity bit "x_j" in
the sequence is in accordance with the formula

    x_j = x_{j-1} + Σ_{i=1}^{a} v_{(j-1)a+i},

where
"x_{j-1}" is the value of a parity bit "j-1," and

    Σ_{i=1}^{a} v_{(j-1)a+i}

is the value of a sum of "a" randomly chosen irregular repeats of the
message bits; and
making the sequence of parity bits available for transmission in a
transmission data stream.

2. The method of claim 1, wherein the sequence of parity bits is
generated in accordance with "a" being constant.

3. The method of claim 1, wherein the sequence of parity bits is
generated in accordance with "a" varying for different parity bits.

4. The method of claim 1, wherein generating the sequence of parity bits
comprises performing recursive modulo two addition operations on the
random sequence of bits.

5. The method of claim 1, wherein generating the sequence of parity bits
comprises:
generating a random sequence of bits that repeats each of the message
bits one or more times with the repeats of the message bits being
distributed in a random sequence, wherein different fractions of the
message bits are each repeated a different number of times and the
number of repeats for each message bit is irregular; and
XOR summing in linear sequential fashion a predecessor parity bit and
"a" bits of the random sequence of bits.

6. The method of claim 5, wherein generating the random sequence of bits
comprises coding the collection of message bits using a low-density
generator matrix (LDGM) coder.

7. The method of claim 5, wherein generating the random sequence of bits
comprises:
producing a block of data bits, wherein different message bits are each
repeated a different number of times in a sequence that matches the
first sequence; and
randomly permuting the different bits to generate the random sequence.

8. The method of claim 1, further comprising transmitting the sequence
of parity bits.

9. The method of claim 8, wherein transmitting the sequence of parity
bits comprises transmitting the sequence of parity bits as part of a
nonsystematic code.

10. The method of claim 8, wherein transmitting the sequence of parity
bits comprises transmitting the sequence of parity bits as part of a
systematic code.

11. A device comprising:
an encoder configured to receive a collection of message bits and encode
the message bits to generate a collection of parity bits in accordance
with the following Tanner graph:
`
`
`
[Tanner graph recited in claim 11.]

12. The device of claim 11, wherein the encoder is configured to
generate the collection of parity bits as if a number of inputs into
nodes v_i was not constant.

13. The device of claim 11, wherein the encoder comprises:
a low-density generator matrix (LDGM) coder configured to perform an
irregular repeat on message bits having a first sequence in a source
data stream to output a random sequence of repeats of the message bits;
and
an accumulator configured to XOR sum in linear sequential fashion a
predecessor parity bit and "a" bits of the random sequence of repeats of
the message bits.

14. The device of claim 12, wherein the accumulator comprises a
recursive convolutional coder.

15. The device of claim 14, wherein the recursive convolutional coder
comprises a truncated rate-1 recursive convolutional coder.

16. The device of claim 14, wherein the recursive convolutional coder
has a transfer function of 1/(1+D).

17. The d

message passing decoder comprising two or more check/variable nodes
operating in parallel to receive messages from neighboring
check/variable nodes and send updated messages to the neighboring
variable/check nodes, wherein the message passing decoder is configured
to decode the received data stream that has been encoded in accordance
with the following Tanner graph:

[Tanner graph recited in the claim.]