US008284833B2

(12) United States Patent
Jin et al.

(10) Patent No.:     US 8,284,833 B2
(45) Date of Patent: Oct. 9, 2012
(54) SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

(75) Inventors: Hui Jin, Glen Gardner, NJ (US); Aamod Khandekar, Pasadena, CA (US); Robert J. McEliece, Pasadena, CA (US)

(73) Assignee: California Institute of Technology, Pasadena, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 13/073,947

(22) Filed: Mar. 28, 2011

(65) Prior Publication Data
     US 2011/0264985 A1    Oct. 27, 2011

Related U.S. Application Data

(63) Continuation of application No. 12/165,606, filed on Jun. 30, 2008, now Pat. No. 7,916,781, which is a continuation of application No. 11/542,950, filed on Oct. 3, 2006, now Pat. No. 7,421,032, which is a continuation of application No. 09/861,102, filed on May 18, 2001, now Pat. No. 7,116,710, which is a continuation-in-part of application No. 09/922,852, filed on Aug. 18, 2000, now Pat. No. 7,089,477.

(60) Provisional application No. 60/205,095, filed on May 18, 2000.
(51) Int. Cl.
     H04B 1/66    (2006.01)
(52) U.S. Cl. ........ 375/240; 375/285; 375/296; 714/801; 714/804
(58) Field of Classification Search .................. 375/240, 375/240.24, 254, 285, 295, 296, 260; 714/755, 714/758, 800, 801, 804, 805
     See application file for complete search history.
(56)                References Cited

            U.S. PATENT DOCUMENTS

     5,181,207 A      1/1993  Chapman
     5,392,299 A      2/1995  Rhines et al.
     5,530,707 A      6/1996  Lin
     5,751,739 A      5/1998  Seshadri et al.
     5,802,115 A      9/1998  Meyer
     5,881,093 A      3/1999  Wang et al.
     5,956,351 A *    9/1999  Bossen et al. .......... 714/757
     6,014,411 A      1/2000  Wang
     6,023,783 A      2/2000  Divsalar et al.
     6,031,874 A      2/2000  Chennakeshu et al.
     6,032,284 A      2/2000  Bliss
     6,044,116 A      3/2000  Wang
     6,094,739 A      7/2000  Miller et al.
     6,195,396 B1     2/2001  Fang et al.
     6,396,423 B1     5/2002  Laumen et al.
     6,437,714 B1     8/2002  Kim et al.
     6,732,328 B1     5/2004  McEwen et al.

                  (Continued)
              OTHER PUBLICATIONS

Aji, S.M., et al., "The Generalized Distributive Law," IEEE Transactions on Information Theory, 46(2):325-343, Mar. 2000.

                  (Continued)
Primary Examiner - Dac Ha
(74) Attorney, Agent, or Firm - Perkins Coie LLP
(57)                  ABSTRACT

A serial concatenated coder includes an outer coder and an inner coder. The outer coder irregularly repeats bits in a data block according to a degree profile and scrambles the repeated bits. The scrambled and repeated bits are input to an inner coder, which has a rate substantially close to one.

            14 Claims, 5 Drawing Sheets
[Title page figure: Tanner graph (FIG. 3) of an IRA code, showing variable nodes grouped by the fraction of nodes of each degree (f2, f3, ..., fj), check nodes of degree a, and parity nodes 306.]
            U.S. PATENT DOCUMENTS

     6,859,906 B2       2/2005  Hammons et al.
     7,089,477 B1       8/2006  Divsalar et al.
     7,116,710 B1      10/2006  Jin et al.
     7,421,032 B2       9/2008  Jin et al.
     7,916,781 B2       3/2011  Jin et al.
     7,934,146 B2 *     4/2011  Stolpman .......... 714/800
 2001/0025358 A1        9/2001  Eidson et al.
 2007/0025450 A1        2/2007  Jin et al.
 2008/0263425 A1 *     10/2008  Lakkis .......... 714/752
 2008/0294964 A1       11/2008  Jin et al.

              OTHER PUBLICATIONS
Benedetto, S., et al., "A Soft-Input Soft-Output APP Module for Iterative Decoding of Concatenated Codes," IEEE Communications Letters, 1(1):22-24, Jan. 1997.
Benedetto, S., et al., "A Soft-Input Soft-Output Maximum a Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-127), pp. 1-20, Nov. 1996.
Benedetto, S., et al., "Bandwidth efficient parallel concatenated coding schemes," Electronics Letters, 31(24):2067-2069, Nov. 1995.
Benedetto, S., et al., "Design of Serially Concatenated Interleaved Codes," ICC 97, vol. 2, pp. 710-714, Jun. 1997.
Benedetto, S., et al., "Parallel Concatenated Trellis Coded Modulation," ICC 96, vol. 2, pp. 974-978, Jun. 1996.
Benedetto, S., et al., "Serial Concatenated Trellis Coded Modulation with Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 8, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-126), pp. 1-26, Aug. 1996.
Benedetto, S., et al., "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 106, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-124), pp. 63-87, Feb. 1996.
Berrou, C., et al., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," ICC 93, vol. 2, pp. 1064-1070, May 1993.
Digital Video Broadcasting (DVB) - User guidelines for the second generation system for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), ETSI TR 102 376 V1.1.1 Technical Report, pp. 1-104 (p. 64), Feb. 2005.
Divsalar, D., et al., "Coding Theorems for 'Turbo-Like' Codes," Proceedings of the 36th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, pp. 201-210, Sep. 1998.
Divsalar, D., et al., "Effective free distance of turbo codes," Electronics Letters, 32(5):445-446, Feb. 1996.
Divsalar, D., et al., "Hybrid Concatenated Codes and Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 10, Jun. 29-Jul. 4, 1997.
Divsalar, D., et al., "Low-Rate Turbo Codes for Deep-Space Communications," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 35, Sep. 1995.
Divsalar, D., et al., "Multiple Turbo Codes for Deep-Space Communications," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-121), pp. 66-77, May 1995.
Divsalar, D., et al., "Multiple Turbo Codes," MILCOM '95, vol. 1, pp. 279-285, Nov. 1995.
Divsalar, D., et al., "On the Design of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-123), pp. 99-121, Nov. 1995.
Divsalar, D., et al., "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code," Proceedings 2000 IEEE International Symposium on Information Theory (ISIT), Sorrento, Italy, p. 194, Jun. 2000.
Divsalar, D., et al., "Turbo Codes for PCS Applications," IEEE ICC '95, Seattle, WA, USA, vol. 1, pp. 54-59, Jun. 1995.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes, Brest, France, 25 pages, Sep. 2000.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes & Related Topics, Brest, France, pp. 1-8, Sep. 2000.
Richardson, T.J., et al., "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):619-637, Feb. 2001.
Richardson, T.J., et al., "Efficient Encoding of Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):638-656, Feb. 2001.
Tanner, R.M., "A Recursive Approach to Low Complexity Codes," IEEE Transactions on Information Theory, 27(5):533-547, Sep. 1981.
Wiberg, N., et al., "Codes and Iterative Decoding on General Graphs," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 468, Sep. 1995.

* cited by examiner
U.S. Patent    Oct. 9, 2012    Sheet 1 of 5    US 8,284,833 B2

[FIG. 1: schematic diagram of a prior "turbo code" system, with two coders, an interleaver, a channel, and constituent decoders.]
U.S. Patent    Oct. 9, 2012    Sheet 2 of 5    US 8,284,833 B2

[FIGS. 2 and 4: schematic diagrams of coders according to embodiments, showing an outer coder, an interleaver/permutation, an accumulator, and an LDGM outer coder.]
U.S. Patent    Oct. 9, 2012    Sheet 3 of 5    US 8,284,833 B2

[FIG. 3: Tanner graph 300 of an IRA code, showing information (variable) nodes 302 grouped by the fraction of nodes of each degree (f2, f3, ..., fj), a permutation of the edges, check nodes 304 of degree a, and parity nodes 306.]
U.S. Patent    Oct. 9, 2012    Sheet 4 of 5    US 8,284,833 B2

[FIG. 5A: a message passed from a variable node to a check node 304 of the Tanner graph of FIG. 3. FIG. 5B: a message passed from a check node 304 to a variable node.]
U.S. Patent    Oct. 9, 2012    Sheet 5 of 5    US 8,284,833 B2

[FIGS. 6 and 7: schematic diagrams of coders according to alternate embodiments.]
SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 12/165,606, filed Jun. 30, 2008, now U.S. Pat. No. 7,916,781, which is a continuation of U.S. application Ser. No. 11/542,950, filed Oct. 3, 2006, now U.S. Pat. No. 7,421,032, which is a continuation of U.S. application Ser. No. 09/861,102, filed May 18, 2001, now U.S. Pat. No. 7,116,710, which claims the priority of U.S. Provisional Application Ser. No. 60/205,095, filed May 18, 2000, and is a continuation-in-part of U.S. application Ser. No. 09/922,852, filed Aug. 18, 2000, now U.S. Pat. No. 7,089,477. The disclosures of the prior applications are considered part of (and are incorporated by reference in) the disclosure of this application.

GOVERNMENT LICENSE RIGHTS

The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Grant No. CCR-9804793 awarded by the National Science Foundation.

BACKGROUND

Properties of a channel affect the amount of data that can be handled by the channel. The so-called "Shannon limit" defines the theoretical limit of the amount of data that a channel can carry.

Different techniques have been used to increase the data rate that can be handled by a channel. "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," by Berrou et al., ICC, pp. 1064-1070 (1993), described a new "turbo code" technique that has revolutionized the field of error correcting codes. Turbo codes have sufficient randomness to allow reliable communication over the channel at a high data rate near capacity. However, they still retain sufficient structure to allow practical encoding and decoding algorithms. Still, the technique for encoding and decoding turbo codes can be relatively complex.

A standard turbo coder 100 is shown in FIG. 1. A block of k information bits is input directly to a first coder 102. A k bit interleaver 106 also receives the k bits and interleaves them prior to applying them to a second coder 104. The second coder produces an output that has more bits than its input, that is, it is a coder with rate that is less than 1. The coders 102, 104 are typically recursive convolutional coders.

Three different items are sent over the channel 150: the original k bits, first encoded bits 110, and second encoded bits 112. At the decoding end, two decoders are used: a first constituent decoder 160 and a second constituent decoder 162. Each receives both the original k bits and one of the encoded portions 110, 112. Each decoder sends likelihood estimates of the decoded bits to the other decoder. The estimates are used to decode the uncoded information bits as corrupted by the noisy channel.

SUMMARY

A coding system according to an embodiment is configured to receive a portion of a signal to be encoded, for example, a data block including a fixed number of bits. The coding system includes an outer coder, which repeats and scrambles bits in the data block. The data block is apportioned into two or more sub-blocks, and bits in different sub-blocks are repeated a different number of times according to a selected degree profile. The outer coder may include a repeater with a variable rate and an interleaver. Alternatively, the outer coder may be a low-density generator matrix (LDGM) coder.

The repeated and scrambled bits are input to an inner coder that has a rate substantially close to one. The inner coder may include one or more accumulators that perform recursive modulo two addition operations on the input bit stream.

The encoded data output from the inner coder may be transmitted on a channel and decoded in linear time at a destination using iterative decoding techniques. The decoding techniques may be based on a Tanner graph representation of the code.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a prior "turbo code" system.
FIG. 2 is a schematic diagram of a coder according to an embodiment.
FIG. 3 is a Tanner graph for an irregular repeat and accumulate (IRA) coder.
FIG. 4 is a schematic diagram of an IRA coder according to an embodiment.
FIG. 5A illustrates a message from a variable node to a check node on the Tanner graph of FIG. 3.
FIG. 5B illustrates a message from a check node to a variable node on the Tanner graph of FIG. 3.
FIG. 6 is a schematic diagram of a coder according to an alternate embodiment.
FIG. 7 is a schematic diagram of a coder according to another alternate embodiment.

DETAILED DESCRIPTION

FIG. 2 illustrates a coder 200 according to an embodiment. The coder 200 may include an outer coder 202, an interleaver 204, and an inner coder 206. The coder may be used to format blocks of data for transmission, introducing redundancy into the stream of data to protect the data from loss due to transmission errors. The encoded data may then be decoded at a destination in linear time at rates that may approach the channel capacity.

The outer coder 202 receives the uncoded data. The data may be partitioned into blocks of fixed size, say k bits. The outer coder may be an (n,k) binary linear block coder, where n>k. The coder accepts as input a block u of k data bits and produces an output block v of n data bits. The mathematical relationship between u and v is v = T0 u, where T0 is an n x k matrix, and the rate of the coder is k/n.

The rate of the coder may be irregular, that is, the value of T0 is not constant, and may differ for sub-blocks of bits in the data block. In an embodiment, the outer coder 202 is a repeater that repeats the k bits in a block a number of times q to produce a block with n bits, where n = qk. Since the repeater has an irregular output, different bits in the block may be repeated a different number of times. For example, a fraction of the bits in the block may be repeated two times, a fraction of bits may be repeated three times, and the remainder of bits may be repeated four times. These fractions define a degree sequence, or degree profile, of the code.
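The irregular repetition and scrambling performed by the outer coder 202 and the interleaver 204 can be pictured with a short Python sketch. This is a minimal illustration only; the block length, degree profile, and pseudo-random permutation below are illustrative assumptions, not values taken from the patent.

    import random

    def outer_encode(u, degree_profile, seed=0):
        """Irregularly repeat the information bits u according to a degree
        profile {repeat_count: fraction_of_bits}, then scramble the repeated
        bits with a fixed pseudo-random permutation (the interleaver)."""
        k = len(u)
        # Assign a repeat count q to each sub-block of the data block.
        repeats = []
        for q, fraction in degree_profile.items():
            repeats += [q] * round(fraction * k)
        repeats = repeats[:k] + [repeats[-1]] * (k - len(repeats))  # absorb rounding

        # Irregular repetition: bit i is copied repeats[i] times (n = sum of repeats).
        v = [bit for bit, q in zip(u, repeats) for _ in range(q)]

        # Interleaver: a fixed pseudo-random permutation of the repeated block.
        perm = list(range(len(v)))
        random.Random(seed).shuffle(perm)
        return [v[p] for p in perm]

    # Example: half of the bits repeated twice, half repeated three times.
    u = [1, 0, 1, 1, 0, 0, 1, 0]
    w = outer_encode(u, {2: 0.5, 3: 0.5})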
The inner coder 206 may be a linear rate-1 coder, which means that the n-bit output block x can be written as x = TI w, where TI is a nonsingular n x n matrix. The inner coder 206 can have a rate that is close to 1, e.g., within 50%, more preferably within 10%, and perhaps even more preferably within 1% of 1.

In an embodiment, the inner coder 206 is an accumulator, which produces outputs that are the modulo two (mod-2) partial sums of its inputs. The accumulator may be a truncated rate-1 recursive convolutional coder with the transfer function 1/(1+D). Such an accumulator may be considered a block coder whose input block [x_1, ..., x_n] and output block [y_1, ..., y_n] are related by the formula

  y_1 = x_1
  y_2 = x_1 ⊕ x_2
  ...
  y_n = x_1 ⊕ x_2 ⊕ ... ⊕ x_n

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition. An advantage of this system is that only mod-2 addition is necessary for the accumulator. The accumulator may be embodied using only XOR gates, which may simplify the design.
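As a concrete illustration, the truncated accumulator above is just a running XOR over the input block. The following Python sketch implements that relation; it is a minimal sketch, not an optimized hardware description.

    def accumulate(x):
        """Rate-1 accumulator (1/(1+D)): each output bit is the mod-2 (XOR)
        partial sum of all input bits seen so far, y_i = x_1 ^ x_2 ^ ... ^ x_i."""
        y, running = [], 0
        for bit in x:
            running ^= bit          # only XOR (mod-2 addition) is needed
            y.append(running)
        return y

    # Example: the scrambled block from the outer coder is accumulated
    # to form the inner coder output.
    assert accumulate([1, 0, 1, 1]) == [1, 1, 0, 1]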
The bits output from the outer coder 202 are scrambled before they are input to the inner coder 206. This scrambling may be performed by the interleaver 204, which performs a pseudo-random permutation of an input block v, yielding an output block w having the same length as v.

The serial concatenation of the interleaved irregular repeat code and the accumulate code produces an irregular repeat and accumulate (IRA) code. An IRA code is a linear code, and as such, may be represented as a set of parity checks. The set of parity checks may be represented in a bipartite graph, called the Tanner graph, of the code. FIG. 3 shows a Tanner graph 300 of an IRA code with parameters (f_1, ..., f_j; a), where f_i ≥ 0, Σ_i f_i = 1, and "a" is a positive integer. The Tanner graph includes two kinds of nodes: variable nodes (open circles) and check nodes (filled circles). There are k variable nodes 302 on the left, called information nodes. There are r variable nodes 306 on the right, called parity nodes. There are r = (k Σ_i i f_i)/a check nodes 304 connected between the information nodes and the parity nodes. Each information node 302 is connected to a number of check nodes 304. The fraction of information nodes connected to exactly i check nodes is f_i. For example, in the Tanner graph 300, each of the f_2 information nodes is connected to two check nodes, corresponding to a repeat of q=2, and each of the f_3 information nodes is connected to three check nodes, corresponding to q=3.

Each check node 304 is connected to exactly "a" information nodes 302. In FIG. 3, a=3. These connections can be made in many ways, as indicated by the arbitrary permutation of the ra edges joining information nodes 302 and check nodes 304 in permutation block 310. These connections correspond to the scrambling performed by the interleaver 204.

In an alternate embodiment, the outer coder 202 may be a low-density generator matrix (LDGM) coder that performs an irregular repeat of the k bits in the block, as shown in FIG. 4. As the name implies, an LDGM code has a sparse (low-density) generator matrix. The IRA code produced by the coder 400 is a serial concatenation of the LDGM code and the accumulator code. The interleaver 204 in FIG. 2 may be excluded due to the randomness already present in the structure of the LDGM code.

If the permutation performed in permutation block 310 is fixed, the Tanner graph represents a binary linear block code with k information bits (u_1, ..., u_k) and r parity bits (x_1, ..., x_r), as follows. Each of the information bits is associated with one of the information nodes 302, and each of the parity bits is associated with one of the parity nodes 306. The value of a parity bit is determined uniquely by the condition that the mod-2 sum of the values of the variable nodes connected to each of the check nodes 304 is zero. To see this, set x_0 = 0. Then, if the values of the bits on the ra edges coming out of the permutation box are (v_1, ..., v_ra), we have the recursive formula

  x_j = x_{j-1} + Σ_{i=1..a} v_{(j-1)a+i}

for j = 1, 2, ..., r, where the addition is mod-2. This is in effect the encoding algorithm.
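One way to see this encoding algorithm concretely is to generate the parity bits directly from the recursion above. The Python sketch below assumes a regular repeat (every information bit repeated q times) and an arbitrary fixed permutation; the helper name and the example values are hypothetical, not taken from the patent.

    def ira_encode(u, q, a, perm):
        """Systematic IRA encoding: repeat each information bit q times,
        permute the repeated bits, then form each parity bit as the mod-2
        sum of the previous parity bit and the next 'a' permuted bits,
        x_j = x_{j-1} + v_{(j-1)a+1} + ... + v_{(j-1)a+a} (mod 2)."""
        repeated = [bit for bit in u for _ in range(q)]   # outer repeat code
        v = [repeated[p] for p in perm]                   # permutation block 310
        assert len(v) % a == 0
        r = len(v) // a                                   # number of check/parity nodes
        x, prev = [], 0                                   # x_0 = 0
        for j in range(r):
            s = prev
            for i in range(a):
                s ^= v[j * a + i]
            x.append(s)
            prev = s
        return u + x                                      # codeword (u_1..u_k; x_1..x_r)

    # Example with k = 4, q = 3, a = 2 and an arbitrary fixed permutation of the 12 edges.
    perm = [5, 0, 7, 2, 9, 4, 11, 6, 1, 8, 3, 10]
    codeword = ira_encode([1, 0, 1, 1], q=3, a=2, perm=perm)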
Two types of IRA codes are represented in FIG. 3, a nonsystematic version and a systematic version. The nonsystematic version is an (r,k) code, in which the codeword corresponding to the information bits (u_1, ..., u_k) is (x_1, ..., x_r). The systematic version is a (k+r, k) code, in which the codeword is (u_1, ..., u_k; x_1, ..., x_r).

The rate of the nonsystematic code is

  R_nsys = a / Σ_i i f_i

The rate of the systematic code is

  R_sys = a / (a + Σ_i i f_i)

For example, regular repeat and accumulate (RA) codes can be considered nonsystematic IRA codes with a=1 and exactly one f_i equal to 1, say f_q = 1, and the rest zero, in which case R_nsys simplifies to R = 1/q.

The IRA code may be represented using an alternate notation. Let λ_i be the fraction of edges between the information nodes 302 and the check nodes 304 that are adjacent to an information node of degree i, and let ρ_i be the fraction of such edges that are adjacent to a check node of degree i+2 (i.e., one that is adjacent to i information nodes). These edge fractions may be used to represent the IRA code rather than the corresponding node fractions. Define λ(x) = Σ_i λ_i x^(i-1) and ρ(x) = Σ_i ρ_i x^(i-1) to be the generating functions of these sequences. The pair (λ, ρ) is called a degree distribution. The node fractions are recovered from the edge fractions by

  f_i = (λ_i / i) / Σ_j (λ_j / j)

and, for L(x) = Σ_i f_i x^i,

  L(x) = ∫_0^x λ(t) dt / ∫_0^1 λ(t) dt

The rate of the systematic IRA code given by the degree distribution is given by

  Rate = ( 1 + (Σ_j ρ_j/j) / (Σ_j λ_j/j) )^(-1)
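These rate expressions are easy to check numerically. The sketch below computes the systematic IRA rate from the information-side edge fractions λ, under the assumption that every check node sees exactly a information nodes (so ρ is concentrated at a single degree); the function name is hypothetical and the profile used is the a = 3 column of Table 1 below.

    def sys_ira_rate(lam, a):
        """Rate of a systematic IRA code from the information-side edge
        fractions lam = {degree i: lambda_i} and check degree a,
        Rate = (1 + (sum_j rho_j/j) / (sum_j lambda_j/j))^(-1)."""
        lam_term = sum(l / i for i, l in lam.items())   # sum_j lambda_j / j
        rho_term = 1.0 / a                              # all edges see check degree a
        return 1.0 / (1.0 + rho_term / lam_term)

    # Example: the a = 3 degree profile of Table 1 gives a rate of about 1/3.
    lam_a3 = {2: 0.078194, 3: 0.128085, 5: 0.160813,
              6: 0.036178, 12: 0.108828, 13: 0.487902}
    print(sys_ira_rate(lam_a3, a=3))   # ~0.333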
"Belief propagation" on the Tanner graph realization may be used to decode IRA codes. Roughly speaking, the belief propagation decoding technique allows the messages passed on an edge to represent posterior densities on the bit associated with the variable node. A probability density on a bit is a pair of non-negative real numbers p(0), p(1) satisfying p(0) + p(1) = 1, where p(0) denotes the probability of the bit being 0 and p(1) the probability of it being 1. Such a pair can be represented by its log likelihood ratio, m = log(p(0)/p(1)). The outgoing message from a variable node u to a check node v represents information about u, and a message from a check node u to a variable node v represents information about v, as shown in FIGS. 5A and 5B, respectively.

The outgoing message from a node u to a node v depends on the incoming messages from all neighbors w of u except v. If u is a variable message node, this outgoing message is

  m(u→v) = Σ_{w≠v} m(w→u) + m_0(u)

where m_0(u) is the log-likelihood message associated with u. If u is a check node, the corresponding formula is

  tanh( m(u→v) / 2 ) = Π_{w≠v} tanh( m(w→u) / 2 )

Before decoding, the messages m(w→u) and m(u→v) are initialized to be zero, and m_0(u) is initialized to be the log-likelihood ratio based on the channel received information. If the channel is memoryless, i.e., each channel output only relies on its input, and y is the output of the channel code bit u, then m_0(u) = log(p(u=0|y)/p(u=1|y)). After this initialization, the decoding process may run in a fully parallel and local manner. In each iteration, every variable/check node receives messages from its neighbors, and sends back updated messages. Decoding is terminated after a fixed number of iterations or after detecting that all the constraints are satisfied. Upon termination, the decoder outputs a decoded sequence based on the messages m(u) = Σ_w m(w→u).

Thus, on various channels, iterative decoding only differs in the initial messages m_0(u). For example, consider three memoryless channel models: a binary erasure channel (BEC); a binary symmetric channel (BSC); and an additive white Gaussian noise (AWGN) channel.

In the BEC, there are two inputs and three outputs. When 0 is transmitted, the receiver can receive either 0 or an erasure E. An erasure E output means that the receiver does not know how to demodulate the output. Similarly, when 1 is transmitted, the receiver can receive either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

  m_0(u) = +∞ if y = 0,  0 if y = E,  -∞ if y = 1

In the BSC, there are two possible inputs (0, 1) and two possible outputs (0, 1). The BSC is characterized by a set of conditional probabilities relating all possible outputs to possible inputs. Thus, for the BSC, y ∈ {0, 1}, and

  m_0(u) = log((1-p)/p) if y = 0,  -log((1-p)/p) if y = 1

In the AWGN, the discrete-time input symbols X take their values in a finite alphabet while channel output symbols Y can take any values along the real line. There is assumed to be no distortion or other effects other than the addition of white Gaussian noise. In an AWGN with Binary Phase Shift Keying (BPSK) signaling, which maps 0 to the symbol with amplitude √Es and 1 to the symbol with amplitude -√Es, and output y ∈ R,

  m_0(u) = 4y√Es / N_0

where N_0/2 is the noise power spectral density.
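To make the message-passing rules concrete, the following Python sketch initializes m_0(u) for the three channel models and applies one variable-node and one check-node update. The channel parameters, received values, and function names are illustrative assumptions only, not part of the patent.

    import math

    def m0(y, channel, p=0.1, Es=1.0, N0=1.0):
        """Initial log-likelihood ratio log(p(u=0|y)/p(u=1|y)) for a received
        symbol y on a BEC, BSC, or BPSK-modulated AWGN channel."""
        if channel == "BEC":
            return {0: math.inf, "E": 0.0, 1: -math.inf}[y]
        if channel == "BSC":
            return math.log((1 - p) / p) if y == 0 else -math.log((1 - p) / p)
        if channel == "AWGN":
            return 4.0 * y * math.sqrt(Es) / N0
        raise ValueError(channel)

    def variable_update(incoming, m0_u):
        """Message from variable node u to check node v: sum of the other
        incoming check messages plus the channel message m_0(u)."""
        return sum(incoming) + m0_u

    def check_update(incoming):
        """Message from check node u to variable node v:
        tanh(m/2) = product of tanh(m_w/2) over the other neighbors w."""
        prod = 1.0
        for m in incoming:
            prod *= math.tanh(m / 2.0)
        prod = max(min(prod, 1 - 1e-12), -1 + 1e-12)   # keep atanh finite
        return 2.0 * math.atanh(prod)

    # Example: one update at a degree-3 variable node on an AWGN channel.
    channel_llr = m0(0.8, "AWGN", Es=1.0, N0=2.0)
    to_check = variable_update([0.4, -1.2], channel_llr)
    from_check = check_update([0.4, to_check])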
The selection of a degree profile for use in a particular transmission channel is a design parameter, which may be affected by various attributes of the channel. The criteria for selecting a particular degree profile may include, for example, the type of channel and the data rate on the channel. For example, Table 1 shows degree profiles that have been found to produce good results for an AWGN channel model.

                   TABLE 1

                   a = 2        a = 3        a = 4

  λ2               0.139025     0.078194     0.054485
  λ3               0.2221555    0.128085     0.104315
  λ5                            0.160813
  λ6               0.638820     0.036178     0.126755
  λ10                                        0.229816
  λ11                                        0.016484
  λ12                           0.108828
  λ13                           0.487902
  λ14
  λ16
  λ27                                        0.450302
  λ28                                        0.017842
  Rate             0.333364     0.333223     0.333218
  σGA              1.1840       1.2415       1.2615
  σ*               1.1981       1.2607       1.2780
  (Eb/N0)* (dB)    0.190        -0.250       -0.371
  S.L. (dB)        -0.4953      -0.4958      -0.4958

Table 1 shows degree profiles yielding codes of rate approximately 1/3 for the AWGN channel and with a = 2, 3, 4. For each sequence, the Gaussian approximation noise threshold, the actual sum-product decoding threshold, and the corresponding energy per bit (Eb) to noise power (N0) ratio in dB are given. Also listed is the Shannon limit (S.L.).

As the parameter "a" is increased, the performance improves. For example, for a=4, the best code found has an iterative decoding threshold of Eb/N0 = -0.371 dB, which is only 0.12 dB above the Shannon limit.
The accumulator component of the coder may be replaced by a "double accumulator" 600 as shown in FIG. 6. The double accumulator can be viewed as a truncated rate-1 convolutional coder with transfer function 1/(1+D+D^2).

Alternatively, a pair of accumulators may be added, as shown in FIG. 7. There are three component codes: the "outer" code 700, the "middle" code 702, and the "inner" code 704. The outer code is an irregular repetition code, and the middle and inner codes are both accumulators.

IRA codes may be implemented in a variety of channels, including memoryless channels, such as the BEC, BSC, and AWGN, as well as channels having non-binary input, non-symmetric and fading channels, and/or channels with memory.

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.

What is claimed is:

1. An apparatus for performing encoding operations, the apparatus comprising:
a first set of memory locations to store information bits;
a second set of memory locations to store parity bits;
a permutation module to read a bit from the first set of memory locations and combine the read bit to a bit in the second set of memory locations based on a corresponding index of the first set of memory locations and a corresponding index of the second set of memory locations; and
an accumulator to perform accumulation operations on the bits stored in the second set of memory locations,
wherein two or more memory locations of the first set of memory locations are read by the permutation module different times from one another.

2. The apparatus of claim 1, wherein the permutation module is configured to perform the combine operation to include performing mod-2 or exclusive-OR sum.

3. The apparatus of claim 2, wherein the permutation module is configured to perform the combining operation to further include writing the sum to the second set of memory locations based on a corresponding index.

4. The apparatus of claim 1, wherein the accumulator is configured to perform the accumulation operation to include a mod-2 or exclusive-OR sum of the bit stored in a prior index to a bit stored in a current index based on a corresponding index of the second set of memory locations.

5. The apparatus of claim 4, wherein the accumulator is configured to perform the accumulation operation to at least 2 consecutive indices of the second set of memory locations.

6. The apparatus of claim 1, wherein the permutation module further comprises a permutation information module to generate pairs of an index of the first set of memory locations and an index of the second set of memory locations.

7. The apparatus of claim 6, wherein at least one index of the second set of memory locations is used twice.

8. A method of performing encoding operations, the method comprising:
receiving a sequence of information bits from a first set of memory locations;
performing an encoding operation using the received sequence of information bits as an input, said encoding operation comprising:
reading a bit from the received sequence of information bits, and
combining the read bit to a bit in a second set of memory locations based on a corresponding index of the first set of memory locations for the received sequence of information bits and a corresponding index of the second set of memory locations; and
accumulating the bits in the second set of memory locations,
wherein two or more memory locations of the first set of memory locations are read by the permutation module different times from one another.

9. The method of claim 8, wherein performing the combine operation comprises performing mod-2 or exclusive-OR sum.

10. The method of claim 9, wherein performing the combine operation comprises writing the sum to the second set of memory locations based on a corresponding index.

11. The method of claim 8, wherein performing the accumulation operation comprises performing a mod-2 or exclusive-OR sum of the bit stored in a prior index to a bit stored in a current index based on a corresponding index of the second set of memory locations.

12. The method of claim 8, wherein the accumulation operation is performed to at least 2 consecutive indices of the second set of memory locations.

13. The method of claim 8, wherein the combining operation comprises generating pairs of an index of the first set of memory locations and an index of the second set of memory locations.

14. The method of claim 13, wherein at least one index of the second set of memory locations is used twice.

* * * * *
UNITED STATES PATENT AND TRADEMARK OFFICE
CERTIFICATE OF CORRECTION

PATENT NO.      : 8,284,833 B2
APPLICATION NO. : 13/073947
DATED           : October 9, 2012
INVENTOR(S)     : Hui Jin et al.

It is certified that error appears in the above-identified patent and that said Letters Patent is hereby corrected as shown below:

On the Title Page, in the Figures, insert Referral Tag -- 300 --.

On Title Page 2, Item (56), under "OTHER PUBLICATIONS", Line 19, delete "Performace" and insert -- Performance --, therefor.

In Fig. 3, Sheet 3 of 5, insert Referral Tag -- 300 --.

In Column 1, Line 38, delete "Bcrrou" and insert -- Berrou --, therefor.

In Column 3, Line 3, delete "1% of I." and insert -- 1% of 1. --, therefor.

In Column 4, Line 31, delete "The rate of the nonsystematic code is" and insert the same at Line 25 as a new line.
