US007916781B2

(12) United States Patent
Jin et al.

(10) Patent No.: US 7,916,781 B2
(45) Date of Patent: Mar. 29, 2011
(54) SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

(75) Inventors: Hui Jin, Glen Gardner, NJ (US); Aamod Khandekar, Pasadena, CA (US); Robert J. McEliece, Pasadena, CA (US)

(73) Assignee: California Institute of Technology, Pasadena, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 424 days.

(21) Appl. No.: 12/165,606

(22) Filed: Jun. 30, 2008

(65) Prior Publication Data
     US 2008/0294964 A1    Nov. 27, 2008
`(56)
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`5,181,207 A *
`111993 Chapman ...................... 714/755
`2/1995 Rhines eta!.
`5,392,299 A
`6/1996 Lin
`5,530,707 A
`511998 Seshadri et a!.
`5,751,739 A
`9/1998 Meyer
`5,802,115 A
`3/1999 Wang eta!.
`5,881,093 A
`1/2000 Wang
`6,014,411 A
`212000 Divsalar eta!.
`6,023,783 A
`212000 Chennakeshu eta!.
`6,031,874 A
`212000 Bliss
`6,032,284 A
`3/2000 Wang
`6,044,116 A
`7/2000 Miller et a!.
`6,094,739 A
`6,195,396 B1 *
`2/2001 Fang et al ..................... 375/261
`6,396,423 B1
`5/2002 Laumen et al.
`6,437,714 B1
`8/2002 Kim et al.
`6,732,328 B1 *
`5/2004 McEwen et al ............... 714/795
`6,859,906 B2
`2/2005 Hanunons eta!.
`7,089,477 B1
`8/2006 Divsalar eta!.
`(Continued)
`
Related U.S. Application Data

(63) Continuation of application No. 11/542,950, filed on Oct. 3, 2006, now Pat. No. 7,421,032, which is a continuation of application No. 09/861,102, filed on May 18, 2001, now Pat. No. 7,116,710, which is a continuation-in-part of application No. 09/922,852, filed on Aug. 18, 2000, now Pat. No. 7,089,477.

(60) Provisional application No. 60/205,095, filed on May 18, 2000.

(51) Int. Cl.
     H04B 1/66    (2006.01)

(52) U.S. Cl. ........ 375/240; 375/285; 375/296; 714/801; 714/804

(58) Field of Classification Search ........ 375/240, 375/240.24, 254, 285, 295, 296, 260; 714/755, 714/758, 800, 801, 804, 805
     See application file for complete search history.
OTHER PUBLICATIONS

Benedetto, S., et al., "A Soft-Input Soft-Output APP Module for Iterative Decoding of Concatenated Codes," IEEE Communications Letters, 1(1):22-24, Jan. 1997.

(Continued)
Primary Examiner - Dac V Ha
(74) Attorney, Agent, or Firm - Perkins Coie LLP

(57) ABSTRACT

A serial concatenated coder includes an outer coder and an inner coder. The outer coder irregularly repeats bits in a data block according to a degree profile and scrambles the repeated bits. The scrambled and repeated bits are input to an inner coder, which has a rate substantially close to one.

22 Claims, 5 Drawing Sheets
`200~
`
`k '/ u
`
`k/
`u
`
`OUTER
`
`n/
`v
`\___ 202
`
`p
`
`n/
`w
`
`INNER
`
`\___ 204
`
`\___ 206
`
`Hughes, Exh. 1005, p. 1
U.S. PATENT DOCUMENTS

2001/0025358 A1  9/2001  Eidson et al.
OTHER PUBLICATIONS

Benedetto, S., et al., "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-127), pp. 1-20, Nov. 1996.
Benedetto, S., et al., "Bandwidth efficient parallel concatenated coding schemes," Electronics Letters, 31(24):2067-2069, Nov. 1995.
Benedetto, S., et al., "Design of Serially Concatenated Interleaved Codes," ICC 97, vol. 2, pp. 710-714, Jun. 1997.
Benedetto, S., et al., "Parallel Concatenated Trellis Coded Modulation," ICC 96, vol. 2, pp. 974-978, Jun. 1996.
Benedetto, S., et al., "Serial Concatenated Trellis Coded Modulation with Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 8, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Serial Concatenation of Interleaved Codes: Performance Analysis, Design, and Iterative Decoding," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-126), pp. 1-26, Aug. 1996.
Benedetto, S., et al., "Serial concatenation of interleaved codes: performance analysis, design, and iterative decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 106, Jun. 29-Jul. 4, 1997.
Benedetto, S., et al., "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-124), pp. 63-87, Feb. 1996.
Berrou, C., et al., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," ICC 93, vol. 2, pp. 1064-1070, May 1993.
Digital Video Broadcasting (DVB); User guidelines for the second generation system for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), ETSI TR 102 376 V1.1.1 Technical Report, pp. 1-104 (p. 64), Feb. 2005.
Divsalar, D., et al., "Coding Theorems for 'Turbo-Like' Codes," Proceedings of the 36th Annual Allerton Conference on Communication, Control, and Computing, Monticello, Illinois, pp. 201-210, Sep. 1998.
Divsalar, D., et al., "Effective free distance of turbo codes," Electronics Letters, 32(5):445-446, Feb. 1996.
Divsalar, D., et al., "Hybrid Concatenated Codes and Iterative Decoding," Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany, p. 10, Jun. 29-Jul. 4, 1997.
Divsalar, D., et al., "Low-Rate Turbo Codes for Deep-Space Communications," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 35, Sep. 1995.
Divsalar, D., et al., "Multiple Turbo Codes for Deep-Space Communications," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-121), pp. 66-77, May 1995.
Divsalar, D., et al., "Multiple Turbo Codes," MILCOM '95, vol. 1, pp. 279-285, Nov. 1995.
Divsalar, D., et al., "On the Design of Turbo Codes," The Telecommunications and Data Acquisition Progress Report (TDA PR 42-123), pp. 99-121, Nov. 1995.
Divsalar, D., et al., "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code," Proceedings 2000 IEEE International Symposium on Information Theory (ISIT), Sorrento, Italy, p. 194, Jun. 2000.
Divsalar, D., et al., "Turbo Codes for PCS Applications," IEEE ICC '95, Seattle, WA, USA, vol. 1, pp. 54-59, Jun. 1995.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes, Brest, France, 25 pages, Sep. 2000.
Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes & Related Topics, Brest, France, pp. 1-8, Sep. 2000.
Richardson, T.J., et al., "Design of Capacity-Approaching Irregular Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):619-637, Feb. 2001.
Richardson, T.J., et al., "Efficient Encoding of Low-Density Parity-Check Codes," IEEE Transactions on Information Theory, 47(2):638-656, Feb. 2001.
Wiberg, N., et al., "Codes and Iterative Decoding on General Graphs," Proceedings 1995 IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 468, Sep. 1995.
Aji, S.M., et al., "The Generalized Distributive Law," IEEE Transactions on Information Theory, 46(2):325-343, Mar. 2000.
Tanner, R.M., "A Recursive Approach to Low Complexity Codes," IEEE Transactions on Information Theory, 27(5):533-547, Sep. 1981.

* cited by examiner
U.S. Patent    Mar. 29, 2011    Sheet 1 of 5    US 7,916,781 B2

[FIG. 1 (drawing): schematic of a prior "turbo code" system, with coder blocks, a channel, and decoder blocks; graphic not reproduced]
U.S. Patent    Mar. 29, 2011    Sheet 2 of 5    US 7,916,781 B2

[FIGS. 2 and 4 (drawings): coder with OUTER, interleaver P, and INNER blocks; IRA coder with LDGM and ACC blocks; graphics not reproduced]
U.S. Patent    Mar. 29, 2011    Sheet 3 of 5    US 7,916,781 B2

[FIG. 3 (drawing): Tanner graph with variable (information) nodes 302 on the left, labeled by the fraction of nodes f2, f3, ..., fj of each degree i, a permutation block, and check nodes of degree a connected to parity nodes 306; graphic not reproduced]
U.S. Patent    Mar. 29, 2011    Sheet 4 of 5    US 7,916,781 B2

[FIGS. 5A and 5B (drawings): messages passed to and from a check node 304 on the Tanner graph, with edge labels v and w; graphics not reproduced]
U.S. Patent    Mar. 29, 2011    Sheet 5 of 5    US 7,916,781 B2

[FIGS. 6 and 7 (drawings): double-accumulator coder 600, and a three-stage coder with repetition and accumulator blocks; graphics not reproduced]
SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application is a continuation of U.S. application Ser. No. 11/542,950, filed Oct. 3, 2006, now U.S. Pat. No. 7,421,032, which is a continuation of U.S. application Ser. No. 09/861,102, filed May 18, 2001, now U.S. Pat. No. 7,116,710, which claims the priority of U.S. Provisional Application Ser. No. 60/205,095, filed May 18, 2000, and is a continuation-in-part of U.S. application Ser. No. 09/922,852, filed Aug. 18, 2000, now U.S. Pat. No. 7,089,477. The disclosures of the prior applications are considered part of (and are incorporated by reference in) the disclosure of this application.

GOVERNMENT LICENSE RIGHTS

The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Grant No. CCR-9804793 awarded by the National Science Foundation.

BACKGROUND

Properties of a channel affect the amount of data that can be handled by the channel. The so-called "Shannon limit" defines the theoretical limit of the amount of data that a channel can carry.

Different techniques have been used to increase the data rate that can be handled by a channel. "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," by Berrou et al., ICC, pp. 1064-1070 (1993), described a new "turbo code" technique that has revolutionized the field of error correcting codes. Turbo codes have sufficient randomness to allow reliable communication over the channel at a high data rate near capacity. However, they still retain sufficient structure to allow practical encoding and decoding algorithms. Still, the technique for encoding and decoding turbo codes can be relatively complex.

A standard turbo coder 100 is shown in FIG. 1. A block of k information bits is input directly to a first coder 102. A k-bit interleaver 106 also receives the k bits and interleaves them prior to applying them to a second coder 104. The second coder produces an output that has more bits than its input, that is, it is a coder with rate that is less than 1. The coders 102, 104 are typically recursive convolutional coders.

Three different items are sent over the channel 150: the original k bits, first encoded bits 110, and second encoded bits 112. At the decoding end, two decoders are used: a first constituent decoder 160 and a second constituent decoder 162. Each receives both the original k bits and one of the encoded portions 110, 112. Each decoder sends likelihood estimates of the decoded bits to the other decoder. The estimates are used to decode the uncoded information bits as corrupted by the noisy channel.

SUMMARY

A coding system according to an embodiment is configured to receive a portion of a signal to be encoded, for example, a data block including a fixed number of bits. The coding system includes an outer coder, which repeats and scrambles bits in the data block. The data block is apportioned into two or more sub-blocks, and bits in different sub-blocks are repeated a different number of times according to a selected degree profile. The outer coder may include a repeater with a variable rate and an interleaver. Alternatively, the outer coder may be a low-density generator matrix (LDGM) coder.

The repeated and scrambled bits are input to an inner coder that has a rate substantially close to one. The inner coder may include one or more accumulators that perform recursive modulo-two addition operations on the input bit stream.

The encoded data output from the inner coder may be transmitted on a channel and decoded in linear time at a destination using iterative decoding techniques. The decoding techniques may be based on a Tanner graph representation of the code.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a prior "turbo code" system.
FIG. 2 is a schematic diagram of a coder according to an embodiment.
FIG. 3 is a Tanner graph for an irregular repeat and accumulate (IRA) coder.
FIG. 4 is a schematic diagram of an IRA coder according to an embodiment.
FIG. 5A illustrates a message from a variable node to a check node on the Tanner graph of FIG. 3.
FIG. 5B illustrates a message from a check node to a variable node on the Tanner graph of FIG. 3.
FIG. 6 is a schematic diagram of a coder according to an alternate embodiment.
FIG. 7 is a schematic diagram of a coder according to another alternate embodiment.

DETAILED DESCRIPTION

FIG. 2 illustrates a coder 200 according to an embodiment. The coder 200 may include an outer coder 202, an interleaver 204, and an inner coder 206. The coder may be used to format blocks of data for transmission, introducing redundancy into the stream of data to protect the data from loss due to transmission errors. The encoded data may then be decoded at a destination in linear time at rates that may approach the channel capacity.

The outer coder 202 receives the uncoded data. The data may be partitioned into blocks of fixed size, say k bits. The outer coder may be an (n,k) binary linear block coder, where n>k. The coder accepts as input a block u of k data bits and produces an output block v of n data bits. The mathematical relationship between u and v is v = T_0 u, where T_0 is an n×k matrix, and the rate of the coder is k/n.

The rate of the coder may be irregular, that is, the value of T_0 is not constant, and may differ for sub-blocks of bits in the data block. In an embodiment, the outer coder 202 is a repeater that repeats the k bits in a block a number of times q to produce a block with n bits, where n = qk. Since the repeater has an irregular output, different bits in the block may be repeated a different number of times. For example, a fraction of the bits in the block may be repeated two times, a fraction of bits may be repeated three times, and the remainder of bits may be repeated four times. These fractions define a degree sequence, or degree profile, of the code.

The inner coder 206 may be a linear rate-1 coder, which means that the n-bit output block x can be written as x = T_i w, where T_i is a nonsingular n×n matrix. The inner coder 206 can have a rate that is close to 1, e.g., within 50%, more preferably 10%, and perhaps even more preferably within 1% of 1.
`
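The irregular repeater and interleaver described above can be sketched in a few lines; this is a minimal illustration, and the degree profile, the 8-bit block, and the seeded permutation are made-up example values, not ones taken from the patent.

```python
import random

def irregular_repeat(u, profile):
    """Repeat each information bit according to a degree profile.

    `profile` maps a repeat count q to the fraction of bits repeated
    q times; the fractions must sum to 1 (the degree profile).
    """
    k = len(u)
    v = []
    start = 0
    for q, fraction in sorted(profile.items()):
        count = round(fraction * k)
        for bit in u[start:start + count]:
            v.extend([bit] * q)   # repeat this sub-block's bits q times
        start += count
    return v

def interleave(v, seed=0):
    """Pseudo-random permutation of the repeated bits (interleaver 204)."""
    order = list(range(len(v)))
    random.Random(seed).shuffle(order)
    return [v[i] for i in order]

# Example: fractions of bits repeated 2, 3, and 4 times, as in the text.
profile = {2: 0.25, 3: 0.25, 4: 0.5}
u = [1, 0, 1, 1, 0, 0, 1, 0]      # k = 8 information bits
v = irregular_repeat(u, profile)   # n = 2*2 + 2*3 + 4*4 = 26 bits
w = interleave(v)
print(len(v), len(w))
```

Because the permutation only reorders bits, the interleaver output w contains exactly the same multiset of bits as v, matching the description of the scrambler.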
In an embodiment, the inner coder 206 is an accumulator, which produces outputs that are the modulo-two (mod-2) partial sums of its inputs. The accumulator may be a truncated rate-1 recursive convolutional coder with the transfer function 1/(1+D). Such an accumulator may be considered a block coder whose input block [x_1, ..., x_n] and output block [y_1, ..., y_n] are related by the formula

y_1 = x_1
y_2 = x_1 ⊕ x_2
...
y_n = x_1 ⊕ x_2 ⊕ ... ⊕ x_n

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition. An advantage of this system is that only mod-2 addition is necessary for the accumulator. The accumulator may be embodied using only XOR gates, which may simplify the design.

The bits output from the outer coder 202 are scrambled before they are input to the inner coder 206. This scrambling may be performed by the interleaver 204, which performs a pseudo-random permutation of an input block v, yielding an output block w having the same length as v.

The serial concatenation of the interleaved irregular repeat code and the accumulate code produces an irregular repeat and accumulate (IRA) code. An IRA code is a linear code, and as such, may be represented as a set of parity checks. The set of parity checks may be represented in a bipartite graph, called the Tanner graph, of the code. FIG. 3 shows a Tanner graph 300 of an IRA code with parameters (f_1, ..., f_J; a), where f_i ≥ 0, Σ_i f_i = 1 and "a" is a positive integer. The Tanner graph includes two kinds of nodes: variable nodes (open circles) and check nodes (filled circles). There are k variable nodes 302 on the left, called information nodes. There are r variable nodes 306 on the right, called parity nodes. There are r = k(Σ_i i·f_i)/a check nodes 304 connected between the information nodes and the parity nodes. Each information node 302 is connected to a number of check nodes 304. The fraction of information nodes connected to exactly i check nodes is f_i. For example, in the Tanner graph 300, each of the f_2 information nodes are connected to two check nodes, corresponding to a repeat of q=2, and each of the f_3 information nodes are connected to three check nodes, corresponding to q=3.

Each check node 304 is connected to exactly "a" information nodes 302. In FIG. 3, a=3. These connections can be made in many ways, as indicated by the arbitrary permutation of the ra edges joining information nodes 302 and check nodes 304 in permutation block 310. These connections correspond to the scrambling performed by the interleaver 204.

In an alternate embodiment, the outer coder 202 may be a low-density generator matrix (LDGM) coder that performs an irregular repeat of the k bits in the block, as shown in FIG. 4. As the name implies, an LDGM code has a sparse (low-density) generator matrix. The IRA code produced by the coder 400 is a serial concatenation of the LDGM code and the accumulator code. The interleaver 204 in FIG. 2 may be excluded due to the randomness already present in the structure of the LDGM code.

If the permutation performed in permutation block 310 is fixed, the Tanner graph represents a binary linear block code with k information bits (u_1, ..., u_k) and r parity bits (x_1, ..., x_r), as follows. Each of the information bits is associated with one of the information nodes 302, and each of the parity bits is associated with one of the parity nodes 306. The value of a parity bit is determined uniquely by the condition that the mod-2 sum of the values of the variable nodes connected to each of the check nodes 304 is zero. To see this, set x_0 = 0. Then if the values of the bits on the ra edges coming out of the permutation box are (v_1, ..., v_ra), then we have the recursive formula

x_j = x_{j-1} + Σ_{i=1}^{a} v_{(j-1)a+i}   (mod 2)

for j = 1, 2, ..., r. This is in effect the encoding algorithm.

Two types of IRA codes are represented in FIG. 3, a nonsystematic version and a systematic version. The nonsystematic version is an (r,k) code, in which the codeword corresponding to the information bits (u_1, ..., u_k) is (x_1, ..., x_r). The systematic version is a (k+r, k) code, in which the codeword is (u_1, ..., u_k; x_1, ..., x_r).

The rate of the nonsystematic code is

R_nsys = a / (Σ_i i·f_i)

The rate of the systematic code is

R_sys = a / (a + Σ_i i·f_i)

For example, regular repeat and accumulate (RA) codes can be considered nonsystematic IRA codes with a=1 and exactly one f_i equal to 1, say f_q = 1, and the rest zero, in which case R_nsys simplifies to R = 1/q.

The IRA code may be represented using an alternate notation. Let λ_i be the fraction of edges between the information nodes 302 and the check nodes 304 that are adjacent to an information node of degree i, and let ρ_i be the fraction of such edges that are adjacent to a check node of degree i+2 (i.e., one that is adjacent to i information nodes). These edge fractions may be used to represent the IRA code rather than the corresponding node fractions. Define λ(x) = Σ_i λ_i x^(i-1) and ρ(x) = Σ_i ρ_i x^(i-1) to be the generating functions of these sequences. The pair (λ, ρ) is called a degree distribution. For L(x) = Σ_i f_i x^i,

f_i = (λ_i/i) / (Σ_j λ_j/j)
`
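The encoding recursion above is just the accumulator applied to the per-check XORs, which can be sketched as follows; the 9-bit input vector is a made-up example standing in for the permuted output of the outer coder.

```python
def accumulate(x):
    """Accumulator (inner coder): y_i is the mod-2 partial sum x_1 ⊕ ... ⊕ x_i."""
    y, s = [], 0
    for bit in x:
        s ^= bit
        y.append(s)
    return y

def ira_parity_bits(v, a):
    """Compute parity bits x_1..x_r from the permuted repeated bits v.

    Implements x_j = x_{j-1} + sum_{i=1}^{a} v_{(j-1)a+i} (mod 2):
    each check node XORs its `a` incoming bits with the previous parity bit.
    """
    assert len(v) % a == 0
    r = len(v) // a
    x, prev = [], 0                       # x_0 = 0
    for j in range(r):
        s = prev
        for bit in v[j * a:(j + 1) * a]:  # the a edges into check node j
            s ^= bit
        x.append(s)
        prev = s
    return x

v = [1, 0, 1, 0, 0, 1, 1, 1, 0]   # r*a = 9 permuted bits, with a = 3
x = ira_parity_bits(v, a=3)
# The recursion equals the accumulator applied to the per-check XOR sums:
checks = [v[0] ^ v[1] ^ v[2], v[3] ^ v[4] ^ v[5], v[6] ^ v[7] ^ v[8]]
assert x == accumulate(checks)
```

This factorization is why the inner coder can be realized with XOR gates alone, as noted in the text.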
`
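The two rate expressions above are easy to sanity-check numerically; the second degree profile below is a made-up example, not one from the patent.

```python
def rates(f, a):
    """Rates of the nonsystematic and systematic IRA codes.

    `f` maps degree i to the node fraction f_i; per the text,
    R_nsys = a / sum(i*f_i) and R_sys = a / (a + sum(i*f_i)).
    """
    s = sum(i * fi for i, fi in f.items())
    return a / s, a / (a + s)

# Regular RA code: a = 1, every bit repeated q = 3 times, so R_nsys = 1/q.
r_nsys, r_sys = rates({3: 1.0}, a=1)
assert abs(r_nsys - 1/3) < 1e-12

# An illustrative irregular profile with mean repeat sum(i*f_i) = 3.
r_nsys, r_sys = rates({2: 0.5, 4: 0.5}, a=2)
assert abs(r_nsys - 2/3) < 1e-12 and abs(r_sys - 0.4) < 1e-12
```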

`
The rate of the systematic IRA code given by the degree distribution is given by

Rate = (1 + (Σ_j ρ_j/j) / (Σ_j λ_j/j))^(-1)

"Belief propagation" on the Tanner graph realization may be used to decode IRA codes. Roughly speaking, the belief propagation decoding technique allows the messages passed on an edge to represent posterior densities on the bit associated with the variable node. A probability density on a bit is a pair of non-negative real numbers p(0), p(1) satisfying p(0)+p(1)=1, where p(0) denotes the probability of the bit being 0, and p(1) the probability of it being 1. Such a pair can be represented by its log likelihood ratio, m = log(p(0)/p(1)). The outgoing message from a variable node u to a check node v represents information about u, and a message from a check node u to a variable node v represents information about u, as shown in FIGS. 5A and 5B, respectively.

The outgoing message from a node u to a node v depends on the incoming messages from all neighbors w of u except v. If u is a variable message node, this outgoing message is

m(u→v) = Σ_{w≠v} m(w→u) + m_0(u)

where m_0(u) is the log-likelihood message associated with u. If u is a check node, the corresponding formula is

tanh(m(u→v)/2) = Π_{w≠v} tanh(m(w→u)/2)

Before decoding, the messages m(w→u) and m(u→v) are initialized to be zero, and m_0(u) is initialized to be the log-likelihood ratio based on the channel received information. If the channel is memoryless, i.e., each channel output only relies on its input, and y is the output of the channel code bit u, then m_0(u) = log(p(u=0|y)/p(u=1|y)). After this initialization, the decoding process may run in a fully parallel and local manner. In each iteration, every variable/check node receives messages from its neighbors, and sends back updated messages. Decoding is terminated after a fixed number of iterations or upon detecting that all the constraints are satisfied. Upon termination, the decoder outputs a decoded sequence based on the messages

m(u) = Σ_w m(w→u).

Thus, on various channels, iterative decoding only differs in the initial messages m_0(u). For example, consider three memoryless channel models: a binary erasure channel (BEC); a binary symmetric channel (BSC); and an additive white Gaussian noise (AWGN) channel.

In the BEC, there are two inputs and three outputs. When 0 is transmitted, the receiver can receive either 0 or an erasure E. An erasure E output means that the receiver does not know how to demodulate the output. Similarly, when 1 is transmitted, the receiver can receive either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

m_0(u) = +∞ if y = 0;  0 if y = E;  −∞ if y = 1

In the BSC, there are two possible inputs (0, 1) and two possible outputs (0, 1). The BSC is characterized by a set of conditional probabilities relating all possible outputs to possible inputs. Thus, for the BSC, y ∈ {0, 1}, and

m_0(u) = log((1−p)/p) if y = 0;  −log((1−p)/p) if y = 1

In the AWGN, the discrete-time input symbols X take their values in a finite alphabet, while channel output symbols Y can take any values along the real line. There is assumed to be no distortion or other effects other than the addition of white Gaussian noise. In an AWGN with Binary Phase Shift Keying (BPSK) signaling, which maps 0 to the symbol with amplitude √E_s and 1 to the symbol with amplitude −√E_s, with output y ∈ R,

m_0(u) = 4y√E_s / N_0

where N_0/2 is the noise power spectral density.

The selection of a degree profile for use in a particular transmission channel is a design parameter, which may be affected by various attributes of the channel. The criteria for selecting a particular degree profile may include, for example, the type of channel and the data rate on the channel. For example, Table 1 shows degree profiles that have been found to produce good results for an AWGN channel model.

TABLE 1

                     a = 2       a = 3       a = 4
λ coefficients       0.139025    0.078194    0.054485
(nonzero entries     0.2221555   0.128085    0.104315
among λ2, λ3, λ5,    0.638820    0.160813    0.126755
λ6, λ10-λ14, λ16,                0.036178    0.229816
λ27, λ28; row                    0.108828    0.016484
assignment not                   0.487902    0.450302
recoverable from                             0.017842
this copy)
Rate                 0.333364    0.333223    0.333218
σ_GA                 1.1840      1.2415      1.2615
σ*                   1.1981      1.2607      1.2780
(Eb/N0)* (dB)        0.190       -0.250      -0.371
S.L. (dB)            -0.4953     -0.4958     -0.4958

Table 1 shows degree profiles yielding codes of rate approximately 1/3 for the AWGN channel and with a = 2, 3, 4. For each sequence, the Gaussian approximation noise threshold σ_GA, the actual sum-product decoding threshold σ*, and the corresponding energy per bit (Eb) to noise power (N0) ratio in dB are given. Also listed is the Shannon limit (S.L.).
`
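The two message-update rules described above translate directly into code. This is a sketch of the per-edge updates only; the iteration schedule and Tanner graph bookkeeping are omitted, and the message values are made-up examples.

```python
import math

def variable_update(incoming, m0):
    """Variable-node update: m(u->v) is the sum of the incoming messages
    m(w->u) from the other neighbors w, plus the channel message m_0(u)."""
    return sum(incoming) + m0

def check_update(incoming):
    """Check-node update: tanh(m(u->v)/2) is the product of tanh(m(w->u)/2)
    over the other neighbors w."""
    t = 1.0
    for m in incoming:
        t *= math.tanh(m / 2.0)
    # Clamp so atanh stays finite when incoming messages are very confident.
    t = max(min(t, 1.0 - 1e-12), -1.0 + 1e-12)
    return 2.0 * math.atanh(t)

# Two positive (bit = 0) messages reinforce each other at a variable node;
# at a check node, the sign of the output is the product of the input signs.
m = variable_update([1.2, 0.7], m0=0.5)
c = check_update([2.0, -1.5])
assert abs(m - 2.4) < 1e-9 and c < 0
```

Note how the check update also shrinks the magnitude of the output toward the least reliable input, which is the usual behavior of the tanh rule.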
`
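The three initializations of m_0(u) given above can be written out as follows. The function names are illustrative, and the AWGN expression is the standard BPSK log-likelihood ratio consistent with the text (0 mapped to +√E_s, 1 to −√E_s, noise spectral density N_0/2).

```python
import math

def m0_bec(y):
    """Initial LLR for the binary erasure channel; 'E' denotes an erasure."""
    return {0: math.inf, 'E': 0.0, 1: -math.inf}[y]

def m0_bsc(y, p):
    """Initial LLR for the binary symmetric channel with crossover probability p."""
    llr = math.log((1 - p) / p)
    return llr if y == 0 else -llr

def m0_awgn(y, es, n0):
    """Initial LLR for BPSK over AWGN: m_0(u) = 4*y*sqrt(Es)/N0."""
    return 4.0 * y * math.sqrt(es) / n0

# An erased output carries no information; BSC outputs are symmetric in sign.
assert m0_bec('E') == 0.0
assert m0_bsc(1, 0.1) == -m0_bsc(0, 0.1)
assert m0_awgn(-0.5, es=1.0, n0=2.0) == -1.0
```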

`
As the parameter "a" is increased, the performance improves. For example, for a=4, the best code found has an iterative decoding threshold of Eb/N0 = -0.371 dB, which is only 0.12 dB above the Shannon limit.

The accumulator component of the coder may be replaced by a "double accumulator" 600 as shown in FIG. 6. The double accumulator can be viewed as a truncated rate-1 convolutional coder with transfer function 1/(1+D+D^2).

Alternatively, a pair of accumulators may be added, as shown in FIG. 7. There are three component codes: the "outer" code 700, the "middle" code 702, and the "inner" code 704. The outer code is an irregular repetition code, and the middle and inner codes are both accumulators.

IRA codes may be implemented in a variety of channels, including memoryless channels, such as the BEC, BSC, and AWGN, as well as channels having non-binary input, non-symmetric and fading channels, and/or channels with memory.

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.
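The plain accumulator and the double accumulator described above differ only in their feedback recursions, which follow from the transfer functions 1/(1+D) and 1/(1+D+D^2): y_j = x_j ⊕ y_{j-1} versus y_j = x_j ⊕ y_{j-1} ⊕ y_{j-2}. A minimal sketch, assuming truncation with zero initial state:

```python
def accumulator(x):
    """Rate-1 coder 1/(1+D): y_j = x_j XOR y_{j-1}, with y_0 = 0."""
    y, prev = [], 0
    for bit in x:
        prev ^= bit
        y.append(prev)
    return y

def double_accumulator(x):
    """Rate-1 coder 1/(1+D+D^2): y_j = x_j XOR y_{j-1} XOR y_{j-2}."""
    y = []
    y1 = y2 = 0                  # y_{j-1} and y_{j-2}, zero initial state
    for bit in x:
        out = bit ^ y1 ^ y2
        y2, y1 = y1, out
        y.append(out)
    return y

x = [1, 0, 0, 1, 0]
assert accumulator(x) == [1, 1, 1, 0, 0]
assert double_accumulator(x) == [1, 1, 0, 0, 0]
```

Both coders are rate 1 (one output bit per input bit) and need only XOR gates and one or two bits of state.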
What is claimed is:

1. A method of encoding a signal, comprising:
receiving a block of data in the signal to be encoded, the block of data including information bits;
performing a first encoding operation on at least some of the information bits, the first encoding operation being a linear transform operation that generates L transformed bits; and
performing a second encoding operation using the L transformed bits as an input, the second encoding operation including an accumulation operation in which the L transformed bits generated by the first encoding operation are accumulated, said second encoding operation producing at least a portion of a codeword, wherein L is two or more.

2. The method of claim 1, further comprising:
outputting the codeword, wherein the codeword comprises parity bits.

3. The method of claim 2, wherein outputting the codeword comprises:
outputting the parity bits; and
outputting at least some of the information bits.

4. The method of claim 3, wherein outputting the codeword comprises:
outputting the parity bits following the information bits.

5. The method of claim 2, wherein performing the first encoding operation comprises transforming the at least some of the information bits via a low density generator matrix transformation.

6. The method of claim 5, wherein generating each of the L transformed bits comprises mod-2 or exclusive-OR summing of bits in a subset of the information bits.

7. The method of claim 6, wherein each of the subsets of the information bits includes a same number of the information bits.

8. The method of claim 6, wherein at least two of the information bits appear in three subsets of the information bits.

9. The method of claim 6, wherein the information bits appear in a variable number of subsets.

10. The method of claim 2, wherein performing the second encoding operation comprises using a first of the parity bits in the accumulation operation to produce a second of the parity bits.

11. The method of claim 10, wherein outputting the codeword comprises outputting the second of the parity bits immediately following the first of the parity bits.

12. The method of claim 2, wherein performing the second encoding operation comprises performing one of a mod-2 addition and an exclusive-OR operation.
13. A method of encoding a signal, comprising:
receiving a block of data in the signal to be encoded, the block of data including information bits; and
performing an encoding operation using the information bits as an input, the encoding operation including an accumulation of mod-2 or exclusive-OR sums of bits in subsets of the information bits, the encoding operation generating at least a portion of a codeword,
wherein the information bits appear in a variable number of subsets.

14. The method of claim 13, further comprising:
outputting the codeword, wherein the codeword comprises parity bits.

15. The method of claim 14, wherein outputting the codeword comprises:
outputting the parity bits; and
outputting at least some of the information bits.

16. The method of claim 15, wherein the parity bits follow the information bits in the codeword.

17. The method of claim 13, wherein each of the subsets of the information bits includes a constant number of the information bits.

18. The method of claim 13, wherein performing the encoding operation further comprises:
performing one of the mod-2 addition and the exclusive-OR summing of the bits in the subsets.

19. A method of encoding a signal, comprising:
receiving a block of data in the signal to be encoded, the block of data including information bits; and
performing an encoding operation using the information bits as an input, the encoding operation including an accumulation of mod-2 or exclusive-OR sums of bits in subsets of the information bits, the encoding operation generating at least a portion of a codeword, wherein at least two of the information bits appear in three subsets of the information bits.

20. A method of encoding a signal, comprising:
receiving a block of data in the signal to be encoded, the block of data including information bits; and
performing an encoding operation using the information bits as an input, the encoding operation including an accumulation of mod-2 or exclusive-OR sums of bits in subsets of the information bits, the encoding operation generating at least a portion of a codeword, wherein performing the encoding operation comprises:
mod-2 or exclusive-OR adding a first subset of information bits in the collection to yield a first sum;
mod-2 or exclusive-OR adding a second subset of information bits in the collection and the first sum to yield a second sum.

21. A method comprising:
receiving a collection of information bits;
mod-2 or exclusive-OR adding a first subset of information bits in the collection to yield a first parity bit;
mod-2 or exclusive-OR adding a second subset of information bits in the collection and the first parity bit to yield a second parity bit; and
outputting a codeword that includes the first parity bit and the second parity bit.