(12) United States Patent
Jin et al.

(10) Patent No.: US 7,116,710 B1
(45) Date of Patent: Oct. 3, 2006
(54) SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

(75) Inventors: Hui Jin, Glen Gardner, NJ (US); Aamod Khandekar, Pasadena, CA (US); Robert J. McEliece, Pasadena, CA (US)

(73) Assignee: California Institute of Technology, Pasadena, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 735 days.

(21) Appl. No.: 09/861,102

(22) Filed: May 18, 2001

Related U.S. Application Data

(60) Provisional application No. 60/205,095, filed on May 18, 2000.

(51) Int. Cl.
    H04B 1/66 (2006.01)

(52) U.S. Cl. .......... 375/240; 375/262; 375/265; 375/341; 341/51; 341/102; 714/752

(58) Field of Classification Search .......... 375/259, 262, 265, 285, 296, 341, 346, 348; 714/746, 752, 755, 756, 786, 792, 794, 795, 796; 341/51, 52, 56, 102, 103
    See application file for complete search history.
(56) References Cited

U.S. PATENT DOCUMENTS

 5,392,299 A      2/1995  Rhines et al.
 5,751,739 A *    5/1998  Seshadri et al. ......... 714/746
 5,881,093 A      3/1999  Wang et al.
 6,014,411 A *    1/2000  Wang .................... 375/259
 6,023,783 A      2/2000  Divsalar et al.
 6,031,874 A      2/2000  Chennakeshu et al.
 6,032,284 A      2/2000  Bliss
 6,044,116 A      3/2000  Wang
 6,396,423 B1 *   5/2002  Laumen et al. ........... 341/95
 6,437,714 B1 *   8/2002  Kim et al. .............. 341/81
 2001/0025358 A1  9/2001  Eidson et al.
OTHER PUBLICATIONS

Wiberg et al., "Codes and Iterative Decoding on General Graphs," 1995 Intl. Symposium on Information Theory, Sep. 1995, p. 506.*
Appendix A.1, "Structure of Parity Check Matrices of Standardized LDPC Codes," Digital Video Broadcasting (DVB) User guidelines for the second generation system for Broadcasting, Interactive Services, News Gathering and other broadband satellite applications (DVB-S2), ETSI TR 102 376 V1.1.1 (2005-02) Technical Report, p. 64.
Benedetto et al., "Bandwidth efficient parallel concatenated coding schemes," Electronics Letters 31(24):2067-2069 (Nov. 23, 1995).
Benedetto et al., "Soft-output decoding algorithms in iterative decoding of turbo codes," The Telecommunications and Data Acquisition (TDA) Progress Report 42-124 for NASA and California Institute of Technology Jet Propulsion Laboratory, Joseph H. Yuen, Ed., pp. 63-87 (Feb. 15, 1996).

(Continued)
Primary Examiner-Dac V. Ha
(74) Attorney, Agent, or Firm-Fish & Richardson P.C.

(57) ABSTRACT

A serial concatenated coder includes an outer coder and an inner coder. The outer coder irregularly repeats bits in a data block according to a degree profile and scrambles the repeated bits. The scrambled and repeated bits are input to an inner coder, which has a rate substantially close to one.

33 Claims, 5 Drawing Sheets
[Representative figure: coder 200 with an OUTER coder 202, permutation block P 204, and INNER coder 206; k-bit input block u, n-bit blocks v and w.]
OTHER PUBLICATIONS
`
`Benedetto et a!., "Serial Concatenation of Interleaved Codes:
`Performace Analysis, Design, and Iterative Decoding," The Tele(cid:173)
`communications and Data Acquisition (TDA) Progress Report
`42-126 for NASA and California Institute of Technology Jet Pro(cid:173)
`pulsion Laboratory, Jospeh H. Yuen, Ed., pp. 1-26 (Aug. 15, 1996).
`Benedetto eta!., "A Soft-Input Soft-Output Maximum A Posteriori
`(MAP) Module to Decode Parallel and Serial Concatenated Codes,"
`The Telecommunications and Data Acquisition (TDA) Progress
`Report 42-127 for NASA and California Institute of Technology Jet
`Propulsion Laboratory, Jospeh H. Yuen, Ed., pp. 1-20 (Nov. 15,
`1996).
`Benedetto et al., "Parallel Concatenated Trellis Coded Modulation,"
`ICC '96, IEEE, pp. 974-978, (Jun. 1996).
`Benedetto, S. et al., "A Soft-Input Soft-Output APP Module for
`Iterative Decoding of Concatenated Codes," IEEE Communications
`Letters 1(1):22-24 (Jan. 1997).
`Benedetto et a!., "Serial Concatenation of interleaved codes: per(cid:173)
`formance analysis, design, and iterative decoding," Proceedings
`from the IEEE 1997 International Symposium on Information
`Theory (ISIT), Ulm, Germany, p. 106, Jun. 29-Jul. 4, 1997.
`Benedetto et a!., "Serial Concatenated Trellis Coded Modulation
`with Iterative Decoding," Proceedings from IEEE 1997 Interna(cid:173)
`tional Symposium on Information Theory (ISIT), Ulm, Germany, p.
`8, Jun. 29-Jul. 4, 1997.
`Benedetto et a!., "Design of Serially Concatenated Interleaved
`Codes," ICC 97, Montreal, Canada, pp. 710-714, (Jun. 1997).
`Berrou et a!., "Near Shannon Limit Error-Correcting Coding and
`Decoding: Turbo Codes," ICC pp. 1064-1070 (1993).
`Digital Video Broadcasting (DVB) User guidelines for the second
`generation system for Broadcasting, Interactive Services, News
`Gathering and other broadband satellite applications (DVB-S2)
`ETSI TR 102 376 Vl.l.l. (Feb. 2005) Technical Report, pp. 1-104
`(Feb. 15, 2005).
`Divsalar et a!., "Coding Theorems for 'Turbo-Like' Codes," Pro(cid:173)
`ceedings of the 36th Annual Allerton Conference on Communica(cid:173)
`tion, Control, and Computing, Sep. 23-25 1998, Allerton House,
`Monticello, Illinois, pp. 201-210 (1998).
`Divsalar, D. et a!., "Multiple Turbo Codes for Deep-Space Com(cid:173)
`munications," The Telecommunications and Data Acquisition
`
`(TDA) Progress Report42-121 forNASAand California Institute of
`Technology Jet Propulsion Laboratory, Jospeh H. Yuen, Ed., pp.
`60-77 (May 15, 1995).
`Divsalar, D. et al., "On the Design of Turbo Codes," The Telecom(cid:173)
`munications and Data Acquisition (TDA) Progress Report 42-123
`for NASA and California Institute of Technology Jet Propulsion
`Laboratory, Jospeh H. Yuen, Ed., pp. 99-131 (Nov. 15, 1995).
`Divsalar, D. et al., "Low-rate turbo codes for Deep Space Commu(cid:173)
`nications," Proceedings from the 1995 IEEE International Sympo(cid:173)
`sium on Information Theory, Sep. 17-22, 1995, Whistler, British
`Columbia, Canada, p. 35.
`Divsalar, D. et a!., "Turbo Codes for PCS Applications," ICC 95,
`IEEE, Seattle, WA, pp. 54-59 (Jun. 1995).
`Divsalar, D. et a!., "Multiple Turbo Codes," MILCOM 95, San
`Diego, CA pp. 279-285 (Nov. 5-6, 1995).
`Divsalar eta!., "Effective free distance of turbo codes," Electronics
`Letters 32(5): 445-446 (Feb. 29, 1996).
`Divsalar, D. eta!., "Hybrid concatenated codes and Iterative Decod(cid:173)
`ing," Proceedings from the IEEE 1997 International Symposium on
`Information Theory (ISIT), Ulm, Germany, p. 10 (Jun. 29-Jul. 4,
`1997).
`Divsalar, D. et a!., "Serial Turbo Trellis Coded Modulation with
`Rate-1 Inner Code," Proceedings from the IEEE 2000 International
`Symposium on Information Theory (ISIT), Italy, pp. 1-14 (Jun.
`2000).
`Jin et a!., "Irregular Repeat - Accumulate Codes," 2nd International
`Symposium on Turbo Codes & Related Topics, Sep. 4-7, 2000,
`Brest, France, 25 slides, (presented on Sep. 4, 2000).
`Jin et a!., "Irregular Repeat-Accumulate Codes," 2nd International
`Symposium on Turbo Codes & Related Topics, Sep. 4-7, 2000,
`Brest, France, pp. 1-8 (2000).
`Richardson, et a!., "Design of capacity approaching irregular low
`density parity check codes," IEEE Trans, Inform. Theory 47:
`619-637 (Feb. 2001 ).
`Richardson, T. and R. Urbanke, "Efficient encoding of low-density
`parity check codes," IEEE Trans. Inform. Theory 47: 638-656 (Feb.
`2001).
`* cited by examiner
[Drawing Sheet 1 of 5]
[Drawing Sheet 2 of 5: coder block diagrams with OUTER, LDGM, and INNER blocks]
[Drawing Sheet 3 of 5: FIG. 3, Tanner graph; variable node fractions f2, f3, ..., fj of degree i; check nodes of degree a; reference numerals 302 and 306]
[Drawing Sheet 4 of 5: FIG. 5A and FIG. 5B (reference numeral 304, labels v and w)]
[Drawing Sheet 5 of 5]
SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

CROSS-REFERENCE TO RELATED APPLICATIONS

This application claims priority to U.S. Provisional Application Ser. No. 60/205,095, filed on May 18, 2000, and to U.S. application Ser. No. 09/922,852, filed on Aug. 18, 2000 and entitled Interleaved Serial Concatenation Forming Turbo-Like Codes.

GOVERNMENT LICENSE RIGHTS

The U.S. Government has a paid-up license in this invention and the right in limited circumstances to require the patent owner to license others on reasonable terms as provided for by the terms of Grant No. CCR-9804793 awarded by the National Science Foundation.

BACKGROUND
Properties of a channel affect the amount of data that can be handled by the channel. The so-called "Shannon limit" defines the theoretical limit of the amount of data that a channel can carry.

Different techniques have been used to increase the data rate that can be handled by a channel. "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes," by Berrou et al., ICC, pp. 1064-1070 (1993), described a new "turbo code" technique that has revolutionized the field of error correcting codes. Turbo codes have sufficient randomness to allow reliable communication over the channel at a high data rate near capacity. However, they still retain sufficient structure to allow practical encoding and decoding algorithms. Still, the technique for encoding and decoding turbo codes can be relatively complex.

A standard turbo coder 100 is shown in FIG. 1. A block of k information bits is input directly to a first coder 102. A k-bit interleaver 106 also receives the k bits and interleaves them prior to applying them to a second coder 104. The second coder produces an output that has more bits than its input, that is, it is a coder with a rate that is less than 1. The coders 102, 104 are typically recursive convolutional coders.

Three different items are sent over the channel 150: the original k bits, first encoded bits 110, and second encoded bits 112. At the decoding end, two decoders are used: a first constituent decoder 160 and a second constituent decoder 162. Each receives both the original k bits and one of the encoded portions 110, 112. Each decoder sends likelihood estimates of the decoded bits to the other decoder. The estimates are used to decode the uncoded information bits as corrupted by the noisy channel.

SUMMARY

A coding system according to an embodiment is configured to receive a portion of a signal to be encoded, for example, a data block including a fixed number of bits. The coding system includes an outer coder, which repeats and scrambles bits in the data block. The data block is apportioned into two or more sub-blocks, and bits in different sub-blocks are repeated a different number of times according to a selected degree profile. The outer coder may include a repeater with a variable rate and an interleaver. Alternatively, the outer coder may be a low-density generator matrix (LDGM) coder.

The repeated and scrambled bits are input to an inner coder that has a rate substantially close to one. The inner coder may include one or more accumulators that perform recursive modulo two addition operations on the input bit stream.

The encoded data output from the inner coder may be transmitted on a channel and decoded in linear time at a destination using iterative decoding techniques. The decoding techniques may be based on a Tanner graph representation of the code.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of a prior "turbo code" system.
FIG. 2 is a schematic diagram of a coder according to an embodiment.
FIG. 3 is a Tanner graph for an irregular repeat and accumulate (IRA) coder.
FIG. 4 is a schematic diagram of an IRA coder according to an embodiment.
FIG. 5A illustrates a message from a variable node to a check node on the Tanner graph of FIG. 3.
FIG. 5B illustrates a message from a check node to a variable node on the Tanner graph of FIG. 3.
FIG. 6 is a schematic diagram of a coder according to an alternate embodiment.
FIG. 7 is a schematic diagram of a coder according to another alternate embodiment.

DETAILED DESCRIPTION

FIG. 2 illustrates a coder 200 according to an embodiment. The coder 200 may include an outer coder 202, an interleaver 204, and an inner coder 206. The coder may be used to format blocks of data for transmission, introducing redundancy into the stream of data to protect the data from loss due to transmission errors. The encoded data may then be decoded at a destination in linear time at rates that may approach the channel capacity.

The outer coder 202 receives the uncoded data. The data may be partitioned into blocks of fixed size, say k bits. The outer coder may be an (n,k) binary linear block coder, where n>k. The coder accepts as input a block u of k data bits and produces an output block v of n data bits. The mathematical relationship between u and v is v=T0u, where T0 is an n×k matrix, and the rate of the coder is k/n.

The rate of the coder may be irregular, that is, the value of T0 is not constant, and may differ for sub-blocks of bits in the data block. In an embodiment, the outer coder 202 is a repeater that repeats the k bits in a block a number of times q to produce a block with n bits, where n=qk. Since the repeater has an irregular output, different bits in the block may be repeated a different number of times. For example, a fraction of the bits in the block may be repeated two times, a fraction of bits may be repeated three times, and the remainder of bits may be repeated four times. These fractions define a degree sequence, or degree profile, of the code.
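As an illustration of the irregular repetition step (this sketch is not part of the original patent text; the function name and the dictionary representation of the degree profile are hypothetical), a repeater operating on one block might look like the following Python:

    # Illustrative sketch only: irregular repetition according to a degree profile.
    # degree_profile maps a repetition count q to the fraction of the block
    # repeated q times, e.g. {2: 0.25, 3: 0.25, 4: 0.5}.
    def irregular_repeat(bits, degree_profile):
        k = len(bits)
        out = []
        start = 0
        for q, fraction in sorted(degree_profile.items()):
            count = round(fraction * k)            # size of this sub-block
            for b in bits[start:start + count]:
                out.extend([b] * q)                # repeat each bit q times
            start += count
        for b in bits[start:]:                     # any rounding leftovers get
            out.extend([b] * max(degree_profile))  # the largest repetition count
        return out

The interleaver 204 would then apply a pseudo-random permutation to the resulting block before it reaches the inner coder.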
The inner coder 206 may be a linear rate-1 coder, which means that the n-bit output block x can be written as x=TIw, where TI is a nonsingular n×n matrix. The inner coder 210 can have a rate that is close to 1, e.g., within 50%, more preferably 10%, and perhaps even more preferably within 1% of 1.

In an embodiment, the inner coder 206 is an accumulator, which produces outputs that are the modulo two (mod-2) partial sums of its inputs. The accumulator may be a
truncated rate-1 recursive convolutional coder with the transfer function 1/(1+D). Such an accumulator may be considered a block coder whose input block [x_1, ..., x_n] and output block [y_1, ..., y_n] are related by the formula

y_i = x_1 ⊕ x_2 ⊕ x_3 ⊕ ... ⊕ x_i

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition. An advantage of this system is that only mod-2 addition is necessary for the accumulator. The accumulator may be embodied using only XOR gates, which may simplify the design.
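To make the mod-2 partial-sum relationship concrete, a minimal sketch (not part of the original patent text; the names are illustrative) is:

    # Illustrative sketch only: a rate-1 accumulator.
    # Each output bit is the XOR (mod-2 sum) of all input bits seen so far,
    # i.e. y_i = x_1 XOR x_2 XOR ... XOR x_i.
    def accumulate(x):
        y, running = [], 0
        for bit in x:
            running ^= bit
            y.append(running)
        return y

    # Example: accumulate([1, 0, 1, 1]) returns [1, 1, 0, 1].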
The bits output from the outer coder 202 are scrambled before they are input to the inner coder 206. This scrambling may be performed by the interleaver 204, which performs a pseudo-random permutation of an input block v, yielding an output block w having the same length as v.

The serial concatenation of the interleaved irregular repeat code and the accumulate code produces an irregular repeat and accumulate (IRA) code. An IRA code is a linear code, and as such, may be represented as a set of parity checks. The set of parity checks may be represented in a bipartite graph, called the Tanner graph, of the code. FIG. 3 shows a Tanner graph 300 of an IRA code with parameters (f_1, ..., f_j; a), where f_i ≥ 0, Σ_i f_i = 1 and "a" is a positive integer. The Tanner graph includes two kinds of nodes: variable nodes (open circles) and check nodes (filled circles). There are k variable nodes 302 on the left, called information nodes. There are r variable nodes 306 on the right, called parity nodes. There are r = (k Σ_i i·f_i)/a check nodes 304 connected between the information nodes and the parity nodes. Each information node 302 is connected to a number of check nodes 304. The fraction of information nodes connected to exactly i check nodes is f_i. For example, in the Tanner graph 300, each of the f_2 information nodes are connected to two check nodes, corresponding to a repeat of q=2, and each of the f_3 information nodes are connected to three check nodes, corresponding to q=3.

Each check node 304 is connected to exactly "a" information nodes 302. In FIG. 3, a=3. These connections can be made in many ways, as indicated by the arbitrary permutation of the ra edges joining information nodes 302 and check nodes 304 in permutation block 310. These connections correspond to the scrambling performed by the interleaver 204.

In an alternate embodiment, the outer coder 202 may be a low-density generator matrix (LDGM) coder that performs an irregular repeat of the k bits in the block, as shown in FIG. 4. As the name implies, an LDGM code has a sparse (low-density) generator matrix. The IRA code produced by the coder 400 is a serial concatenation of the LDGM code and the accumulator code. The interleaver 204 in FIG. 2 may be excluded due to the randomness already present in the structure of the LDGM code.

If the permutation performed in permutation block 310 is fixed, the Tanner graph represents a binary linear block code with k information bits (u_1, ..., u_k) and r parity bits (x_1, ..., x_r), as follows. Each of the information bits is associated with one of the information nodes 302, and each of the parity bits is associated with one of the parity nodes 306. The value of a parity bit is determined uniquely by the condition that the mod-2 sum of the values of the variable nodes connected to each of the check nodes 304 is zero. To see this, set x_0 = 0. Then if the values of the bits on the ra edges coming out of the permutation box are (v_1, ..., v_ra), then we have the recursive formula

x_j = x_{j-1} + Σ_{i=1}^{a} v_{(j-1)a+i}

for j = 1, 2, ..., r. This is in effect the encoding algorithm.
Two types of IRA codes are represented in FIG. 3, a nonsystematic version and a systematic version. The nonsystematic version is an (r,k) code, in which the codeword corresponding to the information bits (u_1, ..., u_k) is (x_1, ..., x_r). The systematic version is a (k+r, k) code, in which the codeword is (u_1, ..., u_k; x_1, ..., x_r).

The rate of the nonsystematic code is

R_nsys = a / Σ_i i·f_i

The rate of the systematic code is

R_sys = a / (a + Σ_i i·f_i)

For example, regular repeat and accumulate (RA) codes can be considered nonsystematic IRA codes with a=1 and exactly one f_i equal to 1, say f_q = 1, and the rest zero, in which case R_nsys simplifies to R = 1/q.

The IRA code may be represented using an alternate notation. Let λ_i be the fraction of edges between the information nodes 302 and the check nodes 304 that are adjacent to an information node of degree i, and let ρ_i be the fraction of such edges that are adjacent to a check node of degree i+2 (i.e., one that is adjacent to i information nodes). These edge fractions may be used to represent the IRA code rather than the corresponding node fractions. Define λ(x) = Σ_i λ_i x^(i-1) and ρ(x) = Σ_i ρ_i x^(i-1) to be the generating functions of these sequences. The pair (λ, ρ) is called a degree distribution. For L(x) = Σ_i f_i x^i,

f_i = (λ_i/i) / (Σ_j λ_j/j)

and

L(x) = ∫_0^x λ(t)dt / ∫_0^1 λ(t)dt.

The rate of the systematic IRA code given by the degree distribution is given by

Rate = (1 + (Σ_j ρ_j/j) / (Σ_j λ_j/j))^(-1).
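A brief worked example (not part of the original patent text; the profile is chosen only for illustration): suppose half of the information nodes have degree 2 and half have degree 4, so f_2 = f_4 = 1/2, and take a = 2. Then Σ_i i·f_i = 2·(1/2) + 4·(1/2) = 3, giving R_nsys = a / Σ_i i·f_i = 2/3 and R_sys = a / (a + Σ_i i·f_i) = 2/5.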
"Belief propagation" on the Tanner graph realization may be used to decode IRA codes.
Roughly speaking, the belief propagation decoding technique allows the messages passed on an edge to represent posterior densities on the bit associated with the variable node. A probability density on a bit is a pair of non-negative real numbers p(0), p(1) satisfying p(0)+p(1)=1, where p(0) denotes the probability of the bit being 0, p(1) the probability of it being 1. Such a pair can be represented by its log likelihood ratio, m = log(p(0)/p(1)). The outgoing message from a variable node u to a check node v represents information about u, and a message from a check node u to a variable node v represents information about u, as shown in FIGS. 5A and 5B, respectively.

The outgoing message from a node u to a node v depends on the incoming messages from all neighbors w of u except v. If u is a variable message node, this outgoing message is

m(u→v) = Σ_{w≠v} m(w→u) + m_0(u)

where m_0(u) is the log-likelihood message associated with u. If u is a check node, the corresponding formula is

tanh(m(u→v)/2) = Π_{w≠v} tanh(m(w→u)/2).
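These two update rules can be written out directly (a hypothetical Python sketch, not part of the original patent text; messages are assumed to be stored per directed edge in a dictionary keyed by (source, destination) node pairs):

    import math

    # Illustrative sketch only: the two belief propagation update rules.
    def variable_to_check(u, v, neighbors, msgs, m0):
        # m(u -> v) = m0(u) + sum of incoming messages from all neighbors except v
        return m0[u] + sum(msgs[(w, u)] for w in neighbors[u] if w != v)

    def check_to_variable(u, v, neighbors, msgs):
        # tanh(m(u -> v)/2) = product over neighbors w != v of tanh(m(w -> u)/2)
        prod = 1.0
        for w in neighbors[u]:
            if w != v:
                prod *= math.tanh(msgs[(w, u)] / 2.0)
        prod = min(max(prod, -1.0 + 1e-12), 1.0 - 1e-12)  # keep atanh finite
        return 2.0 * math.atanh(prod)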
Before decoding, the messages m(w→u) and m(u→v) are initialized to be zero, and m_0(u) is initialized to be the log-likelihood ratio based on the channel received information. If the channel is memoryless, i.e., each channel output only relies on its input, and y is the output of the channel code bit u, then m_0(u) = log(p(u=0|y)/p(u=1|y)). After this initialization, the decoding process may run in a fully parallel and local manner. In each iteration, every variable/check node receives messages from its neighbors, and sends back updated messages. Decoding is terminated after a fixed number of iterations or upon detecting that all the constraints are satisfied. Upon termination, the decoder outputs a decoded sequence based on the messages m(u) = Σ_w m(w→u).

Thus, on various channels, iterative decoding only differs in the initial messages m_0(u). For example, consider three memoryless channel models: a binary erasure channel (BEC); a binary symmetric channel (BSC); and an additive white Gaussian noise (AWGN) channel.

In the BEC, there are two inputs and three outputs. When 0 is transmitted, the receiver can receive either 0 or an erasure E. An erasure E output means that the receiver does not know how to demodulate the output. Similarly, when 1 is transmitted, the receiver can receive either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

m_0(u) = +∞ if y = 0, 0 if y = E, -∞ if y = 1.

In the BSC, there are two possible inputs (0, 1) and two possible outputs (0, 1). The BSC is characterized by a set of conditional probabilities relating all possible outputs to possible inputs. Thus, for the BSC, y ∈ {0, 1}, and

m_0(u) = log((1-p)/p) if y = 0, -log((1-p)/p) if y = 1.

In the AWGN, the discrete-time input symbols X take their values in a finite alphabet while channel output symbols Y can take any values along the real line. There is assumed to be no distortion or other effects other than the addition of white Gaussian noise. In an AWGN with Binary Phase Shift Keying (BPSK) signaling which maps 0 to the symbol with amplitude √Es and 1 to the symbol with amplitude -√Es, output y ∈ R, then

m_0(u) = 4y√Es/N_0

where N_0/2 is the noise power spectral density.
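The three channel-dependent initializations can be collected in one small helper (hypothetical Python, not part of the original patent text; argument names are illustrative):

    import math

    # Illustrative sketch only: initial messages m0(u) for the three channels.
    def m0_bec(y):
        # y is 0, 1, or "E" (erasure)
        if y == 0:
            return math.inf
        if y == 1:
            return -math.inf
        return 0.0

    def m0_bsc(y, p):
        # p is the crossover probability of the BSC
        llr = math.log((1 - p) / p)
        return llr if y == 0 else -llr

    def m0_awgn(y, es, n0):
        # BPSK over AWGN: 0 mapped to +sqrt(Es), 1 mapped to -sqrt(Es)
        return 4.0 * y * math.sqrt(es) / n0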
The selection of a degree profile for use in a particular transmission channel is a design parameter, which may be affected by various attributes of the channel. The criteria for selecting a particular degree profile may include, for example, the type of channel and the data rate on the channel. For example, Table 1 shows degree profiles that have been found to produce good results for an AWGN channel model.

TABLE 1

                          a = 2        a = 3        a = 4
  λ2                      0.139025     0.078194     0.054485
  λ3                      0.2221555    0.128085     0.104315
  λ5 ... λ28              0.638820     0.160813     0.126755
  (nonzero values,                     0.036178     0.229816
  in column order)                     0.108828     0.016484
                                       0.487902     0.450302
                                                    0.017842
  Rate                    0.333364     0.333223     0.333218
  σGA                     1.1840       1.2415       1.2615
  σ*                      1.1981       1.2607       1.2780
  (Eb/N0)* (dB)           0.190        -0.250       -0.371
  S.L. (dB)               -0.4953      -0.4958      -0.4958

Table 1 shows degree profiles yielding codes of rate approximately 1/3 for the AWGN channel and with a = 2, 3, 4. For each sequence, the Gaussian approximation noise threshold, the actual sum-product decoding threshold and the corresponding energy per bit (Eb) to noise power (N0) ratio in dB are given. Also listed is the Shannon limit (S.L.).

As the parameter "a" is increased, the performance improves. For example, for a = 4, the best code found has an iterative decoding threshold of Eb/N0 = -0.371 dB, which is only 0.12 dB above the Shannon limit.

The accumulator component of the coder may be replaced by a "double accumulator" 600 as shown in FIG. 6. The double accumulator can be viewed as a truncated rate 1 convolutional coder with transfer function 1/(1+D+D^2).

Alternatively, a pair of accumulators may be added, as shown in FIG. 7. There are three component codes: the "outer" code 700, the "middle" code 702, and the "inner"
code 704. The outer code is an irregular repetition code, and the middle and inner codes are both accumulators.

IRA codes may be implemented in a variety of channels, including memoryless channels, such as the BEC, BSC, and AWGN, as well as channels having non-binary input, non-symmetric and fading channels, and/or channels with memory.

A number of embodiments have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the invention. Accordingly, other embodiments are within the scope of the following claims.
The invention claimed is:

1. A method of encoding a signal, comprising:
obtaining a block of data in the signal to be encoded;
partitioning said data block into a plurality of sub-blocks, each sub-block including a plurality of data elements;
first encoding the data block to form a first encoded data block, said first encoding including repeating the data elements in different sub-blocks a different number of times;
interleaving the repeated data elements in the first encoded data block; and
second encoding said first encoded data block using an encoder that has a rate close to one.
2. The method of claim 1, wherein said second encoding is via a rate 1 linear transformation.
3. The method of claim 1, wherein said first encoding is carried out by a first coder with a variable rate less than one, and said second encoding is carried out by a second coder with a rate substantially close to one.
4. The method of claim 3, wherein the second coder comprises an accumulator.
5. The method of claim 4, wherein the data elements comprise bits.
6. The method of claim 5, wherein the first coder comprises a repeater operable to repeat different sub-blocks a different number of times in response to a selected degree profile.
7. The method of claim 4, wherein the first coder comprises a low-density generator matrix coder and the second coder comprises an accumulator.
8. The method of claim 1, wherein the second encoding uses a transfer function of 1/(1+D).
9. The method of claim 1, wherein the second encoding uses a transfer function of 1/(1+D+D^2).
10. The method of claim 1, wherein said second encoding utilizes two accumulators.
11. A method of encoding a signal, comprising:
receiving a block of data in the signal to be encoded, the data block including a plurality of bits;
first encoding the data block such that each bit in the data block is repeated and two or more of said plurality of bits are repeated a different number of times in order to form a first encoded data block; and
second encoding the first encoded data block in such a way that bits in the first encoded data block are accumulated.
12. The method of claim 11, wherein said second encoding is via a rate 1 linear transformation.
13. The method of claim 11, wherein the first encoding is via a low-density generator matrix transformation.
14. The method of claim 11, wherein the signal to be encoded comprises a plurality of data blocks of fixed size.
15. A coder comprising:
a first coder having an input configured to receive a stream of bits, said first coder operative to repeat said stream of bits irregularly and scramble the repeated bits; and
a second coder operative to further encode bits output from the first coder at a rate within 10% of one.
16. The coder of claim 15, wherein the stream of bits includes a data block, and wherein the first coder is operative to apportion said data block into a plurality of sub-blocks and to repeat bits in each sub-block a number of times, wherein bits in different sub-blocks are repeated a different number of times.
17. The coder of claim 16, wherein the second coder comprises a recursive convolutional encoder with a transfer function of 1/(1+D).
18. The coder of claim 16, wherein the second coder comprises a recursive convolutional encoder with a transfer function of 1/(1+D+D^2).
19. The coder of claim 15, wherein the first coder comprises a repeater having a variable rate and an interleaver.
20. The coder of claim 15, wherein the first coder comprises a low-density generator matrix coder.
21. The coder of claim 15, wherein the second coder comprises a rate 1 linear encoder.
22. The coder of claim 21, wherein the second coder comprises an accumulator.
23. The coder of claim 22, wherein the second coder further comprises a second accumulator.
24. The coder of claim 15, wherein the second coder comprises a coder operative to further encode bits output from the first coder at a rate within 1% of one.
25. A coding system comprising:
a first coder having an input configured to receive a stream of bits, said first coder operative to repeat said stream of bits irregularly and scramble the repeated bits;
a second coder operative to further encode bits output from the first coder at a rate within 10% of one in order to form an encoded data stream; and
a decoder operative to receive the encoded data stream and decode the encoded data stream using an iterative decoding technique.
26. The coding system of claim 25, wherein the first coder comprises a repeater operative to receive a data block including a plurality of bits from said stream of bits and to repeat bits in the data block a different number of times according to a selected degree profile.
27. The coding system of claim 26, wherein the first coder comprises an interleaver.
28. The coding system of claim 25, wherein the first coder comprises a low-density generator matrix coder.
29. The coding system of claim 25, wherein the second coder comprises a rate 1 accumulator.
30. The coding system of claim 25, wherein the decoder is operative to decode the encoded data stream using a posterior decoding technique.
31. The coding system of claim 25, wherein the decoder is operative to decode the encoded data stream based on a Tanner graph representation.
32. The coding system of claim 25, wherein the decoder is operative to decode the encoded data stream in linear time.
33. The coding system of claim 25, wherein the second coder comprises a coder operative to further encode bits output from the first coder at a rate within 1% of one.
* * * * *