`
`Frederick P. Fish
`1855-1930
`
`WK Richardson
`1859-1951
`
`June 30, 2008
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`Commissioner for Patents
`P.O. Box 1450
`Alexandria, VA 22313-1450
`
`Street Address
`12390 EL CAMINO REAL
`SAN DIEGO, CALIFORNIA
`92130
`
`Mail Address
`P.O. Box 1022
`MINNEAPOLIS, MINNESOTA
`
`55440-1022
`
`Telephone
`858 678-5070
`
`Facsimile
`877 769-7945
`
`Web Site
`WWW.FR.COM
`
`ATLANTA
`
`AUSTIN
`
`BOSTON
`
`DALLAS
`
`DELAWARE
`
`MUNICH
`
`NEW YORK
`
`SAN DIEGO
`
`SILICON VALLEY
`
`TWIN CITIES
`
`WASHINGTON, DC
`
`Presented for filing is a new continuation patent application of:
`
`Applicant: HUI JIN, AAMOD KHANDEKAR AND ROBERT J. MCELIECE
`
`Title:
`
`SERIAL CONCATENATION OF INTERLEAVED
`CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
`Assignee: California Institute of Technology
`
`Enclosed are the following papers, including those required to receive a filing date
under 37 C.F.R. § 1.53(b):
`
                                        Pages
Specification                             17
Claims                                     5
Abstract                                   1
Declaration                                4
Drawings                                   5
`
Enclosures:
Form PTO-1449, listing documents cited in the parent applications
(3 pages). Please confirm that these have been considered in this
application by returning a copy of the Form PTO-1449 with the
examiner's initials.
Rule 63 declaration, copy from a previous application under Rule 63(d) for
continuation or divisional only.
Small entity statement. See 37 C.F.R. 1.27.
`
`This application is entitled to small entity status.
`
This application is a continuation (and claims the benefit of priority under 35 U.S.C.
§ 120) of U.S. Application Serial No. 11/542,950, filed October 3, 2006, which is a
continuation of U.S. Application Serial No. 09/861,102, filed May 18, 2001, now
U.S. Patent No. 7,116,710, which claims the priority of U.S. Provisional Application
Serial No. 60/205,095, filed May 18, 2000, and is a continuation-in-part of U.S.
Application Serial No. 09/922,852, filed August 18, 2000, now U.S. Patent No.
`
`
7,089,477. The disclosures of the prior applications are considered part of (and are
incorporated by reference in) the disclosure of this application.
`
                                                        Small     Large
                                                        Entity    Entity     Fee
Basic Filing Fee                                           75       310      $75
Search Fee                                                255       510      $255
Examination Fee                                           105       210      $105
Total Claims 23 (3 over 20):            3 x $25            25        50      $75
Independent Claims 3 (0 over 3):        0 x $105          105       210      $0
Fee for Multiple Dependent Claims                         185       370      $0
Fee for each additional 50 pages of
  Specification and Drawings over 100                     130       260      $0
Total Filing Fee                                                             $510
`
`The filing fee in the amount of $510 is being paid concurrently herewith on the
`Electronic Filing System (EFS) by way of Deposit Account authorization. Please
`apply all charges or credits to Deposit Account No. 06-1050, referencing Attorney
`Docket No. 06618-637003.
`
`If this application is found to be incomplete, or if a telephone conference would
`otherwise be helpful, please call the undersigned at (858) 678-5070.
`
`Please direct all correspondence to the following:
`
`20985
`PTO Customer Number
`
`Respectfully submitted,
`
`/John F. Conroy, Reg.# 45,485/
`
`John F. Conroy
`Reg. No. 45,485
`Enclosures
`JFC/jhp
`
`
`
`
Substitute Form PTO-1449 (Modified)
Information Disclosure Statement by Applicant (37 CFR § 1.98(b))
(Use several sheets if necessary)

U.S. Department of Commerce, Patent and Trademark Office
Attorney's Docket No.: 06618-637003
Applicant: Hui Jin et al.
Application No.: Not yet assigned
Filing Date: June 30, 2008
Group Art Unit:
Sheet 1 of 3

U.S. Patent Documents

Examiner   Desig.   Document        Publication   Patentee               Class   Subclass   Filing Date
Initial    ID       Number          Date                                                    If Appropriate
           AA       2001/0025358    09/2001       Eidson et al.
           AB       5,392,299       02/1995       Rhines et al.
           AC       5,530,707       06/1996       Lin
           AD       5,751,739       05/1998       Seshadri et al.
           AE       5,802,115       09/1998       Meyer
           AF       5,881,093       03/1999       Wang et al.
           AG       6,014,411       01/2000       Wang
           AH       6,023,783       02/2000       Divsalar et al.
           AI       6,031,874       02/2000       Chennakeshu et al.
           AJ       6,032,284       02/2000       Bliss
           AK       6,044,116       03/2000       Wang
           AL       6,094,739       07/2000       Miller et al.
           AM       6,396,423       05/2002       Laumen et al.
           AN       6,437,714       08/2002       Kim et al.
           AO       6,859,906       02/2005       Hammons et al.
`
Foreign Patent Documents or Published Foreign Patent Applications

Examiner   Desig.   Document   Publication   Country or       Class   Subclass   Translation
Initial    ID       Number     Date          Patent Office                       (Yes/No)
           AP
           AQ
           AR
`
Other Documents (include Author, Title, Date, and Place of Publication)

Examiner   Desig.
Initial    ID       Document
           AS       Benedetto, S., et al., "A Soft-Input Soft-Output APP Module for Iterative Decoding of
                    Concatenated Codes," IEEE Communications Letters, 1(1):22-24, January 1997.
           AT       Benedetto, S., et al., "A Soft-Input Soft-Output Maximum A Posteriori (MAP) Module to Decode
                    Parallel and Serial Concatenated Codes," The Telecommunications and Data Acquisition Progress
                    Report (TDA PR 42-127), pp. 1-20, November 1996.
`
Examiner Signature                                              Date Considered

EXAMINER: Initial if citation considered. Draw line through citation if not in conformance and not
considered. Include copy of this form with next communication to applicant.

Substitute Disclosure Form (PTO-1449)
`
`
`
`
Substitute Form PTO-1449 (Modified)
Information Disclosure Statement by Applicant (37 CFR § 1.98(b))
(Use several sheets if necessary)

U.S. Department of Commerce, Patent and Trademark Office
Attorney's Docket No.: 06618-637003
Applicant: Hui Jin et al.
Application No.: Not yet assigned
Filing Date: June 30, 2008
Group Art Unit:
Sheet 2 of 3

Other Documents (include Author, Title, Date, and Place of Publication)

Examiner   Desig.
Initial    ID       Document
           AU       Benedetto, S., et al., "Bandwidth efficient parallel concatenated coding schemes," Electronics
                    Letters, 31(24):2067-2069, November 1995.
           AV       Benedetto, S., et al., "Design of Serially Concatenated Interleaved Codes," ICC 97, vol. 2,
                    pp. 710-714, June 1997.
           AW       Benedetto, S., et al., "Parallel Concatenated Trellis Coded Modulation," ICC 96, vol. 2,
                    pp. 974-978, June 1996.
           AX       Benedetto, S., et al., "Serial Concatenated Trellis Coded Modulation with Iterative Decoding,"
                    Proceedings 1997 IEEE International Symposium on Information Theory (ISIT), Ulm, Germany,
                    p. 8, June 29-July 4, 1997.
           AY       Benedetto, S., et al., "Serial Concatenation of Interleaved Codes: Performance Analysis, Design,
                    and Iterative Decoding," The Telecommunications and Data Acquisition Progress Report
                    (TDA PR 42-126), pp. 1-26, August 1996.
           AZ       Benedetto, S., et al., "Serial concatenation of interleaved codes: performance analysis, design,
                    and iterative decoding," Proceedings 1997 IEEE International Symposium on Information Theory
                    (ISIT), Ulm, Germany, p. 106, June 29-July 4, 1997.
           BA       Benedetto, S., et al., "Soft-Output Decoding Algorithms in Iterative Decoding of Turbo Codes,"
                    The Telecommunications and Data Acquisition Progress Report (TDA PR 42-124), pp. 63-87,
                    February 1996.
           BB       Berrou, C., et al., "Near Shannon Limit Error-Correcting Coding and Decoding: Turbo Codes,"
                    ICC 93, vol. 2, pp. 1064-1070, May 1993.
           BC       Digital Video Broadcasting (DVB); User guidelines for the second generation system for
                    Broadcasting, Interactive Services, News Gathering and other broadband satellite applications
                    (DVB-S2), ETSI TR 102 376 V1.1.1 Technical Report, pp. 1-104 (p. 64), February 2005.
           BD       Divsalar, D., et al., "Coding Theorems for 'Turbo-Like' Codes," Proceedings of the 36th Annual
                    Allerton Conference on Communication, Control, and Computing, Monticello, Illinois,
                    pp. 201-210, September 1998.
           BE       Divsalar, D., et al., "Effective free distance of turbo codes," Electronics Letters, 32(5):445-446,
                    February 1996.
           BF       Divsalar, D., et al., "Hybrid Concatenated Codes and Iterative Decoding," Proceedings 1997 IEEE
                    International Symposium on Information Theory (ISIT), Ulm, Germany, p. 10, June 29-July 4, 1997.
           BG       Divsalar, D., et al., "Low-Rate Turbo Codes for Deep-Space Communications," Proceedings 1995
                    IEEE International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 35,
                    September 1995.
           BH       Divsalar, D., et al., "Multiple Turbo Codes for Deep-Space Communications," The
                    Telecommunications and Data Acquisition Progress Report (TDA PR 42-121), pp. 66-77, May 1995.
           BI       Divsalar, D., et al., "Multiple Turbo Codes," MILCOM '95, vol. 1, pp. 279-285, November 1995.
           BJ       Divsalar, D., et al., "On the Design of Turbo Codes," The Telecommunications and Data Acquisition
                    Progress Report (TDA PR 42-123), pp. 99-121, November 1995.
           BK       Divsalar, D., et al., "Serial Turbo Trellis Coded Modulation with Rate-1 Inner Code," Proceedings
                    2000 IEEE International Symposium on Information Theory (ISIT), Sorrento, Italy, p. 194,
                    June 2000.
           BL       Divsalar, D., et al., "Turbo Codes for PCS Applications," IEEE ICC '95, Seattle, WA, USA, vol. 1,
                    pp. 54-59, June 1995.
`
Examiner Signature                                              Date Considered

EXAMINER: Initial if citation considered. Draw line through citation if not in conformance and not
considered. Include copy of this form with next communication to applicant.

Substitute Disclosure Form (PTO-1449)
`
`
`
`
Substitute Form PTO-1449 (Modified)
Information Disclosure Statement by Applicant (37 CFR § 1.98(b))
(Use several sheets if necessary)

U.S. Department of Commerce, Patent and Trademark Office
Attorney's Docket No.: 06618-637003
Applicant: Hui Jin et al.
Application No.: Not yet assigned
Filing Date: June 30, 2008
Group Art Unit:
Sheet 3 of 3

Other Documents (include Author, Title, Date, and Place of Publication)

Examiner   Desig.
Initial    ID       Document
`
           BM       Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo
                    Codes, Brest, France, 25 pages, September 2000.
           BN       Jin, H., et al., "Irregular Repeat-Accumulate Codes," 2nd International Symposium on Turbo Codes
                    & Related Topics, Brest, France, pp. 1-8, September 2000.
           BO       Richardson, T.J., et al., "Design of Capacity-Approaching Irregular Low-Density Parity-Check
                    Codes," IEEE Transactions on Information Theory, 47(2):619-637, February 2001.
           BP       Richardson, T.J., et al., "Efficient Encoding of Low-Density Parity-Check Codes," IEEE
                    Transactions on Information Theory, 47(2):638-656, February 2001.
           BQ       Wiberg, N., et al., "Codes and Iterative Decoding on General Graphs," Proceedings 1995 IEEE
                    International Symposium on Information Theory (ISIT), Whistler, BC, Canada, p. 468,
                    September 1995.
`
Examiner Signature                                              Date Considered

EXAMINER: Initial if citation considered. Draw line through citation if not in conformance and not
considered. Include copy of this form with next communication to applicant.

Substitute Disclosure Form (PTO-1449)
`
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`SERIAL CONCATENATION OF INTERLEAVED
`CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
`CROSS-REFERENCE TO RELATED APPLICATIONS
`
`[0001]
`
`This application is a continuation of U.S.
`
`Application Serial No. 11/542,950, filed October 3, 2006,
`
`which is a continuation of U.S. Application Serial No.
`
`09/861,102, filed May 18, 2001, now U.S. Patent No.
`
`7,116,710, which claims the priority of U.S. Provisional
`
`Application Serial No. 60/205,095, filed May 18, 2000, and
`
`is a continuation-in-part of U.S. Application Serial No.
`
`09/922,852, filed August 18, 2000, now U.S. Patent No.
`
7,089,477. The disclosures of the prior applications are
`
`considered part of (and are incorporated by reference in)
`
`the disclosure of this application.
`
`GOVERNMENT LICENSE RIGHTS
`
`[0002]
`
`The U.S. Government has a paid-up license in this
`
`invention and the right in limited circumstances to require
`
`the patent owner to license others on reasonable terms as
`
`provided for by the terms of Grant No. CCR-9804793 awarded
`
`by the National Science Foundation.
`
`1
`
`Hughes, Exh. 1006, p. 6
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`BACKGROUND
`
`[0003]
`
`Properties of a channel affect the amount of data
`
`that can be handled by the channel. The so-called "Shannon
`
`limit" defines the theoretical limit of the amount of data
`
`that a channel can carry.
`
`[0004]
`
`Different techniques have been used to increase
`
`the data rate that can be handled by a channel.
`
`"Near
`
`Shannon Limit Error-Correcting Coding and Decoding: Turbo
`
`Codes," by Berrou et al. ICC, pp 1064-1070,
`
`(1993),
`
`described a new "turbo code" technique that has
`
`revolutionized the field of error correcting codes.
`
`Turbo
`
`codes have sufficient randomness to allow reliable
`
`communication over the channel at a high data rate near
`
`capacity. However, they still retain sufficient structure
`
`to allow practical encoding and decoding algorithms.
`
`Still, the technique for encoding and decoding turbo codes
`
`can be relatively complex.
`
`[0005]
`
`A standard turbo coder 100 is shown in Figure 1.
`
`A block of k information bits is input directly to a first
`
`coder 102. A k bit interleaver 106 also receives the k
`
`bits and interleaves them prior to applying them to a
`
`second coder 104. The second coder produces an output that
`
`has more bits than its input, that is, it is a coder with
`
`2
`
`Hughes, Exh. 1006, p. 7
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`rate that is less than 1. The coders 102, 104 are
`
`typically recursive convolutional coders.
`
`[0006]
`
`Three different items are sent over the channel
`
`150:
`
`the original k bits, first encoded bits 110, and
`
`second encoded bits 112. At the decoding end, two decoders
`
`are used:
`
`a first constituent decoder 160 and a second
`
`constituent decoder 162. Each receives both the original k
`
`bits, and one of the encoded portions 110, 112. Each
`
`decoder sends likelihood estimates of the decoded bits to
`
the other decoder. The estimates are used to decode the
`
`uncoded information bits as corrupted by the noisy channel.
`
`SUMMARY
`
`[0007]
`
`A coding system according to an embodiment is
`
`configured to receive a portion of a signal to be encoded,
`
`for example, a data block including a fixed number of bits.
`
`The coding system includes an outer coder, which repeats
`
`and scrambles bits in the data block. The data block is
`
`apportioned into two or more sub-blocks, and bits in
`
`different sub-blocks are repeated a different number of
`
`times according to a selected degree profile. The outer
`
`coder may include a repeater with a variable rate and an
`
interleaver. Alternatively, the outer coder may be a low-
`
`density generator matrix (LDGM) coder.
`
`3
`
`Hughes, Exh. 1006, p. 8
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`[0008]
`
`The repeated and scrambled bits are input to an
`
`inner coder that has a rate substantially close to one.
`
`The inner coder may include one or more accumulators that
`
`perform recursive modulo two addition operations on the
`
`input bit stream.
`
`[0009]
`
`The encoded data output from the inner coder may
`
`be transmitted on a channel and decoded in linear time at a
`
`destination using iterative decoding techniques. The
`
`decoding techniques may be based on a Tanner graph
`
`representation of the code.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`[0010]
`
`Figure 1 is a schematic diagram of a prior "turbo
`
`code" system.
`
`[0011]
`
`Figure 2 is a schematic diagram of a coder
`
`according to an embodiment.
`
`[0012]
`
`Figure 3 is a Tanner graph for an irregular
`
`repeat and accumulate (IRA) coder.
`
`[0013]
`
`Figure 4 is a schematic diagram of an IRA coder
`
`according to an embodiment.
`
`[0014]
`
Figure 5A illustrates a message from a variable
`
`node to a check node on the Tanner graph of Figure 3.
`
`[0015]
`
Figure 5B illustrates a message from a check node
`
`to a variable node on the Tanner graph of Figure 3.
`
`4
`
`Hughes, Exh. 1006, p. 9
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`[0016]
`
`Figure 6 is a schematic diagram of a coder
`
`according to an alternate embodiment.
`
`[0017]
`
`Figure 7 is a schematic diagram of a coder
`
`according to another alternate embodiment.
`
`DETAILED DESCRIPTION
`
`[0018]
`
`Figure 2 illustrates a coder 200 according to an
`
`embodiment. The coder 200 may include an outer coder 202,
`
an interleaver 204, and an inner coder 206. The coder may be
`
`used to format blocks of data for transmission, introducing
`
`redundancy into the stream of data to protect the data from
`
`loss due to transmission errors. The encoded data may then
`
`be decoded at a destination in linear time at rates that
`
`may approach the channel capacity.
`
`[0019]
`
`The outer coder 202 receives the uncoded data.
`
`The data may be partitioned into blocks of fixed size, say
`
`k bits. The outer coder may be an (n,k) binary linear
`
`block coder, where
`
`n > k. The coder accepts as input a
`
`block u of k data bits and produces an output block v of n
`
`data bits. The mathematical relationship between u and v
`
is v = T0u, where T0 is an n × k matrix, and the rate of the
`
`coder is k/n.
`
`[0020]
`
`The rate of the coder may be irregular, that is,
`
the value of T0 is not constant, and may differ for sub-
`
`5
`
`Hughes, Exh. 1006, p. 10
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`blocks of bits in the data block.
`
`In an embodiment, the
`
`outer coder 202 is a repeater that repeats the k bits in a
`
`block a number of times q to produce a block with n bits,
`
where n = qk. Since the repeater has an irregular output,
`
`different bits in the block may be repeated a different
`
`number of times. For example, a fraction of the bits in
`
`the block may be repeated two times, a fraction of bits may
`
`be repeated three times, and the remainder of bits may be
`
`repeated four times. These fractions define a degree
`
`sequence, or degree profile, of the code.
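For illustration only, the irregular repetition just described can be sketched in a few lines of Python. The function name and the particular degree profile used in the example (one quarter of the bits repeated twice, one quarter three times, and the rest four times) are assumptions of the sketch, not values taken from this application.

```python
# Illustrative sketch of an irregular repeater.  The degree profile is a list
# of (fraction, q) pairs: that fraction of the block is repeated q times.
def irregular_repeat(bits, degree_profile):
    n_bits = len(bits)
    out = []
    start = 0
    for fraction, q in degree_profile:
        count = round(fraction * n_bits)
        for bit in bits[start:start + count]:
            out.extend([bit] * q)          # repeat this bit q times
        start += count
    for bit in bits[start:]:               # any rounding remainder
        out.extend([bit] * degree_profile[-1][1])
    return out

# Example: 1/4 of the bits repeated twice, 1/4 three times, 1/2 four times.
block = [1, 0, 1, 1, 0, 0, 1, 0]
print(irregular_repeat(block, [(0.25, 2), (0.25, 3), (0.5, 4)]))
```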
`
`[0021]
`
`The inner coder 206 may be a linear rate-1 coder,
`
`which means that the n-bit output block x can be written as
`
x = T1w, where T1 is a nonsingular n × n matrix. The inner
`
`coder 210 can have a rate that is close to 1, e.g., within
`
`50%, more preferably 10% and perhaps even more preferably
`
`within 1% of 1.
`
`[0022]
`
`In an embodiment, the inner coder 206 is an
`
`accumulator, which produces outputs that are the modulo two
`
`(mod-2) partial sums of its inputs. The accumulator may be
`
`a truncated rate-1 recursive convolutional coder with the
`
`transfer function 1/(1+D). Such an accumulator may be
`
considered a block coder whose input block [x1, ..., xn] and
output block [y1, ..., yn] are related by the formula

    y1 = x1
    y2 = x1 ⊕ x2
    y3 = x1 ⊕ x2 ⊕ x3
    ...
    yn = x1 ⊕ x2 ⊕ x3 ⊕ ... ⊕ xn

where "⊕" denotes mod-2, or exclusive-OR (XOR), addition.
`
`An advantage of this system is that only mod-2 addition is
`
`necessary for the accumulator. The accumulator may be
`
`embodied using only XOR gates, which may simplify the
`
`design.
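A minimal Python sketch of such an accumulator, assuming the input is simply a list of 0/1 values, makes the running mod-2 partial sums explicit:

```python
# Rate-1 accumulator, i.e. a truncated 1/(1+D) recursive coder: each output
# bit is the XOR (mod-2 sum) of all input bits seen so far.
def accumulate(bits):
    out = []
    acc = 0
    for x in bits:
        acc ^= x                 # y_i = x_1 XOR x_2 XOR ... XOR x_i
        out.append(acc)
    return out

print(accumulate([1, 0, 1, 1, 0]))   # -> [1, 1, 0, 1, 1]
```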
`
`[0023]
`
`The bits output from the outer coder 202 are
`
`scrambled before they are input to the inner coder 206.
`
`This scrambling may be performed by the interleaver 204,
`
`which performs a pseudo-random permutation of an input
`
`block v, yielding an output block w having the same length
`
`as v.
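Putting the three stages together, the repeat-interleave-accumulate chain can be sketched as below. The sketch reuses the irregular_repeat and accumulate functions from the sketches above; the fixed-seed pseudo-random permutation and the example degree profile are assumptions made only for illustration.

```python
import random

def ira_encode(info_bits, degree_profile, seed=0):
    # Outer coder: irregular repetition of the information bits.
    v = irregular_repeat(info_bits, degree_profile)
    # Interleaver: pseudo-random permutation of the repeated bits.
    w = v[:]
    random.Random(seed).shuffle(w)
    # Inner coder: rate-1 accumulator (mod-2 partial sums).
    return accumulate(w)

print(ira_encode([1, 0, 1, 1, 0, 0, 1, 0], [(0.25, 2), (0.25, 3), (0.5, 4)]))
```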
`
`[0024]
`
`The serial concatenation of the interleaved
`
`irregular repeat code and the accumulate code produces an
`
`irregular repeat and accumulate (IRA) code. An IRA code is
`
`a linear code, and as such, may be represented as a set of
`
`parity checks. The set of parity checks may be represented
`
`in a bipartite graph, called the Tanner graph, of the code.
`
`Figure 3 shows a Tanner graph 300 of an IRA code with
`
parameters (f1, ..., fJ; a), where fi ≥ 0, Σi fi = 1, and "a"
`
`7
`
`Hughes, Exh. 1006, p. 12
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`is a positive integer. The Tanner graph includes two kinds
`
`of nodes: variable nodes (open circles) and check nodes
`
`(filled circles). There are k variable nodes 302 on the
`
`left, called information nodes. There are r variable nodes
`
`306 on the right, called parity nodes. There are r =
`
(k Σi i·fi)/a check nodes 304 connected between the information
`
`nodes and the parity nodes. Each information node 302 is
`
`connected to a number of check nodes 304. The fraction of
`
`information nodes connected to exactly i check nodes is fi.
`
For example, in the Tanner graph 300, each of the f2
information nodes is connected to two check nodes,
corresponding to a repeat of q = 2, and each of the f3
information nodes is connected to three check nodes,
`
`corresponding to q = 3.
`
`[0025]
`
`Each check node 304 is connected to exactly "a"
`
`information nodes 302.
`
`In Figure 3, a = 3. These
`
`connections can be made in many ways, as indicated by the
`
`arbitrary permutation of the ra edges joining information
`
`nodes 302 and check nodes 304 in permutation block 310.
`
`These connections correspond to the scrambling performed by
`
`the interleaver 204.
`
`[0026]
`
`In an alternate embodiment, the outer coder 202
`
`may be a low-density generator matrix (LDGM) coder that
`
`performs an irregular repeat of the k bits in the block, as
`
`8
`
`Hughes, Exh. 1006, p. 13
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`shown in Figure 4. As the name implies, an LDGM code has a
`
`sparse (low-density) generator matrix. The IRA code
`
`produced by the coder 400 is a serial concatenation of the
`
`LDGM code and the accumulator code. The interleaver 204 in
`
`Figure 2 may be excluded due to the randomness already
`
`present in the structure of the LDGM code.
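For illustration, an LDGM outer code can be sketched as multiplication by a sparse 0/1 generator matrix over GF(2). The tiny matrix below is an assumed toy example, not a generator matrix taken from this application.

```python
import numpy as np

# Each row of G forms one output bit as a mod-2 sum of a few input bits; in a
# real LDGM code the matrix is large and sparse (few ones per row).
G = np.array([[1, 0, 0, 1],
              [0, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 0, 1],
              [1, 0, 1, 0]], dtype=np.uint8)     # n = 6 outputs, k = 4 inputs

def ldgm_encode(G, u):
    return (G @ u) % 2                           # v = G·u over GF(2)

print(ldgm_encode(G, np.array([1, 0, 1, 1], dtype=np.uint8)))   # -> [0 0 1 0 1 0]
```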
`
`[0027]
`
`If the permutation performed in permutation block
`
`310 is fixed, the Tanner graph represents a binary linear
`
block code with k information bits (u1, ..., uk) and r parity
bits (x1, ..., xr), as follows. Each of the information bits
`
`is associated with one of the information nodes 302, and
`
`each of the parity bits is associated with one of the
`
`parity nodes 306. The value of a parity bit is determined
`
`uniquely by the condition that the mod-2 sum of the values
`
`of the variable nodes connected to each of the check nodes
`
304 is zero. To see this, set x0 = 0. Then if the values of
the bits on the ra edges coming out of the permutation box are
(v1, ..., vra), then we have the recursive formula

    xj = xj-1 + Σ (i = 1 to a) v(j-1)a+i

for j = 1, 2, ..., r. This is in effect the encoding
`
`algorithm.
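The recursion translates directly into code; in the illustrative Python sketch below, the short permuted bit sequence and the value a = 3 are assumed example inputs.

```python
# Compute parity bits x_1..x_r from the ra permuted bits v_1..v_ra, with "a"
# edges per check node, using x_j = x_{j-1} + (sum of its "a" bits) mod 2.
def parity_from_edges(v, a):
    assert len(v) % a == 0
    x, prev = [], 0                     # x_0 = 0
    for j in range(len(v) // a):
        group = v[j * a:(j + 1) * a]    # the "a" edges entering check node j+1
        prev = (prev + sum(group)) % 2
        x.append(prev)
    return x

print(parity_from_edges([1, 0, 1, 1, 1, 0, 0, 0, 1], a=3))   # -> [0, 0, 1]
```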
`
`9
`
`Hughes, Exh. 1006, p. 14
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`[0028]
`
`Two types of IRA codes are represented in Figure
`
`3, a nonsystematic version and a systematic version. The
`
nonsystematic version is an (r, k) code, in which the
codeword corresponding to the information bits (u1, ..., uk)
is (x1, ..., xr). The systematic version is a (k+r, k) code,
in which the codeword is (u1, ..., uk; x1, ..., xr).
`
[0029]

The rate of the nonsystematic code is

    Rnsys = a / (Σi i·fi)

[0030]

The rate of the systematic code is

    Rsys = a / (a + Σi i·fi)
`
`[0031]
`
`For example, regular repeat and accumulate (RA)
`
`codes can be considered nonsystematic IRA codes with a = 1
`
`and exactly one fi equal to 1, say fq = 1, and the rest
`
`zero, in which case Rnsys simplifies to R = 1/q.
`
`[0032]
`
The IRA code may be represented using an
alternate notation. Let λi be the fraction of edges between
the information nodes 302 and the check nodes 304 that are
adjacent to an information node of degree i, and let ρi be
the fraction of such edges that are adjacent to a check
node of degree i+2 (i.e., one that is adjacent to i
information nodes). These edge fractions may be used to
represent the IRA code rather than the corresponding node
fractions. Define λ(x) = Σi λi x^(i-1) and ρ(x) = Σi ρi x^(i-1) to be
the generating functions of these sequences. The pair (λ,
ρ) is called a degree distribution. For L(x) = Σi fi x^i,

    fi = (λi / i) / Σj (λj / j)

and

    L(x) = ∫0..x λ(t) dt / ∫0..1 λ(t) dt

[0033]

The rate of the systematic IRA code given by the
degree distribution is given by

    Rate = [ 1 + (Σj ρj / j) / (Σj λj / j) ]^(-1)
`
`[0034]
`
`"Belief propagation" on the Tanner Graph
`
`realization may be used to decode IRA codes. Roughly
`
`speaking, the belief propagation decoding technique allows
`
`the messages passed on an edge to represent posterior
`
`densities on the bit associated with the variable node. A
`
`probability density on a bit is a pair of non-negative real
`
numbers p(0), p(1) satisfying p(0) + p(1) = 1, where p(0)
`
`denotes the probability of the bit being 0, p(1) the
`
`probability of it being 1. Such a pair can be represented
`
`by its log likelihood ratio, m = log(p(0)/p(1)). The
`
`11
`
`Hughes, Exh. 1006, p. 16
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`outgoing message from a variable node u to a check node v
`
`represents information about u, and a message from a check
`
node u to a variable node v represents information about v,
as shown in Figures 5A and 5B, respectively.
`
`[0035]
`
`The outgoing message from a node u to a node v
`
`depends on the incoming messages from all neighbors w of u
`
`except v.
`
If u is a variable message node, this outgoing
message is

    m(u → v) = Σ (w ≠ v) m(w → u) + m0(u)

where m0(u) is the log-likelihood message associated with u.
If u is a check node, the corresponding formula is

    tanh( m(u → v) / 2 ) = Π (w ≠ v) tanh( m(w → u) / 2 )
`
[0036]

Before decoding, the messages m(w → u) and
m(u → v) are initialized to be zero, and m0(u) is
`
`initialized to be the log-likelihood ratio based on the
`
`channel received information.
`
`If the channel is
`
`memoryless, i.e., each channel output only relies on its
`
`input, and y is the output of the channel code bit u, then
`
    m0(u) = log( p(u = 0 | y) / p(u = 1 | y) ).

After this
`
`initialization, the decoding process may run in a fully
`
`parallel and local manner.
`
`In each iteration, every
`
`12
`
`Hughes, Exh. 1006, p. 17
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`variable/check node receives messages from its neighbors,
`
`and sends back updated messages. Decoding is terminated
`
after a fixed number of iterations or upon detecting that all
`
`the constraints are satisfied. Upon termination, the
`
decoder outputs a decoded sequence based on the messages.
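For illustration, a complete, if simplified, sum-product decoding loop of this kind is sketched below. Since an IRA code is a linear code defined by parity checks, the sketch operates directly on a small assumed parity-check matrix and assumed channel log-likelihood ratios rather than on a Tanner graph built from a degree profile.

```python
import numpy as np

def decode_bp(H, llr_channel, max_iters=50):
    # Sum-product decoding on the Tanner graph of parity-check matrix H.
    # llr_channel holds the initial messages m0(u); positive favors bit 0.
    m, n = H.shape
    msg_vc = np.zeros((m, n))                      # variable-to-check messages
    msg_cv = np.zeros((m, n))                      # check-to-variable messages
    hard = (np.asarray(llr_channel) < 0).astype(int)
    for _ in range(max_iters):
        # Variable-node update: channel LLR plus all other check messages.
        for j in range(n):
            checks = np.flatnonzero(H[:, j])
            total = llr_channel[j] + msg_cv[checks, j].sum()
            for i in checks:
                msg_vc[i, j] = total - msg_cv[i, j]
        # Check-node update: tanh rule over all other variable messages.
        for i in range(m):
            vars_ = np.flatnonzero(H[i, :])
            t = np.tanh(msg_vc[i, vars_] / 2.0)
            for k, j in enumerate(vars_):
                others = np.clip(np.prod(np.delete(t, k)), -1 + 1e-12, 1 - 1e-12)
                msg_cv[i, j] = 2.0 * np.arctanh(others)
        # Tentative decision; stop early once every parity check is satisfied.
        posterior = llr_channel + msg_cv.sum(axis=0)
        hard = (posterior < 0).astype(int)
        if not np.any((H @ hard) % 2):
            break
    return hard

# Assumed toy code (3 checks, 6 variables) and mildly noisy channel LLRs.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
print(decode_bp(H, np.array([2.1, -1.9, 1.7, 2.3, -2.0, 1.5])))   # -> [0 1 0 1 1 0]
```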
`
`[0037]
`
`Thus, on various channels, iterative decoding
`
only differs in the initial messages m0(u).
`
`For example,
`
`consider three memoryless channel models:
`
`a binary erasure
`
`channel (BEC);
`
`a binary symmetric channel (BSC); and an
`
additive white Gaussian noise (AWGN) channel.
`
`[0038]
`
`In the BEC, there are two inputs and three
`
`outputs. When 0 is transmitted, the receiver can receive
`
`either 0 or an erasure E. An erasure E output means that
`
`the receiver does not know how to demodulate the output.
`
`Similarly, when 1 is transmitted, the receiver can receive
`
either 1 or E. Thus, for the BEC, y ∈ {0, E, 1}, and

    m0(u) =  +∞    if y = 0
              0    if y = E
             −∞    if y = 1
`
[0039]

In the BSC, there are two possible inputs (0, 1)
and two possible outputs (0, 1). The BSC is characterized
by a set of conditional probabilities relating all possible
outputs to possible inputs. Thus, for the BSC, y ∈ {0, 1}, and

    m0(u) =   log( (1 − p) / p )    if y = 0
             −log( (1 − p) / p )    if y = 1
`
`[0040]
`
`In the AWGN,
`
`the discrete-time input symbols X
`
`take their values in a finite alphabet while channel output
`
`symbols Y can take any values along the real line. There
`
`is assumed to be no distortion or other effects other than
`
`the addition of white Gaussian noise.
`
In an AWGN with a
Binary Phase Shift Keying (BPSK) signaling which maps 0 to
the symbol with amplitude √Es and 1 to the symbol with
amplitude −√Es, output y ∈ R, then

    m0(u) = 4y√Es / N0

where N0/2 is the noise power spectral density.
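The three initializations can be collected in one small helper. In this illustrative sketch, representing an erasure by the string "E" and passing the crossover probability p and the BPSK/noise parameters Es and N0 as arguments are assumptions, not conventions from this application.

```python
import math

def llr_bec(y):
    # Binary erasure channel: full certainty unless the symbol was erased.
    return math.inf if y == 0 else (-math.inf if y == 1 else 0.0)

def llr_bsc(y, p):
    # Binary symmetric channel with crossover probability p.
    return math.log((1 - p) / p) if y == 0 else -math.log((1 - p) / p)

def llr_awgn(y, es, n0):
    # AWGN with BPSK mapping 0 -> +sqrt(Es), 1 -> -sqrt(Es); noise PSD N0/2.
    return 4.0 * y * math.sqrt(es) / n0

print(llr_bec("E"), llr_bsc(1, 0.1), llr_awgn(0.9, 1.0, 2.0))   # 0.0 -2.197... 1.8
```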
`
`[0041]
`
`The selection of a degree profile for use in a
`
`particular transmission channel is a design parameter,
`
`which may be affected by various attributes of the channel.
`
`The criteria for selecting a particular degree profile may
`
`include, for example, the type of channel and the data rate
`
`on the channel. For example, Table 1 shows degree profiles
`
`14
`
`Hughes, Exh. 1006, p. 19
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`that have been found to produce good results for an AWGN
`
`channel model.
`
TABLE 1

  a = 2:  nonzero λi:  0.139025, 0.2221555, 0.638820
          Rate = 0.333364    σGA = 1.1840    σ* = 1.1981    (Eb/N0)* = 0.190 dB     S.L. = −0.4953 dB

  a = 3:  nonzero λi:  0.078194, 0.128085, 0.160813, 0.036178, 0.108828, 0.487902
          Rate = 0.333223    σGA = 1.2415    σ* = 1.2607    (Eb/N0)* = −0.250 dB    S.L. = −0.4958 dB

  a = 4:  nonzero λi:  0.054485, 0.104315, 0.126755, 0.229816, 0.016484, 0.450302, 0.017842
          Rate = 0.333218    σGA = 1.2615    σ* = 1.2780    (Eb/N0)* = −0.371 dB    S.L. = −0.4958 dB

  (The nonzero λi terms of each profile fall among λ2, λ3, λ5, λ6, λ10, λ11, λ12, λ13, λ14, λ16, λ27, and λ28.)
`
`[0042]
`
`Table 1 shows degree profiles yielding codes of
`
`rate approximately 1/3 for the AWGN channel and with a = 2,
`
`3, 4. For each sequence, the Gaussian approximation noise
`
`threshold, the actual sum-product decoding threshold and
`
`15
`
`Hughes, Exh. 1006, p. 20
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
the corresponding energy per bit (Eb) to noise power (N0) ratio
`
`in dB are given. Also listed is the Shannon limit (S.L.).
`
`[0043]
`
`As the parameter "a" is increased, the
`
`performance improves. For example, for a = 4, the best
`
code found has an iterative decoding threshold of Eb/N0 =
−0.371 dB, which is only 0.12 dB above the Shannon limit.
`
`[0044]
`
`The accumulator component of the coder may be
`
`replaced by a "double accumulator" 600 as shown in Figure
`
`6. The double accumulator can be viewed as a truncated
`
rate 1 convolutional coder with transfer function 1/(1 + D + D²).
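A minimal Python sketch of that variant, assuming the same list-of-bits input as the earlier accumulator sketch: the 1/(1 + D + D²) recursion keeps the two previous output bits instead of one.

```python
# Truncated rate-1 coder with transfer function 1/(1 + D + D^2): each output
# bit is the XOR of the current input and the two previous output bits.
def double_accumulate(bits):
    out, y1, y2 = [], 0, 0          # y_{i-1} and y_{i-2}, initialized to zero
    for x in bits:
        y = x ^ y1 ^ y2
        out.append(y)
        y2, y1 = y1, y
    return out

print(double_accumulate([1, 0, 1, 1, 0]))   # -> [1, 1, 1, 1, 0]
```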
`
`[0045]
`
Alternatively, a pair of accumulators may be
`
`added, as shown in Figure 7. There are three component
`
`codes: the "outer" code 700, the "middle" code 702, and the
`
`"inner" code 704. The outer code is an irregular
`
`repetition code, and the middle and inner codes are both
`
`accumulators.
`
[0046]
`
`IRA codes may be implemented in a variety of
`
`channels, including memoryless channels, such as the BEC,
`
`BSC, and AWGN, as well as channels having non-binary input,
`
`non-symmetric and fading channels, and/or channels with
`
`memory.
`
`[0047]
`
`A number of embodiments have been described.
`
`Nevertheless, it will be understood that various
`
`16
`
`Hughes, Exh. 1006, p. 21
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`modifications may be made without departing from the spirit
`
`and scope of the invention. Accordingly, other embodiments
`
`are within the scope of the following claims.
`
`17
`
`Hughes, Exh. 1006, p. 22
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`WHAT IS CLAIMED IS:
`
`1.
`
`A method of encoding a signal, comprising:
`
`receiving a block of data in the signal to be encoded,
`
`the block of data including information bits;
`
`performing a first encoding operation on at least some
`
`of the information bits, the first encoding operation being
`
`a linear transform operation that generates L transformed
`
`bits; and
`
`performing a second encoding operation using the L
`
`transformed bits as an input, the second encoding operation
`
`including an accumulation operation in which the L
`
`transformed bits generated by the first encoding operation
`
`are accumulated, said second encoding operation producing
`
`at least a portion of a codeword,
`
`wherein L is two or more.
`
`2.
`
`The method of claim 1, further comprising:
`
`outputting the codeword, wherein the codeword
`
`comprises parity bits.
`
`3.
`
`The method of claim 2, wherein outputting the codeword
`
`comprises:
`
`outputting the parity bits; and
`
`outputting at least some of the information bits.
`
`18
`
`Hughes, Exh. 1006, p. 23
`
`
`
`Attorney Docket No.: 06618-637003/3220-C-C
`
`4.
`
`The method of claim 3, wherein outputting the codeword
`
`comprises:
`
`outputting the parity bits following the information
`
`bits.
`
`5.
`
`The method of claim 2, wherein performing the first
`
`encoding operation comprises transforming the at least some
`
`of the information bits via a low density generator matrix
`
`transformation.
`
`6.
`
`The method of claim 5, wherein generating each of the
`
`L transformed bits comprises mod-2 or exclusive-OR summing
`
`of bits in a subset of the information bits.
`
`7.
`
`The method of claim 6, wherein each of the subsets of
`
`the information bits includes a same number of the
`
`information bits.
`
`8.
`
`The method of claim 6, wherein at least two of the
`
`information bits appear in three subsets of the information
`
`bits.
`
`9.
`
`The method of claim 6, wherein a number of s