`FISH & RICHARDSON P.C.
`
May 18, 2001
`
`Attorney Docket No.: 06618-637001/CIT3220
`
`Box Patent Application
`Commissioner for Patents
`Washington, DC 20231
`
`4350 La Jolla Village Drive
`Suite 500
`San Diego, California
`92122
`
Telephone
858 678-5070

Facsimile
858 678-5099
`
`Web Site
`www.fr.com
`
`Presented for filing is a new patent application claiming priority from a provisional
`patent application of:
`
`•
`
`Applicant: HUI JIN, AAMOD KHANDEKAR AND ROBERT J. McELIECE
`
BOSTON · DALLAS · DELAWARE · NEW YORK · SAN DIEGO · SILICON VALLEY · TWIN CITIES · WASHINGTON, DC
`
Title: SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES
`
Enclosed are the following papers, including those required to receive a filing date under 37 CFR §1.53(b):
`
Specification: 16 pages
Claims: 6 pages
Abstract: 1 page
Declaration: [To be Filed at a Later Date]
Drawing(s): 5 pages
`
`Enclosures:
`-Postcard.
`
`Under 35 USC §120, this application claims the benefit of prior U.S. application No.
`____ , filed August 18, 2000, and entitled "Interleaved Serial Concatenation
`Forming Turbo-Like Codes".
`
`CERTIFICATE OF MAILING BY EXPRESS MAIL
`
Express Mail Label No. EL688320048US

I hereby certify under 37 CFR § 1.10 that this correspondence is being deposited with the United States Postal Service as Express Mail Post Office to Addressee with sufficient postage on the date indicated below and is addressed to the Commissioner for Patents, Washington, D.C. 20231.

Date of Deposit

Signature

Typed or Printed Name of Person Signing Certificate
`
`
Under 35 USC § 119(e)(1), this application claims the benefit of prior U.S. provisional application 60/205,095, filed May 18, 2000.
`
`This application is entitled to small entity status.
`
Basic filing fee: $0
Total claims in excess of 20 times $9: $0
Independent claims in excess of 3 times $40: $0
Fee for multiple dependent claims: $0
Total filing fee: $0
`
Under 37 CFR §1.53(f), no filing fee is being paid at this time.
`
`If this application is found to be incomplete, or if a telephone conference would
`otherwise be helpful, please call the undersigned at (858) 678-5070.
`
`Kindly acknowledge receipt of this application by returning the enclosed postcard.
`
`Please send all correspondence to:
`
`SCOTT C. HARRIS
`Fish & Richardson P.C.
`PTO Customer No. 20985
`4350 La Jolla Village Drive, Suite 500
`San Diego, CA 92122
`
Respectfully submitted,

Scott C. Harris
Reg. No. 32,030
`
`Enclosures
`SCH/nsg
`10111374.doc
`
`
`
`
`PTO/SB/35 (11-00)
`Approved for use through 10/31/2002 OMB 0651-0031
U.S. Patent and Trademark Office: U.S. DEPARTMENT OF COMMERCE
`Under the Paperwork Reduction Act of 1995, no persons are required to respond to a collection of information unless it displays a valid OMB control number.
`
`REQUEST AND CERTIFICATION
`UNDER
`35 U.S.C. 122(b)(2)(B)(i)
`
First Named Inventor: Hui Jin et al.

Title: SERIAL CONCATENATION OF INTERLEAVED CONVOLUTIONAL CODES FORMING TURBO-LIKE CODES

Attorney Docket Number: 06618-637001
`
I hereby certify that the invention disclosed in the attached application has not and will not be the subject of an application filed in another country, or under a multilateral agreement, that requires publication at eighteen months after filing. I hereby request that the attached application not be published under 35 U.S.C. 122(b).

Scott C. Harris
`
This request must be signed in compliance with 37 CFR 1.33(b) and submitted with the application upon filing.

Applicant may rescind this nonpublication request at any time. If applicant rescinds a request that an application not be published under 35 U.S.C. 122(b), the application will be scheduled for publication at eighteen months from the earliest claimed filing date for which a benefit is claimed.
`
If applicant subsequently files an application directed to the invention disclosed in the attached application in another country, or under a multilateral international agreement, that requires publication of applications eighteen months after filing, the applicant must notify the United States Patent and Trademark Office of such filing within forty-five (45) days after the date of the filing of such foreign or international application. Failure to do so will result in abandonment of this application (35 U.S.C. 122(b)(2)(B)(iii)).
`
Burden Hour Statement: This collection of information is required by 37 CFR 1.213(a). The information is used by the public to request that an application not be published under 35 U.S.C. 122(b) (and by the PTO to process that request). Confidentiality is governed by 35 U.S.C. 122 and 37 CFR 1.14. This form is estimated to take 6 minutes to complete. This time will vary depending upon the needs of the individual case. Any comments on the amount of time you are required to complete this form should be sent to the Chief Information Officer, U.S. Patent and Trademark Office, Washington, DC 20231. DO NOT SEND FEES OR COMPLETED FORMS TO THIS ADDRESS. SEND TO: Assistant Commissioner for Patents, Washington, DC 20231.
`
`
`
`
Attorney's Docket No.: 06618/637001 / CIT3220
`
`APPLICATION
`
`FOR
`
`UNITED STATES LETTERS PATENT
`
`TITLE:
`
`SERIAL CONCATENATION OF INTERLEAVED
`CONVOLUTIONAL CODES FORMING TURBO-LIKE
`CODES
`
`APPLICANT:
`
`HUI JIN, AAMOD KHANDEKAR AND ROBERT J.
`MCELIECE
`
`CERTIFICATE OF MAILING BY EXPRESS MAIL
`
Express Mail Label No. EL688320048US

I hereby certify under 37 CFR § 1.10 that this correspondence is being deposited with the United States Postal Service as Express Mail Post Office to Addressee with sufficient postage on the date indicated below and is addressed to the Commissioner for Patents, Washington, D.C. 20231.

Date of Deposit

Signature

Gildardo Vargas
Typed or Printed Name of Person Signing Certificate
`
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`SERIAL CONCATENATION OF INTERLEAVED
`CONVOLUTIONAL CODES FORMING TURBO-LIKE
`CODES
`
`CROSS-REFERENCE TO RELATED APPLICATIONS
`
`[0001]
`
`This application claims priority to U.S.
`
`Provisional Application Serial No. 60/205,095, filed on May
`
`18, 2000, and to U.S. Application Serial No.
`
`-------
`
`, filed
`
`on August 18, 2000 and entitled Interleaved Serial
`
`Concatenation Forming Turbo-Like Codes.
`
`GOVERNMENT LICENSE RIGHTS
`
`[0002]
`
`The U.S. Government has a paid-up license in this
`
`invention and the right in limited circumstances to require
`
`the patent owner to license others on reasonable terms as
`
`provided for by the terms of Grant No. CCR-9804793 awarded
`
`by the National Science Foundation.
`
`BACKGROUND
`
`[0003]
`
`Properties of a channel affect the amount of data
`
`that can be handled by the channel. The so called "Shannon
`
`limit" defines the theoretical limit of the amount of data
`
`that a channel can carry.
`
`1
`
`Hughes, Exh. 1002, p. 5
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0004]
`
`Different techniques have been used to increase
`
`the data rate that can be handled by a channel.
`
`"Near
`
`Shannon Limit Error-Correcting Coding and Decoding: Turbo
`
`Codes, 11 by Berrou et al. ICC, pp 1064-1070, (1993),
`
`described a new 11 turbo code 11
`
`technique that has
`
`revolutionized the field of error correcting codes. Turbo
`
`codes have sufficient randomness to allow reliable
`
`communication over the channel at a high data rate near
`
`capacity. However, they still retain sufficient structure
`
`to allow practical encoding and decoding algorithms.
`
`Still, the technique for encoding and decoding turbo codes
`
`can be relatively complex.
`
`[0005]
`
`A standard turbo coder 100 is shown in Figure 1.
`
`A block of k information bits is input directly to a first
`
`coder 102. A k bit interleaver 106 also receives the k
`
`bits and interleaves them prior to applying them to a
`
`second coder 104. The second coder produces an output that
`
`has more bits than its input, that is, it is a coder with
`
`rate that is less than 1. The coders 102, 104 are
`
`typically recursive convolutional coders.
`
`[0006]
`
`Three different items are sent over the channel
`
`150:
`
`the original k bits, first encoded bits 110, and
`
`second encoded bits 112. At the decoding end, two decoders
`
`are used:
`
`a first constituent decoder 160 and a second
`
`2
`
`Hughes, Exh. 1002, p. 6
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`constituent decoder 162. Each receives both the original k
`
`bits, and one of the encoded portions 110, 112. Each
`
`decoder sends likelihood estimates of the decoded bits to
`
`the other decoders. The estimates are used to decode the
`
`uncoded information bits as corrupted by the noisy channel.
`
`SUMMARY
`
`[0007]
`
`A coding
`
`according to an embodiment is
`
`configured to receive a portion of a signal to be encoded 1
`
`for example, a data block including a fixed number of bits.
`
`The coding system includes an outer coder, which repeats
`
`and scrambles bits in the data block. The data block is
`
`apportioned into two or more sub-blocks, and bits in
`
`different sub-blocks are repeated a different number of
`
`times according to a selected degree profile. The outer
`
`coder may include a repeater with a variable rate and an
`
`interleaver. Alternat
`
`, the outer coder may be a low-
`
`density generator matrix (LDGM) coder.
`
`[0008]
`
`The
`
`and scrambled bits are input to an
`
`inner coder that has a rate substantially close to one.
`
`The inner coder may include one or more accumulators that
`
`perform recursive modulo two addition
`
`on the
`
`input bit stream.
`
`3
`
`Hughes, Exh. 1002, p. 7
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0009]
`
`The encoded data output from the inner coder may
`
`be transmitted on a channel and decoded in linear time at a
`
`destination using iterative decoding techniques. The
`
`decoding techniques may be based on a Tanner graph
`
`representation of the code.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`[0010]
`
`Figure 1 is a schematic diagram of a prior "turbo
`
`codefl system.
`
`[0011]
`
`Figure 2 is a schematic diagram of a coder
`
`according to an embodiment.
`
`[0012]
`
`Figure 3 is a Tanner
`
`for an irregular
`
`and accumulate
`
`coder.
`
`[0013]
`
`Figure 4 is a schematic diagram of an IRA coder
`
`according to an embodiment.
`
`[0014]
`
`Figure SA illustrates a message from a variable
`
`node to a check node on the Tanner graph of
`
`3.
`
`[0015]
`
`Figure SB illustrates a message from a check node
`
`to a variable node on the Tanner graph of Figure 3.
`
`[0016]
`
`Figure 6 is a schematic diagram of a coder
`
`according to an alternate embodiment.
`
`[0017]
`
`Figure 7 is a schematic diagram of a coder
`
`according to another alternate embodiment.
`
`4
`
`Hughes, Exh. 1002, p. 8
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`DETAILED DESCRIPTION
`
`[0018]
`
`Figure 2 illustrates a coder 200 according to an
`
`embodiment. The coder 200 may include an outer coder 202,
`
`an interleaver 204, and inner coder 206. The coder may be
`
`used to format blocks of data for transmission, introducing
`
`redundancy into the stream of data to protect the data from
`
`loss due to transmission errors. The encoded data may then
`
`be decoded at a destination in linear time at rates that
`
`may approach the channel capacity.
`
`[0019]
`
`The outer coder 202 receives the uncoded data.
`
`The data may be partitioned into blocks of fixed size, say
`
`k bits. The outer coder may be an (n,k) binary linear
`
`block coder, where
`
`n > k. The coder accepts as input a
`
`block u of k data bits and produces an output block v of n
`
`data bits. The mathematical relationship between u and v
`
`is v=T 0u, where To is an n x k matrix, and the rate of the
`
`coder is k/n.
`
`[0020]
`
`The rate of the coder may be irregular, that is,
`
`the value of T0 is not constant, and may differ for sub(cid:173)
`
`blocks of bits in the data block.
`
`In an embodiment, the
`
`outer coder 202 is a repeater that repeats the k bits in a
`
`block a number of times q to produce a block with n bits,
`
`where n
`
`qk. Since the repeater has an irregular output,
`
`different bits in the block may be repeated a different
`
`5
`
`Hughes, Exh. 1002, p. 9
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`number of times. For example/ a fraction of the bits in
`
`the block may be repeated two times 1 a fraction of bits may
`
`be repeated three times, and the remainder of bits may be
`
`repeated four times. These fractions define a degree
`
`sequence/ or degree profile/ of the code.
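
By way of illustration only (this example is not part of the original application), the irregular repetition and scrambling described above could be sketched in Python roughly as follows; the degree-profile format and the function names are assumptions made for the example.

import random

def irregular_repeat(bits, degree_profile):
    # Repeat bits in each sub-block a different number of times.
    # degree_profile is a list of (fraction, repeat_count) pairs, e.g.
    # [(0.5, 2), (0.3, 3), (0.2, 4)]; the fractions should sum to 1.
    k = len(bits)
    out = []
    start = 0
    for index, (fraction, q) in enumerate(degree_profile):
        end = k if index == len(degree_profile) - 1 else min(k, start + round(fraction * k))
        for bit in bits[start:end]:
            out.extend([bit] * q)      # repeat this bit q times
        start = end
    return out

def interleave(bits, seed=0):
    # Pseudo-random permutation of the repeated bits (the role of interleaver 204).
    order = list(range(len(bits)))
    random.Random(seed).shuffle(order)
    return [bits[i] for i in order]

For a block of k = 10 bits and the profile above, the repeater would emit 5*2 + 3*3 + 2*4 = 27 bits, which are then permuted before entering the inner coder.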
`
`[0021]
`
`The inner coder 206 may be a linear rate-1 coder/
`
`which means that the n-bit output block x can be written as
`
`x=Trw, where T1 is a nonsingular n x n matrix. The inner
`
`coder 210 can have a rate that is close to 1 1 e.g. 1 within
`
`50% 1 more preferably 10% and perhaps even more preferably
`
`within 1% of 1.
`
`[0022]
`
`In an embodiment, the inner coder 206 is an
`
`accumulator/ which produces outputs that are the modulo two
`
`(mod-2) partial sums of its inputs. The accumulator may be
`
`a truncated rate-1 recursive convolutional coder with the
`
`transfer function 1/(1+D). Such an accumulator may be
`
`considered a block coder whose input block [x11 ... 1Xnl and
`
`output block [y1, ... ,ynl are related by the formula
`
`Y2
`
`Y3
`
`X1 Ei1 x2
`
`X1 Ei1 x2 Ei1 X3
`
`6
`
`Hughes, Exh. 1002, p. 10
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`where "$ 11 denotes mod-2, or exclusive-OR (XOR), addition.
`
`An advantage of this system is that only mod-2 addition is
`
`necessary for the accumulator. The accumulator may be
`
`embodied using only XOR gates, which may simplify the
`
`design.
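
As an informal illustration (not taken from the application itself), such an accumulator reduces to a running XOR over the input block, for example:

def accumulate(bits):
    # Rate-1 inner code: y_j is the mod-2 (XOR) partial sum x_1 ^ ... ^ x_j,
    # i.e. a truncated recursive convolutional code with transfer function 1/(1+D).
    out = []
    acc = 0
    for x in bits:
        acc ^= x          # only XOR (mod-2 addition) is needed
        out.append(acc)
    return out

For example, accumulate([1, 0, 1, 1]) returns [1, 1, 0, 1].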
`
`[0023]
`
`The bits output from the outer coder 202 are
`
`scrambled before they are input to the inner coder 206.
`
`This scrambling may be performed by the interleaver 204,
`
`which performs a pseudo-random permutation of an input
`
`block v, yielding an output block w having the same length
`
`as v.
`
`[0024]
`
`The serial concatenation of the interleaved
`
`irregular repeat code and the accumulate code produces an
`
`irregular repeat and accumulate (IRA) code. An IRA code is
`
`a linear code, and as such, may be represented as a set of
`
`parity checks. The set of parity checks may be
`
`in a bipartite graph, called the Tanner graph, of the code.
`
`Figure 3 shows a Tanner graph 300 of an IRA code with
`
`parameters ( f1, ... , fj; a) I where f1 2 0, I:ifi = 1 and "au
`
`is a positive integer. The Tanner graph includes two kinds
`
`of nodes: variable nodes (open circles) and check nodes
`
`(filled circles) . There are k variable nodes 302 on the
`
`left 1 called information nodes. There are r variable nodes
`
`306 on the right 1 called parity nodes. There are r =
`
`7
`
`Hughes, Exh. 1002, p. 11
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`/a check nodes 304 connected between the information
`
`nodes and the parity nodes. Each information node 302 is
`
`connected to a number of check nodes 304. The fraction of
`
`information nodes connected to exactly i check nodes is
`
`For example, in the Tanner graph 300, each of the f 2
`
`information nodes are connected to two check nodes,
`
`corresponding to a repeat of q = 2, and each of the
`
`information nodes are connected to three check nodes,
`
`corresponding to q = 3.
`
`[0025]
`
`Each check node 304 is connected to exactly "a"
`
`information nodes 302.
`
`In
`
`3, a
`
`3. These
`
`connections can be made in many ways, as indicated by the
`
`arbitrary permutation of the ra edges joining information
`
`nodes 302 and check nodes 304 in permutation block 310.
`
`These connections correspond to the scrambling performed by
`
`the interleaver 204.
`
`[0026]
`
`In an alternate embodiment, the outer coder 202
`
`may be a low-density generator matrix (LDGM) coder that
`
`performs an irregular repeat of the k bits in the block, as
`
`shown in Figure 4. As the name implies, an LDGM code has a
`
`sparse (low-density)
`
`matrix. The IRA code
`
`produced by the coder 400 is a serial concatenation of the
`
`LDGM code and the accumulator code. The interleaver 204 in
`
`8
`
`Hughes, Exh. 1002, p. 12
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`Figure 2 may be excluded due to the randomness already
`
`present in the structure of the LDGM code.
`
`[0027]
`
`If the permutation performed in permutation block
`
`310 is fixed, the Tanner graph represents a binary linear
`
`block code with k information bits (u1 , . . . , Uk) and r parity
`
`bits (x1, ... ,xr), as follows. Each of the information bits
`
`is associated with one of the information nodes 302, and
`
`each of the parity bits is associated with one of the
`
`parity nodes 306. The value of a parity bit is determined
`
`uniquely by the condition that the mod-2 sum of the values
`
`of the variable nodes connected to each of the check nodes
`
`304 is zero. To see this 1 set x 0=0. Then if the values of
`
`the bits on the ra
`
`coming out the
`
`(v1 , ••• , Vra), then we have the recursive formula
`..
`+ L: v(j-l.) "+.i.
`
`Xj =
`
`.i.-l.
`
`for j
`
`1, 2, ... , r. This is in effect the encoding
`
`algorithm.
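
Purely as an illustrative sketch (not the application's own code), the recursion can be applied directly once the repeated and permuted bits are available; the helper below assumes the edge values v1, ..., vra are supplied as a flat Python list.

def encode_parities(v, a):
    # Compute parity bits x_1..x_r from the permuted edge values v_1..v_{ra},
    # using x_j = x_{j-1} XOR v_{(j-1)a+1} XOR ... XOR v_{(j-1)a+a}, with x_0 = 0.
    assert len(v) % a == 0
    r = len(v) // a
    x = []
    prev = 0                           # x_0 = 0
    for j in range(r):
        s = prev
        for bit in v[j * a:(j + 1) * a]:
            s ^= bit                   # mod-2 sum of the a edge values and x_{j-1}
        x.append(s)
        prev = s
    return x

The systematic codeword is then (u1, ..., uk, x1, ..., xr), while the nonsystematic codeword is (x1, ..., xr) alone.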
`
`[ 0 02 8]
`
`Two types of IRA codes are represented in Figure
`
`3, a nonsystematic version and a systematic version. The
`
`nonsystematic version is an (r,k) code, in which the
`
`codeword corresponding to the information bits (u1 , . . . ,uk)
`
`is (x1 , ... , Xr). The systematic version is a (k+r, k) code,
`
`in which the codeword is (u1, ... , uki x1, ... , Xr).
`
`9
`
`Hughes, Exh. 1002, p. 13
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0029]
`
`The rate of the nonsystematic code is
`
`[0030]
`
`The rate of the systematic code is
`
`[0031]
`
`For example, regular repeat and accumulate (RA)
`
`codes can be considered nonsystematic IRA codes with a = 1
`
`and exactly one fi equal to 1, say fq = 1, and the rest
`
`zero, in which case Rnsys simplifies to R = 1/q.
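
As a purely illustrative numerical check (an example added here, not taken from the original disclosure): with a = 2 and a profile in which half of the information bits are repeated three times and half four times (f3 = f4 = 1/2), Σi i·fi = 3.5, so Rnsys = 2/3.5 ≈ 0.57 and Rsys = 2/(2 + 3.5) ≈ 0.36; and with a = 1 and fq = 1 the nonsystematic rate reduces to 1/q, as noted above.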
`
`[0032]
`
`The IRA code may be represented using an
`
`alternate notation. Let Ai be the fraction of edges between
`
`the information nodes 302 and the check nodes 304 that are
`
`adjacent to an information node of degree i, and let Pi be
`
`the fraction of such edges that are adjacent to a check
`
`node of degree i+2 (i.e., one that is adjacent to i
`
`information nodes) . These edge fractions may be used to
`
`represent the IRA code rather than the corresponding node
`
`fractions. Define A (x) = ~iAiXi- 1 and p (x) = ~iPiXi- 1 to be
`
`the generating functions of these sequences. The pair (A,
`
`p) is called a degree distribution. For L(x) = ~ifiXi,
`
`10
`
`Hughes, Exh. 1002, p. 14
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`J.i. I i
`Lj ).j I j
`
`L (x)
`
`*2.
`
`= Jo ). (t) dt/ j
`
`-.J.
`
`0
`
`). (t) dt
`
`[0033]
`
`The rate of the systematic IRA code given by the
`
`degree distribution is given by
`
`Rate
`
`[0034]
`
`11 Belief propagation 11 on the Tanner Graph
`
`realization may be used to decode IRA codes. Roughly
`
`speaking, the belief propagation decoding technique allows
`
`the messages passed on an edge to represent posterior
`
`densities on the bit associated with the variable node. A
`
`probability density on a bit is a pair of non-negative real
`
`numbers p(O) 1 p(1) satisfying p(O) + p(1)
`
`1 1 where p ( 0)
`
`denotes the probability of the bit being 0 1 p(1) the
`
`probability of it being 1. Such a pair can be represented
`
`by its log likelihood ratio, m = log(p(O)/p(1)). The
`
`outgoing message from a variable node u to a check node v
`
`represents information about U 1 and a message from a check
`
`node u to a variable node v represents information about u,
`
`as shown in Figures 5A and 5B, respectively.
`
`11
`
`Hughes, Exh. 1002, p. 15
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0035]
`
`The outgoing message from a node u to a node v
`
`depends on the incoming messages from all
`
`w of u
`
`except v.
`
`If u is a variable message node, this outgoing
`
`message is
`m (u -7 v = L m (w -7 u) + m0 {u)
`where mo(u) is the log-likelihood message associated
`
`with u.
`
`If u is a check node, the corresponding formula is
`
`tanh _.;._____ =
`-7 v)
`m
`2
`
`[0036]
`
`Before decoding, the messages m(w ~ u) and m(u ~
`
`v) are initialized to be zero, and m0 (u) is initialized to
`
`be the log-likelihood ratio based on the channel received
`
`information.
`
`If the channel is memoryless, i.e., each
`
`channel output only relies on its input, and y is the
`
`output of the channel code bit u, then m0 (u) = log(p(u
`
`0 I y) /p (u
`
`1ly)). After this initialization, the decoding
`
`process may run in a fully parallel and local manner.
`
`In
`
`each iteration, every variable/check node receives messages
`
`from its neighbors, and sends back updated messages.
`
`Decoding is terminated after a fixed number of iterations
`
`or detecting that all the constraints are satisfied. Upon
`
`termination, the decoder outputs a decoded sequence based
`
`on the messages m (u) = Iwm (w ~ u) .
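
The following Python sketch (an illustrative reconstruction of the message-passing rules above, not code from the application; the adjacency-list data structures and names are assumptions) shows one decoding iteration.

import math

def bp_iteration(check_to_var, m0, var_neighbors, check_neighbors):
    # check_to_var[(v, u)] : message from check node v to variable node u (previous iteration)
    # m0[u]                : channel log-likelihood ratio for variable node u
    # var_neighbors[u]     : check nodes adjacent to variable node u
    # check_neighbors[v]   : variable nodes adjacent to check node v

    # Variable-to-check update: m(u -> v) = m0(u) + sum of incoming messages except from v.
    var_to_check = {}
    for u, checks in var_neighbors.items():
        for v in checks:
            var_to_check[(u, v)] = m0[u] + sum(check_to_var[(w, u)] for w in checks if w != v)

    # Check-to-variable update: tanh(m(v -> u)/2) = product of tanh(m(w -> v)/2) over w != u.
    new_check_to_var = {}
    for v, variables in check_neighbors.items():
        for u in variables:
            prod = 1.0
            for w in variables:
                if w != u:
                    prod *= math.tanh(var_to_check[(w, v)] / 2.0)
            prod = max(min(prod, 0.999999), -0.999999)   # keep atanh in its domain
            new_check_to_var[(v, u)] = 2.0 * math.atanh(prod)

    return var_to_check, new_check_to_var

After the final iteration, each bit u would be decided from the sign of the combined message m(u) described above.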
`
`12
`
`Hughes, Exh. 1002, p. 16
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0037]
`
`Thus, on various channels, iterative decoding
`
`only differs in the initial messages m0 (u). For example,
`
`consider three memoryless channel models:
`
`a binary erasure
`
`channel (BEC) ;
`
`a binary symmetric channel (BSC) ; and an
`
`additive white Gaussian noise (AGWN) channel.
`
`[0038]
`
`In the BEC, there are two inputs and three
`
`outputs. When 0 is transmitted, the receiver can receive
`
`either 0 or an erasure E. An erasure E output means that
`
`the receiver does not know how to demodulate the output.
`
`Similarly, when 1 is transmitted/ the receiver can receive
`
`either 1 or E. Thus 1 for the BEC 1 y E {0 1 E, 1} 1 and
`
`if y = 0
`if y = E
`if y = 1
`
`[0039]
`
`In the BSC,
`
`there are two possible inputs (0,1)
`
`and two possible outputs (0 1 1). The BSC is characterized
`
`by a set of conditional probabilities relating all possible
`
`outputs to possible inputs. Thus, for the BSC y E {0, 1},
`
`if
`
`y = 0
`
`if
`
`y =
`
`1
`
`13
`
`l 1- p
`
`log - -
`p
`1-p
`-log-P-
`
`m0 (u)
`
`and
`
`Hughes, Exh. 1002, p. 17
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`[0040]
`
`In the AWGN,
`
`the discrete-time input symbols X
`
`take their values in a finite
`
`while channel output
`
`symbols Y can take any values along the real line. There
`
`is assumed to be no distortion or other effects other than
`
`the addition of white Gaussian noise.
`
`In an AWGN with a
`
`Phase Shift Keying (BPSK)
`
`ing which maps 0 to
`
`the symbol with amplitude ~ and 1 to the symbol with
`
`amplitude -~, output y E R, then
`
`where N0/2 is the noise power spectral density.
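
To make the three initializations concrete, here is an illustrative Python sketch (the function names and the parameters p, Es and N0 simply mirror the quantities in the text; it is a sketch, not part of the application):

import math

def llr_bec(y):
    # Binary erasure channel: 0 and 1 arrive without error; an erasure 'E' carries no information.
    return {0: math.inf, 'E': 0.0, 1: -math.inf}[y]

def llr_bsc(y, p):
    # Binary symmetric channel with crossover probability p.
    return math.log((1 - p) / p) if y == 0 else -math.log((1 - p) / p)

def llr_awgn(y, Es, N0):
    # BPSK over the AWGN channel (0 -> +sqrt(Es), 1 -> -sqrt(Es)).
    return 4.0 * y * math.sqrt(Es) / N0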
`
`[0041]
`
`The selection of a degree profile for use in a
`
`particular transmission channel is a
`
`which may be affected by various attributes of the channel.
`
`The criteria for selecting a particular degree profile may
`
`include, for example, the type of channel and the data rate
`
`on the channel. For
`
`, Table 1 shows
`
`iles
`
`that have been found to produce good results for an AWGN
`
`channel model.
`
`14
`
`Hughes, Exh. 1002, p. 18
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`2
`
`0.139025
`
`0.2221555
`
`0.638820
`
`3
`
`0.078194
`
`0.128085
`
`0.160813
`
`0.036178
`
`0.108828
`
`0.487902
`
`0.333364
`
`0.333223
`
`1.1840
`
`1.1981
`
`0.190
`
`1.2415
`
`1.2607
`
`-0.250
`
`4
`
`0.054485
`
`0.104315
`
`0.126755
`
`0.229816
`
`0.016484
`
`0.450302
`
`0.017842
`
`0.333218
`
`1. 2615
`
`1.2780
`
`-0.371
`
`a
`
`A-2
`
`A-3
`
`A-5
`
`A-6
`
`A-10
`
`/..11
`
`A-12
`
`A-13
`
`A-14
`
`A-16
`
`A-27
`
`A-28
`
`Rate
`
`crGA
`
`cr*
`
`(Eb/NO)*(dB)
`
`S.L.
`
`(dB)
`
`-0.4953
`
`-0.4958
`
`-0.4958
`
`[0042]
`
`Table 1 shows degree profiles yielding codes of
`
`rate approximately 1/3 for the AWGN channel and with a
`
`2,
`
`3, 4. For each sequence, the Gaussian approximation noise
`
`threshold, the actual sum-product decoding threshold and
`
`the corresponding energy per bit (Eb)-noise power (N0 ) ratio
`
`in dB are given. Also listed is the Shannon limit (S.L.).
`
`[0043]
`
`As the parameter "a" is increased, the
`
`performance improves. For example, for a
`
`4, the best
`
`15
`
`Hughes, Exh. 1002, p. 19
`
`
`
`Docket No.: 06618/637001/CIT3220
`
`code found has an iterative decoding threshold of Eb/N0
`
`0.371 dB, which is only 0.12 dB above the Shannon limit.
`
`[0044]
`
`The accumulator component of the coder may be
`
`replaced by a "double accumulator" 600 as shown in Figure
`
`6. The double accumulator can be viewed as a truncated
`
`rate 1 convolutional coder with transfer function 1/(1 + D
`
`+ D2) •
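
By way of illustration only (a sketch of the 1/(1 + D + D²) recursion stated above, not code from the application):

def double_accumulate(bits):
    # Truncated rate-1 coder with transfer function 1/(1 + D + D^2):
    # each output is y_j = x_j XOR y_{j-1} XOR y_{j-2}, with y_0 = y_{-1} = 0.
    y1, y2 = 0, 0                      # y_{j-1} and y_{j-2}
    out = []
    for x in bits:
        y = x ^ y1 ^ y2
        out.append(y)
        y1, y2 = y, y1
    return out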
`
`[0045]
`
`, a
`
`of accumulators may be the
`
`added, as shown in Figure 7. There are three component
`
`codes: the "outer" code 700, the "middle" code 702, and the
`
`"inner" code 704. The outer code is an irregular
`
`repetition code, and the middle and inner codes are both
`
`accumulators.
`
`[0046]
`
`IRA codes may be implemented in a variety of
`
`channels, including memoryless channels, such as the BEC,
`
`BSC, and AWGN, as well as channels having non-binary input,
`
`non-symmetric and fading channels, and/or channels with
`
`memory.
`
`[004 7]
`
`A number of embodiments have been described.
`
`Nevertheless, it will be understood that various
`
`modifications may be made without departing from the spirit
`
`and scope of the invention. Accordingly, other embodiments
`
`are within the scope of the following claims.
`
`16
`
`Hughes, Exh. 1002, p. 20
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`CLAIMS
`
`1.
`
`A method of encoding a
`
`compri
`
`obtaining a block of data in the signal to be encoded;
`
`partitioning said data block into a plurality of sub(cid:173)
`
`blocks, each sub-block including a plurality of data
`
`elements;
`
`first encoding said data block to form a first encoded
`
`data block, said first encoding including repeating the
`
`data elements in different sub-blocks a different number of
`
`times
`
`interleaving the repeated data elements in the first
`
`encoded data block;
`
`and
`
`second encoding said first encoded data block us
`
`an
`
`encoder that has a rate close to one.
`
`2.
`
`The method of claim 1, wherein said second
`
`encoding is via a rate 1 linear transformation.
`
`3.
`
`The method of claim 1, wherein said first
`
`encoding is carried out by a first coder with a variable
`
`rate less than one, and said second encoding is carried out
`
`by a second coder with a rate substantially close to one.
`
`17
`
`Hughes, Exh. 1002, p. 21
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`4.
`
`The method of claim 3, wherein the second coder
`
`comprises an accumulator.
`
`5.
`
`The method of claim 4, wherein the data elements
`
`comprises bits.
`
`6.
`
`The method of claim 5, wherein the first coder
`
`comprises a repeater operable to repeat different sub(cid:173)
`
`blocks a different number of times in response to a
`
`selected degree profile.
`
`7.
`
`The method of claim 4, wherein the first coder
`
`comprises a low-density generator matrix coder and the
`
`second coder comprises an accumulator.
`
`8.
`
`The method of claim 1, wherein the second
`
`encoding uses a transfer function of 1/(1+D).
`
`9.
`
`The method of claim 1, wherein the second
`
`encoding uses a transfer function of 1/(1+D+D2
`
`) .
`
`10. The method of claim 1, wherein said second
`
`encoding utilizes two accumulators.
`
`18
`
`Hughes, Exh. 1002, p. 22
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
11.  A method of encoding a signal, comprising:

receiving a block of data in the signal to be encoded, said data block including a plurality of bits;

first encoding the data block such that each bit in the data block is repeated and two or more of said bits are repeated a different number of times in order to form a first encoded data block; and

second encoding the first encoded data block in such a way that the bits in the first encoded data block are accumulated.

12.  The method of claim 11, wherein said second coding is via a rate 1 linear transformation.

13.  The method of claim 11, wherein the first coding is via a low-density generator matrix transformation.

14.  The method of claim 11, wherein the signal to be encoded comprises a plurality of data blocks of fixed size.
`
15.  A coder comprising:

a first coder having an input configured to receive a stream of bits, said first coder operative to repeat said bits irregularly and scramble said repeated bits; and

a second coder operative to further encode bits output from the first coder at a rate close to one.
`
16.  The coder of claim 15, wherein the stream of bits includes a data block, and wherein the first coder is operative to apportion said data block into a plurality of sub-blocks and to repeat bits in each sub-block a number of times, wherein bits in different sub-blocks are repeated a different number of times.

17.  The coder of claim 15, wherein the first coder comprises a repeater having a variable rate and an interleaver.

18.  The coder of claim 15, wherein the first coder comprises a low-density generator matrix coder.

19.  The coder of claim 15, wherein the second coder comprises a rate 1 linear encoder.

20.  The coder of claim 19, wherein the second coder comprises an accumulator.
`
`20
`
`Hughes, Exh. 1002, p. 24
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
21.  The coder of claim 20, wherein the second coder further comprises a second accumulator.

22.  The coder of claim 16, wherein the second coder comprises a recursive convolutional encoder with a transfer function of 1/(1 + D).

23.  The coder of claim 16, wherein the second coder comprises a recursive convolutional encoder with a transfer function of 1/(1 + D + D²).
`
24.  A coding system comprising:

a first coder having an input configured to receive a stream of bits, said first coder operative to repeat said bits irregularly and scramble said repeated bits;

a second coder operative to further encode the bits output from the first coder at a rate close to one in order to form an encoded data stream; and

a decoder operative to receive the encoded data stream and decode the encoded data stream using an iterative decoding technique.
`
25.  The coding system of claim 24, wherein the first coder comprises a repeater operative to receive a data block including a plurality of bits from said stream of bits and to repeat bits in the data block a different number of times according to a selected degree profile.
`
26.  The coding system of claim 25, wherein the first coder comprises an interleaver.

27.  The coding system of claim 24, wherein the first coder comprises a low-density generator matrix coder.

28.  The coding system of claim 24, wherein the second coder comprises a rate 1 accumulator.

29.  The coding system of claim 24, wherein the decoder is operative to decode the data using a posterior decoding technique.

30.  The coding system of claim 24, wherein the decoder is operative to decode the data based on a Tanner graph representation.

31.  The coding system of claim 24, wherein the decoder is operative to decode the data in linear time.
`
`22
`
`Hughes, Exh. 1002, p. 26
`
`
`
`Attorney Docket No.: 06618/637001/CIT3220
`
`ABSTRACT OF THE DISCLOSURE
`
`[0048]
`
`A serial concatenated coder includes an outer
`
`coder and an inner coder. The outer coder irregularly
`
`repeats bits in a data block according to a
`
`profile
`
`and scrambles the repeated bits. The scrambled and
`
`repeated bits are input to an inner coder, which has a rate
`
`substantially close to one.
`
`23
`
`Hughes, Exh. 1002, p. 27
`
`
`
`100~
`
`..
`/
`
`/
`
`10~
`.. ...
`
`CODE1
`
`p
`
`... ...
`
`CODE2
`
`160
`
`.. ...
`.. ...
`
`DECODE 1
`
`1--
`
`_J w
`z
`z
`<l:: :r:
`
`(.)
`
`(H 2
`
`~
`...
`
`DECODE 2
`
`.. ...
`
`f110
`
`...
`
`\
`
`104
`
`150
`
`FIG. 1
`PRIOR ART
`
`Hughes, Exh. 1002, p. 28
`
`
`
`j~
`
`J~
`
`A~
`
`J~
`
`-
`
`-
`
`~
`
`(,0
`0
`
`C\1 L/
`
`0::
`w z
`z -
`
`.. ~
`
`c
`
`S;
`
`"'¢
`0
`C\1
`
`l.---1
`
`.
`(!) -u.
`
`N
`0
`
`..3
`
`D..
`
`~
`
`.....
`
`c
`
`'->
`
`0::
`w
`1-
`::J
`0
`
`.. ~
`
`..... ,
`
`:::::1
`
`~" ':::::1
`
`r 0
`
`0
`C\1
`
`u
`u
`<(
`
`.. ~
`
`c"
`
`~
`<.9
`0
`_J
`
`~ .
`(!) -u.
`
`-
`
`,,
`
`A~
`
`~ ..... '
`
`1 0
`
`0
`"'¢
`
`Hughes, Exh. 1002, p. 29
`
`
`
`1
`
`N
`
`f2
`
`f3
`
`fJ
`
`Check Node
`degree a
`
`3</Y
`
`____ ..
`----..
`____ ..
`
`?-<~-----..
`
`v,
`
`x,
`--- --e
`
`Variable Node
`Fraction of nodes
`degree i
`
`,-'
`•---:-u~
`I
`'
`I
`I
`
`I
`
`\
`
`I
`
`3<:??..-
`
`I ,,
`_-f:::.
`
`, .... , ',
`
`·--- t(cid:173)
`I
`I
`I
`I
`
`I
`
`·---\--~
`---:--...
`',_,
`
`\
`
`,-~,,
`e---l--
`I
`I
`I
`
`'
`:
`
`'
`
`: :
`:
`·---'l-~k~·
`' .... _.~
`
`\
`
`I
`
`:
`
`\F\C;. 3
`
`Hughes, Exh. 1002, p. 30
`
`
`
`-------
`
`FIG. SA
`
`w
`
`304
`
`304
`
`FIG. 58
`
`Hughes, Exh. 1002, p. 31
`
`
`
`- - -1
`I
`I
`I
`I
`I
`I
`I
`- -r- J ci
`-
`
`0
`(.0
`
`LL
`
`0
`
`0
`
`I
`I
`I
`I
`I
`I
`I
`I
`L
`
`~~'
`
`n..
`
`a:::
`
`~
`
`J ~
`
`,.... \
`"""' 0
`
`,--- - -
`•
`
`0
`
`( 1\ ...
`\_ L)
`
`1
`
`I
`I
`I
`I
`I
`n..
`I
`Lj ____
`
`_j
`
`,- ~ ~-- -
`•
`
`" .