e, then the estimate Û is equal to the transmitted code vector U. On the other
hand, if the error estimate is incorrect, the decoder will estimate a code vector
that was not transmitted, and we have an undetectable decoding error.
`Example 5.4 Error Correction
`Assume that code vector U = 1 0 1 1 1 0, from the Section 5.4.3 example, is trans-
`mitted, and the vector r = 0 0 1 1 1 0 is received. Show how a decoder, using the
`Table 5.1 syndrome look-up table, can correct the error.
`
`Solution
The syndrome of r is computed:

    S = rH^T = [0 0 1 1 1 0]H^T = [1 0 0]

Using Table 5.1, the error pattern corresponding to the syndrome above is estimated
to be

    ê = 1 0 0 0 0 0

The corrected vector is then estimated by

    Û = r + ê
      = 0 0 1 1 1 0 + 1 0 0 0 0 0
      = 1 0 1 1 1 0

Since the estimated error pattern is the actual error pattern in this example, the error
correction procedure yields Û = U.
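The table look-up decoding of Example 5.4 can be sketched in Python. The parity-check matrix H below is reconstructed from the Section 5.4.3 codeword set (an assumption of this sketch, not stated in this excerpt), and only the single-bit entries of the Table 5.1 syndrome look-up table are built.

```python
# Sketch of Table 5.1-style syndrome decoding for the (6, 3) code of
# Section 5.4.3. H = [I3 | P^T] is reconstructed from the codeword set
# (assumption); all arithmetic is modulo 2.
H = [
    [1, 0, 0, 1, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 0, 1, 0, 1, 1],
]

def syndrome(r):
    """S = r H^T, computed modulo 2."""
    return tuple(sum(r[j] * row[j] for j in range(6)) % 2 for row in H)

# Look-up table: each single-bit error pattern maps to its syndrome,
# which is the corresponding column of H.
table = {}
for i in range(6):
    e = [0] * 6
    e[i] = 1
    table[syndrome(e)] = e

def correct(r):
    s = syndrome(r)
    if s == (0, 0, 0):
        return list(r)                       # no detected error
    e = table[s]                             # estimated error pattern
    return [(ri + ei) % 2 for ri, ei in zip(r, e)]

r = [0, 0, 1, 1, 1, 0]
print(syndrome(r))   # (1, 0, 0), as in the example
print(correct(r))    # [1, 0, 1, 1, 1, 0], the transmitted U
```

Running this reproduces the example: the syndrome [1 0 0] selects the error pattern 1 0 0 0 0 0, and adding it to r recovers U = 1 0 1 1 1 0.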
`
5.5 CODING STRENGTH
`
5.5.1 Weight and Distance of Binary Vectors
`
It should be clear that not all error patterns can be correctly decoded. The error
correction capability of a code will be investigated by first defining its structure.
The Hamming weight, w(U), of a vector U is defined to be the number of nonzero
elements in U. For a binary vector this is equivalent to the number of ones in the
vector. For example, if U = 1 0 0 1 0 1 1 0 1, then w(U) = 5. The Hamming
distance between two code vectors U and V, denoted d(U, V), is defined to be
the number of elements in which they differ: for example,

    U = 1 0 0 1 0 1 1 0 1
    V = 0 1 1 1 1 0 1 0 0
    d(U, V) = 6
`
`By the properties of modulo-2 addition, we note that the sum of two binary vectors
`
`280
`
`Channel Coding: Part 1
`
`Chap. 5
`
`Petitioner's Exhibit 1003
`Page 241
`
`
`
is another vector whose binary ones are located in those positions in which the
two vectors differ: for example,

    U + V = 1 1 1 0 1 1 0 0 1

Thus we observe that the Hamming distance between two code vectors is equal
to the Hamming weight of their sum: that is, d(U, V) = w(U + V). Also, we see
that the Hamming weight of a code vector is equal to its Hamming distance from
the all-zeros vector.
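The two definitions above can be sketched directly in Python, using the U and V of the example:

```python
# Hamming weight and Hamming distance for binary vectors, as defined above.
def weight(u):
    return sum(u)  # number of ones

def distance(u, v):
    return sum((a + b) % 2 for a, b in zip(u, v))  # weight of the modulo-2 sum

U = [1, 0, 0, 1, 0, 1, 1, 0, 1]
V = [0, 1, 1, 1, 1, 0, 1, 0, 0]
print(weight(U))       # 5
print(distance(U, V))  # 6, and d(U, V) = w(U + V):
print(weight([(a + b) % 2 for a, b in zip(U, V)]))  # also 6
```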
`
`5.5.2 Minimum Distance of a Linear Code
`
Consider the set of distances between all pairs of code vectors in the space V_n.
The smallest member of the set is the minimum distance of the code and is denoted
d_min. Why do you suppose we have an interest in the minimum distance; why not
the maximum distance? The minimum distance, like the weakest link in a chain,
gives us a measure of the code's minimum capability and therefore characterizes
the code's strength.
As discussed earlier, the sum of any two code vectors yields another code
vector member of the subspace. This property of linear codes is stated simply as:
If U and V are code vectors, then W = U + V must also be a code vector. Hence
the distance between two code vectors is equal to the weight of a third code
vector; that is, d(U, V) = w(U + V) = w(W). Thus the minimum distance of a
linear code can be ascertained without examining the distance between all com-
binations of code vector pairs. We only need to examine the weight of each code
vector (excluding the all-zeros vector) in the subspace; the minimum weight cor-
responds to the minimum distance, d_min. Equivalently, d_min corresponds to the
smallest of the set of distances between the all-zeros code vector and all the other
code vectors.
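The shortcut just described can be checked numerically for the (6, 3) code of Section 5.4.3 (codeword set taken from Example 5.6 later in this section): the minimum nonzero codeword weight gives d_min directly.

```python
# d_min of a linear code equals the minimum weight over the nonzero
# codewords; (6, 3) codeword set from Section 5.4.3.
codewords = [
    "000000", "110100", "011010", "101110",
    "101001", "011101", "110011", "000111",
]
d_min = min(cw.count("1") for cw in codewords if cw != "000000")
print(d_min)  # 3
```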
`
`5.5.3 Error Detection and Correction
`
The task of the decoder, having received the vector r, is to estimate the transmitted
code vector U_i. The optimal decoder strategy can be expressed in terms of the
maximum likelihood algorithm (see Appendix B) as follows: Decide in favor of
U_i if

    P(r|U_i) = max over all U_j of P(r|U_j)        (5.41)

Since for the binary symmetric channel (BSC), the likelihood of U_j with respect
to r is inversely proportional to the distance between r and U_j, we can write:
Decide in favor of U_i if

    d(r, U_i) = min over all U_j of d(r, U_j)        (5.42)
`
`
`
In other words, the decoder determines the distance between r and each of the
possible transmitted code vectors U_j, and selects as most likely a U_i for which

    d(r, U_i) ≤ d(r, U_j)    for i, j = 1, ..., M and i ≠ j        (5.43)

where M = 2^k is the size of the code vector set. If the minimum is not unique,
the choice between minimum-distance codewords is arbitrary. Distance metrics
are treated further in Chapter 6.
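The decision rule of Equation (5.43) amounts to a nearest-codeword search, sketched here for the (6, 3) code (codeword set from Section 5.4.3):

```python
# Minimum-distance decoding per Equation (5.43): choose the codeword
# closest in Hamming distance to the received vector r.
codewords = [
    "000000", "110100", "011010", "101110",
    "101001", "011101", "110011", "000111",
]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def decode(r):
    return min(codewords, key=lambda u: hamming(r, u))

print(decode("001110"))  # '101110', consistent with Example 5.4
```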
In Figure 5.15 the distance between two code vectors U and V is shown
using a number line calibrated in Hamming distance. Each black dot represents
a corrupted code vector. Figure 5.15a illustrates the reception of vector r1, which
is distance 1 from U and distance 4 from V. An error-correcting decoder, following
the maximum likelihood strategy, will select U upon receiving r1. If r1 had been
the result of a 1-bit corruption of the transmitted code vector U, the decoder has
successfully corrected the error. But if r1 had been the result of a 4-bit corruption
of the transmitted code vector V, the result is a decoding error. Similarly, a double
error in transmission of U might result in the received vector r2, which is distance
2 from U and distance 3 from V, as shown in Figure 5.15b. Here, too, the decoder
will select U upon receiving r2. A triple error in transmission of U might result
in a received vector r3 which is distance 3 from U and distance 2 from V, as shown
in Figure 5.15c. Here the decoder will select V upon receiving r3, and will have
made an error in decoding. From Figure 5.15 it should be clear that if error
`
Figure 5.15 Error correction and detection strength. (a) Received vector r1. (b) Received vector r2. (c) Received vector r3.
`
`
`
`
detection and not correction is the task, a corrupted vector, characterized by a
black dot and representing a 1-bit, 2-bit, 3-bit, or 4-bit error, can be detected.
However, five errors in transmission might result in code vector V being received
when code vector U was actually transmitted; such an error would be
undetectable.
From Figure 5.15 we can see that the error-detecting and error-correcting
capabilities of a code are related to the minimum distance between code vectors.
The decision line in the figure serves the same purpose in the process of decoding
as it does in demodulation, to define the decision regions. In the Figure 5.15
example, the decision criterion of choosing U if r falls in region 1, and choosing
V if r falls in region 2, illustrates that such a code, with d_min = 5, can correct
two errors. In general, the error-correcting capability, t, of a code is defined as
the maximum number of guaranteed correctable errors per codeword, and is writ-
ten [4]

    t = ⌊(d_min - 1)/2⌋        (5.44)

where ⌊x⌋ means the largest integer not to exceed x. Often, a code that corrects
all possible sequences of t or fewer errors can also correct certain sequences of
t + 1 errors. This can be seen in Figure 5.14. In this example d_min = 3, and thus
from Equation (5.44), we can see that all t = 1 bit-error patterns are correctable.
Also, a single t + 1 or 2-bit error pattern is correctable. In general, a t-error-
correcting (n, k) linear code is capable of correcting a total of 2^(n-k) error patterns.
If a t-error-correcting block code is used strictly for error correction on a binary
symmetric channel (BSC) with transition probability p, the probability, P_M, that the
decoder commits an erroneous decoding, and that the n-bit block is in error, can
be calculated by using Equation (5.18) as an upper bound:

    P_M ≤ Σ (j = t+1 to n) (n choose j) p^j (1 - p)^(n-j)        (5.45)

The bound becomes an equality when the decoder corrects all combinations of
errors up to and including t errors, but no combinations of errors greater than t.
Such decoders are called bounded distance decoders. The decoded bit-error prob-
ability, P_B, depends on the particular code and decoder. It can be expressed [5] by
the following approximation:

    P_B ≈ (1/n) Σ (j = t+1 to n) j (n choose j) p^j (1 - p)^(n-j)        (5.46)
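Equations (5.44) and (5.45) can be sketched numerically; the example values below use the (6, 3) code (d_min = 3, so t = 1) and a BSC with p = 10^-2:

```python
# t = floor((d_min - 1)/2), Equation (5.44), and the block-error upper
# bound of Equation (5.45) for a t-error-correcting (n, k) code on a BSC.
from math import comb

def t_capability(d_min):
    return (d_min - 1) // 2

def block_error_bound(n, t, p):
    return sum(comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1))

t = t_capability(3)
print(t)                              # 1
print(block_error_bound(6, t, 1e-2))  # about 1.46e-3
```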
A code can be used to detect errors prior to, or instead of, correcting them.
It should be clear from Figure 5.15 that any received vector characterized by a
black dot (a corrupted code vector) can be identified as an error. Therefore, the
error-detecting capability, e, is defined in terms of d_min as

    e = d_min - 1        (5.47)

A block code with minimum distance d_min guarantees that all error patterns of
`
`(5.46)
`
`Sec. 5.5
`
`Coding Strength
`
`263
`
`a
`
`Petitioner's Exhibit 1003
`Page 244
`
`
`
d_min - 1 or fewer errors can be detected. Such a code is also capable of detecting
a large fraction of error patterns with d_min or more errors. In fact, an (n, k) code
is capable of detecting 2^n - 2^k error patterns of length n. The reasoning is as
follows. There are a total of 2^n - 1 possible nonzero error patterns in the space
of 2^n n-tuples. Even the bit pattern of a valid codeword represents a potential
error pattern. Thus there are 2^k - 1 error patterns that are identical to the 2^k - 1
nonzero codewords. If any of these 2^k - 1 error patterns occurs, it alters the
transmitted codeword U_i into another codeword U_j. Thus U_j will be received and
its syndrome is zero. The decoder accepts U_j as the transmitted codeword and
thereby commits an incorrect decoding. Therefore, there are 2^k - 1 undetectable
error patterns. If the error pattern is not identical to one of the 2^k codewords,
the syndrome test on the received vector r yields a nonzero syndrome, and the
error is detected. Therefore, there are exactly 2^n - 2^k detectable error patterns.
For large n, where 2^k << 2^n, only a small fraction of error patterns are undetected.
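The counting argument above can be confirmed by brute force for the (6, 3) code: an error pattern is undetectable exactly when it coincides with a nonzero codeword.

```python
# For an (n, k) code, the 2^k - 1 nonzero-codeword error patterns are
# undetectable, leaving 2^n - 2^k detectable ones; checked for (6, 3).
from itertools import product

codewords = {
    "000000", "110100", "011010", "101110",
    "101001", "011101", "110011", "000111",
}
undetectable = 0
for bits in product("01", repeat=6):
    e = "".join(bits)
    if e != "000000" and e in codewords:
        undetectable += 1        # e turns one codeword into another
print(undetectable)              # 7 = 2^3 - 1
print(2**6 - 2**3)               # 56 detectable patterns
```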
`
5.5.3.1 Code Vector Weight Distribution

Let A_j be the number of code vectors of weight j within an (n, k) linear code.
The numbers A_0, A_1, ..., A_n are called the weight distribution of the code. If
the code is used only for error detection, on a BSC, the probability, P_nd, that the
decoder does not detect an error can be computed from the weight distribution
of the code [5] as follows:

    P_nd = Σ (j = 1 to n) A_j p^j (1 - p)^(n-j)        (5.48)

where p is the transition probability of the BSC. If the minimum distance of the
code is d_min, the values of A_1 to A_(d_min - 1) are zero.
`
Example 5.5 Probability of an Undetected Error in an Error-Detecting Code
Consider that the (6, 3) code, given in Section 5.4.3, is used only for error detection.
Calculate the probability of an undetected error if the channel is a BSC and the
transition probability is 10^-2.
Solution
The weight distribution of this code is A_0 = 1, A_1 = A_2 = 0, A_3 = 4, A_4 = 3, A_5
= 0, A_6 = 0. Therefore, we can write, using Equation (5.48),

    P_nd = 4p^3(1 - p)^3 + 3p^4(1 - p)^2

For p = 10^-2, the probability of an undetected error is 3.9 × 10^-6.
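The Example 5.5 computation, written out with Equation (5.48):

```python
# Equation (5.48) for the (6, 3) code: A3 = 4, A4 = 3, all other A_j = 0
# for 1 <= j <= 6; BSC transition probability p = 10^-2.
p = 1e-2
A = {3: 4, 4: 3}
P_nd = sum(Aj * p**j * (1 - p)**(6 - j) for j, Aj in A.items())
print(P_nd)   # about 3.91e-6, i.e. 3.9e-6 to two figures
```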
`
5.5.3.2 Simultaneous Error Correction and Detection

It is possible to trade correction capability from the maximum guaranteed
(t), where t is defined in Equation (5.44), for the ability to simultaneously detect
a class of errors. A code can be used for the simultaneous correction of α errors
and detection of β errors, where β ≥ α, provided that its minimum distance is [4]

    d_min ≥ α + β + 1        (5.49)
`
`284
`
`Channel Coding: Part 1
`
`Chap. 5
`
`Petitioner's Exhibit 1003
`Page 245
`
`
`
When t or fewer errors occur, the code is capable of detecting and correcting
them. When more than t but fewer than e + 1 errors occur, where e is defined
in Equation (5.47), the code is capable of detecting their presence but correcting
only a subset of them. For example, a code with d_min = 7 can be used to simul-
taneously detect and correct in any one of the following ways:

    Correct (α)    Detect (β)
        3              3
        2              4
        1              5
        0              6

Note that correction implies prior detection. For the above example, when there
are three errors, all of them can be detected and corrected. When there are five
errors, all of them can be detected but only a subset of them (one) can be corrected.
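The rows of the table above are just the (α, β) pairs allowed by Equation (5.49); a small sketch that enumerates them for any d_min:

```python
# Equation (5.49): simultaneous correction of alpha errors and detection
# of beta errors (beta >= alpha) requires d_min >= alpha + beta + 1.
def tradeoffs(d_min):
    pairs = []
    for alpha in range(d_min):
        beta = d_min - 1 - alpha      # largest beta for this alpha
        if beta >= alpha:
            pairs.append((alpha, beta))
    return pairs

print(tradeoffs(7))   # [(0, 6), (1, 5), (2, 4), (3, 3)], the table above
```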
`
`5.5.4 Visualization of a 6-Tuple Space
`
Figure 5.16 is a visualization of the eight codewords from the example of Section
5.4.3. The codewords are generated from linear combinations of the three inde-
pendent 6-tuples in Equation (5.26); the codewords form a three-dimensional sub-
space. The figure shows such a subspace completely occupied by the eight code-
words (large black circles); the coordinates of the subspace have purposely been
drawn to emphasize their nonorthogonality. Figure 5.16 is an attempt to illustrate
the entire space, containing sixty-four 6-tuples, even though there is no precise
way to draw or construct such a model. Spherical layers or shells are shown
around each codeword. Each of the nonintersecting inner layers is a Hamming
distance of 1 from its associated codeword; each outer layer is a Hamming distance
of 2 from its codeword. Larger distances are not useful in this example. For each
codeword, the two layers shown are occupied by perturbed codewords. There
are six such points on each inner sphere (a total of 48 points), representing the
six possible 1-bit error-perturbed vectors associated with each codeword. These
1-bit perturbed codewords are distinct in the sense that they can best be associated
with only one codeword, and therefore can be corrected. As is seen from the
standard array of Figure 5.14, there is also one 2-bit error pattern that can be
corrected. There is a total of (6 choose 2) = 15 different 2-bit error patterns that can be
inflicted on each codeword, but only one of them, in our example the 0 1 0 0 0 1
error pattern, can be corrected. The other fourteen 2-bit error patterns yield vec-
tors that cannot be uniquely identified with just one codeword; these noncor-
rectable error patterns yield vectors that are equivalent to the error-perturbed
vectors of two or more codewords. In the figure, all correctable (fifty-six) 1- and
2-bit error-perturbed codewords are shown as small black circles. Perturbed code-
words that cannot be corrected are shown as small clear circles.
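The shell structure just described can be checked by brute force. The parity-check matrix H below is reconstructed from the Section 5.4.3 code (an assumption of this sketch). The check confirms the 48 distinct 1-bit shells, and shows that the three 2-bit patterns left for the eighth coset all share one syndrome, so only one of them (0 1 0 0 0 1 in Figure 5.14) can be chosen as coset leader and corrected.

```python
# Brute-force look at the Figure 5.16 structure of the (6, 3) code.
from itertools import combinations

H = [[1, 0, 0, 1, 0, 1],          # reconstructed parity-check matrix
     [0, 1, 0, 1, 1, 0],          # (assumption: H = [I3 | P^T])
     [0, 0, 1, 0, 1, 1]]
codewords = ["000000", "110100", "011010", "101110",
             "101001", "011101", "110011", "000111"]

def syndrome(v):
    return tuple(sum(h * int(b) for h, b in zip(row, v)) % 2 for row in H)

def flip(cw, positions):
    b = list(cw)
    for i in positions:
        b[i] = "1" if b[i] == "0" else "0"
    return "".join(b)

# 48 distinct 1-bit-perturbed vectors, six per codeword (inner shells).
one_bit = {flip(cw, (i,)) for cw in codewords for i in range(6)}
print(len(one_bit))   # 48

# Syndromes used up by the zero pattern and the six 1-bit patterns:
claimed = {syndrome("000000")} | {syndrome(flip("000000", (i,)))
                                  for i in range(6)}
print(len(claimed))   # 7 of the 8 syndromes

# 2-bit patterns whose syndrome is not yet claimed: all fall in the one
# remaining coset, so only one can serve as its leader.
leftover = [p for p in combinations(range(6), 2)
            if syndrome(flip("000000", p)) not in claimed]
print(leftover)       # [(0, 4), (1, 5), (2, 3)]; (1, 5) is 0 1 0 0 0 1
```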
Figure 5.16 is useful for visualizing the properties of a class of codes known
as perfect codes. A t-error-correcting code is called a perfect code if its standard
array has all the error patterns of t or fewer errors, and no others, as coset leaders.
`
`
`
`
`
`Figure 5.16 Example of eight codewords in a 6-tuple space.
`
In terms of Figure 5.16, a t-error-correcting perfect code is one that can, with
maximum likelihood decoding, correct all perturbed code vectors occupying a
shell at Hamming distance t or less from its originating codeword, and cannot
correct any perturbed vectors occupying shells at distances greater than t.
Figure 5.16 is also useful for understanding the basic goal in the search for
good codes. We would like the space to be filled with as many codewords as
possible (efficient utilization of the added redundancy), and we would also like
`
`286
`
`Channel Coding: Part 1
`
`Chap. 5
`
`Petitioner's Exhibit 1003
`Page 247
`
`
`
`these codewords to be as far away from one another as possible. Obviously, these
`goals conflict.
`
`5.5.5 Erasure Correction
`
A receiver may be designed to declare a symbol erased when it is received am-
biguously or when the receiver recognizes the presence of interference or a tran-
sient malfunction. Such a channel has an input alphabet of size Q and an output
alphabet of size Q + 1; the extra output symbol is called an erasure flag, or simply
an erasure. When a demodulator makes a symbol error, two parameters are
needed to correct that error: its location and its correct symbol value. In the case
of binary symbols, this reduces to needing only the error location. However, if
the demodulator declares a symbol erased, although the correct symbol value is
not known, the symbol location is known, and for this reason, the decoding of
erased codewords can be simpler than error correcting. An error control code
can be used to correct erasures or to correct errors and erasures simultaneously.
If the code has minimum distance d_min, any pattern of ρ or fewer erasures can
be corrected if [6]

    d_min ≥ ρ + 1        (5.50)
Assume for the moment that no errors occur outside the erasure positions. The
advantage of correcting by means of erasures is expressed quantitatively as fol-
lows: If a code has a minimum distance d_min, then from Equation (5.50), d_min - 1
erasures can be reconstituted. Since the number of errors that can be corrected
without erasure information is (d_min - 1)/2 at most, from Equation (5.44), the
advantage of correcting by means of erasures is clear. Further, any pattern of α
errors and γ erasures can be corrected simultaneously if [6]

    d_min ≥ 2α + γ + 1        (5.51)

Simultaneous erasure correction and error correction can be accomplished
in the following way. First, the γ erased positions are replaced with zeros and
the resulting codeword is decoded normally. Next, the γ erased positions are
replaced with ones, and the decoding operation is repeated on this version of the
codeword. Of the two codewords obtained (one with erasures replaced by zeros,
and the other with erasures replaced by ones), the one corresponding to the small-
est number of errors corrected outside the γ erased positions is selected. This
technique will always result in correct decoding if Equation (5.51) is satisfied.
`Example 5.6 Erasure Correction
`Consider the codeword set presented in Section 5.4.3:
`000000 110100 011010 101110 101001 011101 110011 000111
Suppose that the codeword 110011 was transmitted and that the two leftmost digits
were declared by the receiver to be erasures. Verify that the received flawed se-
quence xx0011 can be corrected.
`
`
Solution
Since d_min = ρ + 1 = 3, the code can correct as many as ρ = 2 erasures. This is
easily verified above or with Figure 5.14 by comparing the rightmost four digits of
xx0011 with each of the allowable codewords. The codeword that was actually trans-
mitted is closest in Hamming distance to the flawed sequence.
`
`5.6 CYCLIC CODES
`
`~
`
`Binary cyclic codes are an important subclass of linear block codes. The codes
`are easily implemented with feedback shift registers; the syndrome calculation is
`easily accomplished with similar feedback shift registers; and the underlying al-
`gebraic structure of a cyclic code lends itself to efficient decoding methods. An
`(n, k) linear code is called a cyclic code if it can be described by the following
`property, If the n-tuple U = (uo, u,, u2, . . . un _,) is a code vector in the
`subspace S, then U~'~ _ (un_,, uo, ul, . . . , un_ Z) obtained by an end-around
`shift is also a code vector in S. Or in general U~'~ _ (u
`u
`n—i~ un—i+ls
`e n—li
`uo, u,, . . . , un_;_,), obtained by fi end-around or cyclic shifts, is also a code
`vector in S.
The components of a code vector U = (u_0, u_1, u_2, ..., u_(n-1)) can be treated
as the coefficients of a polynomial U(X) as follows:

    U(X) = u_0 + u_1 X + u_2 X^2 + ... + u_(n-1) X^(n-1)        (5.52)

The polynomial function U(X) can be thought of as a "placeholder" for the digits
of the code vector U; that is, an n-tuple vector is described by a polynomial of
degree n - 1 or less. The presence or absence of each term in the polynomial
indicates the presence of a 1 or 0 in the corresponding location of the n-tuple. If
the u_(n-1) component is nonzero, the polynomial is of degree n - 1. The usefulness
of this polynomial description of a codeword will become clear as we discuss the
algebraic structure of the cyclic codes.
`
`
5.6.1 Algebraic Structure of Cyclic Codes
`
Expressing the code vectors in polynomial form, the cyclic nature of the code
manifests itself in the following way. If U(X) is an (n - 1)-degree codeword
polynomial, then U^(i)(X), the remainder resulting from dividing X^i U(X) by
X^n + 1, is also a codeword; that is,

    X^i U(X) / (X^n + 1) = q(X) + U^(i)(X) / (X^n + 1)        (5.53)

or, multiplying through by X^n + 1,

    X^i U(X) = q(X)(X^n + 1) + U^(i)(X)        (5.54)

where U^(i)(X) is the remainder.
`
`288
`
`Channel Coding: Part 1
`
`Chap. 5
`
`Petitioner's Exhibit 1003
`Page 249
`
`
`
`f
`
`l
`
`I
`
`i
`
which can also be described in terms of modulo arithmetic as follows:

    U^(i)(X) = X^i U(X) modulo (X^n + 1)        (5.55)

where x modulo y is defined as the remainder obtained from dividing x by y. Let
us demonstrate the validity of Equation (5.55) for the case of i = 1.

    U(X) = u_0 + u_1 X + u_2 X^2 + ... + u_(n-2) X^(n-2) + u_(n-1) X^(n-1)
    X U(X) = u_0 X + u_1 X^2 + u_2 X^3 + ... + u_(n-2) X^(n-1) + u_(n-1) X^n

We now add and subtract u_(n-1), or since we are using modulo-2 arithmetic, we
add u_(n-1) twice, as follows:

    X U(X) = u_(n-1) + u_0 X + u_1 X^2 + u_2 X^3 + ... + u_(n-2) X^(n-1)
             + u_(n-1) X^n + u_(n-1)
           = U^(1)(X) + u_(n-1)(X^n + 1)

where U^(1)(X) = u_(n-1) + u_0 X + u_1 X^2 + ... + u_(n-2) X^(n-1). Since U^(1)(X)
is of degree n - 1, it cannot be divided by X^n + 1. Thus we can write from
Equation (5.53)

    U^(1)(X) = X U(X) modulo (X^n + 1)

By extension we can write

    U^(i)(X) = X^i U(X) modulo (X^n + 1)        (5.56)
`
Example 5.7 Cyclic Shift of a Code Vector
Let U = 1 1 0 1, for n = 4. Express the code vector in polynomial form, and using
Equation (5.54), solve for the third end-around shift of the code vector.
Solution

    U(X) = 1 + X + X^3   (polynomial is written low order to high order)
    X^i U(X) = X^3 + X^4 + X^6,   where i = 3

Divide X^3 U(X) by X^4 + 1, and solve for the remainder using polynomial
division:

    X^6 + X^4 + X^3 = (X^2 + 1)(X^4 + 1) + (X^3 + X^2 + 1)

so that the quotient is X^2 + 1 and the remainder is U^(3)(X) = 1 + X^2 + X^3.
Writing the remainder low order to high order, the codeword U^(3) = 1 0 1 1 is
three cyclic shifts of U = 1 1 0 1. Remember that for binary codes the addition
operation is performed modulo-2, so that +1 = -1, and we consequently
do not show any minus signs in the computation.
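The shift-as-remainder computation of Example 5.7 can be sketched with GF(2) polynomial arithmetic (coefficient lists, low order first):

```python
# Cyclic shift as X^i U(X) mod (X^n + 1) over GF(2), per Equation (5.55).
def poly_mod(num, div):
    """Remainder of num(X)/div(X) over GF(2); coefficients low->high."""
    num = num[:]
    for i in range(len(num) - 1, len(div) - 2, -1):
        if num[i]:                         # cancel the X^i term
            shift = i - (len(div) - 1)
            for j, d in enumerate(div):
                num[shift + j] ^= d
    return num[:len(div) - 1]

def cyclic_shift(u, i):
    n = len(u)
    x_i_u = [0] * i + list(u)              # multiply U(X) by X^i
    return poly_mod(x_i_u, [1] + [0] * (n - 1) + [1])   # mod X^n + 1

U = [1, 1, 0, 1]           # U(X) = 1 + X + X^3, i.e. U = 1 1 0 1
print(cyclic_shift(U, 3))  # [1, 0, 1, 1] -> U^(3) = 1 0 1 1
```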
`
`
`
`
5.6.2 Binary Cyclic Code Properties

We can generate a cyclic code using a generator polynomial in much the way that
we generated a block code using a generator matrix. The generator polynomial
g(X) for an (n, k) cyclic code is unique and is of the form

    g(X) = g_0 + g_1 X + g_2 X^2 + ... + g_r X^r        (5.57)

where g_0 and g_r must equal 1. Every codeword polynomial in the subspace is of
the form U(X) = m(X)g(X), where U(X) is a polynomial of degree n - 1 or less.
Therefore, the message polynomial m(X) is written

    m(X) = m_0 + m_1 X + m_2 X^2 + ... + m_(n-r-1) X^(n-r-1)        (5.58)

There are 2^(n-r) codeword polynomials, and there are 2^k code vectors in an (n, k)
code. Since there must be one codeword polynomial for each code vector,

    n - r = k    or    r = n - k

Hence g(X), as shown in Equation (5.57), must be of degree n - k, and every
codeword polynomial in the (n, k) cyclic code can be expressed as

    U(X) = (m_0 + m_1 X + m_2 X^2 + ... + m_(k-1) X^(k-1)) g(X)        (5.59)

U is said to be a valid code vector of the subspace S if, and only if, g(X) divides
into U(X) without a remainder.
A generator polynomial g(X) of an (n, k) cyclic code is a factor of X^n + 1;
that is, X^n + 1 = g(X)h(X). For example,

    X^7 + 1 = (1 + X + X^3)(1 + X + X^2 + X^4)

Using g(X) = 1 + X + X^3 as a generator polynomial of degree n - k = 3, we
can generate an (n, k) = (7, 4) cyclic code. Or, using g(X) = 1 + X + X^2 + X^4,
where n - k = 4, we can generate a (7, 3) cyclic code. In summary, if g(X) is a
polynomial of degree n - k and is a factor of X^n + 1, then g(X) uniquely generates
an (n, k) cyclic code.
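The factorization of X^7 + 1 used above is easy to verify with a GF(2) polynomial multiply:

```python
# GF(2) check that g(X)h(X) = X^7 + 1 for g(X) = 1 + X + X^3 and
# h(X) = 1 + X + X^2 + X^4 (coefficient lists, low order first).
def poly_mul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                out[i + j] ^= bj           # modulo-2 accumulation
    return out

g = [1, 1, 0, 1]          # 1 + X + X^3
h = [1, 1, 1, 0, 1]       # 1 + X + X^2 + X^4
print(poly_mul(g, h))     # [1, 0, 0, 0, 0, 0, 0, 1] = 1 + X^7
```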
`
5.6.3 Encoding in Systematic Form

In Section 5.4.5 we introduced the systematic form and discussed the reduction
in complexity that makes this encoding form attractive. Let us use some of the
algebraic properties of the cyclic code to establish a systematic encoding pro-
cedure. We can express the message vector in polynomial form, as follows:

    m(X) = m_0 + m_1 X + m_2 X^2 + ... + m_(k-1) X^(k-1)        (5.60)

In systematic form, the message digits are utilized as part of the code vector. We
`
`
`
can think of shifting the message digits into the rightmost k stages of a codeword
register, and then appending the parity digits by placing them in the leftmost
n - k stages. Therefore, we want to manipulate the message polynomial alge-
braically so that it is right-shifted n - k positions. If we multiply m(X) by X^(n-k),
we get the right-shifted message polynomial:

    X^(n-k) m(X) = m_0 X^(n-k) + m_1 X^(n-k+1) + ... + m_(k-1) X^(n-1)        (5.61)

If we next divide Equation (5.61) by g(X), the result can be expressed as

    X^(n-k) m(X) = q(X) g(X) + r(X)        (5.62)

where

    r(X) = r_0 + r_1 X + r_2 X^2 + ... + r_(n-k-1) X^(n-k-1)        (5.63)

We can also say that

    r(X) = X^(n-k) m(X) modulo g(X)

Adding r(X) to both sides of Equation (5.62), using modulo-2 arithmetic, we get

    r(X) + X^(n-k) m(X) = q(X) g(X) = U(X)        (5.64)

The left-hand side of Equation (5.64) is recognized as a valid codeword polynom-
ial, since it is a polynomial of degree n - 1 or less, and when divided by g(X)
there is a zero remainder. This codeword can be expanded into its polynomial
terms as follows:

    r(X) + X^(n-k) m(X) = r_0 + r_1 X + ... + r_(n-k-1) X^(n-k-1)
                          + m_0 X^(n-k) + m_1 X^(n-k+1) + ... + m_(k-1) X^(n-1)        (5.65)

The codeword polynomial corresponds to the code vector

    U = (r_0, r_1, ..., r_(n-k-1), m_0, m_1, ..., m_(k-1))

in which the leftmost n - k digits are parity bits and the rightmost k digits are
message bits.
`
Example 5.8 Cyclic Code in Systematic Form
Using the generator polynomial g(X) = 1 + X + X^3, generate a systematic code
vector from the (7, 4) codeword set for the message vector m = 1 0 1 1.
Solution

    m(X) = 1 + X^2 + X^3,    n = 7, k = 4, n - k = 3
    X^(n-k) m(X) = X^3 (1 + X^2 + X^3) = X^3 + X^5 + X^6

Dividing X^(n-k) m(X) by g(X) using polynomial division, we can write

    X^3 + X^5 + X^6 = (1 + X + X^2 + X^3)(1 + X + X^3) + 1

where the quotient is q(X) = 1 + X + X^2 + X^3, the generator is g(X) = 1 + X + X^3,
and the remainder is r(X) = 1. Using Equation (5.64) yields

    U(X) = r(X) + X^3 m(X) = 1 + X^3 + X^5 + X^6
    U = 1 0 0 1 0 1 1

where the leftmost three digits, 1 0 0, are the parity bits and the rightmost four
digits, 1 0 1 1, are the message bits.
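The Example 5.8 encoding can be sketched directly from Equation (5.64): the parity bits are the remainder of X^(n-k) m(X) divided by g(X).

```python
# Systematic cyclic encoding per Equation (5.64); coefficient lists are
# low order first, so codewords read left to right as in the text.
def gf2_remainder(num, div):
    """Remainder of num(X)/div(X) over GF(2)."""
    num = num[:]
    for i in range(len(num) - 1, len(div) - 2, -1):
        if num[i]:
            for j, d in enumerate(div):
                num[i - (len(div) - 1) + j] ^= d
    return num[:len(div) - 1]

def encode_systematic(m, g, n):
    k = len(m)
    shifted = [0] * (n - k) + list(m)      # X^(n-k) m(X)
    parity = gf2_remainder(shifted, g)     # r(X)
    return parity + list(m)                # U = (parity | message)

g = [1, 1, 0, 1]                   # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]                   # m(X) = 1 + X^2 + X^3
print(encode_systematic(m, g, 7))  # [1, 0, 0, 1, 0, 1, 1]
```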
`
5.6.4 Circuit for Dividing Polynomials

We have seen that the cyclic shift of a codeword polynomial and the encoding
of a message polynomial each involve the division of one polynomial by another.
Such an operation is readily accomplished by a dividing circuit (feedback shift
register). Given two polynomials V(X) and g(X), where

    V(X) = v_0 + v_1 X + v_2 X^2 + ... + v_m X^m

and

    g(X) = g_0 + g_1 X + g_2 X^2 + ... + g_r X^r

such that m ≥ r, the divider circuit of Figure 5.17 performs the polynomial division
steps of dividing V(X) by g(X), thereby determining the quotient and remainder
terms:

    V(X)/g(X) = q(X) + r(X)/g(X)

The stages of the register are first initialized by being filled with zeros. The first
r shifts enter the most significant (higher-order) coefficients of V(X). After the
rth shift, the quotient output is g_r^(-1) v_m; this is the highest-order term in the quo-
tient. For each quotient coefficient q_i, the polynomial q_i g(X) must be subtracted
from the dividend. The feedback connections in Figure 5.17 perform this sub-
traction. The difference between the leftmost r terms remaining in the dividend
`
Figure 5.17 Circuit for dividing polynomials. (The dividend V(X) is entered high-order coefficient first; the quotient appears serially at the output.)
`
`
`
`
and the feedback terms q_i g(X) is formed on each shift of the circuit and appears
as the contents of the register. At each shift of the register, the difference is shifted
one stage; the highest-order term (which by construction is zero) is shifted out,
while the next significant coefficient of V(X) is shifted in. After m + 1 total shifts
into the register, the quotient has been serially presented at the output and the
remainder resides in the register.
`
Example 5.9 Dividing Circuit
Use a dividing circuit of the form shown in Figure 5.17 to divide V(X) = X^3 + X^5
+ X^6 (V = 0 0 0 1 0 1 1) by g(X) = 1 + X + X^3. Find the quotient and remainder
terms. Compare the circuit implementation to the polynomial division steps per-
formed by hand.
Solution
The dividing circuit needs to perform the following operation:

    (X^3 + X^5 + X^6)/(1 + X + X^3) = q(X) + r(X)/(1 + X + X^3)

The required feedback shift register, following the general form of Figure 5.17, is
shown in Figure 5.18. Assume that the register contents are initially zero. The op-
erational steps of the circuit are as follows:
`
    Shift number    Input queue      Register contents    Output
    0               0 0 0 1 0 1 1    0 0 0                -
    1               0 0 0 1 0 1      1 0 0                0
    2               0 0 0 1 0        1 1 0                0
    3               0 0 0 1          0 1 1                0
    4               0 0 0            0 1 1                1
    5               0 0              1 1 1                1
    6               0                1 0 1                1
    7               -                1 0 0                1
`
After the fourth shift, the quotient coefficients {q_i} serially presented at the output
are seen to be 1 1 1 1, or the quotient polynomial is q(X) = 1 + X + X^2 + X^3.
The remainder coefficients {r_i} are 1 0 0, or the remainder polynomial is r(X) = 1. In
`
Figure 5.18 Dividing circuit for Example 5.9. (Feedback polynomial 1 + X + X^3; input 0 0 0 1 0 1 1, high-order bit first.)
`
`
`
`
summary, the circuit computation V(X)/g(X) is seen to be

    (X^3 + X^5 + X^6)/(1 + X + X^3) = 1 + X + X^2 + X^3 + 1/(1 + X + X^3)
The polynomial division steps are as follows:

                        X^3 + X^2 + X + 1       <- quotient (output after shifts 4, 5, 6, 7)
                  ________________________
    X^3 + X + 1 ) X^6 + X^5       + X^3
                  X^6       + X^4 + X^3         <- feedback after 4th shift
                  ---------------------
                        X^5 + X^4               <- register after 4th shift
                        X^5       + X^3 + X^2   <- feedback after 5th shift
                        ---------------------
                              X^4 + X^3 + X^2   <- register after 5th shift
                              X^4       + X^2 + X   <- feedback after 6th shift
                              -------------------
                                    X^3       + X   <- register after 6th shift
                                    X^3 + X + 1     <- feedback after 7th shift
                                    -------------
                                              1     <- register after 7th shift
                                                       (remainder)
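The shift-by-shift behavior tabulated in Example 5.9 can be reproduced with a bit-level simulation of the Figure 5.17/5.18 circuit:

```python
# Simulation of the dividing circuit for g(X) = g_0 + g_1 X + ... + g_r X^r,
# with g given as a coefficient list (low order first, g[0] = g[-1] = 1).
def divider(v_high_first, g):
    """Return (quotient bits, final register contents, low order left)."""
    reg = [0] * (len(g) - 1)        # shift-register stages, initially zero
    quotient = []
    for bit in v_high_first:
        out = reg[-1]               # quotient bit leaving the register
        quotient.append(out)
        fb = [out * gi for gi in g[:-1]]       # feedback q_i * g(X)
        reg = [bit ^ fb[0]] + [reg[i] ^ fb[i + 1]
                               for i in range(len(reg) - 1)]
    return quotient, reg

g = [1, 1, 0, 1]                    # 1 + X + X^3
V = [1, 1, 0, 1, 0, 0, 0]           # X^6 + X^5 + X^3, high-order bit first
q, r = divider(V, g)
print(q)   # [0, 0, 0, 1, 1, 1, 1] -> q(X) = 1 + X + X^2 + X^3
print(r)   # [1, 0, 0] -> remainder r(X) = 1
```

The successive register states produced by this loop match the Example 5.9 table (1 0 0, 1 1 0, 0 1 1, 0 1 1, 1 1 1, 1 0 1, 1 0 0).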
`
5.6.5 Systematic Encoding with an (n - k)-Stage Shift Register

The encoding of a cyclic code in systematic form has been shown, in Section
5.6.3, to involve the computation of parity bits as the result of the formation of
X^(n-k) m(X) modulo g(X), in other words, the division of an upshifted (right-shifted)
message polynomial by a generator polynomial g(X). The need for upshifting is
to make room for the parity bits, which are appended to the message bits, yielding
the code vector in systematic form. Upshifting the message bits by n - k positions
is a trivial operation and is not really performed as part of the dividing circuit.
Instead, only the parity bits are computed; they are then placed in the appropriate
location alongside the message bits. The parity polynomial is the remainder after
dividing by the generator polynomial; it is available in the register after n shifts
through the (n - k)-stage feedback register shown in Figure 5.18. Notice that the
first n - k shifts through the register are simply filling the register. We cannot
have any feedback until the rightmost stage has been filled; we therefore can
shorten the shifting cycle by loading the input data to the output of the last stage,
as shown in Figure 5.19. Further, the feedback term into the leftmost stage is the
sum of the input and the rightmost stage. We guarantee that this sum is generated
by ensuring that g_0 = g_(n-k) = 1 for any generator polynomial g(X). The circuit
feedback connections correspond to the coefficients of the generator polynomial,
which is written

    g(X) = 1 + g_1 X + g_2 X^2 + ... + g_(n-k-1) X^(n-k-1) + X^(n-k)        (5.66)
`
`294
`
`Channel Coding: Part 1
`
`Chap. 5
`
`Petitioner's Exhibit 1003
`Page 255
`
`
`
Figure 5.19 Encoding with an (n - k)-stage shift register.
`
The following steps describe the encoding procedure used with the Figure 5.19
encoder.

1. Switch 1 is closed during the first k shifts, to allow transmission of the
message bits into the (n - k)-stage encoding shift register.
2. Switch 2 is in the down position to allow transmission of the message bits
directly to an output register during the first k shifts.
3. After transmission of the kth message bit, switch 1 is opened and switch 2
is moved to the up position.
4. The remaining n - k shifts clear the encoding register by moving the parity
bits to the output register.
5. The total number of shifts is equal to n, and the contents of the output register
is the codeword polynomial r(X) + X^(n-k) m(X).
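The steps above can be sketched as a bit-level simulation of the Figure 5.19 encoder; after the k message shifts the register already holds the parity bits, as claimed. Shown for g(X) = 1 + X + X^3 and the Example 5.8 message m = 1 0 1 1.

```python
# Sketch of the (n - k)-stage encoder: the input is added at the output of
# the last stage (switch 1 closed), so parity is ready after only k shifts.
def encode(m_low_first, g, n):
    k = len(m_low_first)
    reg = [0] * (n - k)                      # encoding register, cleared
    for bit in reversed(m_low_first):        # message bits, high order first
        fb = bit ^ reg[-1]                   # input + rightmost stage
        reg = [fb] + [reg[i] ^ (fb & g[i + 1])
                      for i in range(n - k - 1)]
    return reg + list(m_low_first)           # parity bits, then message bits

g = [1, 1, 0, 1]          # g(X) = 1 + X + X^3
m = [1, 0, 1, 1]          # m(X) = 1 + X^2 + X^3
print(encode(m, g, 7))    # [1, 0, 0, 1, 0, 1, 1], as in Example 5.8
```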
`
`Example 5.10 Systematic Encoding of a Cyclic Code
`Use a feedback shift register of the form shown in Figure 5.19 to