bound on R_c such that P_M → 0 as n → ∞. For unquantized (soft-decision) decoding, R_0 is given as

R_0 = \log_2 \frac{2}{1 + e^{-\mathcal{E}_c/N_0}}    (8-1-102)

where ℰ_c/N_0 = R_c γ_b is the SNR per dimension. This result was derived in Section 7-2.
On the other hand, if the output of the demodulator is quantized to Q levels prior to decoding, the Chernoff bound may be used to upper-bound the ensemble average binary error probability P_2(s_j, s_m) defined in Section 7-2. The result of this derivation is the same upper bound on P_e given in (8-1-101), but with R_0 replaced by R_Q, where

R_Q = -\log_2 \sum_{j=0}^{Q-1} \left[ \sum_{i=0}^{1} p_i \sqrt{P(j \mid i)} \right]^2    (8-1-103)
In (8-1-103), {p_i} are the prior probabilities of the two signals at the input to the channel and {P(j | i)} denote the transition probabilities of the channel. For example, in the case of a binary symmetric channel, we have p_1 = p_0 = 1/2, P(0 | 0) = P(1 | 1) = 1 - p, and P(0 | 1) = P(1 | 0) = p. Hence,
R_Q = \log_2 \frac{2}{1 + \sqrt{4p(1-p)}}, \qquad Q = 2    (8-1-104)

where

p = Q\!\left(\sqrt{2\gamma_b R_c}\right)    (8-1-105)
A plot of R_Q versus 10 log_10(ℰ_c/N_0) is illustrated in Fig. 8-1-15 for Q = 2 and Q = ∞ (soft-decision decoding).

FIGURE 8-1-15  Comparison of R_0 (soft-decision decoding) with R_Q (hard-decision decoding) as a function of the SNR per dimension, 10 log_10(ℰ_c/N_0) (dB).

Note that the difference in decoder performance between unquantized soft-decision decoding and hard-decision decoding is approximately 2 dB. In fact, it is easily demonstrated again that as ℰ_c/N_0 → 0, the loss in performance due to hard-decision decoding is
10 log_10 (π/2) ≈ 2 dB, which is the same decibel difference that was obtained in our comparison of the channel capacity relations. We mention that about 1 dB of this loss can be recovered by quantizing the output of the demodulator to three levels instead of two (see Problem 7-11). Additional improvements are possible by quantizing the output into more than three levels, as shown in Section 7-3.
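The roughly 2 dB gap between the two cutoff rates is easy to verify numerically. The following Python sketch (ours, not part of the text; the function names are illustrative) evaluates (8-1-102) and (8-1-104)-(8-1-105) over a range of SNR per dimension, in the spirit of Fig. 8-1-15.

```python
import math

def q_func(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def r0_soft(ec_n0):
    """Cutoff rate for unquantized (soft-decision) decoding, Eq. (8-1-102)."""
    return math.log2(2.0 / (1.0 + math.exp(-ec_n0)))

def rq_hard(ec_n0):
    """Cutoff rate for two-level (hard-decision) quantization, Eqs. (8-1-104)-(8-1-105).
    The SNR per dimension ec_n0 plays the role of gamma_b * R_c."""
    p = q_func(math.sqrt(2.0 * ec_n0))
    return math.log2(2.0 / (1.0 + math.sqrt(4.0 * p * (1.0 - p))))

# Tabulate both cutoff rates versus 10 log10(Ec/N0), as in Fig. 8-1-15.
for snr_db in range(-5, 11):
    ec_n0 = 10 ** (snr_db / 10)
    print(f"{snr_db:3d} dB   R0 = {r0_soft(ec_n0):.3f}   RQ = {rq_hard(ec_n0):.3f}")
```

At moderate SNR the table shows R_Q lagging R_0 by an offset of about 2 dB along the SNR axis, consistent with the discussion above.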
8-1-7 Bounds on Minimum Distance of Linear Block Codes

The expressions for the probability of error derived in this chapter for soft-decision and hard-decision decoding of linear binary block codes clearly indicate the importance that the minimum distance parameter plays in the performance of the code. If we consider soft-decision decoding, for example, the upper bound on the error probability given by (8-1-52) indicates that, for a given code rate R_c = k/n, the probability of error in an AWGN channel decreases exponentially with d_min. When this bound is used in conjunction with the lower bound on d_min given below, we obtain an upper bound on P_M that can be achieved by many known codes. Similarly, we may use the upper bound given by (8-1-82) for the probability of error for hard-decision decoding in conjunction with the lower bound on d_min to obtain an upper bound on the error probability for linear binary block codes on the binary symmetric channel.

On the other hand, an upper bound on d_min can be used to determine a lower bound on the probability of error achieved by the best code. For example, suppose that hard-decision decoding is employed. In this case, we have the two lower bounds on P_M given by (8-1-86) and (8-1-87), with the former being the tighter. When either one of these two bounds is used in conjunction with an upper bound on d_min, the result is a lower bound on P_M for the best (n, k) code. Thus, upper and lower bounds on d_min are important in assessing the capabilities of codes.

A simple upper bound on the minimum distance of an (n, k) binary or nonbinary linear block code was given in (8-1-14) as d_min ≤ n - k + 1. It is convenient to normalize this expression by the block size n. That is,

\frac{d_{\min}}{n} \le (1 - R_c) + \frac{1}{n}    (8-1-106)

where R_c is the code rate. For large n, the factor 1/n can be neglected.

If a code has the largest possible distance, i.e., d_min = n - k + 1, it is called a maximum-distance-separable code. Except for the trivial repetition-type codes, there are no binary maximum-distance-separable codes. In fact, the upper bound in (8-1-106) is extremely loose for binary codes. On the other hand, nonbinary codes with d_min = n - k + 1 do exist. For example, the Reed-Solomon codes, which comprise a subclass of BCH codes, are maximum-distance-separable.

In addition to the upper bound given above, there are several relatively
tight bounds on the minimum distance of linear block codes. We shall briefly describe four important bounds, three of which are upper bounds and the other a lower bound. The derivations of these bounds are lengthy and are not of particular interest in our subsequent discussion. The interested reader may refer to Chapter 4 of the book by Peterson and Weldon (1972) for those derivations.

One upper bound on the minimum distance can be obtained from the inequality in (8-1-83). By taking the logarithm of both sides of (8-1-83) and dividing by n, we obtain

1 - R_c \ge \frac{1}{n} \log_2 \sum_{i=0}^{t} \binom{n}{i}    (8-1-107)

Since the error-correcting capability of the code, measured by t, is related to the minimum distance, the above relation is an upper bound on the minimum distance. It is called the Hamming upper bound.

The asymptotic form of (8-1-107) is obtained by letting n → ∞. Now, for any n, let t_0 be the largest integer t for which (8-1-107) holds. Then, it can be shown (Peterson and Weldon, 1972) that as n → ∞, the ratio t/n for any (n, k) block code cannot exceed t_0/n, where t_0/n satisfies the equation

1 - R_c = H(t_0/n)    (8-1-108)
and H(x) is the binary entropy function defined by (3-2-10).
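As an illustration (ours, not from the text), the Hamming bound (8-1-107) can be checked for specific (n, k) codes by working with its equivalent integer form, sum_{i=0}^{t} C(n, i) ≤ 2^{n-k}, which avoids the logarithm entirely. The sketch below finds the largest t consistent with the bound.

```python
from math import comb

def hamming_bound_t(n, k):
    """Largest t such that sum_{i=0}^{t} C(n, i) <= 2**(n - k),
    which is Eq. (8-1-107) with the logarithm removed."""
    budget = 2 ** (n - k)
    total, t = 0, -1
    for i in range(n + 1):
        total += comb(n, i)
        if total > budget:
            break
        t = i
    return t

# The (15, 11) Hamming code and the (23, 12) Golay code meet the bound with equality.
print(hamming_bound_t(15, 11))   # -> 1
print(hamming_bound_t(23, 12))   # -> 3
```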
The generalization of the Hamming bound to nonbinary codes is simply

1 - R_c \ge \frac{1}{n} \log_q \left[ \sum_{i=0}^{t} \binom{n}{i} (q-1)^i \right]    (8-1-109)
Another upper bound, developed by Plotkin (1960), may be stated as follows. The number of check digits required to achieve a minimum distance d_min in an (n, k) linear block code satisfies the inequality

n - k \ge \frac{q\, d_{\min} - 1}{q - 1} - 1 - \log_q d_{\min}    (8-1-110)

where q is the alphabet size. When the code is binary, (8-1-110) may be expressed as
\frac{d_{\min}}{n} \left( 1 - \frac{1}{2 d_{\min}} \log_2 d_{\min} \right) \le \frac{1}{2} \left( 1 - R_c + \frac{2}{n} \right)

In the limit as n → ∞ with d_min/n ≤ 1/2, (8-1-110) reduces to

\frac{d_{\min}}{n} \le \frac{1}{2}(1 - R_c)    (8-1-111)
Finally, there is another tight upper bound on the minimum distance obtained by Elias (Berlekamp, 1968). It may be expressed in its asymptotic form as

\frac{d_{\min}}{n} \le 2A(1 - A)    (8-1-112)

where the parameter A is related to the code rate through the equation

R_c = 1 + A \log_2 A + (1 - A) \log_2 (1 - A), \qquad 0 \le A \le \tfrac{1}{2}    (8-1-113)
Lower bounds on the minimum distance of (n, k) block codes also exist. In particular, binary block codes exist that have a normalized minimum distance that asymptotically satisfies the inequality

\frac{d_{\min}}{n} \ge \alpha    (8-1-114)

where α is related to the code rate through the equation

R_c = 1 - H(\alpha) = 1 + \alpha \log_2 \alpha + (1 - \alpha) \log_2 (1 - \alpha)    (8-1-115)

This lower bound is a special case of a lower bound developed by Gilbert (1952) and Varsharmov (1957), which applies to nonbinary and binary block codes.
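A small numerical sketch (ours, using the bound expressions as reconstructed above) tabulates the asymptotic Hamming, Plotkin, Elias, and Varsharmov-Gilbert bounds on d_min/n versus the code rate, in the spirit of Fig. 8-1-16. The Hamming curve is obtained from (8-1-108) by taking d_min/n ≈ 2 t_0/n, an interpretive step on our part.

```python
from math import log2

def binary_entropy(x):
    """Binary entropy function H(x)."""
    return 0.0 if x <= 0.0 or x >= 1.0 else -x * log2(x) - (1 - x) * log2(1 - x)

def inverse_entropy(target, tol=1e-12):
    """Return x in [0, 1/2] with H(x) = target (H is increasing on that interval)."""
    lo, hi = 0.0, 0.5
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if binary_entropy(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def asymptotic_bounds(rc):
    """Asymptotic bounds on d_min/n at code rate rc."""
    x = inverse_entropy(1 - rc)      # solves 1 - Rc = H(x)
    hamming = 2 * x                  # Eq. (8-1-108) with d_min/n ~ 2*t0/n
    plotkin = 0.5 * (1 - rc)         # Eq. (8-1-111)
    elias = 2 * x * (1 - x)          # Eqs. (8-1-112)-(8-1-113)
    gv = x                           # Eqs. (8-1-114)-(8-1-115)
    return hamming, plotkin, elias, gv

for rc in (0.1, 0.3, 0.5, 0.7, 0.9):
    h, p, e, g = asymptotic_bounds(rc)
    print(f"Rc = {rc:.1f}   Hamming <= {h:.3f}   Plotkin <= {p:.3f}   "
          f"Elias <= {e:.3f}   GV >= {g:.3f}")
```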
The asymptotic bounds given above are plotted in Fig. 8-1-16 for binary codes.

FIGURE 8-1-16  Upper and lower bounds on normalized minimum distance as a function of code rate R_c (Plotkin upper bound, Elias upper bound, Gilbert-Varsharmov lower bound).

Also plotted in the figure for purposes of comparison are curves of the minimum distance as a function of code rate for BCH codes of block lengths n = 31 and 63. We observe that for n = 31 and 63, the normalized minimum distance falls well above the Varsharmov-Gilbert lower bound. As the block length n increases, the efficiency of the BCH codes diminishes. For example, when n = 1023, the curve for the normalized minimum distance falls close to
the Varsharmov-Gilbert bound. As n increases beyond n = 1023, the normalized minimum distance of the BCH codes continues to decrease and falls below the Varsharmov-Gilbert bound. That is, d_min/n approaches zero as n tends to infinity. Consequently, the BCH codes, which are the most important class of cyclic codes, are not very efficient at large block lengths.
8-1-8 Nonbinary Block Codes and Concatenated Block Codes
A nonbinary block code consists of a set of fixed-length code words in which the elements of the code words are selected from an alphabet of q symbols, denoted by {0, 1, 2, ..., q - 1}. Usually, q = 2^k, so that k information bits are mapped into one of the q symbols. The length of the nonbinary code word is denoted by N and the number of information symbols encoded into a block of N symbols is denoted by K. The minimum distance of the nonbinary code is denoted by D_min. A systematic (N, K) block code consists of K information symbols and N - K parity check symbols.

Among the various types of nonbinary linear block codes, the Reed-Solomon codes are some of the most important for practical applications. As indicated previously, they comprise a subset of the BCH codes, which in turn are a class of cyclic codes. These codes are described by the parameters
N = q - 1 = 2^k - 1
K = 1, 2, 3, ..., N - 1
D_min = N - K + 1          (8-1-116)
R_c = K/N

Such a code is guaranteed to correct up to

t = \left\lfloor \tfrac{1}{2}(D_{\min} - 1) \right\rfloor = \left\lfloor \tfrac{1}{2}(N - K) \right\rfloor    (8-1-117)
symbol errors. Of course, these codes may be extended or shortened in the manner described previously for binary block codes.
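A short helper (illustrative only, not from the text) collects the parameters in (8-1-116)-(8-1-117) for a Reed-Solomon code over GF(2^k).

```python
def rs_parameters(k_bits, K):
    """Reed-Solomon code parameters over GF(2**k_bits), Eqs. (8-1-116)-(8-1-117)."""
    q = 2 ** k_bits
    N = q - 1
    if not 1 <= K <= N - 1:
        raise ValueError("K must satisfy 1 <= K <= N - 1")
    D_min = N - K + 1
    t = (D_min - 1) // 2            # = floor((N - K) / 2) correctable symbol errors
    return {"N": N, "K": K, "D_min": D_min, "t": t, "Rc": K / N}

# The four N = 31 codes used later in Example 8-1-13:
for K in (29, 27, 23, 15):
    print(rs_parameters(5, K))
```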
The weight distribution {A_i} of the class of Reed-Solomon codes is known. The coefficients in the weight enumerating polynomial are given as

A_i = \binom{N}{i}(q - 1) \sum_{j=0}^{i-D} (-1)^j \binom{i-1}{j} q^{\,i-D-j}, \qquad D \le i \le N    (8-1-118)

where D ≡ D_min and q = 2^k.

One reason for the importance of the Reed-Solomon codes is their good
distance properties. A second reason for their importance is the existence of efficient hard-decision decoding algorithms, which make it possible to implement relatively long codes in many practical applications where coding is desirable.

A nonbinary code is particularly matched to an M-ary modulation technique for transmitting the 2^k possible symbols. Specifically, M-ary orthogonal signaling, e.g., M-ary FSK, is frequently used. Each of the 2^k symbols in the q-ary alphabet is mapped to one of the M = 2^k orthogonal signals. Thus, the transmission of a code word is accomplished by transmitting N orthogonal signals, where each signal is selected from the set of M = 2^k possible signals.
The optimum demodulator for such a signal corrupted by AWGN consists of M matched filters (or cross-correlators) whose outputs are passed to the decoder, either in the form of soft decisions or in the form of hard decisions. If hard decisions are made by the demodulator, the symbol error probability P_M and the code parameters are sufficient to characterize the performance of the decoder. In fact, the modulator, the AWGN channel, and the demodulator form an equivalent discrete (M-ary) input, discrete (M-ary) output, symmetric memoryless channel characterized by the transition probabilities 1 - P_M (a correct symbol decision) and P_M/(M - 1) (each of the M - 1 incorrect decisions). This channel model, which is illustrated in Fig. 8-1-17, is a generalization of the BSC.
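The equivalent M-ary symmetric channel of Fig. 8-1-17 is easy to write down explicitly. The sketch below (an illustration, not from the text) builds its transition matrix from the demodulator symbol error probability P_M.

```python
def mary_symmetric_channel(M, P_M):
    """Transition matrix of the M-ary input, M-ary output symmetric memoryless channel:
    P(j | i) = 1 - P_M when j == i, and P_M / (M - 1) otherwise."""
    off_diag = P_M / (M - 1)
    return [[1.0 - P_M if j == i else off_diag for j in range(M)] for i in range(M)]

# Each row sums to 1, as required of a probability transition matrix.
for row in mary_symmetric_channel(4, 0.1):
    print([round(p, 4) for p in row], "row sum =", round(sum(row), 6))
```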
The performance of the hard-decision decoder may be characterized by the following upper bound on the code word error probability:

P_e \le \sum_{i=t+1}^{N} \binom{N}{i} P_M^{\,i} (1 - P_M)^{N-i}    (8-1-119)
where t is the number of errors guaranteed to be corrected by the code.

When a code word error is made, the corresponding symbol error probability is

P_{es} = \frac{1}{N} \sum_{i=t+1}^{N} i \binom{N}{i} P_M^{\,i} (1 - P_M)^{N-i}    (8-1-120)
FIGURE 8-1-17  M-ary input, M-ary output, symmetric memoryless channel (transition probabilities 1 - P_M and P_M/(M - 1)).
Furthermore, if the symbols are converted to binary digits, the bit error probability corresponding to (8-1-120) is

P_b = \frac{2^{k-1}}{2^k - 1} P_{es}    (8-1-121)
Example 8-1-13

Let us evaluate the performance of an N = 2^5 - 1 = 31 Reed-Solomon code with D_min = 3, 5, 9, and 17. The corresponding values of K are 29, 27, 23, and 15. The modulation is M = 32 orthogonal FSK with noncoherent detection at the receiver.
The probability of a symbol error is given by (5-4-46), and may be expressed as

P_M = \sum_{m=1}^{M-1} (-1)^{m+1} \binom{M-1}{m} \frac{1}{m+1} e^{-m\gamma/(m+1)}    (8-1-122)
where γ is the SNR per code symbol. By using (8-1-122) in (8-1-120) and combining the result with (8-1-121), we obtain the bit error probability. The results of these computations are plotted in Fig. 8-1-18. Note that the more powerful codes (large D_min) give poorer performance at low SNR per bit than the weaker codes. On the other hand, at high SNR, the more powerful codes give better performance. Hence, there are crossovers among the various codes, as illustrated for example in Fig. 8-1-18 for the t = 1 and t = 8 codes. Crossovers also occur among the t = 1, 2, and 4 codes at smaller values of SNR per bit. Similarly, the curves for t = 4 and 8 and for t = 8 and 2 cross in the region of high SNR. This is the characteristic behavior for noncoherent detection of the coded waveforms.

FIGURE 8-1-18  Performance of several N = 31, t-error-correcting Reed-Solomon codes with 32-ary FSK modulation on an AWGN channel (noncoherent demodulation); bit error probability versus SNR per bit (dB).
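The computations behind Fig. 8-1-18 can be sketched as follows (our reconstruction, not the author's program). It chains (8-1-122) into (8-1-120) and (8-1-121); the conversion from SNR per bit γ_b to SNR per code symbol is assumed here to be γ = k R_c γ_b, and P_M is the noncoherent M-ary FSK symbol error probability in the form shown in (8-1-122) above.

```python
from math import comb, exp

def pm_noncoherent_mfsk(M, gamma_s):
    """Symbol error probability of noncoherent M-ary orthogonal FSK, Eq. (8-1-122),
    with gamma_s the SNR per code symbol."""
    return sum((-1) ** (m + 1) * comb(M - 1, m) * exp(-m * gamma_s / (m + 1)) / (m + 1)
               for m in range(1, M))

def rs_mfsk_bit_error(N, K, k_bits, gamma_b):
    """Hard-decision decoding of an (N, K) Reed-Solomon code with 2**k_bits-ary FSK.
    Returns (P_word, P_es, P_b) from Eqs. (8-1-119)-(8-1-121).
    Assumes SNR per code symbol gamma = k_bits * (K / N) * gamma_b."""
    t = (N - K) // 2
    gamma_s = k_bits * (K / N) * gamma_b
    PM = pm_noncoherent_mfsk(2 ** k_bits, gamma_s)
    tail = [comb(N, i) * PM ** i * (1 - PM) ** (N - i) for i in range(t + 1, N + 1)]
    P_word = sum(tail)
    P_es = sum(i * term for i, term in zip(range(t + 1, N + 1), tail)) / N
    P_b = (2 ** (k_bits - 1) / (2 ** k_bits - 1)) * P_es
    return P_word, P_es, P_b

# The four N = 31 Reed-Solomon codes of Example 8-1-13 at an SNR per bit of 8 dB.
gamma_b = 10 ** (8 / 10)
for K in (29, 27, 23, 15):
    print(K, rs_mfsk_bit_error(31, K, 5, gamma_b))
```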
If the demodulator does not make a hard decision on each symbol, but instead passes the unquantized matched filter outputs to the decoder, soft-decision decoding can be performed. This decoding involves the formation of q^K = 2^{kK} correlation metrics, where each metric corresponds to one of the q^K code words and consists of a sum of N matched filter outputs corresponding to the N code symbols. The matched filter outputs may be added coherently, or they may be envelope-detected and then added, or they may be square-law detected and then added. If coherent detection is used and the channel noise is AWGN, the computation of the probability of error is a straightforward extension of the binary case considered in Section 8-1-4. On the other hand, when envelope detection or square-law detection and noncoherent combining are used to form the decision variables, the computation of the decoder performance is considerably more complicated.
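As a toy illustration (ours) of the square-law combining described above, the sketch below forms one metric per candidate code word by summing the square-law detected matched-filter outputs for that word's N symbols and selects the largest. The tiny repetition-code codebook is hypothetical and only meant to keep the example self-contained.

```python
import random

def soft_decision_decode(mf_energies, codebook):
    """Square-law soft-decision decoding: choose the code word whose N symbols
    collect the largest total matched-filter energy.
    mf_energies[pos][sym] is the square-law output for symbol value `sym`
    in code-symbol position `pos`."""
    best_word, best_metric = None, float("-inf")
    for word in codebook:
        metric = sum(mf_energies[pos][sym] for pos, sym in enumerate(word))
        if metric > best_metric:
            best_word, best_metric = word, metric
    return best_word

# Toy example: a length-3 repetition code over a 4-ary alphabet (q**K = 4 code words).
M, N = 4, 3
codebook = [(s,) * N for s in range(M)]
random.seed(1)
energies = [[0.2 * random.random() for _ in range(M)] for _ in range(N)]
for pos in range(N):
    energies[pos][2] += 1.0          # the transmitted symbol was 2 in every position
print(soft_decision_decode(energies, codebook))   # -> (2, 2, 2)
```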
Concatenated Block Codes  A concatenated code consists of two separate codes which are combined to form a larger code. Usually one code is selected to be nonbinary and the other is binary. The two codes are concatenated as illustrated in Fig. 8-1-19.

FIGURE 8-1-19  Block diagram of a communications system employing a concatenated code.

The nonbinary (N, K) code forms the outer code and the binary code forms the inner code. Code words are formed by subdividing a block of kK information bits into K groups, called symbols, where each symbol consists of k bits. The K k-bit symbols are encoded into N k-bit symbols by the outer encoder, as is usually done with a nonbinary code. The inner encoder takes each k-bit symbol and encodes it into a binary block code of length n. Thus, we obtain a concatenated block code having a block length of Nn bits and containing kK information bits. That is, we have created an equivalent (Nn, Kk) long binary code. The bits in each code word are transmitted over the channel by means of PSK or, perhaps, by FSK.

We also indicate that the minimum distance of the concatenated code is d_min D_min, where D_min is the minimum distance of the outer code and d_min is the minimum distance of the inner code. Furthermore, the rate of the concatenated code is Kk/Nn, which is equal to the product of the two code rates.
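A quick sketch (illustrative, with an arbitrarily chosen code pair) collects these relations: block length Nn, dimension Kk, rate equal to the product of the two code rates, and minimum distance d_min D_min.

```python
def concatenated_parameters(N, K, D_min, n, k, d_min):
    """Parameters of the concatenated code formed from an outer (N, K) nonbinary code
    and an inner (n, k) binary code, where k equals the outer symbol size in bits."""
    return {
        "block_length": N * n,            # Nn coded bits
        "dimension": K * k,               # kK information bits
        "rate": (K * k) / (N * n),        # product of the two code rates
        "min_distance": D_min * d_min,    # d_min * D_min, as stated in the text
    }

# Example pairing: a (31, 25) Reed-Solomon outer code (D_min = 7) with a
# (15, 5) binary BCH inner code (d_min = 7).
print(concatenated_parameters(N=31, K=25, D_min=7, n=15, k=5, d_min=7))
```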
A hard-decision decoder for a concatenated code is conveniently separated into an inner decoder and an outer decoder. The inner decoder takes the hard decisions on each group of n bits, corresponding to a code word of the inner code, and makes a decision on the k information bits based on maximum-
