`772
`IEEE TRANSACTIC
Andrew J. Viterbi (S'54-M'58-SM'63) was born in Bergamo, Italy, on March 9, 1935. He received the B.S. and M.S. degrees in electrical engineering from the Massachusetts Institute of Technology, Cambridge, in 1957, and the Ph.D. degree in electrical engineering from the University of Southern California, Los Angeles, in 1962.

While attending M.I.T., he participated in the cooperative program at the Raytheon Company. In 1957 he joined the Jet Propulsion Laboratory where he became a Research Group Supervisor in the Communications Systems Research Section. In 1963 he joined the faculty of the University of California, Los Angeles, as an Assistant Professor. In 1965 he was promoted to Associate Professor and in 1969 to Professor of Engineering and Applied Science. He was a cofounder in 1968 of Linkabit Corporation, of which he is presently Vice President.

Dr. Viterbi is a member of the Editorial Boards of the PROCEEDINGS OF THE IEEE and of the journal Information and Control. He is a member of Sigma Xi, Tau Beta Pi, and Eta Kappa Nu and has served on several governmental advisory committees and panels. He is the coauthor of a book on digital communication and author of another on coherent communication, and he has received three awards for his journal publications.
`
`Burst-Correcting Codes for the Classic Bursty Channel
`
Abstract-The purpose of this paper is to organize and clarify the work of the past decade on burst-correcting codes. Our method is, first, to define an idealized model, called the classic bursty channel, toward which most burst-correcting schemes are explicitly or implicitly aimed; next, to bound the best possible performance on this channel; and, finally, to exhibit classes of schemes which are asymptotically optimum and serve as archetypes of the burst-correcting codes actually in use. In this light we survey and categorize previous work on burst-correcting codes. Finally, we discuss qualitatively the ways in which real channels fail to satisfy the assumptions of the classic bursty channel, and the effects of such failures on the various types of burst-correcting schemes. We conclude by comparing forward-error-correction to the popular alternative of automatic repeat-request (ARQ).
`
`INTRODUCTION
MOST WORK in coding theory has been addressed to efficient communication over memoryless channels. While this work has been directly applicable to space channels [1], it has been of little use on all other real channels, where errors tend to occur in bursts. The use of interleaving to adapt random-error-correcting codes to bursty channels is frequently proposed, but turns out to be a rather inefficient method of burst correction.
`
Paper approved by the Communication Theory Committee of the IEEE Communication Technology Group for publication without oral presentation. Manuscript received May 10, 1971. The author is with Codex Corporation, Newton, Mass. 02195.
Of the work that has gone into burst-correcting codes, the bulk has been devoted to finding codes capable of correcting all bursts of length B separated by guard spaces of length G. We call these zero-error burst-correcting codes. It has been realized in the past few years that this work too has been somewhat misdirected; for on channels for which such codes are suited, called in this paper classic bursty channels, much more efficient communication is possible if we require only that practically all bursts of length B be correctible.
The principal purpose of this paper is tutorial. In order to clarify the issues involved in the design of burst-correcting codes, we examine an idealized model, the classic bursty channel, on which bursts are never longer than B nor guard spaces shorter than G. We see that the inefficiency of zero-error codes is due to their operating at the zero-error capacity of the channel, approximately (G - B)/(G + B), rather than at the true capacity, which is more like G/(G + B). Operation at the true capacity is possible, however, if bursts can be treated as erasures; that is, if their locations can be identified. By the construction of some archetypal schemes in which short Reed-Solomon (RS) codes are used with interleavers, we arrive at asymptotically optimal codes of
either the burst-locating or zero-error type. (The usefulness of RS codes in this situation is seen to be due to their being optimal in a sense similar to that in which optimal burst-correcting codes are optimal.) Finally, we note that the sensitivity to errors in the guard space which characterizes most known burst-locating schemes is avoidable at a minor cost in guard-space-to-burst ratio.
`When we turn to typical real channels, however, the
`superiority of one error-correcting scheme over another
`is much harder to assert. We discuss qualitatively what
`may be expected with various schemes when the channel
`does not fit the idealized model. Finally, we compare for-
`ward error correction to the more widely
`used method
`of automatic repeat-request (ARQ).
`CLASSIC BURSTY CHANNEL
The classic bursty channel is an idealization of the experimental fact that on most channels transmission is poorer at some times than at others. In this paper a classic bursty channel is defined as one having the following properties.
1) The channel (like the girl with the curl) has two modes of behavior: "When she was good, she was very, very good, but when she was bad, she was horrid." We call these two states burst and guard space. In the burst state, channel outputs carry no information about the inputs. In the guard space, we shall initially assume that channel outputs are error-free; later we shall allow some small background error probability p. We further distinguish between the cases where the channel state is unknown at the receiver (the usual case), and where the channel state is known, when bursts may be regarded as erasure bursts. In the latter case we speak of a classic erasure-burst channel.
2) The channel never stays in burst mode for more than B symbols, nor does it ever stay in the guard space mode for fewer than G symbols.
The two-state assumption [2], while artificial, is not a crippling idealization of most actual channels, especially when the possibility of guard space errors is encompassed. It is the second assumption that is the Achilles heel of this model; yet, as we shall see later, the making of this assumption is in a sense unavoidable.
`
CAPACITY THEOREMS
The capacity C of a channel is defined as the maximum continuous rate of transmission for which arbitrarily low error probability is achievable. Its zero-error capacity C_0 is defined as the maximum rate for which zero error probability is achievable. We recall that on all memoryless channels except erasure-type channels, C_0 is strictly less than C; and in fact that usually (on any completely connected channel) C_0 is zero.
It is clear intuitively that, since the classic bursty channel may wipe out B out of every G + B transmitted symbols, its capacity in symbols per transmitted symbol must be bounded by G/(G + B). Furthermore, it is evident that this capacity bound retains its validity even
`when feedback is permitted. An information-theoretic
`proof of these facts is easily constructed.
In this section we shall derive bounds on zero-error capacity under the sole restriction that infinite buffering not be allowed at the encoder. We encourage the reader not to be intimidated by the theorem-proof format, which we have adopted for brevity and for conceptual clarity; the theorems are simple (and old), and the proofs are elementary. We have organized the argument to show that there is a fundamental relationship between optimum codes for the classic bursty channel and the maximum distance separable codes [3], [4], such as the RS codes [5]. We have also centered our attention on burst-erasure correction; not only does this approach lead easily and naturally to the usual burst-error-correction results, but it also clarifies what is going on in burst-locating codes.
We consider two different types of codes. In order to show the relation between these capacity theorems and well-known block code results, we first consider (n, k) block codes, in which k information symbols determine n encoded symbols. (All symbols will be taken as q-ary for some integer q; one notable aspect of the major results is their independence of q.) The rate R of a block code is k/n. Second, in order to show how general these theorems are, we consider any encoder of finite memory, say ν q-ary memory elements, and we allow the number of inputs k(t) accepted and outputs n(t) put out on the channel in t time units to be any monotonic functions of t, providing only that the limit

R = lim_{t→∞} k(t)/n(t)

exists. We call this limit the code rate R, and we call the code an (R, ν) finite-memory code. We then appeal to the following simple lemmas.
Lemma 1: In an (n, k) block code, for any set of k - r code positions, there are at least q^r code words all of which have identical symbols in those positions.
Proof: There are q^k words in the code, but only q^(k-r) possibilities for the symbols in any k - r positions, so at least one possibility must be repeated q^r or more times.
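Lemma 1 is just the pigeonhole principle applied to the projection of the code onto a set of positions. A brute-force check on a toy code (my own illustration, not from the paper) makes this concrete:

```python
# Check Lemma 1 on the binary (3, 2) single-parity-check code, q = 2:
# for every set of k - r positions, at least q**r codewords agree there.
from itertools import combinations, product

q, n, k = 2, 3, 2
code = [(a, b, (a + b) % 2) for a, b in product(range(2), repeat=2)]  # q**k words

for r in range(k + 1):
    for positions in combinations(range(n), k - r):
        counts = {}
        for word in code:
            key = tuple(word[p] for p in positions)   # symbols in those positions
            counts[key] = counts.get(key, 0) + 1
        assert max(counts.values()) >= q ** r, (r, positions)
print("Lemma 1 holds on the toy code")
```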
Lemma 1(a): In an (R, ν) finite-memory code, for any set of k(t) - r - ν positions among the n(t) outputs before time t, there are at least q^r code words all of which leave the encoder memory in the same state at time t, and all of which have identical symbols in those positions.
Proof: There are q^(k(t)) code words of length n(t) and at most q^ν encoder memory states, so there must be at least q^(k(t)-ν) code sequences all of which leave the encoder in some identical state. There are only q^(k(t)-ν-r) possibilities for the symbols in any k(t) - ν - r positions, so at least one possibility must be repeated q^r times among any set of at least q^(k(t)-ν) code words.
The minimum distance d of a block code is defined as the minimum number of positions in which two words differ; we also define the minimum span S of a code as the minimum number of consecutive positions outside of which two words are the same. Trivially d ≤ S. Lemma 1 immediately yields the following corollary.
Corollary 1: In an (n, k) block code, d ≤ S ≤ n - k + 1.
`Proof: By Lemma 1 there are at least two
`words
`which are the same in the first k - 1 positions.
Block codes for which d = n - k + 1 are called maximum distance separable codes. The only known general class of such codes is the RS codes. These are q-ary codes with blocklengths q or less, where q is a prime power [6, pp. 21-29]; hence only the nonbinary RS codes are nontrivial. Singleton [3] refers to constructions which give codes of length q + 1, for q a prime power, and proves that k ≤ q - 1, n - k ≤ q - 1 for any maximum distance separable code.
The class of block codes for which S = n - k + 1 is much greater. The RS codes of course satisfy this equality; but, more generally, any cyclic block code satisfies S = n - k + 1 (because any n - k consecutive erasures can always be cyclically permuted into the check positions).
Now consider erasure correction. A pattern of erasures is called correctible if no two code words have the same symbols in all positions outside the erasure pattern.
Corollary 2: In an (n, k) block code no pattern of more than n - k erasures is correctible.
That is, every erasure to be corrected requires one check symbol. For example, a rate-1/2 block code can correct no more than n/2 erasures.
Theorem 1: Any (R, ν) finite-memory code capable of correcting erasure bursts of length B separated by guard spaces of length G has rate R ≤ G/(G + B). That is, the zero-error capacity of the classic erasure-burst channel is bounded by C_0 ≤ G/(G + B).
Proof: By Lemma 1(a), if there are more than n(t) - k(t) + ν burst symbols in the first n(t) received symbols, then there are at least two code words which are identical in the k(t) - ν - 1 or fewer guard space symbols and which leave the encoder in the same state. These two words therefore cannot be distinguished on the basis of the first n(t) received symbols, and since they both leave the encoder in the same state no information from future received symbols can help to distinguish them. Thus a decoding error will occur for one or the other of these code words if the number N_B(t) of burst symbols in the first n(t) symbols exceeds n(t) - k(t) + ν, or if the burst density N_B(t)/n(t) satisfies

N_B(t)/n(t) > 1 - k(t)/n(t) + ν/n(t).

Let the channel alternate forever between B burst bits and G guard space bits; then as t → ∞, the burst density approaches B/(G + B), while the right side approaches 1 - R for any finite ν, so that if B/(G + B) > 1 - R
there is a t large enough that the inequality is satisfied. Hence we must have B/(G + B) ≤ 1 - R to guarantee zero errors. Q.E.D.
`
`For example, no rate-1/2 finite-memory code can have
`a guard-space-to-burst ratio
`of better than one-to-one.
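To make the mechanics of the proof concrete, here is a small Python illustration (my own, with assumed example values, not from the paper): on a channel that alternates forever between B burst symbols and G guard-space symbols, the running burst density eventually exceeds 1 - R + ν/n(t) whenever B/(G + B) > 1 - R, no matter how large the finite encoder memory ν is.

```python
# Find a time at which the burst density of an alternating burst/guard channel
# exceeds 1 - R + v/n(t); B, G, R, and v below are assumed example values.
B, G = 3, 2          # burst and guard-space lengths, so B/(G + B) = 0.6
R, v = 0.5, 64       # code rate and number of q-ary memory elements

burst_symbols = 0
period = 0
while True:
    period += 1
    burst_symbols += B              # each period contributes B burst symbols
    n = period * (B + G)            # total symbols received so far
    if burst_symbols / n > 1 - R + v / n:
        print(f"after n = {n} symbols, burst density {burst_symbols/n:.3f} "
              f"exceeds 1 - R + v/n = {1 - R + v/n:.3f}")
        break
```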
We now take up error correction. Two patterns of errors are called simultaneously correctible if no two code words differ only in the positions covered by the union of the two error patterns. For if this condition holds, then there is no common received word into which two different code words can be transformed by changing the first code word in some of the positions of the first error pattern and the second in some of the positions of the second; while if it does not hold, there is such a common received word. (Note that not every position in an error pattern is required actually to be in error.) The close relation of error correction to erasure correction is shown by the following lemma.
Lemma 2: Any partition of an uncorrectible erasure pattern results in two disjoint error patterns which are not simultaneously correctible.
Proof: From the definition of an uncorrectible erasure pattern there are two code words identical outside the union of the two error patterns.
`This is the reason why it always takes twice as much
`redundancy to correct errors as erasures. For example,
`in block
`codes we need two check
`symbols for every
`error to be corrected.
Corollary 3: In any (n, k) block code there are two error patterns of ⌊(n - k)/2⌋ + 1 or fewer positions which are not simultaneously correctible; that is, we can guarantee correction of no more than ⌊(n - k)/2⌋ errors.
Note: ⌈x⌉ is the least integer not less than x, and ⌊x⌋ is the greatest integer not greater than x.
Proof: By Corollary 2 there is an uncorrectible erasure pattern of n - k + 1 erasures, which may be partitioned into two subsets of ⌈(n - k + 1)/2⌉ and ⌊(n - k + 1)/2⌋ positions which are not simultaneously correctible, by Lemma 2. The proof is completed by noting that

⌈(n - k + 1)/2⌉ = ⌊(n - k + 2)/2⌋ = ⌊(n - k)/2⌋ + 1.
For example, a rate-1/2 block code can correct no more than n/4 symbol errors. For burst correction, we have the following corollary.
Corollary 4: If some pattern of erasure bursts of length B separated by guard spaces of length G is uncorrectible, then there are at least two patterns of error bursts of length ≤ ⌈B/2⌉ separated by guard spaces of length ≥ G + ⌊B/2⌋ which are not simultaneously correctible.
Proof: Partition the uncorrectible erasure bursts into error bursts of sizes ⌈B/2⌉ and ⌊B/2⌋ separated by guard spaces of size G + ⌊B/2⌋ and G + ⌈B/2⌉, as shown in Fig. 1, and apply Lemma 2.
`
`
Fig. 1. (a) Uncorrectible erasure burst pattern. (b) Two error burst patterns that are not simultaneously correctible.

Consequently we have the following theorem.
Theorem 2: Any (R, ν) finite-memory code capable of correcting all error bursts of length B separated by guard spaces of length G has R ≤ (G - B)/(G + B). That is, the zero-error capacity of the classic bursty channel is bounded by C_0 ≤ (G - B)/(G + B).
Proof: By Corollary 4 a code satisfying the assumption is capable of correcting all erasure bursts of length 2B separated by guard spaces of length G - B. But then Theorem 1 implies that R ≤ (G - B)/(G - B + 2B).
For example, no rate-1/2 finite-memory code can correct all bursts of length B unless they are separated by guard spaces of at least 3B.
Theorem 2 is known as the Gallager bound. Similar theorems were proved by Reiger [7], for linear block codes of length B + G, and by Wyner and Ash [8], for convolutional codes with a decoding constraint length of B + G. Gallager [9, pp. 289-290] first proved the result in general, assuming only finite decoding delay. Our alternate assumption here of finite encoder memory is possibly more to the point, since it shows that the limitations are inherent in any realizable code, apart from the realizability of the decoder. Massey [10] sketched a still more general proof showing that an error-free decision on the entire (perhaps infinite) transmitted sequence on the basis of the entire received sequence was possible only if R ≤ (G - B)/(G + B).
To summarize, we have shown that the capacity of the classic bursty channel is bounded by C ≤ G/(G + B), that the zero-error capacity of the classic erasure-burst channel is bounded by the same quantity, but that the zero-error capacity of the classic bursty channel is bounded by C_0 ≤ (G - B)/(G + B). The difference between signaling at arbitrarily low probability of error and at zero probability of error on the classic bursty channel can therefore be quite large; the guard-space-to-burst ratio must exceed

G/B ≥ (1 + R)/(1 - R)

for zero error, but only

G/B ≥ R/(1 - R)

for arbitrarily low error. For R = 1/2, for example, G ≥ 3B for zero error, whereas G ≥ B for arbitrarily small error. We shall see in the next section that these bounds can be effectively achieved when G and B are large and there are no errors in the guard space.

ARCHETYPAL CODING SCHEMES
We shall now exhibit some coding schemes which approximately meet the bounds of the previous section when the guard space is error-free. These schemes are offered as theoretical archetypes of various classes of schemes of practical interest, rather than as practical techniques directly applicable to real channels. We do feel that they bring out clearly the important issues in burst correction.
It is evident from intuitive capacity arguments that any code must introduce constraints over a number of channel symbols of the order of B + G, since only over this time span can we be sure of channel behavior not too much worse than average.
Interleaving is the most obvious method of obtaining long code constraint lengths. Sophisticated designers have commonly avoided it, since the usual interleaving schemes proposed by unsophisticated designers are rather poor burst correctors. However, it is still more sophisticated to observe (with [8]) that there is nothing objectionable per se in schemes which combine short codes with interleaving, as long as the decoder operates sensibly.
The usual type of interleaver is a block interleaver, in which, for example, bits are laid down in the rows of a B × N matrix, and read out from the columns. In this paper we shall use a somewhat simpler and more effective type of interleaver, which we have called [11] a periodic (or convolutional) interleaver. (Similar interleavers were independently proposed by Cowell and Burton [12] and Ramsey [13].) Schematically, as illustrated in Fig. 2, symbols to be interleaved are arranged in blocks of N (by a serial/parallel conversion, if necessary). The ith symbol in each block is delayed by (i - 1)NB' symbol times through an (i - 1)B' stage shift register clocked once every N symbol times, where B' = B/N. (A time unit thus corresponds to the transmission of a block of N symbols.) Output bits may be serialized for channel transmission. At the receiver, groups of N symbols are reblocked, and the ith symbol in each block is delayed by (N - i)NB' symbol times through an (N - i)B' stage shift register.
We call this a B × N interleaver. Correspondingly there exists a similar but inverse B × N deinterleaver, also illustrated in Fig. 2. The combination has the following properties.
1) All symbols receive a total delay of (N - 1)B' time units, or N(N - 1)B' = (N - 1)B symbol times (plus the channel delay).
`
`
`
`776
`
`
`
` BxN DEINTERLEAVER
`
`
`
`
`
`
`
` INTERLEAVER
`
`c
`
`C
`r ”
`N
`
`I
`I
`
`BIN
`
`(N-i) 8’
`
`IEEE TRANSACTIONS O N COMMUNICATIONS TECHNOLOGY, OCTOBER 1971
`
`7- i
`
`0 -
`
`2
`
`Fig. 3. Appearance of bursts of B blocks separated by guard
`spaces of ( N - 1)B’ blocks in N deinterleaver output, streams.
`
`Fig. 2. Periodic
`
`interleaver and corresponding
`( B E B / N ) .
`
`deinterleaver
`
`t
`
2) The memory requirements at transmitter and receiver are N(N - 1)B'/2, or N(N - 1)B' = (N - 1)B total.
3) A single channel burst affecting B' or fewer blocks (B - N + 1 or fewer symbols) passes through the deinterleaver in such a way as to affect only one of the N deinterleaver output streams at a time. See Fig. 3. Repeated bursts separated by guard spaces of (N - 1)B' or more blocks ((N - 1)B + N - 1 symbols) also affect only one of the N output streams at a time.
4) A channel burst affecting kB' or fewer blocks affects no more than k of the N deinterleaver output streams at a time. Repeated bursts separated by guard spaces of (N - k)B' or more blocks affect only k of the output streams at a time. See Fig. 4.
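As an aside, the delay-line structure above is easy to express in software. The following minimal Python sketch (names and conventions are my own assumptions, not the paper's) models each stream's shift register as a FIFO clocked once per block of N symbols, and checks property 3: a channel burst confined to B' consecutive blocks touches at most one symbol in any deinterleaved block.

```python
from collections import deque

class PeriodicInterleaver:
    """B x N periodic interleaver: stream i is delayed i*Bp blocks (0-based i)."""
    def __init__(self, N, Bp, fill=0):
        self.regs = [deque([fill] * (i * Bp)) for i in range(N)]

    def push_block(self, block):
        out = []
        for reg, sym in zip(self.regs, block):
            reg.append(sym)
            out.append(reg.popleft())
        return out

class PeriodicDeinterleaver:
    """Inverse: stream i is delayed (N-1-i)*Bp blocks, equalizing total delay."""
    def __init__(self, N, Bp, fill=0):
        self.regs = [deque([fill] * ((N - 1 - i) * Bp)) for i in range(N)]

    def push_block(self, block):
        out = []
        for reg, sym in zip(self.regs, block):
            reg.append(sym)
            out.append(reg.popleft())
        return out

# Demonstration of property 3: erase Bp consecutive channel blocks and observe
# that each deinterleaver output block contains at most one erased symbol.
N, Bp = 4, 3
il, dl = PeriodicInterleaver(N, Bp), PeriodicDeinterleaver(N, Bp)
hits_per_block = []
for t in range(40):
    block = [(t, i) for i in range(N)]       # tag each symbol by (time, stream)
    channel = il.push_block(block)
    if 10 <= t < 10 + Bp:                    # a burst wiping out Bp whole blocks
        channel = ["X"] * N
    out = dl.push_block(channel)
    hits_per_block.append(out.count("X"))
assert max(hits_per_block) <= 1
print(hits_per_block)
```

Running the same loop with a burst of k*Bp blocks shows at most k erased symbols per output block, which is property 4.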
The unsophisticated approach would now be to let the input symbols in any one block be a code word from a block code of length N capable of correcting a given number of symbol errors. For example, the rate-1/2 binary (24, 12) Golay code corrects up to 3 bit errors. With this code and a B × 24 interleaver we can correct all bursts of length approximately 3B separated by guard spaces of approximately 21B. This 7-to-1 guard-space-to-burst ratio is far inferior to even the 3-to-1 ratio required for zero-error capacity.
We should note, however, that if the location of bursts can be detected, then use of a cyclic symbol-erasure-correcting code is perfectly respectable. For example, the (24, 12) Golay code can correct any 12 cyclically consecutive erasures. We see that with this code and a B × 24 interleaver we can correct all bursts of length approximately 12B separated by guard spaces of approximately 12B, which is the best we can hope for. The reason this technique works well for burst-erasure correction but not for burst-error correction is that a cyclic binary code achieves the bound S ≤ n - k + 1 but generally falls far short of the bound d ≤ n - k + 1.
Fig. 4. Appearance of bursts of kB' blocks separated by guard spaces of (N - k)B' blocks in the N deinterleaver output streams.

These observations lead us to look for a maximum distance separable code for burst-error correction. Let q be a prime power and let b be an integer such that q^b ≥ N; then there exists an RS code of length N with supersymbols consisting of b q-ary symbols. Then on each of K input streams we take consecutive segments of b symbols as the information supersymbols in an (N, K) RS code, and generate N code supersymbols, which then form the input streams to the interleaver. At the decoder we perform error-correction on the N deinterleaver outputs after reblocking into supersymbols, as illustrated in Fig. 5.
We can correct up to (N - K)/2 errors with this code; therefore if we use B × N interleavers we can correct bursts of length approximately (N - K)B/2 separated by guard spaces of approximately (N + K)B/2. (The approximation comes from the blocking of input data into code blocks of length bN symbols, and clearly becomes insignificant if B is substantially larger than bN.) Hence we obtain guaranteed burst correction with a guard-space-to-burst ratio of nearly (N + K)/(N - K) = (1 + R)/(1 - R), in agreement with the zero-error-capacity bound for the classic bursty channel. In other words, for B >> N log_q N, there is a code of rate R = K/N which meets the bound, so the bound is asymptotically tight. (Burton has recently come upon a similar scheme, with N - K = 2, from a different direction [14]. Peterson [15, pp. 198-199] suggested using very long noninterleaved RS codes for burst correction; such codes are asymptotically optimum but more complex and less related to other burst-correction schemes than those described here.)
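As a quick numerical check (the parameter values below are assumed for illustration only), the guaranteed-correction figures above can be tabulated directly:

```python
# Guaranteed burst correction of an (N, K) RS code behind a B x N interleaver,
# using the approximate expressions derived above; N, K, B are assumed values.
N, K, B = 32, 16, 10_000
R = K / N
burst = (N - K) * B // 2      # correctable burst length, ~ (N - K)B/2 symbols
guard = (N + K) * B // 2      # required guard space,     ~ (N + K)B/2 symbols
print(burst, guard, guard / burst, (1 + R) / (1 - R))
# prints: 80000 240000 3.0 3.0 -- the ratio matches (1 + R)/(1 - R)
```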
In exactly the same way, the use of an erasure-correcting (N, K) RS code with a B × N interleaver on the classic erasure-burst channel succeeds in correcting all erasure bursts of length approximately (N - K)B separated by guard spaces of approximately KB, for a guard-space-to-burst ratio of nearly K/(N - K) = R/(1 - R), which is the zero-error-capacity bound for the classic erasure-burst channel.
Fig. 5. Use of an (N, K) RS code with a B × N interleaver.

Fig. 6. Zegers' time-diversity scheme.

Since the erasure zero-error-capacity bound equals the capacity bound, this suggests that we could approach the capacity of the classic bursty channel with a similar scheme if the decoder could only tell with high probability where the bursts were. But this is really not so difficult. Again we suppose an (N, K) RS code used with a B × N interleaver. We suppose initially there has been no burst for some time. When a burst begins, as soon as an error appears in the bottommost stream it is detected, since an (N, K) RS code can detect up to (N - K) symbol errors. At this point the start of the burst has been located with probability (1 - q^(-m)) to be within the
last m blocks (if we assume that the probability that no error occurs in any burst-mode symbol is 1/q). If the burst is assumed to cover no more than (N - K)B' - m blocks, then we know how it will appear at the deinterleaver outputs to within m blocks of uncertainty: namely, in the pattern illustrated in Fig. 4. Hence, by taking the cross-hatched positions as erasures, we can correct the entire burst. Furthermore, after a guard space of KB' + m blocks, we can start all over again. In summary, we can correct bursts of length about (N - K)B - mN separated by guard spaces of length about KB + mN with error probability less than q^(-m) per burst. For B large, the probability of failing to correct a burst may be made arbitrarily small, while the guard-space-to-burst ratio is held near the optimal K/(N - K) = R/(1 - R). This suggests that the capacity of the classic bursty channel is for all practical purposes indeed G/(G + B) for G and B large. (This scheme does not constitute a proof, since for fixed B and G it cannot give us arbitrarily low error probabilities.)
To recapitulate: with high probability we can easily determine the approximate location of error bursts; then we can deal with the easily-handled erasure-burst channel. The amount of additional information which we must extract from the received symbols to locate the burst becomes negligibly small compared to the amount (B symbols) needed to correct the burst as B becomes large.
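The bookkeeping for "taking the cross-hatched positions as erasures" is also simple. The sketch below is my own construction under the same 0-based indexing as the interleaver sketch given earlier, not the paper's exact prescription: from the block index d at which the first error was detected in the undelayed stream and the uncertainty m, it computes the output-block window to erase in each deinterleaved stream, so that each RS codeword sees at most N - K erased supersymbols.

```python
def erasure_windows(d, m, N, K, Bp):
    """Per output stream i (delayed (N-1-i)*Bp blocks at the deinterleaver),
    return the (first, last) output-block indices to treat as erasures,
    assuming the burst started in the last m blocks and lasts at most
    (N - K)*Bp - m blocks."""
    max_burst = (N - K) * Bp - m
    windows = []
    for i in range(N):
        shift = (N - 1 - i) * Bp
        windows.append((d - m + 1 + shift, d + max_burst - 1 + shift))
    return windows

# Example (assumed values): the windows are staggered by Bp from stream to
# stream and each spans fewer than (N - K)*Bp blocks, so any single output
# block (one RS codeword) falls inside at most N - K of them.
print(erasure_windows(d=100, m=2, N=8, K=4, Bp=5)[:3])
```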
`we
`To illustrate how simple these ideas are, suppose
`code.
`desire an asymptotically optimum rate-1/2 binary
`case the (2, 1) R S code degenerates into thc
`In this
`code, and the B X N interleaver to
`binary repetition
`simple time diversity of order B, as illustrated in Fig. 6.
`At the decoder the prescription is as follows: starting
`from a quiet state, watch the upper and
`lower channel
`outputs; whenever they f