United States Patent [19]
Piret

[11] Patent Number: 4,747,104
[45] Date of Patent: May 24, 1988

[54] DATA TRANSMISSION SYSTEM EMPLOYING A COMBINATION OF BLOCK ENCODING AND CONVOLUTION ENCODING FOR ERROR PROTECTION

[75] Inventor: Philippe M. O. A. Piret, Lasne, Belgium

[73] Assignee: U.S. Philips Corporation, New York, N.Y.

[21] Appl. No.: 871,013

[22] Filed: Jun. 5, 1986

[30] Foreign Application Priority Data
     Jun. 14, 1985 [EP] European Pat. Off. ......... 85200939.8

[51] Int. Cl.4 .............................................. G06F 11/10
[52] U.S. Cl. .......................................... 371/39; 371/44
[58] Field of Search ....................... 371/37, 39, 40, 43, 44, 45

[56] References Cited

U.S. PATENT DOCUMENTS

3,891,959  6/1975  Tsuji et al. .................. 371/43
3,988,677 10/1976  Fletcher et al. ............... 371/37 X
4,038,636  7/1977  Doland ........................ 371/37
4,178,550 12/1979  Acampora et al. ............... 371/37
4,240,156 12/1980  Doland ........................ 371/43
4,276,646  6/1981  Haggard et al. ................ 371/37
4,355,392 10/1982  Doi et al. .................... 371/43 X
4,369,512  1/1983  Brossard et al. ............... 371/43
4,536,878  8/1985  Rattlingourd et al. ........... 371/43

Primary Examiner-Charles E. Atkinson
Attorney, Agent, or Firm-Thomas A. Briody; Jack Oisher
[57] ABSTRACT

A data transmission system for providing error protection of transmitted data words. The less significant bits of a data word are, by means of matrix multiplication, encoded into a first redundant proto-code word, and the more significant bits are, by means of further matrix multiplication and delay by different word recurrence intervals, encoded into a set of further redundant proto-code words. A composite code word is formed by means of a modulo-2 addition of the proto-code words, so that for the less significant data bits a block code is realized, while for the more significant data bits a convolutional encoding is realized. In the decoding, the more significant bits of the composite code word are decoded by means of Viterbi decoding, the Viterbi metric being determined from the deviation between the reconstructed contribution of the less significant bits to such code word and the actually received contribution of such bits to such code word.

7 Claims, 3 Drawing Sheets
[Front-page drawing: excerpt of FIG. 1, the elementary block diagram of the encoder system (encoders marked X, data word bit inputs, element numerals 24, 28, 30).]
[Drawing Sheet 1 of 3 (U.S. Patent, May 24, 1988, 4,747,104): FIG. 1, elementary block diagram of the encoder system (encoders 24, 26, 28 marked X and Y, delay elements 30, 32, 36, data word bit inputs, output 40); FIGS. 3a and 3b, the second set of code generator matrices (binary matrices X', Y', Z4 and the combined matrix G4(D)).]
[Drawing Sheet 2 of 3 (U.S. Patent, May 24, 1988, 4,747,104): FIGS. 2a, 2b and 2c, the first set of code generator matrices Z1, Z2 and Z3 (binary entries); FIG. 2d, the combined code generator matrix Gi(D), composed of the block X + DY + D²X and the block Zi.]
[Drawing Sheet 3 of 3 (U.S. Patent, May 24, 1988, 4,747,104): FIG. 4, elementary block diagram of the encoder/decoder system (encoder 47, code word register 52, microcomputer, data memory, register 58, bus 62); FIG. 5, elementary flow diagram of the decoding process (start block 70; block 72, reception of v(t); block 74, b'(t)(s) = dec[v(t) - c(t)(s)] and weight wt(s) for s = 0..63; block 76, W1(a(t)) = min[W0(a(t-1)) + w(a(t-1)→a(t))], with preferred predecessor p(a(t)) and b(a(t)) = b'(t)(s); then t = t + 1).]
DATA TRANSMISSION SYSTEM EMPLOYING A COMBINATION OF BLOCK ENCODING AND CONVOLUTION ENCODING FOR ERROR PROTECTION

BACKGROUND OF THE INVENTION

The invention relates to a system for transmitting data words from a transmitting station to a receiving station, wherein redundancy bits are added for protection against bit errors arising in the transmission. With increasing bit rates, and generally noisy transmitting media, various error protection systems have been proposed.
DESCRIPTION OF THE RELATED ART

An example of such a system is described in U.S. Pat. No. 4,312,070 to Coombes et al, which relates to a digital data processing system for use in a mobile trunked dispatch communication system. The system of the present invention is generally applicable to data communication systems wherein communication between stations is subject to burst error phenomena, and each transmitted data word comprises more significant bits and less significant bits. In such a situation a higher effective level of reliability can be attained if the more significant bits are better protected than the less significant bits. Such a situation typically occurs in audio transmission, notably speech transmission, where errors in the less significant bits make the speech less agreeable or somewhat difficult to understand, but an error in a more significant bit can easily make the speech completely incomprehensible. The transmitting medium may be a broadcast medium, or also, for example, a data communication line, or a storage medium, such as magnetic tape.

A burst refers to a series of channel bits wherein the error probability is relatively high. Outside a burst the error probability is relatively low, and the chance for an error to occur is usually independent of any other non-burst errors. Therefore, these errors are called random errors. A burst, which is subject to a high error rate, is caused by only partially understood phenomena which during an interval of time degrade the channel reliability, such as thunderstorms, or movement of the transmitting or receiving station. Especially in mobile radio systems the presence of high-rise buildings also may influence the channel properties. The burst error rate may be, for example, 10⁻¹, the random error rate 10⁻³ or less.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide improved encoding which, among other things, provides a high coding efficiency (high rate), allows for easy decoding, gives an increased protection level of more significant bits with respect to less significant bits in a data word, and furthermore allows for matching the error correction capability to the expected burst length. Such object is realized according to the invention by a system for transmitting data words each comprising a sequence of bits of successive significance levels, such system comprising a transmitting station and a receiver station interconnected by a transmitting medium, the transmitting station comprising an encoder system for encoding data words by means of redundancy bits. Such encoder system comprises a first encoder which by matrix multiplication block encodes a set of less significant bits of each data word into a first proto-code word, and also a second encoder which by further matrix multiplication block encodes the remaining, more significant, bits of each data word into a set of n further proto-code words. The second encoder further comprises delay elements for imparting respective different delays to the further proto-code words relative to the recurrence times of successive data words, and also comprises modulo-two adding means for bitwise adding the first proto-code word and n further proto-code words, the latter being derived from as many different data words, thereby convolution encoding the more significant bits. The output of the modulo-two adding means is supplied to the transmitting medium for transmission to the receiver station.

In certain preferred embodiments the number n may be 3, and from each data word the two most significant bits are encoded by what effectively is a convolutional type encoding. It was found that in this way appreciable error protection could be obtained at only a limited cost of apparatus requirements.

The invention also relates to a transmitter and a receiver for use in such systems. The error protection in a mobile radio system can be provided in both directions, i.e. from a mobile station to a fixed base station, and also in the reverse direction. However, in the latter situation, other factors can also increase the communication reliability, such as increasing the power level transmitted or the antenna configuration. These two measures often are not feasible for applying to a mobile station. The decoding at the receiver is preferably effected by Viterbi decoding, which is a special kind of maximum likelihood decoding. The Viterbi metric denotes the total incurred amount of deviation from the "true" solution, and which thus must be minimized. It is determined from the deviation between the reconstructed contribution stemming from the less significant bits of the code word, and the actually received contribution from those less significant bits.
BRIEF DESCRIPTION OF THE FIGURES

The invention is further explained by reference to the following Figures, in which:

FIG. 1 is an elementary block diagram of an encoder system;

FIGS. 2a-2d show a first set of code generator matrices;

FIGS. 3a-3b show a second set of code generator matrices;

FIG. 4 is an elementary block diagram of an encoder/decoder system;

FIG. 5 is an elementary flow diagram of the decoding process.
DESCRIPTION OF A PREFERRED EMBODIMENT OF AN ENCODER SYSTEM

In the following, only the encoding and decoding aspects of the system are considered. The remainder, such as modulating and demodulating the code bits to derive channel bits, the physical realization of the channels, the various fields of use of the invention besides mobile telephony, and the construction of the stations and their respective data processing subsystems are ignored, as not relating to the invention proper. FIG. 1 is an elementary block diagram of an encoder system for a (9, 7) convolutional code. Input 20 receives the seven data bits of a data word in parallel.
Element 22 is an encoder for deriving a block proto-code word, which becomes part of a convolutional code word. In this block code, the five less significant bits of the input seven bit data word are encoded by means of matrix multiplication to form a nine bit proto-code word. In itself, matrix multiplication is a conventional technique. Moreover, there are only 32 = 2⁵ different values of the five data bits and therefore a five bit input, nine bit output programmable logic array would be sufficient. Other types of read-only storage arrays would function equally well. The two most significant bits are entered into encoder 24, which has a two bit wide input and a nine bit wide output. Similar technology can be used as for encoder 22. However, due to the very low number of input leads, encoder 24 can also be realized by known wired logic or semicustom cellular logic circuits.

Elements 24, 26, 28 are further encoders which by matrix multiplication generate further nine bit proto-code words from the two most significant bits of the input data word. Notably, element 28 has exactly the same algorithmic structure as element 24; note the indication X in both. Specifically, it has been found that this equality is an advantageous realization; however, it is not a necessary requirement.

Elements 30, 32, 34, 36 are delay elements. In their simplest form the delay incurred is equal to the interval between the presentation of successive data words to input 20. Therefore, the coexistent proto-code words at the outputs of elements 24, 26, 28 relate to data words of zero, one, and two, respectively, positions earlier in the sequence of data words. Convolutional encoding of these proto-code words of nine bits each is realized by applying them to inputs of an EXCLUSIVE-OR element 38. Here, all bits of corresponding significance levels in the proto-code words are added modulo-2, to produce a final nine bit code word on output 40. Parallel to serial conversion at output 40 has been omitted for brevity.
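By way of illustration only, the encoding structure of FIG. 1 can be sketched as follows. The matrices X, Y and Z1 used here are random placeholders, since the actual entries are given only in FIGS. 2a-2d; the function name and the delay-line bookkeeping are likewise illustrative and not part of the patented apparatus.

```python
# Sketch of the FIG. 1 encoder structure; X, Y, Z1 below are placeholders,
# not the matrices of FIGS. 2a-2d.
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(2, 9))    # stands in for encoders 24/28
Y = rng.integers(0, 2, size=(2, 9))    # stands in for encoder 26
Z1 = rng.integers(0, 2, size=(5, 9))   # stands in for encoder 22

def encode_stream(data_words):
    """data_words: iterable of 7-bit tuples (a1..a7), a1 and a2 most significant."""
    msb_history = [(0, 0), (0, 0)]               # delay line: MSB pairs at t-1, t-2
    for word in data_words:
        msb = np.array(word[:2])                 # two most significant bits
        lsb = np.array(word[2:])                 # five less significant bits
        p0 = msb @ X                             # proto-code word, no delay (24)
        p1 = np.array(msb_history[0]) @ Y        # one word recurrence delay (26, 30)
        p2 = np.array(msb_history[1]) @ X        # two word recurrence delays (28, 32/36)
        pb = lsb @ Z1                            # block code proto-code word (22)
        u = (p0 + p1 + p2 + pb) % 2              # modulo-2 addition (element 38)
        msb_history = [tuple(msb), msb_history[0]]   # shift the delay line
        yield u                                  # nine-bit code word on output 40

# Example use:
# for cw in encode_stream([(1, 0, 1, 1, 0, 0, 1), (0, 1, 0, 0, 0, 1, 1)]):
#     print(cw)
```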
An alternative set-up to FIG. 1 would be to combine encoders 24, 28 (since they implement identical matrix multiplications) and to replace the delay elements 32, 36 by a corresponding delay introduced at the output of the encoder 24. This represents a trade-off between leaving out encoder 28 and delay elements 32, 36, versus introducing a then necessary delay of a further proto-code word (nine bits) over two word intervals.

In FIG. 1 the operation of the delay elements and possibly of the further elements therein may be synchronized by a clock system. For brevity, this has not been shown.
DESCRIPTION OF EXEMPLARY CODES

FIGS. 2a-2d show a first set of code generator matrices. FIG. 2a shows the matrices X, Y and Z1 implemented in encoders 24/28, 26 and 22 of FIG. 1. FIG. 2d shows the combined code generator matrix which results in the final code word at output 40. The dimensions of the combined generator matrix are seven rows of nine columns. The "D" symbolizes the delay operator, with a length of one data word recurrence time. The powers of D symbolize delays by one and two data word recurrence times, respectively, realized by delay elements 30-36. If all delays were doubled, the exponents of D would be doubled.
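Written out under the convention that a data word is a row vector multiplying the generator matrix (an assumption made here for illustration; the entries of X, Y and Zi appear only in the drawing figures), the combined generator matrix and the resulting code word take the form

$$G_i(D) = \begin{pmatrix} X + D\,Y + D^2 X \\ Z_i \end{pmatrix}, \qquad u(t) = \big(a_1(t)\ a_2(t)\big)\,X + \big(a_1(t-1)\ a_2(t-1)\big)\,Y + \big(a_1(t-2)\ a_2(t-2)\big)\,X + \big(a_3(t)\ \cdots\ a_7(t)\big)\,Z_i \pmod 2,$$

where a1(t) and a2(t) are the two most significant bits of the data word presented at time t and all additions are modulo 2.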
Next the minimum distance profile is presented, which for the less significant bits is (3 0 ...). The first integer should always be non-zero and represents the minimum distance for the individual code word. Thus, the code is single bit error correcting. The further zeroes indicate that upon erasure of a code word, no further information on these less significant bits exists. The minimum distance profile for the more significant bits is (6 4 2 0 ...). The first integer is the minimum distance if no erased code word occurs in a sequence of three successive code words. Thus the code is double bit error correcting and triple bit error detecting. If one code word in a sequence of three words is erased, the minimum distance is 4: single bit error correcting, double bit error detecting. If two code words of a sequence of three are erased, the minimum distance is 2: single bit error detection. If three or more successive code words are erased, no further information on these more significant bits exists.
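These distances can be checked by exhaustive search, since for a linear code the minimum distance equals the minimum weight of a nonzero code word. The sketch below is purely illustrative and takes an arbitrary generator matrix as argument rather than the particular matrices of FIG. 2.

```python
# Sketch: minimum Hamming distance of a linear block code with a k x n
# generator matrix over GF(2); for a linear code this is the minimum
# weight over all nonzero code words.
from itertools import product
import numpy as np

def minimum_distance(G):
    k, n = G.shape
    best = n
    for bits in product((0, 1), repeat=k):
        if not any(bits):
            continue                              # skip the all-zero data word
        weight = int(((np.array(bits) @ G) % 2).sum())
        best = min(best, weight)
    return best

# A code of minimum distance d corrects floor((d - 1) / 2) bit errors;
# d = 3 gives single bit error correction, d = 6 gives double bit error
# correction and triple bit error detection, as stated above.
```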
The improved error protection of the more significant bits allows for an additional feature to be realized by the invention in that these more significant bit positions may be used to transfer data bits, for example as derived from calculations or digital coding of characters. In that case the less significant bits are preferably represented by trailing zeroes or other valued non-significant bits.

FIG. 2b shows a further generator matrix Z2 that can be used in lieu of matrix Z1 of FIG. 2a, the matrices X and Y retaining their respective functions. The first row of Z2 is produced by adding the first and third rows of matrix Z1. The second row of matrix Z2 is produced by adding the second and third rows of matrix Z1. The third and fourth rows of matrix Z2 are equal to the fourth and fifth rows, respectively, of matrix Z1. This effective suppressing of a row in a generator matrix is conventionally called "expurgation" of a code. In this way a (9, 6) code is realized. For the more significant bits the same minimum distance profile is realized as for the earlier code. For the less significant bits a minimum distance of 4 is realized now: single bit error correction, double bit error detection.

FIG. 2c shows a further generator matrix Z3 that can be used in lieu of matrices Z1, Z2 of the earlier Figures. The first five columns of Z3 are identical to the first five columns of Z1. The sixth column of matrix Z3 is produced by adding and inverting all columns six through nine of matrix Z1. The seventh and eighth columns of matrix Z3 are identical to the eighth and seventh columns of matrix Z1, respectively. Furthermore, the first column of the matrix X + DY + D²X is left out to get G3 in the same way as G1, G2 herebefore. This leaving out of a column is conventionally called "puncturing" of a code. In this way an (8, 7) code is defined by G3. The minimum distance profile for the more significant bits is (3 2 1 0 ...): this gives single bit error correction, double bit error detection, single bit error detection, and no error detection, respectively. However, even with two code words out of three erased, the data content of the more significant bits can be salvaged if no further error occurs. For the less significant bits the minimum distance is (2 0 ...): single bit error detection.
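As an illustration of the two constructions named above (expurgation and puncturing), the row combination and the column deletion can be written as follows; Z1 here is a random stand-in, since its actual entries appear only in FIG. 2a.

```python
# Illustrative only: Z1 is a placeholder for the 5 x 9 matrix of FIG. 2a.
import numpy as np

Z1 = np.random.default_rng(1).integers(0, 2, size=(5, 9))

# "Expurgation" (FIG. 2b): Z2 has one row fewer than Z1; its first row is
# row 1 + row 3 of Z1, its second row is row 2 + row 3, and its last two
# rows are rows 4 and 5 of Z1, all modulo 2.
Z2 = np.array([
    (Z1[0] + Z1[2]) % 2,
    (Z1[1] + Z1[2]) % 2,
    Z1[3],
    Z1[4],
])

# "Puncturing" (FIG. 2c and G3): one column of the combined generator matrix
# is left out, shortening every code word by one bit.
def puncture(G, col=0):
    return np.delete(G, col, axis=1)
```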
FIGS. 3a, 3b show a second set of code generator matrices. Herein, matrix Z4 is derived from matrix Z1 by omitting both the first row and the first column. Matrix X' is derived from matrix X by omitting the first column and inverting the second row. Matrix Y' is derived from matrix Y by omitting the first column and by inverting both rows. The matrix Z'(D) is derived from matrices X', Y' in the same way as matrix Z(D) from matrices X, Y. The (8, 6) code generated by matrix G4(D) is of course generated by puncturing.
The (8, 6) code so generated has therefore the same minimum distance profile as the (9, 7) code described earlier; the latter having a higher efficiency (rate).

The respective codes explained herebefore can be generated by similar apparatus to that shown in FIG. 1, the differences being represented by the entries in the generating matrices. Furthermore, the codes disclosed are intended to be exemplary. It should be clear that other combinations of block codes and convolution codes can be used to realize the advantageous unequal error protection level in a data stream as described. Therein, data rate, code word length, and amount of protection realized should be matched to the specific requirements of the case. Notably, the number of terms in the first row of FIG. 2d should be higher for better protection of the more significant bits against longer error burst lengths.
GENERAL DESCRIPTION OF DECODING

The preferred decoder to be used in the receiver of the invention is of the general Viterbi decoder type. A tutorial paper thereon has been published by G. David Forney, Proceedings of the IEEE, Vol. 61, No. 3, March 1973, pages 268-278. Applications of Viterbi decoding are, in addition to error correcting convolutional coding, also: interference between neighbouring data symbols in an information stream, continuous-phase frequency shift keying (FSK) and text recognition. Each data word represents a path in the so-called "trellis" of Viterbi, said path having a beginning and an end. The path consists of a number of intervals and each interval has an associated partial metric. The sum of the partial metrics of a path constitutes the metric or general attribute number, and the intention is to find the path with the lowest value of the overall metric. In principle the path progresses together with the incoming stream of channel or code bits, so that continually new intervals have to be taken into account. In this progress paths may split and also reunite; in the latter case only the lowest one of the associated metric values is considered further. The Viterbi decoder is therefore an example of a "maximum likelihood decoder", which tries to find the most probable information content of the code stream received.

For evaluating the codes used herein the following property is defined: if the degree of the j-th row of an encoder matrix G(D) is called mj, then the complexity M(G(D)) is the row-wise sum of these degrees. In the elementary cases considered herebefore, the highest degree of D was always equal to 2, and therefore, the complexity of all encoder matrices is equal to 4. This parameter M is important because the associated Viterbi decoder has a hardware complexity that is strongly related to 2^M(G(D)). Thus for each increase of M by one, the necessary hardware roughly doubles.
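As a worked instance of this count for the codes above (the tabulation below is only an illustration of the definition just given):

```python
# For the (9, 7) code of FIGS. 2a/2d: the two convolutional rows
# X + D*Y + D^2*X have degree 2 in D, the five block-code rows (Z1) degree 0.
row_degrees = [2, 2, 0, 0, 0, 0, 0]
M = sum(row_degrees)          # complexity M(G(D)) = 4
trellis_states = 2 ** M       # = 16 states per time instant in the decoder
```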
Now, FIG. 4 gives an elementary block diagram of an encoder/decoder system, wherein the encoder is only shown as block 47. On input 45 user data are input in the form of data words. At output 49 a sequence of code words is produced. Between output 49 and decoder input 50 the transmitting medium 48 is present. Now, for the decoder, in view of the extensive amount of back-tracking in a Viterbi process, advantageously a programmed microcomputer is used. Furthermore, as explained supra, the decoding often need only be effected in a base station and the incurred costs of such a complicated arrangement are relatively small. The code considered is the one of FIG. 2a.
The nine bit code words are received on input 50. Serial-to-parallel conversion has been omitted for brevity. Code words are transiently stored in register 52. Element 54 is a data processing element, for example, a stored program microcomputer. Element 56 is a local memory for storing all kinds of intermediate information. Element 58 is a register for transiently storing a decoded data word. These are presented to a user on output 60. Digital to analog conversion, e.g. of speech data, has been omitted for brevity. Element 62 is a bus, interconnecting elements as indicated for routing data between those elements.

Generally, the decoding process is preferably executed as follows. First, the more significant data bits are decoded by Viterbi decoding. Thus, the Viterbi trellis is built up completely as based on the respective possible, but unknown, values of those data bits. Next, the Viterbi metric is determined as the weight of the deviation between the reconstructed contribution of the less significant data bits to the code word, and the actually received contribution therefrom. The optimized Viterbi metric, determining the path to be followed, gives the preferred predecessor to the actual state, and the sequence of preferred predecessors will finally yield the predecessor giving the lowest value of the accumulated weights. Finally, after the most likely contents of the more significant data bits are retrieved, the less significant data bits are decoded, and if necessary and considered feasible, corrected, while in certain cases an additional signalization of remaining errors may occur.
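In the notation of FIG. 5 and of the specific description below, this can be summarized as follows, where v(t) is the received word, c(t)(s) the convolutional contribution associated with transition s, dec_Z the block-code decoder for the less significant bits, and d the weight measure chosen according to the channel model; the formulas restate the text rather than add to it:

$$b'(t)(s) = \operatorname{dec}_Z\big[v(t) - c(t)(s)\big], \qquad w_t(s) = d\big(v(t) - c(t)(s),\; b'(t)(s)\big),$$
$$W_1\big(a(t)\big) = \min_{a(t-1)}\Big[W_0\big(a(t-1)\big) + w\big(a(t-1) \to a(t)\big)\Big].$$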
SPECIFIC DESCRIPTION OF DECODING

For the decoding, any present state of the decoder is considered as being given by two successive pairs of values of the more significant data bits. Earlier states are non-determinative, as the first row of matrix Gi (FIG. 2d) has only three terms. Note that in case the number of more significant bits were different, each state would have different k-tuples (k is the number of more significant bits). Therefore, in the trellis used for the decoding at any time instant sixteen states (2^M) are associated. Any of these sixteen states has four predecessors and four successors, each of these being given by an associated value pair of the more significant data bits in question. A state at instant (t-1) is specified by the four-tuple of information bits: a1(t-2), a2(t-2), a1(t-1), a2(t-1). At instant t the two last of these four binary digits take the places of the first two ones and two new digits a1(t) and a2(t) take the places of the two last binary digits. This specifies the new state a(t). To the transition a(t-1)→a(t) corresponds a partial specification c(t) of that part of the code word determined by the convolutional encoding of the more significant data bits: u(t) = c(t) + b(t). Therein, the nine bit vector c(t) is the contribution of the first two rows of the generator matrix Gi(D) to the code word. In other words, it represents the "convolutional component" of the code word u(t). The vector b(t) is the contribution of the last k-2 (i.e. 4 or 5, respectively) rows of the generator matrix Gi(D) to the code word u(t). In other words, it represents the "block code component" of the code word u(t). Now, the convolutional component of the code word only depends on the transition a(t-1)→a(t). The block component only depends on the actually produced (k-2)-tuple a3(t) ... ak(t) of information bits at time t.

The decoding now proceeds as follows: the decoder receives an n bit vector v(t) that is a noisy version of the nominal code word (vector) u(t).
First, we decode, for any possible transition a(t-1)→a(t), the vector v(t) - c(t)(a(t-1), a(t)) determined by this transition, into a decoded estimate b(t) of the block code part generated by Zi. There are 64 possible transitions. To each decoded quantity a "weight quantity" is associated. The choice depends on the actual Hamming distance between v(t) - c(t) and b(t)', and also on the channel characteristics. Two extreme cases are the following:

(a) the transmission channel is purely memoryless at the bit level, which means that the transition probability from a code bit value to any erroneous bit value is independent of the actual outcome of this probabilistic process for any other bit transmitted in the same code word or in any code word transmitted earlier. In that case the weight is chosen as the Hamming distance between v(t) - c(t) and b(t)'. Thus, the value of this weight varies linearly with the Hamming distance.

(b) the channel is a pure word channel, which means that within a code word either no errors occur or the word is completely random. In this situation, the weight is assigned the value 0 if b(t)' and v(t) - c(t) coincide and, if not, it is assigned the value 1.

There are intermediate cases, but these are not discussed for brevity. Herein, generally, for low values of the Hamming distance, the weight should vary linearly with this distance, while for higher values a certain asymptotic value is reached for the weight quantity.

The Viterbi decoder at time t operates on the following objects:

(a) a 16-tuple W, having components W(a) that are indexed by the earlier discussed 16 states "a" of the preceding layer in the decoding trellis. The content of any W(a) is the minimum weight necessary for reaching the state "a" at time (t-1) from still earlier states.

(b) a set of N (N to be discussed infra) 16-tuples p(t-i), with i = 1, 2 ... N. The 16 elements p(t-i)(a) of p(t-i) are also indexed by the sixteen states "a" of a layer in the decoding trellis. The element p(t-i)(a) contains the "preferred predecessor" of present state "a" at time (t-1), that is, the predecessor of "a" in the minimum weight path leading to "a".

(c) a further set of N 16-tuples b(t-i), with i = 1 ... N. The sixteen elements b(t-i)(a) of b(t-i) are also indexed by the 16 states a of a layer in the decoding trellis. The element b(t-i)(a) contains the estimate of the "block code component" for the information transmitted at instant (t-i) under the condition that the state reached is a.

At time t the decoding is executed as follows. First the decoder estimates the information transmitted at time (t-N); N to be specified. To do that, the decoder picks an arbitrary state a(t-1) in the layer of the trellis associated with instant (t-1) and it computes successively while back-tracking:

a*(t-2) = p(t-1)(a(t-1))    (here, p contains the predecessor)

a*(t-3) = p(t-2)(a*(t-2))

and so on, until

a*(t-N-1) = p(t-N)(a*(t-N)).

The pair (a*(t-N-1), a*(t-N)) gives the estimated value c'(t-N) of the convolutional component c(t-N) of the code word u(t-N), and b(t-N)(a*(t-N)) gives the estimated value of the block code component b'(t-N) of the code word u(t-N).

The second step of the decoding is to update the 16-tuples referred to above. The updating of W is a classical problem in Viterbi decoding. A solution is known, and for brevity no further discussion is given. It should be noted that at each updating a certain positive quantity may be subtracted from each of these components, in order to keep them sufficiently small (one solution is to take the smallest component as this quantity). Furthermore, the decoder produces two 16-tuples p(t) and b(t) at time t. The oldest (least recently developed) 16-tuples p(t-N) may be discarded, while the remaining 16-tuples are all shifted one position in the time hierarchy.

By way of summarizing, FIG. 5 gives an elementary flow diagram of the decoding process. In block 70 the process is started, for example by initializing the relevant address counters and clearing the data memory. In block 72 the next nine bit code word is received. In block 74, for each S from 0 through 63, S being the index of the various transition possibilities, the contribution of the convolutional part is put forward, the contribution of the block part is extracted from the code word and compared with the actually received contribution, while also the weight of the difference is determined. In block 76, for each a(t), a(t) being the index of a state, the minimum value of the incremented weight as varying over the four predecessors of state a(t) is determined. The p(a(t)) is the preferred predecessor, and b(a(t)) = b'(t)(S) if the system goes from p(a(t)) to a(t). Thus the index S is known. In block 78, for each S the minimum value of the incremented weight is saved, and also the now known ultimately preferred predecessor a(t-N) is outputted. From this information, the less significant data bits are calculated in known block code decoding manner. The result may be, according to the block code protection in the case, a corrected (k-2)-tuple, or a (k-2)-tuple detected to be in error. Finally, the value of t is incremented and the system goes back to block 72.
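The per-word update of blocks 72 through 78 can be sketched as follows. This is an illustrative simplification, not the patented implementation: conv_contribution and block_decode are assumed helper routines standing in for the contribution c(t)(s) of the two convolutional generator rows and for the block-code decoder together with its weight measure (Hamming distance or the 0/1 word-channel weight discussed above), and the bit ordering of the state index is an assumption made for the sketch.

```python
# Illustrative sketch of one decoding step (FIG. 5, blocks 72-78).
# conv_contribution(prev_state, new_pair) and block_decode(vector) are
# assumed helpers, not taken from the patent.
import numpy as np

N_STATES = 16      # 2**M trellis states: two successive pairs of MSBs
HISTORY = 40       # back-tracking depth N (about 40 for a 20 msec delay)

def decode_step(v, W, pred_hist, block_hist, conv_contribution, block_decode):
    """v: received nine-bit word; W: length-16 array of accumulated weights."""
    new_W = np.full(N_STATES, np.inf)
    new_pred = np.zeros(N_STATES, dtype=int)
    new_blk = [None] * N_STATES
    for prev in range(N_STATES):                  # 16 states ...
        for pair in range(4):                     # ... times 4 transitions = 64
            state = ((prev & 0b0011) << 2) | pair # shift in the new MSB pair
            c = conv_contribution(prev, pair)     # convolutional part c(t)(s)
            b_est, w = block_decode((v - c) % 2)  # b'(t)(s) and its weight
            total = W[prev] + w                   # incremented weight
            if total < new_W[state]:              # keep the preferred predecessor
                new_W[state] = total
                new_pred[state] = prev
                new_blk[state] = b_est
    pred_hist.insert(0, new_pred)                 # p(t): preferred predecessors
    block_hist.insert(0, new_blk)                 # b(t): block-code estimates
    released = None
    if len(pred_hist) > HISTORY:                  # enough history: release u(t-N)
        s = int(np.argmin(new_W))                 # pick a state at time t
        for p in pred_hist[:-1]:                  # back-track N steps
            s = p[s]
        released = block_hist[-1][s]              # b(t-N)(a*(t-N))
        pred_hist.pop()                           # discard the oldest 16-tuples
        block_hist.pop()
    return new_W - new_W.min(), released          # keep the weights small
```

The final renormalization corresponds to the subtraction of a positive quantity from all components of W mentioned above, and the released estimate corresponds to b(t-N)(a*(t-N)).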
If we assume that transmission of one word takes 0.5 msec and the decoding delay may be 20 msec, then the value of N may be about 40. Note that a delay of 20 msec in the transmission of speech is not considered as a subjective nuisance. However, for certain reasons (as an additional protection against a sequence of word-burst errors), it could be considered useful to introduce an interleaving scheme (interleaving itself is well known and not related to the gist of the present invention). An interleaving degree of 3 or 4 on the level of the code words leads to a value of N which thus is 3 to 4 times smaller than the earlier value of 40. Of course, other bounds for the decoding delay would be valid in specific cases.

The choice between the several codes proposed may depend on several criteria. It should be noted that the (8, 6) code generated by G4(D) has several advantages as here 8 bit quantities are transmitted, whereas 8-bit microprocessor systems are widely used. Each byte transferred in the decoder may therefore be used to store two informations of four bits each time. This is exactly the place necessary to store a pair (p(t-i)(a), b(t-i)(a)) since both quantities need four bits. However, some part of this information is contained already
in the index a, specifying the associated component of p(t-i). Since each state here has only four predecessors, only two bits are necessary to store the estimates p(t-i)(a). It is
