Piret

[11] Patent Number:   4,747,104
[45] Date of Patent:  May 24, 1988

[54] DATA TRANSMISSION SYSTEM EMPLOYING A COMBINATION OF BLOCK
     ENCODING AND CONVOLUTION ENCODING FOR ERROR PROTECTION

[75] Inventor:   Philippe M. O. A. Piret, Lasne, Belgium
[73] Assignee:   U.S. Philips Corporation, New York, N.Y.
[21] Appl. No.:  871,013
[22] Filed:      Jun. 5, 1986

[30] Foreign Application Priority Data
     Jun. 14, 1985 [EP]  European Pat. Off. ........ 85200939.8

[51] Int. Cl.4 .............................. G06F 11/10
[52] U.S. Cl. .............................. 371/39; 371/44
[58] Field of Search ...................... 371/37, 39, 40, 43, 44, 45

[56] References Cited
     U.S. PATENT DOCUMENTS
     3,891,959   6/1975  Tsuji et al. .............. 371/43
     3,988,677  10/1976
     4,038,636   7/1977
     4,178,550  12/1979
     4,240,156  12/1980
     4,276,646   6/1981  Haggard et al. ............ 371/37
     4,355,392  10/1982  Doi et al. ................ 371/43 X
     4,369,512   1/1983  Brossard et al. ........... 371/43
     4,536,878   8/1985  Rattlingourd et al. ....... 371/43

Primary Examiner: Charles E. Atkinson
Attorney, Agent, or Firm: Thomas A. Briody; Jack Oisher

[57] ABSTRACT

A data transmission system for providing error protection of transmitted data words. The less significant bits of a data word are, by means of matrix multiplication, encoded into a first redundant proto-code word, and the more significant bits are, by means of further matrix multiplication and delay by different word recurrence intervals, encoded into a set of further redundant proto-code words. A composite of the proto-code words is formed by means of a modulo-2 addition, so that for the less significant data bits a block code is realized, while for the more significant data bits a convolutional encoding is realized. In the decoding, the more significant bits of the composite code word are decoded by means of Viterbi decoding, the Viterbi metric being determined from the deviation between the reconstructed contribution of the less significant bits to such code word and the actually received contribution of such bits to such code word.

7 Claims, 3 Drawing Sheets

[Front-page figure: encoder block diagram with DATA WORD BIT INPUTS, ENCODER, EXCLUSIVE-OR, and OUTPUT CODE WORD.]

IPR2018-01474, Apple Inc. EX1005
[Drawing Sheet 1 of 3: FIG. 1, elementary block diagram of the encoder system. Visible labels: DATA WORD BIT INPUTS, ENCODER, EXCLUSIVE-OR, OUTPUT CODE WORD.]
[Drawing Sheet 2 of 3: FIGS. 2a-2d, the first set of binary code generator matrices (X, Y, Z1, Z2) and the combined matrix expression (X + DY + D^2 X); the individual 0/1 matrix entries are not legibly recoverable from this scan.]
[Drawing Sheet 3 of 3: FIG. 4, block diagram of the encoder/decoder system (register 52, microcomputer 54, data word register, output 60); FIG. 5, elementary flow diagram of the decoding process, including a step reading approximately W0(a) = W1(a); 0 <= a <= 63.]
DATA TRANSMISSION SYSTEM EMPLOYING A COMBINATION OF BLOCK ENCODING AND CONVOLUTION ENCODING FOR ERROR PROTECTION

BACKGROUND OF THE INVENTION

The invention relates to a system for transmitting data words from a transmitting station to a receiving station, wherein redundancy bits are added for protection against bit errors arising in the transmission. With increasing bit rates, and generally noisy transmitting media, various error protection systems have been proposed.

DESCRIPTION OF THE RELATED ART

An example of such a system is described in U.S. Pat. No. 4,312,070 to Coombes et al., which relates to a digital data processing system for use in a mobile trunked dispatch communication system. The system of the present invention is generally applicable to data communication systems wherein communication between stations is subject to burst error phenomena, and each transmitted data word comprises more significant bits and less significant bits. In such a situation a higher effective level of reliability can be attained if the more significant bits are better protected than the less significant bits. Such a situation typically occurs in audio transmission, notably speech transmission, where errors in the less significant bits make the speech less agreeable or somewhat difficult to understand, but an error in a more significant bit can easily make the speech completely incomprehensible. The transmitting medium may be a broadcast medium, or also, for example, a data communication line, or a storage medium, such as magnetic tape.

A burst refers to a series of channel bits wherein the error probability is relatively high. Outside a burst the error probability is relatively low, and the chance for an error to occur is usually independent of any other non-burst errors. Therefore, these errors are called random errors. A burst, which is subject to a high error rate, is caused by only partially understood phenomena which during an interval of time degrade the channel reliability, such as thunderstorms, or movement of the transmitting or receiving station. Especially in mobile radio systems the presence of high-rise buildings also may influence the channel properties. The burst error rate may be, for example, 10^-1, the random error rate 10^-3 or less.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide improved encoding which, among other things, provides a high coding efficiency (high rate), allows for easy decoding, gives an increased protection level of more significant bits with respect to less significant bits in a data word, and furthermore allows for matching the error correction capability to the expected burst length. Such object is realized according to the invention by a system for transmitting data words each comprising a sequence of bits of successive significance levels, such system comprising a transmitting station and a receiver station interconnected by a transmitting medium, the transmitting station comprising an encoder system for encoding data words by means of redundancy bits. Such encoder system comprises a first encoder which by matrix multiplication block encodes a set of less significant bits of each data word into a first proto-code word, and also a second encoder which by further matrix multiplication block encodes the remaining, more significant, bits of each data word into a set of n further proto-code words. The second encoder further comprises delay elements for imparting respective different delays to the further proto-code words relative to the recurrence times of successive data words, and also comprises modulo-two adding means for bitwise adding the first proto-code word and n further proto-code words, the latter being derived from as many different data words, thereby convolution encoding the more significant bits. The output of the modulo-two adding means is supplied to the transmitting medium for transmission to the receiver station.

In certain preferred embodiments the number n may be 3, and from each data word the two most significant bits are encoded by what effectively is a convolutional type encoding. It was found that in this way appreciable error protection could be obtained at only a limited cost of apparatus requirements.

The invention also relates to a transmitter and a receiver for use in such systems. The error protection in a mobile radio system can be provided in both directions, i.e. from a mobile station to a fixed base station, and also in the reverse direction. However, in the latter situation, other factors can also increase the communication reliability, such as increasing the power level transmitted or the antenna configuration. These two measures often are not feasible for applying to a mobile station. The decoding at the receiver is preferably effected by Viterbi decoding, which is a special kind of maximum likelihood decoding. The Viterbi metric denotes the total incurred amount of deviation from the "true" solution, and thus must be minimized. It is determined from the deviation between the reconstructed contribution stemming from the less significant bits of the code word, and the actually received contribution from those less significant bits.

BRIEF DESCRIPTION OF THE FIGURES

The invention is further explained by reference to the following Figures, in which:
FIG. 1 is an elementary block diagram of an encoder system;
FIGS. 2a-2d show a first set of code generator matrices;
FIGS. 3a-3b show a second set of code generator matrices;
FIG. 4 is an elementary block diagram of an encoder/decoder system;
FIG. 5 is an elementary flow diagram of the decoding process.

DESCRIPTION OF A PREFERRED EMBODIMENT OF AN ENCODER SYSTEM

In the following, only the encoding and decoding aspects of the system are considered. The remainder, such as modulating and demodulating the code bits to derive channel bits, the physical realization of the channels, the various fields of use of the invention besides mobile telephony, and the construction of the stations and their respective data processing subsystems are ignored, as not relating to the invention proper. FIG. 1 is an elementary block diagram of an encoder system for a (9, 7) convolutional code. Input 20 receives the seven data bits of a data word in parallel.
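The encoder scheme summarized above (a first matrix encoder for the less significant bits, further matrix encoders plus delay elements for the more significant bits, all combined by modulo-two addition) can be sketched as follows. The concrete matrices of FIG. 2a are not legible in this copy, so random placeholder matrices of the right shapes stand in for Z1, X and Y; only the shapes and the wiring follow the text.

```python
import numpy as np

# Placeholder generator matrices over GF(2); the real Z1, X, Y are given in
# FIG. 2a, which is not legible here. Shapes follow the (9, 7) code of the
# text: Z1 is 5x9 (encoder 22), X and Y are 2x9 (encoders 24/28 and 26).
rng = np.random.default_rng(1985)
Z1 = rng.integers(0, 2, size=(5, 9))
X = rng.integers(0, 2, size=(2, 9))
Y = rng.integers(0, 2, size=(2, 9))

class CombinedEncoder:
    """Sketch of FIG. 1: u(t) = lsb(t)*Z1 + msb(t)*X + msb(t-1)*Y + msb(t-2)*X,
    all additions modulo 2, i.e. the first rows of G(D) are X + D*Y + D^2*X."""

    def __init__(self):
        # Delay elements 30-36 amount to remembering the two previous
        # more-significant-bit pairs.
        self.msb_d1 = np.zeros(2, dtype=int)  # msb(t-1)
        self.msb_d2 = np.zeros(2, dtype=int)  # msb(t-2)

    def encode(self, word):
        """word: seven bits, word[0:2] more significant, word[2:7] less."""
        msb, lsb = np.array(word[:2]), np.array(word[2:])
        # Modulo-2 sum plays the role of EXCLUSIVE-OR element 38.
        u = (lsb @ Z1 + msb @ X + self.msb_d1 @ Y + self.msb_d2 @ X) % 2
        self.msb_d2, self.msb_d1 = self.msb_d1, msb  # shift the delay line
        return u
```

Each call consumes one seven-bit data word and emits one nine-bit code word; the two delayed terms make the more significant bits of a word influence three successive code words.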
Element 22 is an encoder for deriving a block proto-code word, which becomes part of a convolutional code word. In this block code, the five less significant bits of the input seven bit data word are encoded by means of matrix multiplication to form a nine bit proto-code word. In itself, matrix multiplication is a conventional technique. Moreover, there are only 32 = 2^5 different values of the five data bits and therefore, a five bit input, nine bit output programmable logic array would be sufficient. Other types of read-only storage arrays would function equally well. The two most significant bits are entered into encoder 24 which has a two bit wide input, nine bit wide output. Similar technology can be used as for encoder 22. However, due to the very low number of input leads, encoder 24 can also be realized by known wild logic or semicustom cellular logic circuits.

Elements 24, 26, 28 are further encoders which by matrix multiplication generate further nine bit proto-code words from the two most significant bits of the input data word. Notably, element 28 has exactly the same algorithmic structure as element 24; note the indication X in both. Specifically, it has been found that this equality is an advantageous realization; however, it is not a necessary requirement.

Elements 30, 32, 34, 36 are delay elements. In their simplest form the delay incurred is equal to the interval between the presentation of successive data words to input 20. Therefore, the coexistent proto-code words at the outputs of elements 24, 26, 28 relate to data words of zero, one, and two, respectively, positions earlier in the sequence of data words. Convolutional encoding of these proto-code words of nine bits each is realized by applying them to inputs of an EXCLUSIVE-OR element 38. Here, all bits of corresponding significance levels in the proto-code words are added modulo-2, to produce a final nine bit code word on output 40. Parallel to serial conversion at output 40 has been omitted for brevity.

An alternative set-up to FIG. 1 would be to combine encoders 24, 28 (since they implement identical matrix multiplications) and to replace the delay elements 32, 36 by a corresponding delay introduced at the output of the encoder 24. This represents a trade-off between leaving out encoder 28 and delay elements 32, 36, versus introducing a then necessary delay of a further proto-code word (nine bits) over two word intervals.

In FIG. 1 the operation of the delay elements and possibly of the further elements therein may be synchronized by a clock system. For brevity, this has not been shown.

DESCRIPTION OF EXEMPLARY CODES

FIGS. 2a-2d show a first set of code generator matrices. FIG. 2a shows the matrices X, Y and Z1 implemented in encoders 24/28, 26 and 22 of FIG. 1. FIG. 2d shows the combined code generator matrix which results in the final code word at output 40. The dimensions of the combined generator matrix are seven rows of nine columns. The "D" symbolizes the delay operator, with a length of one data word recurrence time. The powers of D symbolize delays by one and two data word recurrence times, respectively, realized by delay elements 30-36. If all delays were doubled, the exponents of D would be doubled.

Next the minimum distance profile is presented, which for the less significant bits is (3 0 ...). The first integer should always be non-zero and represents the minimum distance for the individual code word. Thus, the code is single bit error correcting. The further zeroes indicate that upon erasure of a code word, no further information on these less significant bits exists. The minimum distance profile for the more significant bits is (6 4 2 0 ...). The first integer is the minimum distance if no erased code word occurs in a sequence of three successive code words. Thus the code is double bit error correcting and triple bit error detecting. If one code word in a sequence of three words is erased, the minimum distance is 4: single bit error correcting, double bit error detecting. If two code words of a sequence of three are erased, the minimum distance is 2: single bit error detection. If three or more successive code words are erased, no further information on these more significant bits exists.

The improved error protection of the more significant bits allows for an additional feature to be realized by the invention in that these more significant bit positions may be used to transfer data bits, for example as derived from calculations or digital coding of characters. In that case the less significant bits are preferably represented by trailing zeroes or other valued non-significant bits.

FIG. 2b shows a further generator matrix Z2 that can be used in lieu of matrix Z1 of FIG. 2a, the matrices X and Y retaining their respective functions. The first row of Z2 is produced by adding the first and third rows of matrix Z1. The second row of matrix Z2 is produced by adding the second and third rows of matrix Z1. The third and fourth rows of matrix Z2 are equal to the fourth and fifth rows, respectively, of matrix Z1. This effective suppressing of a row in a generator matrix is conventionally called "expurgation" of a code. In this way a (9, 6) code is realized. For the more significant bits the same minimum distance profile is realized as for the earlier code. For the less significant bits a minimum distance of 4 is realized now: single bit error correction, double bit error detection.

FIG. 2c shows a further generator matrix Z3 that can be used in lieu of matrices Z1, Z2 of earlier Figures. The first five columns of Z3 are identical to the first five columns of Z1. The sixth column of matrix Z3 is produced by adding and inverting all columns six through nine of matrix Z1. The seventh and eighth columns of matrix Z3 are identical to the eighth and seventh columns of matrix Z1, respectively. Furthermore, the first column of the matrix X + DY + D^2 X is left out to get G3 in the same way as G1, G2 herebefore. This leaving out of a column is conventionally called "puncturing" of a code. In this way an (8, 7) code is defined by G3. The minimum distance profile for the more significant bits is (3 2 1 0 ...); this gives: single bit error correction, double bit error detection, single bit error detection, and no error detection, respectively. However, even with two code words out of three erased, the data content of the more significant bits can be salvaged if no further error occurs. For the less significant bits the minimum distance profile is (2 0 ...): single bit error detection.

FIGS. 3a, 3b show a second set of code generator matrices. Herein, matrix Z4 is derived from matrix Z1 by omitting both the first row and the first column. Matrix X' is derived from matrix X by omitting the first column and inverting the second row. Matrix Y' is derived from matrix Y by omitting the first column and by inverting both rows. The matrix Z'(D) is derived from matrices X', Y' in the same way as matrix Z(D) from matrices X, Y. The (8, 6) code generated by matrix G4(D) is of course generated by puncturing.
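The single-word minimum distances quoted in these profiles can be checked mechanically for any small generator matrix by enumerating all nonzero information words. A minimal sketch; since the entries of the Z matrices are not legible in this copy, the well-known (7, 4) Hamming code, whose minimum distance is 3 (single bit error correcting, like the less significant bits above), stands in:

```python
import itertools
import numpy as np

def min_distance(G):
    """Minimum Hamming distance of the linear block code generated by the
    rows of G over GF(2); for a linear code this equals the minimum weight
    of a nonzero codeword."""
    k, n = G.shape
    best = n + 1
    for msg in itertools.product([0, 1], repeat=k):
        if any(msg):
            weight = int(((np.array(msg) @ G) % 2).sum())
            best = min(best, weight)
    return best

# Illustrative stand-in: generator matrix of the (7, 4) Hamming code.
G_hamming = np.array([
    [1, 0, 0, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0, 1],
    [0, 0, 1, 0, 0, 1, 1],
    [0, 0, 0, 1, 1, 1, 1],
])
print(min_distance(G_hamming))  # -> 3
```

The same brute-force check applies directly to the 5x9 matrix Z1 or its expurgated and punctured variants, since 2^k stays small here.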
The (8, 6) code so generated has therefore the same minimum distance profile as the (9, 7) code described earlier; the latter having a higher efficiency (rate).

The respective codes explained herebefore can be generated by similar apparatus to that shown in FIG. 1, the differences being represented by the entries in the generating matrices. Furthermore, the codes disclosed are intended to be exemplary. It should be clear that other combinations of block codes and convolution codes can be used to realize the advantageous unequal error protection level in a data stream as described. Therein, data rate, code word length, and amount of protection realized should be matched to the specific requirements of the case. Notably, the number of terms in the first row of FIG. 2d should be higher for better protection of the more significant bits against longer error burst lengths.

GENERAL DESCRIPTION OF DECODING

The preferred decoder to be used in the receiver of the invention is of the general Viterbi decoder type. A tutorial paper thereon has been published by G. David Forney, Proceedings of the IEEE, Vol. 61, No. 3, March 1973, pages 268-278. Applications of Viterbi decoding are, in addition to error correcting convolutional coding, also: interference between neighbouring data symbols in an information stream, continual phase shift keying (FSK) and text recognition. Each data word represents a path in the so-called "trellis" of Viterbi, said path having a beginning and an end. The path consists of a number of intervals and each interval has an associated partial metric. The sum of the partial metrics of a path constitutes the metric or general attribute number, and the intention is to find the path with the lowest value of the overall metric. In principle the path progresses together with the incoming stream of channel or code bits, so that continually new intervals have to be taken into account. In this progress paths may split and also reunite; in the latter case only the lowest one of the associated metric values is considered further. The Viterbi decoder is therefore an example of a "maximum likelihood decoder", which tries to find the most probable information content of the code stream received.

For evaluating the codes used herein the following property is defined: if the degree of the j-th row of an encoder matrix G(D) is called mj, then the complexity M(G(D)) is the row-wise sum of these degrees. In the elementary cases considered herebefore, the highest degree of D was always equal to 2, and therefore, the complexity of all encoder matrices is equal to 4. This parameter M is important because the associated Viterbi decoder has a hardware complexity that is strongly related to 2^M(G(D)). Thus for each increase of M by one, the necessary hardware roughly doubles.

Now, FIG. 4 gives an elementary block diagram of an encoder/decoder system, wherein the encoder is only shown as block 47. On input 45 user data is input in the form of data words. At output 49 a sequence of code words is produced. Between output 49 and decoder input 50 the transmitting medium 48 is present. Now, for the decoder, in view of the extensive amount of back-tracking in a Viterbi process, advantageously a programmed microcomputer is used. Furthermore, as explained supra, the decoding often need only be effected in a base station and the incurred costs of such a complicated arrangement are relatively small. The code considered is the one of FIG. 2a. The nine bit code words are received on input 50. Serial-to-parallel conversion has been omitted for brevity. Code words are transiently stored in register 52. Element 54 is a data processing element, for example, a stored program microcomputer. Element 56 is a local memory for storing all kinds of intermediate information. Element 58 is a register for transiently storing a decoded data word. These are presented to a user on output 60. Digital to analog conversion, e.g. of speech data, has been omitted for brevity. Element 62 is a bus, interconnecting elements as indicated for routing data between those elements.

The decoding process generally is preferably executed as follows. First, the more significant data bits are decoded by Viterbi decoding. Thus, the Viterbi trellis is built up completely as based on the respective possible, but unknown, values of those data bits. Next, the Viterbi metric is determined as by the weight of the deviation between the reconstructed contribution of the less significant data bits to the code word, and the actually received contribution therefrom. The optimized Viterbi metric, determining the path to be followed, gives the preferred predecessor to the actual state, and the sequence of preferred predecessors will finally yield the predecessor giving the lowest value of the accumulated weights. Finally, after the most likely contents of the more significant data bits are retrieved, the less significant data bits are decoded, and if necessary and considered feasible, corrected, while in certain cases an additional signalization of remaining errors may occur.

SPECIFIC DESCRIPTION OF DECODING

For the decoding, any present state of the decoder is considered as being given by two successive pairs of values of the more significant data bits. Earlier states are non-determinative, as the first row of matrix Gi (FIG. 2d) has only three terms. Note that in case the number of more significant bits were different, each state would have different k-tuples (k is the number of more significant bits). Therefore, in the trellis used for the decoding at any time instant sixteen states (2^4) are associated. Any of these sixteen states has four predecessors and four successors, each of these being given by an associated value pair of the more significant data bits in question. A state at instant (t-1) is specified by the four-tuple of information bits: a1(t-2), a2(t-2), a1(t-1), a2(t-1). At instant t the two last of these four binary digits take the places of the first two ones, and two new digits a1(t) and a2(t) take the places of the two last binary digits. This specifies the new state a(t). To the transition a(t-1) -> a(t) corresponds a partial specification c(t) of that part of the code word determined by the convolutional encoding of the more significant data bits: u(t) = c(t) + b(t). Therein, the nine bit vector c(t) is the contribution of the first two rows of the generator matrix Gi(D) to the code word. In other words, it represents the "convolutional component" of the code word u(t). The vector b(t) is the contribution of the last k-2 (i.e. 4 or 5, respectively) rows of the generator matrix Gi(D) to the code word u(t). In other words, it represents the "block code component" of the code word u(t). Now, the convolutional component of the code word only depends on the transition a(t-1) -> a(t). The block component only depends on the actually produced (k-2)-tuple a3(t) ... ak(t) of information bits at time t.
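The state bookkeeping just described, four-bit states updated by shifting in the fresh pair (a1(t), a2(t)), can be sketched as:

```python
from itertools import product

def next_state(state, new_pair):
    """A state is the 4-tuple (a1(t-2), a2(t-2), a1(t-1), a2(t-1));
    shifting in the fresh pair (a1(t), a2(t)) drops the oldest pair."""
    return state[2:] + new_pair

def predecessors(state):
    """The four states from which `state` is reachable: they must agree
    with `state` on the pair shared between the two 4-tuples."""
    return [pair + state[:2] for pair in product((0, 1), repeat=2)]

states = list(product((0, 1), repeat=4))
assert len(states) == 16                               # 2^4 states, as in the text
assert all(len(predecessors(s)) == 4 for s in states)  # four predecessors each
# 16 states times 4 incoming value pairs = the 64 possible transitions
transitions = [(p, s) for s in states for p in predecessors(s)]
assert len(transitions) == 64
```

Each of these 64 transitions fixes one convolutional component c(t), which is why the decoder below evaluates 64 candidate block-code parts per received word.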
The decoding now proceeds as follows: the decoder receives an n bit vector v(t) that is a noisy version of the nominal code word (vector) u(t). First, we decode, for any possible transition a(t-1) -> a(t), the vector v(t) - c(t)(a(t-1), a(t)) determined by this transition, into a decoded estimate b(t)' of the block code part generated by Zi. There are 64 possible transitions. To each decoded quantity a "weight quantity" is associated. The choice depends on the actual Hamming distance between v(t) - c(t) and b(t)', and also on the channel characteristics. Two extreme cases are the following:

(a) the transmission channel is purely memoryless at the bit level, which means that the transition probability from a code bit value to any erroneous bit value is independent of the actual outcome of this probabilistic process for any other bit transmitted in the same code word or in any code word transmitted earlier. In that case the weight is chosen as the Hamming distance between v(t) - c(t) and b(t)'. Thus, the value of this weight varies linearly with the Hamming distance.

(b) the channel is a pure word channel, which means that within a code word either no errors occur or the word is completely random. In this situation, the weight is assigned the value 0 if b(t)' and v(t) - c(t) coincide and, if not, it is assigned the value 1.

There are intermediate cases, but these are not discussed for brevity. Herein, generally, for low values of the Hamming distance, the weight should vary linearly with this distance, while for higher values, a certain asymptotic value is reached for the weight quantity.

The Viterbi decoder at time t operates on the following objects:

(a) a 16-tuple W, having components W(a) that are indexed by the earlier discussed 16 states "a" of the preceding layer in the decoding trellis. The content of any W(a) is the minimum weight necessary for reaching the state "a" at time (t-1) from still earlier states.

(b) a set of N (N to be discussed infra) 16-tuples p(t-i), with i = 1, 2 ... N. The 16 elements p(t-i)(a) of p(t-i) are also indexed by the sixteen states "a" of a layer in the decoding trellis. The element p(t-i)(a) contains the "preferred predecessor" of present state "a" at time (t-i), that is the predecessor of "a" in the minimum weight path leading to "a".

(c) a further set of N 16-tuples b(t-i) (i = 1 ... N). The sixteen elements b(t-i)(a) of b(t-i) are also indexed by the 16 states a of a layer in the decoding trellis. The element b(t-i)(a) contains the estimate of the "block code component" for the information transmitted at instant (t-i) under the condition that the state reached is a.

At time t the decoding is executed as follows. First the decoder estimates the information transmitted at time (t-N); N to be specified. To do that, the decoder picks an arbitrary state a(t-1) in the layer of the trellis associated with instant (t-1) and it computes successively while back tracking:

a(t-2) = p(t-1)(a(t-1))
(here, p contains the predecessor)
a(t-3) = p(t-2)(a(t-2))

and so on, until a(t-N-1) = p(t-N)(a(t-N)) is obtained. The pair (a(t-N-1), a(t-N)) gives the estimated value c'(t-N) of the convolutional component c(t-N) of the code word u(t-N), and b(t-N)(a(t-N)) gives the estimated value of the block code component b'(t-N) of the code word u(t-N).

The second step of the decoding is to update the 16-tuples referred to above. The updating of W is a classical problem in Viterbi decoding. Its solution is known, and for brevity no further discussion is given. It should be noted that at each updating a certain positive quantity may be subtracted from each of these components, in order to keep them sufficiently small (one solution is to take the smallest component as this quantity). Furthermore, the decoder produces two 16-tuples p(t) and b(t) at time t. The oldest (least recently developed) 16-tuples p(t-N) may be discarded, while the remaining 16-tuples are all shifted one position in the time hierarchy.

By way of summarizing, FIG. 5 gives an elementary flow diagram of the decoding process. In block 70 the process is started, for example by initializing the relevant address counters and clearing the data memory. In block 72 the next nine bit code word is received. In block 74, for each S from 0 through 63, S being the index of the various transition possibilities, the contribution of the convolutional part is put forward, the contribution of the block part is extracted from the code word and compared with the actually received contribution, while also the weight of the difference is determined. In block 76, for each a(t), a(t) being the index of a state, the minimum value of the incremented weight as varying over the four predecessors of state a(t) is determined. The p(a(t)) is the preferred predecessor, and b(a(t)) = b'(t)(S) if the system goes from p(a(t)) to a(t). Thus the index S is known. In block 78, for each S, the minimum value of the incremented weight is saved, and also the now known ultimately preferred predecessor a(t-N) is outputted. From this information, the less significant data bits are calculated in known block code decoding manner. The result may be, according to the block code protection in case, a corrected (k-2)-tuple, or a (k-2)-tuple detected to be in error. Finally, the value of t is incremented and the system goes back to block 72.

If we assume that transmission of one word takes 0.5 msec. and the decoding delay may be 20 msec., then the value of N may be about 40. Note that a delay of 20 msec. in the transmission of speech is not considered as a subjective nuisance. However, for certain reasons (as an additional protection against a sequence of word burst errors), it could be considered useful to introduce an interleaving scheme (interleaving itself is well known and not related to the gist of the present invention). An interleaving degree of 3 or 4 on the level of the code words leads to a value of N which thus is 3 to 4 times smaller than the earlier value of 40. Of course, other bounds for the decoding delay would be valid in specific cases.

The choice between the several codes proposed may depend on several criteria. It should be noted that the (8, 6) code generated by G4(D) has several advantages, as here 8 bit quantities are transmitted, whereas 8-bit microprocessor systems are widely used. Each byte transferred in the decoder may therefore be used to store two informations of four bits each. This is exactly the place necessary to store a pair (p(t-i)(a), b(t-i)(a)) since both quantities need four bits.
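The back-tracking recursion above, a(t-2) = p(t-1)(a(t-1)) and so on, can be sketched with the preferred-predecessor tables modelled as plain dictionaries. The table contents below are toy values for a two-state trellis, not the result of an actual decoding run:

```python
def traceback(p_tables, start_state):
    """Follow preferred predecessors backwards through the trellis.
    p_tables[0] maps each state at time t-1 to its preferred predecessor
    at t-2, p_tables[1] maps t-2 to t-3, and so on; the return value is
    the estimated state N steps back, with N = len(p_tables)."""
    state = start_state
    for table in p_tables:
        state = table[state]
    return state

# Toy two-state example with N = 3: whatever state we start from, these
# tables funnel the path back to state 0 three layers earlier.
p = [{0: 1, 1: 1}, {1: 0}, {0: 0}]
assert traceback(p, 0) == traceback(p, 1) == 0
```

In the text's scheme each table has sixteen entries, and since every state has only four predecessors, each entry fits in the two bits discussed below.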
However, some part of this information is contained already in the index a, specifying the associated component of p(t-i). Since each state here has only four predecessors, only two bits are necessary to store the estimates p(t-i)(a).

It is clear that the proto-code word for the less significant bits should have at least a minimum Hamming distance of two for those less significant bits if this proto-code word is only used once. On the other hand, this latter proto-code word could also be used twice for generating as many final code words. It is not necessary that all proto-code words have exactly the same bit length. Also, the data word may be split into three or more parts, the protection of the more significant part(s) always being greater than the protection of the less significant part(s).

adding means having an output connected to said transmission medium.

2. A receiving station for receiving code words from a transmitting station in a system as claimed in claim 1, comprising:
Viterbi decoding means for decoding all relevant possible states of more significant bits of the "n" transmitted data words contributing to a received code word;
means connected to such decoding means for determining respective deviations between a reconstruction of the contribution of less significant bits to a received code word and the actual contribution of such bits to such received code word;
means connected to such deviation determining means for determining, from said