NEAR SHANNON LIMIT ERROR-CORRECTING CODING AND DECODING: TURBO-CODES (1)

Claude Berrou, Alain Glavieux and Punya Thitimajshima

Claude Berrou, Integrated Circuits for Telecommunication Laboratory

Alain Glavieux and Punya Thitimajshima, Digital Communication Laboratory

Ecole Nationale Superieure des Telecommunications de Bretagne, France

(1) Patents N° 9105279 (France), N° 92460011.7 (Europe), N° 07/870,483 (USA)

Abstract - This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
`
I - INTRODUCTION

Consider a binary rate R=1/2 convolutional encoder with constraint length K and memory M=K-1. The input to the encoder at time k is a bit d_k and the corresponding codeword C_k is the binary couple (X_k, Y_k) with

x_k = \sum_{i=0}^{K-1} g_{1i} d_{k-i} \bmod 2, \qquad g_{1i} = 0, 1    (1a)

y_k = \sum_{i=0}^{K-1} g_{2i} d_{k-i} \bmod 2, \qquad g_{2i} = 0, 1    (1b)

where G1: {g_{1i}}, G2: {g_{2i}} are the two encoder generators, generally expressed in octal form.

It is well known that the BER of a classical Non Systematic Convolutional (NSC) code is lower than that of a classical Systematic code with the same memory M at large SNR. At low SNR, it is in general the other way round. The new class of Recursive Systematic Convolutional (RSC) codes, proposed in this paper, can be better than the best NSC code at any SNR for high code rates.

A binary rate R=1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs X_k or Y_k equal to the input bit d_k. For an RSC code, the shift register (memory) input is no longer the bit d_k but a new binary variable a_k. If X_k = d_k (respectively Y_k = d_k), the output Y_k (resp. X_k) is equal to equation (1b) (resp. (1a)) with a_k substituted for d_k, and the variable a_k is recursively calculated as

a_k = d_k + \sum_{i=1}^{K-1} \gamma_i a_{k-i} \bmod 2    (2)

where \gamma_i is respectively equal to g_{1i} if X_k = d_k and to g_{2i} if Y_k = d_k. Equation (2) can be rewritten as

d_k = \sum_{i=0}^{K-1} \gamma_i a_{k-i} \bmod 2    (3)

One RSC encoder with memory M=4, obtained from an NSC encoder defined by the generators G1=37, G2=21, is depicted in Fig. 1.

Fig. 1a Classical Non Systematic code.
Fig. 1b Recursive Systematic code.

Generally, we assume that the input bit d_k takes values 0 or 1 with the same probability. From equation (2), we can show that the variable a_k exhibits the same statistical property:

\Pr\{a_k = 0 \mid a_1 = \varepsilon_1, \ldots, a_{k-1} = \varepsilon_{k-1}\} = \Pr\{d_k = \varepsilon\} = 1/2    (4)

where \varepsilon is equal to

\varepsilon = \sum_{i=1}^{K-1} \gamma_i \varepsilon_i \bmod 2, \qquad \varepsilon_i = 0, 1    (5)

Thus the trellis structure is identical for the RSC code and the NSC code, and these two codes have the same free distance d_f. However, the two output sequences {X_k} and {Y_k} do not correspond to the same input sequence {d_k} for RSC and NSC codes. This is the main difference between the two codes.

When a punctured code is considered, some output bits X_k or Y_k are deleted according to a chosen puncturing pattern defined by a matrix P. For instance, starting from a rate R=1/2 code, the matrix P of the rate 2/3 punctured code is

P = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}
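To make equations (1)-(3) concrete, the following sketch encodes a bit sequence with the rate R=1/2 RSC code of Fig. 1b (X_k = d_k, feedback generator G1=37, parity generator G2=21). The function name and the mapping of octal generators to tap vectors are our illustrative conventions, not the paper's:

    def rsc_encode(d_bits, g1=0o37, g2=0o21, K=5):
        """Rate-1/2 RSC encoder: returns (x, y), with x_k = d_k (systematic)."""
        taps1 = [(g1 >> (K - 1 - i)) & 1 for i in range(K)]  # g_{1i}, i = 0..K-1
        taps2 = [(g2 >> (K - 1 - i)) & 1 for i in range(K)]  # g_{2i}
        shift = [0] * (K - 1)            # register: a_{k-1}, ..., a_{k-K+1}
        x, y = [], []
        for d in d_bits:
            a = d                        # a_k = d_k + sum g_{1i} a_{k-i}, eq. (2)
            for i in range(1, K):
                a ^= taps1[i] & shift[i - 1]
            p = taps2[0] & a             # y_k = sum g_{2i} a_{k-i}, eq. (1b) on a_k
            for i in range(1, K):
                p ^= taps2[i] & shift[i - 1]
            x.append(d)
            y.append(p)
            shift = [a] + shift[:-1]     # clock the shift register
        return x, y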
`
II - PARALLEL CONCATENATION OF RSC CODES

With RSC codes, a new concatenation scheme, called parallel concatenation, can be used. In Fig. 2, an example of two identical RSC codes with parallel concatenation is shown. Both elementary encoder (C1 and C2) inputs use the same bit d_k, but according to a different sequence due to the presence of an interleaver. For an input bit sequence {d_k}, encoder outputs X_k and Y_k at time k are respectively equal to d_k (systematic encoder) and to encoder C1 output Y_1k, or to encoder C2 output Y_2k. If the coded outputs (Y_1k, Y_2k) of encoders C1 and C2 are used respectively n1 times and n2 times and so on, the encoder C1 rate R1 and encoder C2 rate R2 are equal to

R_1 = \frac{n_1 + n_2}{2 n_1 + n_2}, \qquad R_2 = \frac{n_1 + n_2}{n_1 + 2 n_2}    (6)
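Reusing rsc_encode above, the parallel concatenation of Fig. 2 can be sketched as follows. The alternating puncturing of the two parity streams, which gives n1 = n2 and hence R1 = R2 = 2/3 in relation (6) for an overall rate 1/2, is our assumption of a typical pattern rather than a rule stated here:

    def turbo_encode(d_bits, interleave):
        """Fig. 2: C1 encodes d, C2 encodes interleaved d; x is systematic."""
        x, y1 = rsc_encode(d_bits)                 # encoder C1
        d_perm = [d_bits[i] for i in interleave]   # interleaved input sequence
        _, y2 = rsc_encode(d_perm)                 # encoder C2 (only parity kept)
        # assumed puncturing: emit y1 on even k, y2 on odd k (R1 = R2 = 2/3)
        y = [y1[k] if k % 2 == 0 else y2[k] for k in range(len(d_bits))]
        return x, y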
`
Fig. 2 Recursive Systematic codes (37,21) with parallel concatenation.
`
The decoder DEC, depicted in Fig. 3a, is made up of two elementary decoders (DEC1 and DEC2) in a serial concatenation scheme. The first elementary decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft (weighted) decision. The error bursts at the decoder DEC1 output are scattered by the interleaver, and the delay L1 is inserted to take the decoder DEC1 delay into account. Parallel concatenation is a very attractive scheme because both elementary encoder and decoder use a single frequency clock.
For a discrete memoryless gaussian channel and a binary modulation, the decoder DEC input is made up of a couple R_k of two random variables x_k and y_k at time k:

x_k = (2 d_k - 1) + i_k    (7a)

y_k = (2 Y_k - 1) + q_k    (7b)

where i_k and q_k are two independent noises with the same variance \sigma^2. The redundant information y_k is demultiplexed and sent to decoder DEC1 when Y_k = Y_1k and toward decoder DEC2 when Y_k = Y_2k.
`
When the redundant information of a given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. This is performed by the DEMUX/INSERTION block.
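Relations (7a)-(7b) are a BPSK mapping plus additive gaussian noise, which a short sketch can simulate (sigma being the assumed noise standard deviation):

    import random

    def channel(bits, sigma):
        """Eqs. (7a)-(7b): map 0/1 to -1/+1 and add gaussian noise."""
        return [(2 * b - 1) + random.gauss(0.0, sigma) for b in bits]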
It is well known that soft decoding is better than hard decoding; therefore the first decoder DEC1 must deliver to the second decoder DEC2 a weighted (soft) decision. The Logarithm of Likelihood Ratio (LLR), \Lambda_1(d_k), associated with each decoded bit d_k by the first decoder DEC1 is a relevant piece of information for the second decoder DEC2:

\Lambda_1(d_k) = \log \frac{\Pr\{d_k = 1 \mid \text{observation}\}}{\Pr\{d_k = 0 \mid \text{observation}\}}    (8)

where Pr{d_k = i | observation}, i = 0, 1, is the a posteriori probability (APP) of the data bit d_k.
`
Fig. 3a Principle of the decoder according to a serial concatenation scheme.
`
III - OPTIMAL DECODING OF RSC CODES WITH WEIGHTED DECISION

The VITERBI algorithm is an optimal decoding method which minimizes the probability of sequence error for convolutional codes. Unfortunately this algorithm is not able to yield the APP for each decoded bit. A relevant algorithm for this purpose has been proposed by BAHL et al. [1]. This algorithm minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. For RSC codes, the BAHL et al. algorithm must be modified in order to take into account their recursive character.
`
III-1 Modified BAHL et al. algorithm for RSC codes

Consider an RSC code with constraint length K; at time k the encoder state S_k is represented by a K-uple

S_k = (a_k, a_{k-1}, \ldots, a_{k-K+1})    (9)

Also suppose that the information bit sequence {d_k} is made up of N independent bits d_k, taking values 0 and 1 with equal probability, and that the encoder initial state S_0 and final state S_N are both equal to zero, i.e.

S_0 = S_N = (0, 0, \ldots, 0) = 0    (10)

The encoder output codeword sequence, noted C_1^N = {C_1, ..., C_k, ..., C_N}, is the input to a discrete gaussian memoryless channel whose output is the sequence R_1^N = {R_1, ..., R_k, ..., R_N}, where R_k = (x_k, y_k) is defined by relations (7a) and (7b).
`
The APP of a decoded data bit d_k can be derived from the joint probability \lambda_k^i(m) defined by

\lambda_k^i(m) = \Pr\{d_k = i, S_k = m \mid R_1^N\}    (11)

and thus, the APP of a decoded data bit d_k is equal to

\Pr\{d_k = i \mid R_1^N\} = \sum_m \lambda_k^i(m), \qquad i = 0, 1    (12)

From relations (8) and (12), the LLR \Lambda(d_k) associated with a decoded bit d_k can be written as

\Lambda(d_k) = \log \frac{\sum_m \lambda_k^1(m)}{\sum_m \lambda_k^0(m)}    (13)

Finally the decoder can make a decision by comparing \Lambda(d_k) to a threshold equal to zero:

\hat{d}_k = 1 \ \text{if} \ \Lambda(d_k) > 0, \qquad \hat{d}_k = 0 \ \text{if} \ \Lambda(d_k) < 0    (14)

In order to compute the probability \lambda_k^i(m), let us introduce the probability functions \alpha_k^i(m), \beta_k(m) and \gamma_i(R_k, m', m):

\alpha_k^i(m) = \frac{\Pr\{d_k = i, S_k = m, R_1^k\}}{\Pr\{R_1^k\}} = \Pr\{d_k = i, S_k = m \mid R_1^k\}    (15)

\beta_k(m) = \frac{\Pr\{R_{k+1}^N \mid S_k = m\}}{\Pr\{R_{k+1}^N \mid R_1^k\}}    (16)

\gamma_i(R_k, m', m) = \Pr\{d_k = i, R_k, S_k = m \mid S_{k-1} = m'\}    (17)

The joint probability \lambda_k^i(m) can be rewritten using BAYES rule:

\lambda_k^i(m) = \frac{\Pr\{d_k = i, S_k = m, R_1^N\}}{\Pr\{R_1^N\}}    (18)

Thus we obtain

\lambda_k^i(m) = \frac{\Pr\{d_k = i, S_k = m, R_1^k\}}{\Pr\{R_1^k\}} \cdot \frac{\Pr\{R_{k+1}^N \mid d_k = i, S_k = m, R_1^k\}}{\Pr\{R_{k+1}^N \mid R_1^k\}}    (19)

Taking into account that events after time k are not influenced by observation R_1^k and bit d_k if state S_k is known, the probability \lambda_k^i(m) is equal to

\lambda_k^i(m) = \alpha_k^i(m) \beta_k(m)    (20)

The probabilities \alpha_k^i(m) and \beta_k(m) can be recursively calculated from the probability \gamma_i(R_k, m', m). From annex I, we obtain

\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')}    (21)

and

\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m') \, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m) \, \alpha_k^j(m')}    (22)
`
The probability \gamma_i(R_k, m', m) can be determined from the transition probabilities of the discrete gaussian memoryless channel and the transition probabilities of the encoder trellis. From relation (17), \gamma_i(R_k, m', m) is given by

\gamma_i(R_k, m', m) = p(R_k \mid d_k = i, S_k = m, S_{k-1} = m') \, q(d_k = i \mid S_k = m, S_{k-1} = m') \, \pi(S_k = m \mid S_{k-1} = m')    (23)

where p(\cdot \mid \cdot) is the transition probability of the discrete gaussian memoryless channel. Conditionally to (d_k = i, S_k = m, S_{k-1} = m'), x_k and y_k are two uncorrelated gaussian variables, and thus we obtain

p(R_k \mid d_k = i, S_k = m, S_{k-1} = m') = p(x_k \mid d_k = i, S_k = m, S_{k-1} = m') \, p(y_k \mid d_k = i, S_k = m, S_{k-1} = m')    (24)

Since the convolutional encoder is a deterministic machine, q(d_k = i \mid S_k = m, S_{k-1} = m') is equal to 0 or 1. The transition state probabilities \pi(S_k = m \mid S_{k-1} = m') of the trellis are defined by the encoder input statistic. Generally, Pr{d_k = 1} = Pr{d_k = 0} = 1/2, and since there are two possible transitions from each state, \pi(S_k = m \mid S_{k-1} = m') = 1/2 for each of these transitions.
`
Different steps of the modified BAHL et al. algorithm:

- Step 0: Probabilities \alpha_0^i(m) and \beta_N(m) are initialized according to relation (10):

\alpha_0^i(0) = 1, \quad \alpha_0^i(m) = 0 \ \forall m \neq 0, \ i = 0, 1    (25a)

\beta_N(0) = 1, \quad \beta_N(m) = 0 \ \forall m \neq 0    (25b)

- Step 1: For each observation R_k, the probabilities \alpha_k^i(m) and \gamma_i(R_k, m', m) are computed using relations (21) and (23) respectively.

- Step 2: When the sequence R_1^N has been completely received, probabilities \beta_k(m) are computed using relation (22), and probabilities \alpha_k^i(m) and \beta_k(m) are multiplied in order to obtain \lambda_k^i(m). Finally the LLR associated with each decoded bit d_k is computed from relation (13).
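The recursions (21)-(25b) translate almost directly into code. The sketch below is recast on trellis states rather than the paper's (d_k, S_k) pairs, with per-step normalization standing in for the denominators of (21) and (22); next_state, parity and the other names are our illustrative choices, and the trellis is assumed terminated in state 0 as in relation (10):

    import math

    def bcjr_llr(xs, ys, next_state, parity, n_states, sigma2):
        """LLR of eq. (13) for each bit, via the forward-backward recursions.

        xs, ys      : received systematic / parity samples (eqs. 7a, 7b)
        next_state  : next_state(m, d) -> successor of state m on bit d
        parity      : parity(m, d) -> encoder parity bit in {0, 1}
        sigma2      : gaussian channel noise variance
        """
        N = len(xs)

        def gamma(k, m, d):
            # eqs. (23)-(24), up to factors common to all branches
            x_ref, y_ref = 2 * d - 1, 2 * parity(m, d) - 1
            return math.exp(-((xs[k] - x_ref) ** 2 + (ys[k] - y_ref) ** 2)
                            / (2 * sigma2))

        # forward recursion, eq. (21), from alpha_0(0) = 1 (eq. 25a)
        alpha = [[0.0] * n_states for _ in range(N + 1)]
        alpha[0][0] = 1.0
        for k in range(N):
            for m in range(n_states):
                for d in (0, 1):
                    alpha[k + 1][next_state(m, d)] += alpha[k][m] * gamma(k, m, d)
            s = sum(alpha[k + 1])                 # normalization
            alpha[k + 1] = [a / s for a in alpha[k + 1]]

        # backward recursion, eq. (22), from beta_N(0) = 1 (eq. 25b)
        beta = [[0.0] * n_states for _ in range(N + 1)]
        beta[N][0] = 1.0
        for k in range(N - 1, -1, -1):
            for m in range(n_states):
                beta[k][m] = sum(gamma(k, m, d) * beta[k + 1][next_state(m, d)]
                                 for d in (0, 1))
            s = sum(beta[k])
            beta[k] = [b / s for b in beta[k]]

        # eqs. (13) and (20): combine alpha, gamma and beta into the LLR
        llr = []
        for k in range(N):
            num = sum(alpha[k][m] * gamma(k, m, 1) * beta[k + 1][next_state(m, 1)]
                      for m in range(n_states))
            den = sum(alpha[k][m] * gamma(k, m, 0) * beta[k + 1][next_state(m, 0)]
                      for m in range(n_states))
            llr.append(math.log(num / den))
        return llr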
`
IV - THE EXTRINSIC INFORMATION OF THE RSC DECODER

In this section, we show that the LLR \Lambda(d_k) associated with each decoded bit d_k is the sum of the LLR of d_k at the decoder input and of another information, called extrinsic information, generated by the decoder. Using the LLR \Lambda(d_k) definition (13) and relations (20) and (21), we obtain

\Lambda(d_k) = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(R_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(R_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}    (26)

Since the encoder is systematic (x_k = d_k), the transition probability p(x_k \mid d_k = i, S_k = m, S_{k-1} = m') in expression \gamma_i(R_k, m', m) is independent of the state values S_k and S_{k-1}. Therefore we can factorize this transition probability in the numerator and in the denominator of relation (26):

\Lambda(d_k) = \log \frac{p(x_k \mid d_k = 1)}{p(x_k \mid d_k = 0)} + \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}    (27)

Conditionally to d_k = 1 (resp. d_k = 0), variables x_k are gaussian with mean 1 (resp. -1) and variance \sigma^2; thus the LLR \Lambda(d_k) is still equal to

\Lambda(d_k) = \frac{2}{\sigma^2} x_k + W_k    (28)

where

W_k = \Lambda(d_k)\big|_{x_k = 0} = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m) \, \alpha_{k-1}^j(m') \, \beta_k(m)}    (29)

W_k is a function of the redundant information introduced by the encoder. In general W_k has the same sign as d_k; therefore W_k may improve the LLR associated with each decoded data bit d_k. This quantity represents the extrinsic information supplied by the decoder and does not depend on decoder input x_k. This property will be used for decoding the two parallel concatenated encoders.

V - DECODING SCHEME OF PARALLEL CONCATENATION CODES

In the decoding scheme represented in Fig. 3a, decoder DEC1 computes the LLR \Lambda_1(d_k) for each transmitted bit d_k from sequences {x_k} and {y_k}; then the decoder DEC2 performs the decoding of sequence {d_k} from sequences {\Lambda_1(d_k)} and {y_k}. Decoder DEC1 uses the modified BAHL et al. algorithm and decoder DEC2 may use the VITERBI algorithm. The global decoding rule is not optimal because the first decoder uses only a fraction of the available redundant information. Therefore it is possible to improve the performance of this serial decoder by using a feedback loop.

V-1 Decoding with a feedback loop

We now consider that both decoders DEC1 and DEC2 use the modified BAHL et al. algorithm. We have seen in section IV that the LLR at the decoder output can be expressed as a sum of two terms if the decoder inputs are independent. Hence, if the decoder DEC2 inputs \Lambda_1(d_k) and y_2k are independent, the LLR \Lambda_2(d_k) at the decoder DEC2 output can be written as

\Lambda_2(d_k) = f(\Lambda_1(d_k)) + W_{2k}    (30)

with

\Lambda_1(d_k) = \frac{2}{\sigma^2} x_k + W_{1k}    (31)

From relation (29), we can see that the decoder DEC2 extrinsic information W_2k is a function of the sequence {\Lambda_1(d_n)}_{n \neq k}. Since \Lambda_1(d_n) depends on observation R_1^N, extrinsic information W_2k is correlated with observations x_k and y_1k. Nevertheless, from relation (29), the greater |n - k| is, the less correlated \Lambda_1(d_n) and the observations x_k, y_k are. Thus, due to the presence of interleaving between decoders DEC1 and DEC2, extrinsic information W_2k and observations x_k, y_1k are weakly correlated. Therefore extrinsic information W_2k and observations x_k, y_1k can be jointly used for carrying out a new decoding of bit d_k, the extrinsic information z_k = W_2k acting as a diversity effect in an iterative process.

In Fig. 3b, we have depicted a new decoding scheme using the extrinsic information W_2k generated by decoder DEC2 in a feedback loop. This decoder does not take into account the different delays introduced by decoders DEC1 and DEC2, and a more realistic decoding structure will be presented later.

Fig. 3b Feedback decoder (under 0 internal delay assumption).

The first decoder DEC1 now has three data inputs, (x_k, y_1k, z_k), and probabilities \alpha_{1k}^i(m) and \beta_{1k}(m) are computed by substituting R_k = {x_k, y_1k} by R_k = {x_k, y_1k, z_k} in relations (21) and (22). Taking into account that z_k is weakly correlated with x_k and y_1k, and supposing that z_k can be approximated by a gaussian variable with variance \sigma_z^2 \neq \sigma^2, the transition probability of the discrete gaussian memoryless channel can now be factored into three terms:
`
p(R_k \mid d_k = i, S_k = m, S_{k-1} = m') = p(x_k \mid \cdot) \, p(y_k \mid \cdot) \, p(z_k \mid \cdot)    (32)

The encoder C1, with initial rate R1, through the feedback loop is now equivalent to a rate R'1 encoder with

R'_1 = \frac{R_1}{1 + R_1}    (33)

The first decoder obtains additional redundant information with z_k, which may significantly improve its performances; the term Turbo-codes is given to this iterative decoder scheme with reference to the turbo engine principle.

With the feedback decoder, the LLR \Lambda_1(d_k) generated by decoder DEC1 is now equal to

\Lambda_1(d_k) = \frac{2}{\sigma^2} x_k + \frac{2}{\sigma_z^2} z_k + W_{1k}    (34)

where W_1k depends on the sequence {z_n}_{n \neq k}. As indicated above, information z_k has been built by decoder DEC2 at the previous decoding step. Therefore z_k must not be used as input information for decoder DEC2. Thus decoder DEC2 input sequences at step p (p \geq 2) will be sequences {\tilde{\Lambda}_1(d_n)} and {y_2k} with

\tilde{\Lambda}_1(d_n) = \Lambda_1(d_n)\big|_{z_n = 0}    (35)

Finally, from relation (30), decoder DEC2 extrinsic information z_k = W_2k, after deinterleaving, can be written as

z_k = W_{2k} = \Lambda_2(d_k)\big|_{\tilde{\Lambda}_1(d_k) = 0}    (36)

and the decision at the decoder DEC output is

\hat{d}_k = \operatorname{sign}[\Lambda_2(d_k)]    (37)

The decoding delays introduced by decoder DEC (DEC = DEC1 + DEC2), the interleaver and the deinterleaver imply that the feedback information z_k must be used through an iterative process as represented in Fig. 4a, 4b. In fact, the global decoder circuit is composed of P pipelined identical elementary decoders (Fig. 4a). The p-th decoder DEC (Fig. 4b) input is made up of demodulator output sequences (x)_p and (y)_p through a delay line and of extrinsic information (z)_p generated by the (p-1)-th decoder DEC. Note that the variance \sigma_z^2 of the extrinsic information and the variance of \tilde{\Lambda}_1(d_k) must be estimated at each decoding step p.
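A compact sketch of this feedback loop, reusing bcjr_llr from section III. As an admitted simplification, the extrinsic feedback z_k is folded into the systematic input as the equivalent offset (sigma^2/2) z_k rather than treated as the third gaussian observation of relation (32) with its own variance; interleave is a permutation list and every name is ours:

    def turbo_decode(xs, y1s, y2s, interleave, sigma2, n_iter,
                     next_state, parity, n_states):
        """Iterative feedback decoding (Figs. 3b, 4a); returns hard decisions."""
        N = len(xs)
        deinter = [0] * N
        for k, i in enumerate(interleave):
            deinter[i] = k                       # inverse permutation
        z = [0.0] * N                            # extrinsic feedback, eq. (36)
        for _ in range(n_iter):
            # DEC1: systematic input plus extrinsic feedback as a prior
            in1 = [xs[k] + sigma2 / 2 * z[k] for k in range(N)]
            llr1 = bcjr_llr(in1, y1s, next_state, parity, n_states, sigma2)
            # remove intrinsic and fed-back parts (spirit of eq. 35)
            w1 = [llr1[k] - 2 / sigma2 * in1[k] for k in range(N)]
            # DEC2 works on the interleaved sequence
            in2 = [xs[interleave[k]] + sigma2 / 2 * w1[interleave[k]]
                   for k in range(N)]
            llr2 = bcjr_llr(in2, y2s, next_state, parity, n_states, sigma2)
            w2 = [llr2[k] - 2 / sigma2 * in2[k] for k in range(N)]
            z = [w2[deinter[k]] for k in range(N)]   # deinterleave, eq. (36)
        # decision of eq. (37) on the deinterleaved DEC2 output
        return [1 if llr2[deinter[k]] > 0 else 0 for k in range(N)]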
`
V-2 Interleaving

The interleaver uses a square matrix: bits {d_k} are written row by row and read pseudo-randomly. This non-uniform reading rule is able to spread the residual error blocks of rectangular form that may set up in the interleaver located behind the first decoder DEC1, and to give the greatest possible free distance to the concatenated (parallel) code.
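A sketch of such an interleaver; since the exact pseudo-random reading rule is not specified here, a seeded permutation of the row-by-row write order stands in for it:

    import random

    def square_interleaver(n, seed=1):
        """n x n matrix written row by row; the pseudo-random read order is
        modeled by a seeded shuffle (our stand-in for the paper's rule)."""
        perm = list(range(n * n))
        random.Random(seed).shuffle(perm)
        return perm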
`
VI - RESULTS

For a rate R=1/2 encoder with constraint length K=5, generators G1=37, G2=21 and parallel concatenation (R1=R2=2/3), we have computed the Bit Error Rate (BER) after each decoding step using the Monte Carlo method, as a function of the signal to noise ratio Eb/N0, where Eb is the energy received per information bit d_k and N0 is the one-sided noise power spectral density. The interleaver consists of a 256x256 matrix and the modified BAHL et al. algorithm has been used with a data block length of N=65536 bits.
`
Fig. 4a Modular pipelined decoder, corresponding to an iterative process of the feedback decoding.

Fig. 4b Decoding module (level p).
`
In order to evaluate a BER equal to 10^-5, we have considered 128 data blocks, i.e. approximately 8x10^6 bits d_k. The BER versus Eb/N0, for different values of p, is plotted in Fig. 5. For any given signal to noise ratio greater than 0 dB, the BER decreases as a function of the decoding step p. The coding gain is fairly high for the first values of p (p=1, 2, 3) and carries on increasing for the subsequent values of p. For p=18 for instance, the BER is lower than 10^-5 at Eb/N0 = 0.7 dB. Remember that the Shannon limit for a binary modulation with R=1/2 is Pe = 0 (several authors take Pe = 10^-5 as a reference) for Eb/N0 = 0 dB. With parallel concatenation of RSC convolutional codes and feedback decoding, the performances are at 0.7 dB from Shannon's limit.

The influence of the constraint length on the BER has also been examined. For K greater than 5, at Eb/N0 = 0.7 dB, the BER is slightly worse at the first (p=1) decoding step and the feedback decoding is inefficient to improve the final BER. For K smaller than 5, at Eb/N0 = 0.7 dB, the BER is slightly better at the first decoding step than for K equal to 5, but the correction capacity of encoders C1 and C2 is too weak to improve the feedback decoding. For K=4 (i.e. 8-state elementary decoders) and after iteration 18, a BER of 10^-5 is achieved at Eb/N0 = 0.9 dB. For K equal to 5, we have tested several generators (G1, G2) and the best results were achieved with G1=37, G2=21.
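For illustration only, the sketches above can be wired into a toy Monte Carlo run. The trellis helpers below follow our earlier bit conventions for the (37,21), K=5 code; the block is far shorter than the paper's N=65536, trellis termination (10) is not enforced, and the zeros inserted for punctured parities mimic the DEMUX/INSERTION block, so the resulting figures are only indicative:

    import random

    N_STATES = 16  # K = 5, memory M = 4

    def next_state(m, d):
        # state m packs (a_{k-1}, ..., a_{k-4}); feedback taps of G1 = 37
        a = d ^ (m & 1) ^ ((m >> 1) & 1) ^ ((m >> 2) & 1) ^ ((m >> 3) & 1)
        return ((m << 1) | a) & 0b1111

    def parity(m, d):
        # parity y_k = a_k ^ a_{k-4}, the taps of G2 = 21
        a = d ^ (m & 1) ^ ((m >> 1) & 1) ^ ((m >> 2) & 1) ^ ((m >> 3) & 1)
        return a ^ ((m >> 3) & 1)

    if __name__ == "__main__":
        N, sigma = 1024, 0.8
        perm = square_interleaver(32)             # 32 x 32 = 1024 bits
        d = [random.randint(0, 1) for _ in range(N)]
        x, y = turbo_encode(d, perm)
        xs, ys = channel(x, sigma), channel(y, sigma)
        # DEMUX/INSERTION: zero the decoder input where parity was punctured
        y1s = [ys[k] if k % 2 == 0 else 0.0 for k in range(N)]
        y2s = [ys[k] if k % 2 == 1 else 0.0 for k in range(N)]
        d_hat = turbo_decode(xs, y1s, y2s, perm, sigma ** 2, 4,
                             next_state, parity, N_STATES)
        print(sum(a != b for a, b in zip(d, d_hat)) / N)  # empirical BER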
`
Fig. 5 Binary error rate given by iterative decoding (p = 1, ..., 18) of the code of Fig. 2 (rate 1/2); interleaving 256x256. [Plot: BER versus Eb/N0 (dB, 0 to 5), one curve per decoding step p, with the uncoded reference curve and the theoretical limit at 0 dB.]
`
For low signal to noise ratios, we have sometimes noticed that the BER could increase during the iterative decoding process. In order to overcome this effect, we have divided the extrinsic information z_k by [1 + \theta |\tilde{\Lambda}_1(d_k)|], with \theta = 0.15.
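In code, this stabilization is a one-line damping of the feedback, with llr1_tilde standing for the sequence of values \tilde{\Lambda}_1(d_k) (our naming):

    def stabilize(z, llr1_tilde, theta=0.15):
        """Divide the extrinsic information by 1 + theta * |Lambda~1(d_k)|."""
        return [zk / (1 + theta * abs(l1)) for zk, l1 in zip(z, llr1_tilde)]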
`
In Fig. 6, the histogram of extrinsic information (z)_p has been drawn for several values of iteration p, with all data bits equal to 1 and for a low signal to noise ratio (Eb/N0 = 0.8 dB). For p=1 (first iteration), extrinsic information (z)_p is very poor about bit d_k; furthermore, the gaussian hypothesis made above for extrinsic information (z)_p is not satisfied! Nevertheless, when iteration p increases, the histogram merges towards a gaussian law with a mean equal to 1. For instance, for p=13, extrinsic information (z)_p becomes relevant information concerning data bits.

Fig. 6 Histograms of extrinsic information z after iterations #1, 4, 13 at Eb/N0 = 0.8 dB; all information bits d=1.
`
VII - CONCLUSION

In this paper, we have presented a new class of convolutional codes called Turbo-codes, whose performances in terms of BER are very close to SHANNON's limit. The decoder is made up of P pipelined identical elementary modules, and the rank p elementary module uses the data information coming from the demodulator and the extrinsic information generated by the rank (p-1) module. Each elementary module uses a modified BAHL et al. algorithm which is rather complex. A much simpler algorithm yielding weighted (soft) decisions has also been investigated for Turbo-codes decoding [2], whose complexity is only twice the complexity of the VITERBI algorithm, and with performances which are very close to those of the BAHL et al. algorithm. This new algorithm will enable encoders and decoders to be integrated in silicon with error-correcting performances unmatched at the present time.
`
ANNEX I: EVALUATION OF PROBABILITIES \alpha_k^i(m) AND \beta_k(m)

From relation (15), probability \alpha_k^i(m) is equal to

\alpha_k^i(m) = \frac{\Pr\{d_k = i, S_k = m, R_1^{k-1}, R_k\}}{\Pr\{R_1^{k-1}, R_k\}} = \frac{\Pr\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\}}{\Pr\{R_k \mid R_1^{k-1}\}}    (A1)

The numerator of \alpha_k^i(m) can be expressed from state S_{k-1} and bit d_{k-1}:

\Pr\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \Pr\{d_k = i, d_{k-1} = j, S_k = m, S_{k-1} = m', R_k \mid R_1^{k-1}\}    (A2)

By using BAYES rule, we can write

\Pr\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \frac{\Pr\{d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}}{\Pr\{R_1^{k-1}\}} \, \Pr\{d_k = i, S_k = m, R_k \mid d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}    (A3)

By taking into account that events after time (k-1) are not influenced by observation R_1^{k-1} and bit d_{k-1} if state S_{k-1} is known, and from relation (17), we obtain

\Pr\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')    (A4)

The denominator can also be expressed from bit d_k and state S_k:

\Pr\{R_k \mid R_1^{k-1}\} = \sum_m \sum_{i=0}^{1} \Pr\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\}    (A5)

and from relation (A4), we can write

\Pr\{R_k \mid R_1^{k-1}\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')    (A6)

Finally, probability \alpha_k^i(m) can be expressed from probability \alpha_{k-1}^j(m') by the following relation:

\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m) \, \alpha_{k-1}^j(m')}    (A7)

In the same way, probability \beta_k(m) can be recursively calculated from probability \beta_{k+1}(m'). From relation (16), we have

\beta_k(m) = \frac{\Pr\{R_{k+1}^N \mid S_k = m\}}{\Pr\{R_{k+1}^N \mid R_1^k\}} = \frac{\sum_{m'} \sum_{i=0}^{1} \Pr\{d_{k+1} = i, S_{k+1} = m', R_{k+2}^N, R_{k+1} \mid S_k = m\}}{\Pr\{R_{k+1}^N \mid R_1^k\}}    (A8)

By using BAYES rule, the numerator is equal to

\Pr\{R_{k+1}^N \mid S_k = m\} = \sum_{m'} \sum_{i=0}^{1} \Pr\{R_{k+2}^N \mid S_{k+1} = m'\} \, \Pr\{d_{k+1} = i, S_{k+1} = m', R_{k+1} \mid S_k = m\}    (A9)

By taking into account the expressions of \gamma_i(R_{k+1}, m, m') and \beta_{k+1}(m'), we can write

\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m') \, \beta_{k+1}(m')}{\Pr\{R_{k+1} \mid R_1^k\}}    (A10)

In substituting k by (k+1) in relation (A6), the denominator of (A10) is equal to

\Pr\{R_{k+1} \mid R_1^k\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m) \, \alpha_k^j(m')    (A11)

Finally, probability \beta_k(m) can be expressed from probability \beta_{k+1}(m') by the following relation:

\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m') \, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m) \, \alpha_k^j(m')}    (A12)
`REFERENCES
`
`[1] L.R. Bahl, J. Cocke, F. Jeinek and J. Raviv,
`"Optimal decoding of linear codes for minimizing symbol
`error rate", IEEE Trans. Inform. Theory, vol. IT-20, pp. 248-
`287, March 1974.
`
`[2] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A
`low complexity soft-output Viterbi decoder architecture", to
`appear at ICC' 93.
`
`