NEAR SHANNON LIMIT ERROR-CORRECTING CODING AND DECODING: TURBO-CODES (1)
`Claude Berrou, Alain Glavieux and Punya Thitimajshima
Claude Berrou, Integrated Circuits for Telecommunication Laboratory
`Alain Glavieux and Punya Thitimajshima, Digital Communication Laboratory
`Ecole Nationale Supérieure des Télécommunications de Bretagne, France
`
`(1) Patents N° 9105279 (France), N° 92460011.7 (Europe), N° 07/870,483 (USA)
`
Abstract - This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
`
I - INTRODUCTION
`Consider a binary rate R=1/2 convolutional encoder with
`constraint length K and memory M=K-1. The input to the
encoder at time k is a bit d_k and the corresponding codeword C_k is the binary couple (X_k, Y_k) with

$$X_k = \sum_{i=0}^{K-1} g_{1i}\, d_{k-i} \bmod 2, \quad g_{1i} = 0,1 \qquad (1a)$$

$$Y_k = \sum_{i=0}^{K-1} g_{2i}\, d_{k-i} \bmod 2, \quad g_{2i} = 0,1 \qquad (1b)$$

where G1: {g_{1i}}, G2: {g_{2i}} are the two encoder generators, generally expressed in octal form.
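As a concrete illustration (a sketch of ours, not part of the original paper), relations (1a) and (1b) can be applied directly once the octal generators are expanded into binary taps; octal_to_taps and nsc_encode are hypothetical helper names, and taking g_0 as the most significant bit of the octal word is an assumption.

```python
# Illustrative sketch of relations (1a)-(1b); helper names are ours.

def octal_to_taps(g_octal, K):
    """Expand an octal generator into its binary taps [g_0, ..., g_{K-1}].
    Assumes g_0 is the most significant bit of the octal word."""
    return [int(b) for b in bin(int(str(g_octal), 8))[2:].zfill(K)]

def nsc_encode(d, g1, g2, K):
    """Rate-1/2 NSC encoder: X_k and Y_k are mod-2 inner products of the
    generator taps with the last K input bits d_k, ..., d_{k-K+1}."""
    mem = [0] * K
    X, Y = [], []
    for dk in d:
        mem = [dk] + mem[:-1]                                # shift register
        X.append(sum(g * m for g, m in zip(g1, mem)) % 2)    # relation (1a)
        Y.append(sum(g * m for g, m in zip(g2, mem)) % 2)    # relation (1b)
    return X, Y

K = 5
g1 = octal_to_taps(37, K)   # 37 octal -> taps 1,1,1,1,1
g2 = octal_to_taps(21, K)   # 21 octal -> taps 1,0,0,0,1
X, Y = nsc_encode([1, 0, 1, 1, 0, 0, 1], g1, g2, K)
```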
It is well known that the BER of a classical Non Systematic Convolutional (NSC) code is lower than that of a classical Systematic code with the same memory M at large SNR. At low SNR, it is in general the other way round. The new class of Recursive Systematic Convolutional (RSC) codes, proposed in this paper, can be better than the best NSC code at any SNR for high code rates.
A binary rate R=1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs X_k or Y_k equal to the input bit d_k. For an RSC code, the shift register (memory) input is no longer the bit d_k but a new binary variable a_k. If X_k = d_k (respectively Y_k = d_k), the output Y_k (resp. X_k) is equal to equation (1b) (resp. 1a) by substituting a_k for d_k, and the variable a_k is recursively calculated as

$$a_k = d_k + \sum_{i=1}^{K-1} \gamma_i\, a_{k-i} \bmod 2 \qquad (2)$$

where γ_i is respectively equal to g_{1i} if X_k = d_k and to g_{2i} if Y_k = d_k. Equation (2) can be rewritten as

$$d_k = \sum_{i=0}^{K-1} \gamma_i\, a_{k-i} \bmod 2. \qquad (3)$$
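As a sketch (ours, reusing the hypothetical helpers above), the recursion (2) for the case X_k = d_k, i.e. γ_i = g_{1i}, can be written as:

```python
def rsc_encode(d, g1, g2, K):
    """Rate-1/2 RSC encoder derived from the NSC code: X_k = d_k and
    a_k = d_k + sum_{i=1..K-1} g1_i a_{k-i} (mod 2), relation (2);
    Y_k follows (1b) with a_k substituted for d_k."""
    a_mem = [0] * (K - 1)                       # a_{k-1}, ..., a_{k-K+1}
    X, Y = [], []
    for dk in d:
        ak = (dk + sum(g * a for g, a in zip(g1[1:], a_mem))) % 2  # (2)
        reg = [ak] + a_mem                      # a_k, ..., a_{k-K+1}
        X.append(dk)                            # systematic output
        Y.append(sum(g * a for g, a in zip(g2, reg)) % 2)
        a_mem = reg[:-1]
    return X, Y
```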
One RSC encoder with memory M=4 obtained from an NSC encoder defined by the generators G1=37, G2=21 is depicted in Fig. 1. Generally, we assume that the input bit d_k takes values 0 or 1 with the same probability. From equation (2), we can show that variable a_k exhibits the same statistical property:
`
`
`
$$P_r\{a_k = 0 \,/\, a_{k-1} = \epsilon_1, \ldots, a_{k-K+1} = \epsilon_{K-1}\} = P_r\{d_k = \epsilon\} = 1/2 \qquad (4)$$

with ε equal to

$$\epsilon = \sum_{i=1}^{K-1} \gamma_i\, \epsilon_i \bmod 2, \quad \epsilon = 0,1. \qquad (5)$$
Thus the trellis structure is identical for the RSC code and the NSC code, and these two codes have the same free distance d_f. However, the two output sequences {X_k} and {Y_k} do not correspond to the same input sequence {d_k} for RSC and NSC codes. This is the main difference between the two codes.
When a punctured code is considered, some output bits X_k or Y_k are deleted according to a chosen puncturing pattern defined by a matrix P. For instance, starting from a rate R=1/2 code, the matrix P of the rate 2/3 punctured code is

$$P = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}.$$
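A minimal sketch of this puncturing step (the column-cycling convention, with 1 meaning "transmit" and 0 "delete", is our assumption; the paper does not spell it out):

```python
P = [[1, 1],    # row for X_k: all systematic bits kept
     [1, 0]]    # row for Y_k: every second parity bit deleted

def puncture(X, Y, P):
    """Keep the coded bits flagged by P; 3 bits survive per 2 data bits,
    turning the rate-1/2 mother code into a rate-2/3 code."""
    out = []
    for k, (x, y) in enumerate(zip(X, Y)):
        if P[0][k % len(P[0])]: out.append(x)
        if P[1][k % len(P[1])]: out.append(y)
    return out
```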
`
`Fig. 1b Recursive Systematic code.
`
`
`
`
Fig. 2 Recursive Systematic codes with parallel concatenation.
`
`II - PARALLEL CONCATENATION OF RSC CODES
With RSC codes, a new concatenation scheme, called parallel concatenation, can be used. In Fig. 2, an example of two identical RSC codes with parallel concatenation is shown. Both elementary encoder (C1 and C2) inputs use the same bit d_k, but according to a different sequence due to the presence of an interleaver. For an input bit sequence {d_k}, encoder outputs X_k and Y_k at time k are respectively equal to d_k (systematic encoder) and to encoder C1 output Y_{1k} or to encoder C2 output Y_{2k}. If the coded outputs (Y_{1k}, Y_{2k}) of encoders C1 and C2 are used respectively n1 times and n2 times and so on, the encoder C1 rate R1 and encoder C2 rate R2 are equal to

$$R_1 = \frac{n_1 + n_2}{2 n_1 + n_2}, \qquad R_2 = \frac{n_1 + n_2}{2 n_2 + n_1}. \qquad (6)$$
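For the case n1 = n2 = 1 with the two parity streams punctured alternately (which gives R1 = R2 = 2/3 and a global rate of 1/2, the configuration simulated in section VI), a hedged sketch reusing rsc_encode from above could read as follows; the alternating puncturing pattern is our assumption.

```python
def turbo_encode(d, g1, g2, K, perm):
    """Parallel concatenation of Fig. 2: one systematic stream plus
    parity bits taken alternately from C1 and from C2 (interleaved input)."""
    _, Y1 = rsc_encode(d, g1, g2, K)               # encoder C1
    d_perm = [d[i] for i in perm]                  # interleaved data for C2
    _, Y2 = rsc_encode(d_perm, g1, g2, K)          # encoder C2
    X = list(d)                                    # systematic part, sent once
    Y = [Y1[k] if k % 2 == 0 else Y2[k] for k in range(len(d))]
    return X, Y                                    # global rate 1/2
```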
`
The decoder DEC depicted in Fig. 3a is made up of two elementary decoders (DEC1 and DEC2) in a serial concatenation scheme. The first elementary decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft (weighted) decision. The error bursts at the decoder DEC1 output are scattered by the interleaver, and the delay L1 is inserted to take the decoder DEC1 delay into account. Parallel concatenation is a very attractive scheme because both elementary encoder and decoder use a single frequency clock.

For a discrete memoryless gaussian channel and a binary modulation, the decoder DEC input is made up of a couple R_k of two random variables x_k and y_k at time k:

$$x_k = (2 d_k - 1) + i_k \qquad (7a)$$
$$y_k = (2 Y_k - 1) + q_k \qquad (7b)$$

where i_k and q_k are two independent noises with the same variance σ². The redundant information y_k is demultiplexed and sent to decoder DEC1 when Y_k = Y_{1k} and toward decoder DEC2 when Y_k = Y_{2k}. When the redundant information of a given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. This is performed by the DEMUX/INSERTION block.

It is well known that soft decoding is better than hard decoding; therefore the first decoder DEC1 must deliver to the second decoder DEC2 a weighted (soft) decision. The Logarithm of Likelihood Ratio (LLR), Λ1(d_k), associated with each decoded bit d_k by the first decoder DEC1 is a relevant piece of information for the second decoder DEC2:

$$\Lambda_1(d_k) = \log \frac{P_r\{d_k = 1 \,/\, \text{observation}\}}{P_r\{d_k = 0 \,/\, \text{observation}\}} \qquad (8)$$

where P_r{d_k = i / observation}, i = 0, 1, is the a posteriori probability (APP) of the data bit d_k.

Fig. 3a Principle of the decoder according to a serial concatenation scheme.
`
III - OPTIMAL DECODING OF RSC CODES WITH WEIGHTED DECISION

The VITERBI algorithm is an optimal decoding method which minimizes the probability of sequence error for convolutional codes. Unfortunately, this algorithm is not able to yield the APP for each decoded bit. A relevant algorithm for this purpose has been proposed by BAHL et al. [1]. This algorithm minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. For RSC codes, the BAHL et al. algorithm must be modified in order to take into account their recursive character.
III-1 Modified BAHL et al. algorithm for RSC codes

Consider an RSC code with constraint length K; at time k the encoder state S_k is represented by a K-uple

$$S_k = (a_k, a_{k-1}, \ldots, a_{k-K+1}). \qquad (9)$$

Also suppose that the information bit sequence {d_k} is made up of N independent bits d_k, taking values 0 and 1 with equal probability, and that the encoder initial state S_0 and final state S_N are both equal to zero, i.e.

$$S_0 = S_N = (0, 0, \ldots, 0) = 0. \qquad (10)$$

The encoder output codeword sequence, noted C_1^N = {C_1, ..., C_k, ..., C_N}, is the input to a discrete gaussian memoryless channel whose output is the sequence R_1^N = {R_1, ..., R_k, ..., R_N}, where R_k = (x_k, y_k) is defined by relations (7a) and (7b).
`1065
`
`
`
The APP of a decoded data bit d_k can be derived from the joint probability λ_k^i(m) defined by

$$\lambda_k^i(m) = P_r\{d_k = i, S_k = m \,/\, R_1^N\} \qquad (11)$$

and thus, the APP of a decoded data bit d_k is equal to

$$P_r\{d_k = i \,/\, R_1^N\} = \sum_m \lambda_k^i(m), \quad i = 0, 1. \qquad (12)$$

From relations (8) and (12), the LLR Λ(d_k) associated with a decoded bit d_k can be written as

$$\Lambda(d_k) = \log \frac{\sum_m \lambda_k^1(m)}{\sum_m \lambda_k^0(m)}. \qquad (13)$$

Finally, the decoder can make a decision by comparing Λ(d_k) to a threshold equal to zero:

$$\hat d_k = 1 \ \text{if} \ \Lambda(d_k) > 0; \qquad \hat d_k = 0 \ \text{if} \ \Lambda(d_k) < 0. \qquad (14)$$
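In software form (a sketch of ours), the decision rule (13)-(14) reduces to summing the joint probabilities over the states and thresholding the log-ratio:

```python
import math

def llr_decision(lam1, lam0):
    """lam1[m], lam0[m] stand for lambda_k^1(m) and lambda_k^0(m).
    Returns the LLR of relation (13) and the decision of relation (14)."""
    L = math.log(sum(lam1) / sum(lam0))   # relation (13)
    return L, (1 if L > 0 else 0)         # relation (14)
```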
In order to compute the probability λ_k^i(m), let us introduce the probability functions α_k^i(m), β_k(m) and γ_i(R_k, m', m):

$$\alpha_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}} = P_r\{d_k = i, S_k = m \,/\, R_1^k\} \qquad (15)$$

$$\beta_k(m) = \frac{P_r\{R_{k+1}^N \,/\, S_k = m\}}{P_r\{R_{k+1}^N \,/\, R_1^k\}} \qquad (16)$$

$$\gamma_i(R_k, m', m) = P_r\{d_k = i, R_k, S_k = m \,/\, S_{k-1} = m'\}. \qquad (17)$$

The joint probability λ_k^i(m) can be rewritten using the BAYES rule:
`
$$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k, R_{k+1}^N\}}{P_r\{R_1^k, R_{k+1}^N\}}. \qquad (18)$$

Thus we obtain

$$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}}\; \frac{P_r\{R_{k+1}^N \,/\, d_k = i, S_k = m, R_1^k\}}{P_r\{R_{k+1}^N \,/\, R_1^k\}}. \qquad (19)$$

Taking into account that events after time k are not influenced by observation R_1^k and bit d_k if state S_k is known, the probability λ_k^i(m) is equal to

$$\lambda_k^i(m) = \alpha_k^i(m)\, \beta_k(m). \qquad (20)$$

The probabilities α_k^i(m) and β_k(m) can be recursively calculated from probability γ_i(R_k, m', m). From annex I, we obtain

$$\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')} \qquad (21)$$

and

$$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m')}. \qquad (22)$$
`
The probability γ_i(R_k, m', m) can be determined from the transition probabilities of the discrete gaussian memoryless channel and the transition probabilities of the encoder trellis. From relation (17), γ_i(R_k, m', m) is given by

$$\gamma_i(R_k, m', m) = p(R_k \,/\, d_k = i, S_k = m, S_{k-1} = m')\; q(d_k = i \,/\, S_k = m, S_{k-1} = m')\; \pi(S_k = m \,/\, S_{k-1} = m') \qquad (23)$$

where p(./.) is the transition probability of the discrete gaussian memoryless channel. Conditionally to (d_k = i, S_k = m, S_{k-1} = m'), x_k and y_k are two uncorrelated gaussian variables and thus we obtain

$$p(R_k \,/\, d_k = i, S_k = m, S_{k-1} = m') = p(x_k \,/\, d_k = i, S_k = m, S_{k-1} = m')\; p(y_k \,/\, d_k = i, S_k = m, S_{k-1} = m'). \qquad (24)$$

Since the convolutional encoder is a deterministic machine, q(d_k = i / S_k = m, S_{k-1} = m') is equal to 0 or 1. The transition state probabilities π(S_k = m / S_{k-1} = m') of the trellis are defined by the encoder input statistic. Generally, P_r{d_k = 1} = P_r{d_k = 0} = 1/2 and, since there are two possible transitions from each state, π(S_k = m / S_{k-1} = m') = 1/2 for each of these transitions.
`
Different steps of the modified BAHL et al. algorithm:

- Step 0: Probabilities α_0^i(m) and β_N(m) are initialized according to relation (10):

$$\alpha_0^i(0) = 1, \quad \alpha_0^i(m) = 0 \ \ \forall m \neq 0, \ i = 0, 1 \qquad (25a)$$
$$\beta_N(0) = 1, \quad \beta_N(m) = 0 \ \ \forall m \neq 0. \qquad (25b)$$

- Step 1: For each observation R_k, the probabilities α_k^i(m) and γ_i(R_k, m', m) are computed using relations (21) and (23) respectively.

- Step 2: When the sequence R_1^N has been completely received, probabilities β_k(m) are computed using relation (22), and probabilities α_k^i(m) and β_k(m) are multiplied in order to obtain λ_k^i(m). Finally, the LLR associated with each decoded bit d_k is computed from relation (13).
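To make these steps concrete, here is a hedged Python sketch of the forward-backward pass (ours, not the authors' implementation; it uses state probabilities α_k(m) summed over the input bit, equivalent up to the normalizations of (21)-(22), drops the constant factors of p(./.) and π = 1/2 that cancel in the LLR, and reuses the tap convention of the earlier encoder sketches; trellis and bcjr_llr are names of our own):

```python
import math

def trellis(g1, g2, K):
    """Enumerate RSC transitions (m', i) -> m with outputs (x, y);
    state m encodes (a_{k-1}, ..., a_{k-K+1}), a_{k-1} at bit 0."""
    trans = []
    for mp in range(2 ** (K - 1)):
        a_mem = [(mp >> j) & 1 for j in range(K - 1)]
        for i in (0, 1):
            ak = (i + sum(g * a for g, a in zip(g1[1:], a_mem))) % 2
            reg = [ak] + a_mem
            y = sum(g * a for g, a in zip(g2, reg)) % 2
            m = sum(b << j for j, b in enumerate(reg[:-1]))
            trans.append((mp, i, m, i, y))          # X_k = d_k (systematic)
    return trans

def bcjr_llr(xs, ys, trans, n_states, sigma2):
    """Steps 0-2 for one block: gamma from (23)-(24), recursions (21)-(22)
    with normalization, then the LLR of relation (13)."""
    N = len(xs)
    def gamma(k, xb, yb):                           # channel metric, (24)
        return math.exp(-((xs[k] - (2 * xb - 1)) ** 2 +
                          (ys[k] - (2 * yb - 1)) ** 2) / (2 * sigma2))
    alpha = [[0.0] * n_states for _ in range(N + 1)]
    beta = [[0.0] * n_states for _ in range(N + 1)]
    alpha[0][0] = 1.0                               # relation (25a)
    beta[N][0] = 1.0                                # (25b); assumes S_N = 0
    for k in range(N):                              # Step 1: forward (21)
        for mp, i, m, xb, yb in trans:
            alpha[k + 1][m] += alpha[k][mp] * gamma(k, xb, yb)
        s = sum(alpha[k + 1]) or 1.0
        alpha[k + 1] = [a / s for a in alpha[k + 1]]
    for k in range(N - 1, -1, -1):                  # Step 2: backward (22)
        for mp, i, m, xb, yb in trans:
            beta[k][mp] += beta[k + 1][m] * gamma(k, xb, yb)
        s = sum(beta[k]) or 1.0
        beta[k] = [b / s for b in beta[k]]
    llrs = []
    for k in range(N):                              # LLR, relation (13)
        num = den = 1e-300
        for mp, i, m, xb, yb in trans:
            p = alpha[k][mp] * gamma(k, xb, yb) * beta[k + 1][m]
            if i == 1:
                num += p
            else:
                den += p
        llrs.append(math.log(num / den))
    return llrs
```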
`
IV - THE EXTRINSIC INFORMATION OF THE RSC DECODER

In this chapter, we will show that the LLR Λ(d_k) associated with each decoded bit d_k is the sum of the LLR of d_k at the decoder input and of another information called extrinsic information, generated by the decoder. Using the LLR Λ(d_k) definition (13) and relations (20) and (21), we obtain
`
$$\Lambda(d_k) = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (26)$$

Since the encoder is systematic (X_k = d_k), the transition probability p(x_k / d_k = i, S_k = m, S_{k-1} = m') in expression γ_i(R_k, m', m) is independent of the state values S_k and S_{k-1}. Therefore we can factorize this transition probability in the numerator and in the denominator of relation (26):
`
`
`
`
`
`
`
$$\Lambda(d_k) = \log \frac{p(x_k \,/\, d_k = 1)}{p(x_k \,/\, d_k = 0)} + \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (27)$$

Conditionally to d_k = 1 (resp. d_k = 0), variables x_k are gaussian with mean 1 (resp. −1) and variance σ², thus the LLR Λ(d_k) is still equal to

$$\Lambda(d_k) = \frac{2}{\sigma^2}\, x_k + W_k \qquad (28)$$

where
`
$$W_k = \Lambda(d_k)\Big|_{x_k = 0} = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}. \qquad (29)$$

W_k is a function of the redundant information introduced by the encoder. In general, W_k has the same sign as d_k; therefore W_k may improve the LLR associated with each decoded data bit d_k. This quantity represents the extrinsic information supplied by the decoder and does not depend on the decoder input x_k. This property will be used for decoding the two parallel concatenated encoders.
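In code, relation (28) says that the extrinsic information is what remains of the LLR after removing the channel contribution of x_k (a one-line sketch under the same gaussian assumptions):

```python
def extrinsic(llrs, xs, sigma2):
    """W_k = Lambda(d_k) - (2/sigma^2) x_k, from relation (28)."""
    return [L - (2.0 / sigma2) * x for L, x in zip(llrs, xs)]
```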
`
V - DECODING SCHEME OF PARALLEL CONCATENATION CODES

In the decoding scheme represented in Fig. 3a, decoder DEC1 computes the LLR Λ1(d_k) for each transmitted bit d_k from sequences {x_k} and {y_k}; then the decoder DEC2 performs the decoding of sequence {d_k} from sequences {Λ1(d_k)} and {y_k}. Decoder DEC1 uses the modified BAHL et al. algorithm and decoder DEC2 may use the VITERBI algorithm. The global decoding rule is not optimal because the first decoder uses only a fraction of the available redundant information. Therefore it is possible to improve the performance of this serial decoder by using a feedback loop.
`
`
`V-1 Decoding with a feedback loop
We consider now that both decoders DEC1 and DEC2 use the modified BAHL et al. algorithm. We have seen in section IV that the LLR at the decoder output can be expressed as a sum of two terms if the decoder inputs were independent. Hence, if the decoder DEC2 inputs Λ1(d_k) and y_{2k} are independent, the LLR Λ2(d_k) at the decoder DEC2 output can be written as

$$\Lambda_2(d_k) = f(\Lambda_1(d_k)) + W_{2k} \qquad (30)$$

with

$$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + W_{1k}. \qquad (31)$$

From relation (29), we can see that the decoder DEC2 extrinsic information W_{2k} is a function of the sequence {Λ1(d_n)}_{n≠k}. Since Λ1(d_n) depends on observation R_1^N, extrinsic information W_{2k} is correlated with observations x_k and y_{1k}. Nevertheless, from relation (29), the greater |n − k| is, the less correlated are Λ1(d_n) and the observations x_k, y_k. Thus, due to the presence of interleaving between decoders DEC1 and DEC2, extrinsic information W_{2k} and observations x_k, y_{1k} are weakly correlated. Therefore extrinsic information W_{2k} and observations x_k, y_{1k} can be jointly used for carrying out a new decoding of bit d_k, the extrinsic information z_k = W_{2k} acting as a diversity effect in an iterative process.
Fig. 3b Feedback decoder (under 0 internal delay assumption).

In Fig. 3b, we have depicted a new decoding scheme using the extrinsic information W_{2k} generated by decoder DEC2 in a feedback loop. This decoder does not take into account the different delays introduced by decoders DEC1 and DEC2, and a more realistic decoding structure will be presented later.
The first decoder DEC1 now has three data inputs (x_k, y_{1k}, z_k), and probabilities α_{1k}^i(m) and β_{1k}(m) are computed by substituting R_k = (x_k, y_{1k}) with R_k = (x_k, y_{1k}, z_k) in relations (21) and (22). Taking into account that z_k is weakly correlated with x_k and y_{1k}, and supposing that z_k can be approximated by a gaussian variable with variance σ_z² ≠ σ², the transition probability of the discrete gaussian memoryless channel can now be factored in three terms:
`
`
`
$$p(R_k \,/\, d_k = i, S_k = m, S_{k-1} = m') = p(x_k /.)\; p(y_k /.)\; p(z_k /.) \qquad (32)$$
The encoder C1, with initial rate R1, through the feedback loop is now equivalent to a rate R'1 encoder with

(33)

The first decoder obtains additional redundant information with z_k, which may significantly improve its performances; the term Turbo-codes is given to this iterative decoder scheme with reference to the turbo engine principle.
With the feedback decoder, the LLR Λ1(d_k) generated by decoder DEC1 is now equal to

$$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + \frac{2}{\sigma_z^2}\, z_k + W_{1k} \qquad (34)$$
where W_{1k} depends on the sequence {z_n}_{n≠k}. As indicated above, information z_k has been built by decoder DEC2 at the previous decoding step. Therefore z_k must not be used as input information for decoder DEC2. Thus decoder DEC2 input sequences at step p (p ≥ 2) will be the sequences {Λ̃1(d_k)} and {y_{2k}}, with

$$\tilde\Lambda_1(d_k) = \Lambda_1(d_k)\Big|_{z_k = 0}. \qquad (35)$$

Finally, from relation (30), decoder DEC2 extrinsic information z_k = W_{2k}, after deinterleaving, can be written as

$$z_k = W_{2k} = \Lambda_2(d_k)\Big|_{\tilde\Lambda_1(d_k) = 0} \qquad (36)$$

and the decision at the decoder DEC output is

$$\hat d_k = \mathrm{sign}\,[\Lambda_2(d_k)]. \qquad (37)$$

The decoding delays introduced by decoder DEC (DEC = DEC1 + DEC2), the interleaver and the deinterleaver imply that the feedback information z_k must be used through an iterative process as represented in Fig. 4a, 4b. In fact, the global decoder circuit is composed of P pipelined identical elementary decoders (Fig. 4a). The p-th decoder DEC (Fig. 4b) input is made up of demodulator output sequences (x)_p and (y)_p through a delay line and of extrinsic information (z)_p generated by the (p−1)-th decoder DEC. Note that the variance σ_z² of the extrinsic information and the variance of Λ̃1(d_k) must be estimated at each decoding step p.

Fig. 4a Modular pipelined decoder, corresponding to an iterative process of the feedback decoding.
Fig. 4b Decoding module (level p).
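As an illustration only, one plausible software rendering of this feedback scheme is sketched below. It reuses bcjr_llr from section III-1, assumes unpunctured parity sequences y1s and y2s, a permutation perm with inverse inv_perm (one possible construction is sketched in section V-2), and folds the extrinsic information into the systematic input as an a priori offset instead of estimating σ_z² at each step, which is a simplification of the paper's method:

```python
def turbo_decode(xs, y1s, y2s, trans, n_states, sigma2, perm, inv_perm, p=4):
    """p decoding steps of the DEC1/DEC2 feedback loop of Fig. 3b/4a."""
    N = len(xs)
    z = [0.0] * N                                   # extrinsic info, step 0
    for _ in range(p):
        # DEC1: folding z into the systematic symbol adds an a priori LLR z
        x1 = [x + sigma2 / 2.0 * zk for x, zk in zip(xs, z)]
        L1 = bcjr_llr(x1, y1s, trans, n_states, sigma2)
        # extrinsic of DEC1: strip channel term and fed-back z, as in (35)
        w1 = [L - (2.0 / sigma2) * x - zk for L, x, zk in zip(L1, xs, z)]
        # DEC2 works in the interleaved domain
        x2 = [xs[perm[k]] + sigma2 / 2.0 * w1[perm[k]] for k in range(N)]
        L2 = bcjr_llr(x2, y2s, trans, n_states, sigma2)
        w2 = [L2[k] - (2.0 / sigma2) * x2[k] for k in range(N)]
        z = [w2[inv_perm[k]] for k in range(N)]     # deinterleave, as in (36)
    # decision of relation (37), read back in the natural order
    return [1 if L2[inv_perm[k]] > 0 else 0 for k in range(N)]
```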
`
`
`
V-2 Interleaving

The interleaver uses a square matrix, and bits {d_k} are written row by row and read pseudo-randomly. This non-uniform reading rule is able to spread the residual error blocks of rectangular form that may set up in the interleaver located behind the first decoder DEC1, and to give the largest possible free distance to the concatenated (parallel) code.
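The exact non-uniform reading rule is not given in the paper; as a stand-in (an assumption of ours), a seeded pseudo-random permutation of the 256×256 block, together with its inverse for deinterleaving, can be built as follows:

```python
import random

def make_interleaver(size=256 * 256, seed=1):
    """Row-by-row writing then pseudo-random reading, modelled here as a
    seeded random permutation; inv undoes it (deinterleaving)."""
    perm = list(range(size))
    random.Random(seed).shuffle(perm)
    inv = [0] * size
    for pos, orig in enumerate(perm):
        inv[orig] = pos
    return perm, inv

perm, inv_perm = make_interleaver()
```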
`
VI - RESULTS

For a rate R=1/2 encoder with constraint length K=5, generators G1=37, G2=21 and parallel concatenation (R1=R2=2/3), we have computed the Bit Error Rate (BER) after each decoding step using the Monte Carlo method, as a function of the signal to noise ratio Eb/N0, where Eb is the energy received per information bit d_k and N0 is the noise monolateral power spectral density. The interleaver consists of a 256×256 matrix and the modified BAHL et al. algorithm has been used with a data block length of N = 65536 bits. In order to evaluate a BER equal to 10^-5, we have considered 128 data blocks, i.e. approximately 8×10^6 bits d_k. The BER versus Eb/N0, for different values of p, is plotted in Fig. 5. For any given signal to noise ratio greater than 0 dB, the BER decreases as a function of the decoding step p. The coding gain is fairly high for the first values of p (p=1,2,3) and carries on increasing for the subsequent values of p. For p=18 for instance, the BER is lower than 10^-5 at Eb/N0 = 0.7 dB. Remember that the Shannon limit for a binary modulation with R=1/2 is Pe = 0 (several authors take Pe = 10^-5 as a reference) for Eb/N0 = 0 dB. With parallel concatenation of RSC convolutional codes and feedback decoding, the performances are at 0.7 dB from Shannon's limit.

The influence of the constraint length on the BER has also been examined. For K greater than 5, at Eb/N0 = 0.7 dB, the BER is slightly worse at the first (p=1) decoding step and the feedback decoding is inefficient to improve the final BER. For K smaller than 5, at Eb/N0 = 0.7 dB, the BER is slightly better at the first decoding step than for K equal to 5, but the correction capacity of encoders C1 and C2 is too weak to improve the BER with feedback decoding. For K=4 (i.e. 8-state elementary decoders) and after iteration 18, a BER of 10^-5 is achieved at Eb/N0 = 0.9 dB. For K equal to 5, we have tested several generators (G1, G2) and the best results were achieved with G1=37, G2=21.
`
`
`
`
Fig. 5 Binary error rate given by iterative decoding (p = 1, ..., 18) of the code of Fig. 2 (rate 1/2); interleaving 256×256.
`
For low signal to noise ratios, we have sometimes noticed that the BER could increase during the iterative decoding process. In order to overcome this effect, we have divided the extrinsic information z_k by [1 + θ|Λ̃1(d_k)|], with θ = 0.15.
`
In Fig. 6, the histogram of extrinsic information (z)_p has been drawn for several values of iteration p, with all data bits equal to 1 and for a low signal to noise ratio (Eb/N0 = 0.8 dB). For p = 1 (first iteration), extrinsic information (z)_p is very poor about bit d_k; furthermore, the gaussian hypothesis made above for extrinsic information (z)_p is not satisfied! Nevertheless, when iteration p increases, the histogram merges towards a gaussian law with a mean equal to 1. For instance, for p = 13, extrinsic information (z)_p becomes relevant information concerning data bits.

Fig. 6 Histograms of extrinsic information z after iterations #1, 4, 13 at Eb/N0 = 0.8 dB; all information bits d = 1.
`
`
VII - CONCLUSION

In this paper, we have presented a new class of convolutional codes called Turbo-codes, whose performances in terms of BER are very close to SHANNON's limit. The decoder is made up of P pipelined identical elementary modules, and the rank p elementary module uses the data information coming from the demodulator and the extrinsic information generated by the rank (p−1) module. Each elementary module uses a modified BAHL et al. algorithm which is rather complex. A much simpler algorithm yielding weighted (soft) decisions has also been investigated for Turbo-codes decoding [2], whose complexity is only twice the complexity of the VITERBI algorithm, and with performances which are very close to those of the BAHL et al. algorithm. This new algorithm will enable encoders and decoders to be integrated in silicon with error correcting performances unmatched at the present time.
`
ANNEX I: EVALUATION OF PROBABILITIES α_k^i(m) AND β_k(m)

From relation (15), probability α_k^i(m) is equal to

$$\alpha_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^{k-1}, R_k\}}{P_r\{R_1^{k-1}, R_k\}} = \frac{P_r\{d_k = i, S_k = m, R_k \,/\, R_1^{k-1}\}}{P_r\{R_k \,/\, R_1^{k-1}\}}. \qquad (A1)$$

The numerator of α_k^i(m) can be expressed from state S_{k-1} and bit d_{k-1}:

$$P_r\{d_k = i, S_k = m, R_k \,/\, R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} P_r\{d_k = i, d_{k-1} = j, S_k = m, S_{k-1} = m', R_k \,/\, R_1^{k-1}\}. \qquad (A2)$$

By using the BAYES rule, we can write

$$P_r\{d_k = i, S_k = m, R_k \,/\, R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \frac{P_r\{d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}}{P_r\{R_1^{k-1}\}}\; P_r\{d_k = i, S_k = m, R_k \,/\, d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}. \qquad (A3)$$

By taking into account that events after time (k−1) are not influenced by observation R_1^{k-1} and bit d_{k-1} if state S_{k-1} is known, and from relation (17), we obtain

$$P_r\{d_k = i, S_k = m, R_k \,/\, R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m'). \qquad (A4)$$

The denominator can also be expressed from bit d_k and state S_k:

$$P_r\{R_k \,/\, R_1^{k-1}\} = \sum_m \sum_{i=0}^{1} P_r\{d_k = i, S_k = m, R_k \,/\, R_1^{k-1}\} \qquad (A5)$$

and from relation (A4), we can write

$$P_r\{R_k \,/\, R_1^{k-1}\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m'). \qquad (A6)$$

Finally, probability α_k^i(m) can be expressed from probability α_{k-1}^j(m') by the following relation:

$$\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}. \qquad (A7)$$

In the same way, probability β_k(m) can be recursively calculated from probability β_{k+1}(m'). From relation (16), we have
`
`1070
`
$$\beta_k(m) = \frac{P_r\{R_{k+1}^N \,/\, S_k = m\}}{P_r\{R_{k+1}^N \,/\, R_1^k\}} = \frac{\sum_{m'} \sum_{i=0}^{1} P_r\{d_{k+1} = i, S_{k+1} = m', R_{k+1}^N \,/\, S_k = m\}}{P_r\{R_{k+1}^N \,/\, R_1^k\}}. \qquad (A8)$$

By using the BAYES rule, the numerator is equal to

$$P_r\{R_{k+1}^N \,/\, S_k = m\} = \sum_{m'} \sum_{i=0}^{1} P_r\{R_{k+2}^N \,/\, S_{k+1} = m'\}\; P_r\{d_{k+1} = i, S_{k+1} = m', R_{k+1} \,/\, S_k = m\}. \qquad (A9)$$

By taking into account the expressions of γ_i(R_{k+1}, m, m') and β_{k+1}(m'), we can write

$$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{P_r\{R_{k+1} \,/\, R_1^k\}}. \qquad (A10)$$

In substituting k by (k+1) in relation (A6), the denominator of (A10) is equal to

$$P_r\{R_{k+1} \,/\, R_1^k\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m'). \qquad (A11)$$

Finally, probability β_k(m) can be expressed from probability β_{k+1}(m') by the following relation:

$$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m')}. \qquad (A12)$$
`REFERENCES
`
[1] L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, March 1974.
`
[2] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture", to appear at ICC'93.
`
`
`