NEAR SHANNON LIMIT ERROR-CORRECTING CODING AND DECODING: TURBO-CODES (1)

Claude Berrou, Alain Glavieux and Punya Thitimajshima

Claude Berrou, Integrated Circuits for Telecommunication Laboratory
Alain Glavieux and Punya Thitimajshima, Digital Communication Laboratory
Ecole Nationale Supérieure des Télécommunications de Bretagne, France

(1) Patents N° 9105279 (France), N° 92460011 (Europe), N° 07/870,483 (USA)
Abstract - This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
I - INTRODUCTION

Consider a binary rate R=1/2 convolutional encoder with constraint length K and memory M=K-1. The input to the encoder at time k is a bit d_k and the corresponding codeword C_k is the binary couple (X_k, Y_k) with

$X_k = \sum_{i=0}^{K-1} g_{1i}\, d_{k-i} \bmod 2, \qquad g_{1i} = 0, 1$    (1a)

$Y_k = \sum_{i=0}^{K-1} g_{2i}\, d_{k-i} \bmod 2, \qquad g_{2i} = 0, 1$    (1b)
where G1 = {g_{1i}}, G2 = {g_{2i}} are the two encoder generators, generally expressed in octal form.

It is well known that the BER of a classical Non Systematic Convolutional (NSC) code is lower than that of a classical Systematic code with the same memory M at large SNR. At low SNR, it is in general the other way round. The new class of Recursive Systematic Convolutional (RSC) codes, proposed in this paper, can be better than the best NSC code at any SNR for high code rates.

A binary rate R=1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs X_k or Y_k equal to the input bit d_k. For an RSC code, the shift register (memory) input is no longer the bit d_k but a new binary variable a_k. If X_k = d_k (respectively Y_k = d_k), the output Y_k (resp. X_k) is equal to equation (1b) (resp. 1a) with a_k substituted for d_k, and the variable a_k is recursively calculated as

$a_k = d_k + \sum_{i=1}^{K-1} \gamma_i\, a_{k-i} \bmod 2$    (2)

where γ_i is respectively equal to g_{1i} if X_k = d_k and to g_{2i} if Y_k = d_k. Equation (2) can be rewritten as

$d_k = \sum_{i=0}^{K-1} \gamma_i\, a_{k-i} \bmod 2.$    (3)

One RSC encoder with memory M=4 obtained from an NSC encoder defined by generators G1=37, G2=21 is depicted in Fig. 1.

Generally, we assume that the input bit d_k takes values 0 or 1 with the same probability. From equation (2), we can show that the variable a_k exhibits the same statistical property

$P_r\{a_k = 0 \mid a_1 = \epsilon_1, \ldots, a_{k-1} = \epsilon_{k-1}\} = P_r\{d_k = \epsilon\} = 1/2$    (4)

with ε equal to

$\epsilon = \sum_{i=1}^{K-1} \gamma_i\, \epsilon_i \bmod 2, \qquad \epsilon = 0, 1.$    (5)

Thus the trellis structure is identical for the RSC code and the NSC code and these two codes have the same free distance d_f. However, the two output sequences {X_k} and {Y_k} do not correspond to the same input sequence {d_k} for RSC and NSC codes. This is the main difference between the two codes.

When a punctured code is considered, some output bits X_k or Y_k are deleted according to a chosen puncturing pattern defined by a matrix P. For instance, starting from a rate R=1/2 code, the matrix P of the rate 2/3 punctured code is

$P = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}.$
Fig. 1b Recursive Systematic code.
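To make the RSC construction above concrete, here is a minimal Python sketch of a rate 1/2 RSC encoder in the style of Fig. 1 (K = 5, M = 4), assuming the systematic output X_k = d_k, feedback taps from G1 = 37 and parity taps from G2 = 21 (octal); the tap ordering and all names are illustrative assumptions, not the paper's specification.

```python
# Minimal sketch of a rate-1/2 RSC encoder (K=5, M=4), assuming
# X_k = d_k, feedback taps G1 = 37 (octal) and parity taps G2 = 21 (octal).

def rsc_encode(bits, g1=0o37, g2=0o21, K=5):
    """Return the systematic/parity sequences (X, Y) for input bits d_k."""
    state = [0] * (K - 1)                 # shift register a_{k-1} .. a_{k-K+1}
    X, Y = [], []
    g1_taps = [(g1 >> (K - 1 - i)) & 1 for i in range(K)]  # g1_0 .. g1_{K-1}
    g2_taps = [(g2 >> (K - 1 - i)) & 1 for i in range(K)]
    for d in bits:
        # relation (2): a_k = d_k + sum_{i>=1} g1_i a_{k-i}  (mod 2)
        a = d
        for i in range(1, K):
            a ^= g1_taps[i] & state[i - 1]
        reg = [a] + state                 # a_k, a_{k-1}, ..., a_{k-K+1}
        X.append(d)                       # systematic output X_k = d_k
        y = 0
        for i in range(K):
            y ^= g2_taps[i] & reg[i]      # (1b) with a_k substituted for d_k
        Y.append(y)
        state = reg[:-1]
    return X, Y
```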
II - PARALLEL CONCATENATION OF RSC CODES

With RSC codes, a new concatenation scheme, called parallel concatenation, can be used. In Fig. 2, an example of two identical RSC codes with parallel concatenation is shown. Both elementary encoder (C1 and C2) inputs use the same bit d_k but according to a different sequence due to the presence of an interleaver. For an input bit sequence {d_k}, encoder outputs X_k and Y_k at time k are respectively equal to d_k (systematic encoder) and to encoder C1 output Y_{1k}, or to encoder C2 output Y_{2k}. If the coded outputs (Y_{1k}, Y_{2k}) of encoders C1 and C2 are used respectively n1 times and n2 times and so on, the encoder C1 rate R1 and encoder C2 rate R2 are equal to

$R_1 = \frac{n_1 + n_2}{2 n_1 + n_2}, \qquad R_2 = \frac{n_1 + n_2}{2 n_2 + n_1}.$    (6)
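As a quick numerical check of relation (6), the following sketch (helper name and use of Python's fractions module are mine) reproduces the R1 = R2 = 2/3 configuration used in Section VI, where each parity stream is kept once out of every two information bits (n1 = n2 = 1):

```python
# Hedged check of relation (6): elementary-encoder rates when Y_1k is
# kept n1 times and Y_2k n2 times per n1 + n2 information bits.

from fractions import Fraction

def concatenation_rates(n1, n2):
    R1 = Fraction(n1 + n2, 2 * n1 + n2)
    R2 = Fraction(n1 + n2, 2 * n2 + n1)
    return R1, R2

print(concatenation_rates(1, 1))   # (2/3, 2/3): the configuration of Section VI
```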
Fig. 2 Recursive Systematic codes with parallel concatenation.

The decoder DEC depicted in Fig. 3a is made up of two elementary decoders (DEC1 and DEC2) in a serial concatenation scheme. The first elementary decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft (weighted) decision. The error bursts at the decoder DEC1 output are scattered by the interleaver, and the encoder delay L1 is inserted to take the decoder DEC1 delay into account. Parallel concatenation is a very attractive scheme because both the elementary encoder and decoder use a single frequency clock.

For a discrete memoryless gaussian channel and a binary modulation, the decoder DEC input is made up of a couple R_k of two random variables x_k and y_k, at time k:

$x_k = (2 d_k - 1) + i_k$    (7a)

$y_k = (2 Y_k - 1) + q_k$    (7b)

where i_k and q_k are two independent noises with the same variance σ². The redundant information y_k is demultiplexed and sent to decoder DEC1 when Y_k = Y_{1k} and toward decoder DEC2 when Y_k = Y_{2k}. When the redundant information of a given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. This is performed by the DEMUX/INSERTION block.

It is well known that soft decoding is better than hard decoding, therefore the first decoder DEC1 must deliver to the second decoder DEC2 a weighted (soft) decision. The Logarithm of Likelihood Ratio (LLR), Λ1(d_k), associated with each decoded bit d_k by the first decoder DEC1, is a relevant piece of information for the second decoder DEC2:

$\Lambda_1(d_k) = \log \frac{P_r\{d_k = 1 \mid \text{observation}\}}{P_r\{d_k = 0 \mid \text{observation}\}}$    (8)

where P_r{d_k = i | observation}, i = 0, 1, is the a posteriori probability (APP) of the data bit d_k.

Fig. 3a Principle of the decoder according to a serial concatenation scheme.

III - OPTIMAL DECODING OF RSC CODES WITH WEIGHTED DECISION

The VITERBI algorithm is an optimal decoding method which minimizes the probability of sequence error for convolutional codes. Unfortunately this algorithm is not able to yield the APP for each decoded bit. A relevant algorithm for this purpose has been proposed by BAHL et al. [1]. This algorithm minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. For RSC codes, the BAHL et al. algorithm must be modified in order to take into account their recursive character.

III - 1 Modified BAHL et al. algorithm for RSC codes

Consider an RSC code with constraint length K; at time k the encoder state S_k is represented by a K-uple

$S_k = (a_k, a_{k-1}, \ldots, a_{k-K+1}).$    (9)

Also suppose that the information bit sequence {d_k} is made up of N independent bits d_k, taking values 0 and 1 with equal probability, and that the encoder initial state S_0 and final state S_N are both equal to zero, i.e.

$S_0 = S_N = (0, 0, \ldots, 0) = 0.$    (10)

The encoder output codeword sequence, noted $C_1^N = \{C_1, \ldots, C_k, \ldots, C_N\}$, is the input to a discrete gaussian memoryless channel whose output is the sequence $R_1^N = \{R_1, \ldots, R_k, \ldots, R_N\}$, where R_k = (x_k, y_k) is defined by relations (7a) and (7b).
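Relations (7a)-(7b) amount to antipodal mapping plus additive white gaussian noise; a minimal simulation sketch, with an illustrative function name and a seeded generator, might look as follows:

```python
# Minimal model of relations (7a)-(7b): antipodal transmission of the
# systematic and parity bits over a memoryless gaussian channel with
# noise variance sigma^2 (function and names are illustrative).

import random

def channel(d_bits, Y_bits, sigma, rng=random.Random(0)):
    x = [(2 * d - 1) + rng.gauss(0.0, sigma) for d in d_bits]   # (7a)
    y = [(2 * Y - 1) + rng.gauss(0.0, sigma) for Y in Y_bits]   # (7b)
    return x, y
```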

The APP of a decoded data bit d_k can be derived from the joint probability λ_k^i(m) defined by

$\lambda_k^i(m) = P_r\{d_k = i, S_k = m \mid R_1^N\}$    (11)

and thus, the APP of a decoded data bit d_k is equal to

$P_r\{d_k = i \mid R_1^N\} = \sum_m \lambda_k^i(m), \qquad i = 0, 1.$    (12)

From relations (8) and (12), the LLR Λ(d_k) associated with a decoded bit d_k can be written as

$\Lambda(d_k) = \log \frac{\sum_m \lambda_k^1(m)}{\sum_m \lambda_k^0(m)}.$    (13)

Finally the decoder can make a decision by comparing Λ(d_k) to a threshold equal to zero:

$\hat d_k = 1 \ \text{if}\ \Lambda(d_k) > 0; \qquad \hat d_k = 0 \ \text{if}\ \Lambda(d_k) < 0.$    (14)

In order to compute the probability λ_k^i(m), let us introduce the probability functions α_k^i(m), β_k(m) and γ_i(R_k, m', m):

$\alpha_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}} = P_r\{d_k = i, S_k = m \mid R_1^k\}$    (15)

$\beta_k(m) = \frac{P_r\{R_{k+1}^N \mid S_k = m\}}{P_r\{R_{k+1}^N \mid R_1^k\}}$    (16)

$\gamma_i(R_k, m', m) = P_r\{d_k = i, R_k, S_k = m \mid S_{k-1} = m'\}.$    (17)

The joint probability λ_k^i(m) can be rewritten using the BAYES rule as

$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^N\}}{P_r\{R_1^N\}}.$    (18)

Thus we obtain

$\lambda_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^k\}}{P_r\{R_1^k\}} \cdot \frac{P_r\{R_{k+1}^N \mid d_k = i, S_k = m, R_1^k\}}{P_r\{R_{k+1}^N \mid R_1^k\}}.$    (19)

Taking into account that events after time k are not influenced by observation R_1^k and bit d_k if state S_k is known, the probability λ_k^i(m) is equal to

$\lambda_k^i(m) = \alpha_k^i(m)\, \beta_k(m).$    (20)
The probabilities α_k^i(m) and β_k(m) can be recursively calculated from probability γ_i(R_k, m', m). From ANNEX I, we obtain

$\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}$    (21)

and

$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m')}.$    (22)

The probability γ_i(R_k, m', m) can be determined from the transition probabilities of the discrete gaussian memoryless channel and the transition probabilities of the encoder trellis. From relation (17), γ_i(R_k, m', m) is given by

$\gamma_i(R_k, m', m) = p(R_k \mid d_k = i, S_k = m, S_{k-1} = m')\; q(d_k = i \mid S_k = m, S_{k-1} = m')\; \pi(S_k = m \mid S_{k-1} = m')$    (23)

where p(·/·) is the transition probability of the discrete gaussian memoryless channel. Conditionally to (d_k = i, S_k = m, S_{k-1} = m'), x_k and y_k are two uncorrelated gaussian variables and thus we obtain

$p(R_k \mid d_k = i, S_k = m, S_{k-1} = m') = p(x_k \mid d_k = i, S_k = m, S_{k-1} = m')\; p(y_k \mid d_k = i, S_k = m, S_{k-1} = m').$    (24)

Since the convolutional encoder is a deterministic machine, q(d_k = i | S_k = m, S_{k-1} = m') is equal to 0 or 1. The transition state probabilities π(S_k = m | S_{k-1} = m') of the trellis are defined by the encoder input statistic. Generally, P_r{d_k = 1} = P_r{d_k = 0} = 1/2 and, since there are two possible transitions from each state, π(S_k = m | S_{k-1} = m') = 1/2 for each of these transitions.
Different steps of the modified BAHL et al. algorithm:

- Step 0: Probabilities α_0^i(m) and β_N(m) are initialized according to relation (10):

$\alpha_0^i(0) = 1, \quad \alpha_0^i(m) = 0\ \ \forall m \neq 0,\ i = 0, 1$    (25a)

$\beta_N(0) = 1, \quad \beta_N(m) = 0\ \ \forall m \neq 0.$    (25b)

- Step 1: For each observation R_k, the probabilities α_k^i(m) and γ_i(R_k, m', m) are computed using relations (21) and (23) respectively.

- Step 2: When the sequence R_1^N has been completely received, probabilities β_k(m) are computed using relation (22), and probabilities α_k^i(m) and β_k(m) are multiplied in order to obtain λ_k^i(m). Finally the LLR associated with each decoded bit d_k is computed from relation (13).
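For concreteness, a compact numerical sketch of Steps 0 to 2 is given below. It follows the normalized forward recursion (21), the backward recursion (22) and the LLR (13) for a generic binary-input trellis; the table names next_state and out_parity, the per-step renormalization and the guard constants are illustrative assumptions, not the paper's specification.

```python
# Compact numerical sketch of Steps 0-2 (modified BAHL et al. algorithm)
# for a small RSC trellis; trellis tables and names are illustrative.

import math

def bcjr_llr(x, y, sigma, next_state, out_parity, n_states):
    """Return the LLR sequence Lambda(d_k) for observations (x, y).

    next_state[m'][i] / out_parity[m'][i]: successor state and parity bit
    when input bit i is encoded from state m' (a deterministic machine,
    so q(.) is 0 or 1 as in relation (23))."""
    N = len(x)
    def gamma(k, mp, i):
        # channel term p(x|.)p(y|.) of (24), times pi = 1/2 of (23)
        gx = math.exp(-((x[k] - (2 * i - 1)) ** 2) / (2 * sigma ** 2))
        gy = math.exp(-((y[k] - (2 * out_parity[mp][i] - 1)) ** 2)
                      / (2 * sigma ** 2))
        return 0.5 * gx * gy
    # Step 0: initializations (25a)-(25b), S_0 = S_N = 0
    alpha = [[0.0] * n_states for _ in range(N + 1)]
    beta = [[0.0] * n_states for _ in range(N + 1)]
    alpha[0][0] = 1.0
    beta[N][0] = 1.0
    # Step 1: forward recursion (21), renormalized at each k
    for k in range(N):
        for mp in range(n_states):
            for i in (0, 1):
                alpha[k + 1][next_state[mp][i]] += gamma(k, mp, i) * alpha[k][mp]
        s = sum(alpha[k + 1]) or 1.0
        alpha[k + 1] = [a / s for a in alpha[k + 1]]
    # Step 2: backward recursion (22), joint lambda (20) and LLR (13)
    llr = [0.0] * N
    for k in range(N - 1, -1, -1):
        lam = [0.0, 0.0]
        for mp in range(n_states):
            for i in (0, 1):
                g = gamma(k, mp, i)
                lam[i] += g * alpha[k][mp] * beta[k + 1][next_state[mp][i]]
                beta[k][mp] += g * beta[k + 1][next_state[mp][i]]
        s = sum(beta[k]) or 1.0
        beta[k] = [b / s for b in beta[k]]
        llr[k] = math.log((lam[1] or 1e-300) / (lam[0] or 1e-300))
    return llr
```

The two tables can be enumerated from the rsc_encode sketch of Section I by stepping every state with both input bits.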
IV - THE EXTRINSIC INFORMATION OF THE RSC DECODER

In this chapter, we will show that the LLR Λ(d_k) associated with each decoded bit d_k is the sum of the LLR of d_k at the decoder input and of another information called extrinsic information, generated by the decoder.

Using the LLR Λ(d_k) definition (13) and relations (20) and (21), we obtain

$\Lambda(d_k) = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(R_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}.$    (26)

Since the encoder is systematic (X_k = d_k), the transition probability p(x_k | d_k = i, S_k = m, S_{k-1} = m') in expression γ_i(R_k, m', m) is independent of state values S_k and S_{k-1}. Therefore we can factorize this transition probability in the numerator and in the denominator of relation (26):

`
`feedback loo -
`
`16-STATE
`DECODER
`DEC 1
`
`inter-
`
`leaving
`
`16-STATE
`DECODEFI
`
`DEC?
`
`2.‘
`
`DEMUXI
`INSERTION
`
`Fig. 3b Feedback decoder (under 0 internal delay assumption).
`
`deinter-
`
`leaving
`
`deinter
`
`leaving
`
`decoded output
`A
`
`d k
`
$\Lambda(d_k) = \log \frac{p(x_k \mid d_k = 1)}{p(x_k \mid d_k = 0)} + \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}.$    (27)

Conditionally to d_k = 1 (resp. d_k = 0), variables x_k are gaussian with mean 1 (resp. -1) and variance σ², thus the LLR Λ(d_k) is still equal to

$\Lambda(d_k) = \frac{2}{\sigma^2}\, x_k + W_k$    (28)

where

$W_k = \Lambda(d_k)\big|_{x_k = 0} = \log \frac{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_1(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}{\sum_m \sum_{m'} \sum_{j=0}^{1} \gamma_0(y_k, m', m)\, \alpha_{k-1}^j(m')\, \beta_k(m)}.$    (29)
W_k is a function of the redundant information introduced by the encoder. In general, W_k has the same sign as d_k; therefore W_k may improve the LLR associated with each decoded data bit d_k. This quantity represents the extrinsic information supplied by the decoder and does not depend on decoder input x_k. This property will be used for decoding the two parallel concatenated encoders.
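Relation (28) suggests a simple way to separate the two terms at a decoder output, sketched below (names are illustrative):

```python
# Small illustration of relation (28): the decoder-output LLR splits into
# the channel (intrinsic) term 2*x_k/sigma^2 and the extrinsic term W_k,
# so W_k can be recovered by subtraction.

def split_llr(llr, x, sigma):
    """Return (intrinsic, extrinsic) parts of each output LLR."""
    intrinsic = [2.0 * xk / sigma ** 2 for xk in x]
    extrinsic = [L - I for L, I in zip(llr, intrinsic)]   # W_k of (28)
    return intrinsic, extrinsic
```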
V - DECODING SCHEME OF PARALLEL CONCATENATION CODES

In the decoding scheme represented in Fig. 3a, decoder DEC1 computes the LLR Λ1(d_k) for each transmitted bit d_k from sequences {x_k} and {y_k}, then the decoder DEC2 performs the decoding of sequence {d_k} from sequences {Λ1(d_k)} and {y_k}. Decoder DEC1 uses the modified BAHL et al. algorithm and decoder DEC2 may use the VITERBI algorithm. The global decoding rule is not optimal because the first decoder uses only a fraction of the available redundant information. Therefore it is possible to improve the performance of this serial decoder by using a feedback loop.
V-1 Decoding with a feedback loop

We now consider that both decoders DEC1 and DEC2 use the modified BAHL et al. algorithm. We have seen in section IV that the LLR at the decoder output can be expressed as a sum of two terms if the decoder inputs are independent. Hence if the decoder DEC2 inputs Λ1(d_k) and y_{2k} are independent, the LLR Λ2(d_k) at the decoder DEC2 output can be written as

$\Lambda_2(d_k) = f(\Lambda_1(d_k)) + W_{2k}$    (30)

with

$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + W_{1k}.$    (31)

From relation (29), we can see that the decoder DEC2 extrinsic information W_{2k} is a function of the sequence {Λ1(d_n)}_{n≠k}. Since Λ1(d_n) depends on observation R_1^N, extrinsic information W_{2k} is correlated with observations x_k and y_{1k}. Nevertheless, from relation (29), the greater |n-k| is, the less correlated Λ1(d_n) and the observations x_k, y_k are. Thus, due to the presence of interleaving between decoders DEC1 and DEC2, extrinsic information W_{2k} and observations x_k, y_{1k} are weakly correlated. Therefore extrinsic information W_{2k} and observations x_k, y_{1k} can be jointly used for carrying out a new decoding of bit d_k, the extrinsic information z_k = W_{2k} acting as a diversity effect in an iterative process.

In Fig. 3b, we have depicted a new decoding scheme using the extrinsic information W_{2k} generated by decoder DEC2 in a feedback loop. This decoder does not take into account the different delays introduced by decoders DEC1 and DEC2, and a more realistic decoding structure will be presented later.

Fig. 3b Feedback decoder (under 0 internal delay assumption).
The first decoder DEC1 now has three data inputs, (x_k, y_{1k}, z_k), and probabilities α_k^i(m) and β_k(m) are computed by substituting R_k = (x_k, y_{1k}) by R_k = (x_k, y_{1k}, z_k) in relations (21) and (22). Taking into account that z_k is weakly correlated with x_k and y_{1k}, and supposing that z_k can be approximated by a gaussian variable with variance σ_z² ≠ σ², the transition probability of the discrete gaussian memoryless channel can now be factored in three terms:

$p(R_k \mid d_k = i, S_k = m, S_{k-1} = m') = p(x_k \mid \cdot)\, p(y_k \mid \cdot)\, p(z_k \mid \cdot).$    (32)
The encoder C1 with initial rate R1, through the feedback loop, is now equivalent to a rate R'1 encoder with

(33)
The first decoder obtains additional redundant information with z_k that may significantly improve its performances; the term Turbo-codes is given for this iterative decoder scheme with reference to the turbo engine principle.

With the feedback decoder, the LLR Λ1(d_k) generated by decoder DEC1 is now equal to

$\Lambda_1(d_k) = \frac{2}{\sigma^2}\, x_k + \frac{2}{\sigma_z^2}\, z_k + W_{1k}$    (34)

where W_{1k} depends on the sequence {z_n}_{n≠k}. As indicated above, information z_k has been built by decoder DEC2 at the previous decoding step. Therefore z_k must not be used as input information for decoder DEC2. Thus decoder DEC2 input sequences at step p (p ≥ 2) will be sequences {Λ̃1(d_k)} and {y_{2k}}, with

$\tilde\Lambda_1(d_k) = \Lambda_1(d_k)\big|_{z_k = 0}.$    (35)
Finally, from relation (30), decoder DEC2 extrinsic information z_k = W_{2k}, after deinterleaving, can be written as

$z_k = W_{2k} = \Lambda_2(d_k)\big|_{\tilde\Lambda_1(d_k) = 0}$    (36)

and the decision at the decoder DEC output is

$\hat d_k = \operatorname{sign}\big[\Lambda_2(d_k)\big].$    (37)

The decoding delays introduced by decoder DEC (DEC = DEC1 + DEC2), the interleaver and the deinterleaver imply that the feedback information z_k must be used through an iterative process as represented in Fig. 4a, 4b. In fact, the global decoder circuit is composed of P pipelined identical elementary decoders (Fig. 4a). The (p)th decoder DEC (Fig. 4b) input is made up of the demodulator output sequences (x)_p and (y)_p through a delay line and of the extrinsic information (z)_p generated by the (p-1)th decoder DEC. Note that the variance σ_z² of the extrinsic information and the variance of Λ̃1(d_k) must be estimated at each decoding step p.

Fig. 4a Modular pipelined decoder, corresponding to an iterative process of the feedback decoding.

Fig. 4b Decoding module (level p).
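One pass of the feedback decoding at step p can be sketched as follows, treating the function f of relation (30) as the identity and abstracting the two elementary decoders as callables dec1 and dec2; all names and signatures here are illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch of one decoding module (Fig. 4b): DEC1 works on
# (x, y1, z), DEC2 on interleaved soft inputs, and the deinterleaved
# extrinsic information becomes z for step p+1.

def decoding_step(x, y1, y2, z, sigma, sigma_z,
                  interleave, deinterleave, dec1, dec2):
    # DEC1: relation (34), z treated as an extra channel observation.
    llr1 = dec1(x, y1, z, sigma, sigma_z)
    # Relation (35): remove the z contribution before feeding DEC2.
    llr1_tilde = [L - (2.0 / sigma_z ** 2) * zk for L, zk in zip(llr1, z)]
    # DEC2 sees the interleaved sequence, as the encoder C2 did.
    llr2 = dec2(interleave(llr1_tilde), interleave(y2), sigma)
    # Relation (36), with f taken as identity: extrinsic information is
    # the output LLR minus its input part, deinterleaved back.
    w2 = [L2 - L1 for L2, L1 in zip(llr2, interleave(llr1_tilde))]
    z_next = deinterleave(w2)
    # Relation (37): decisions by the sign of DEC2's LLR.
    decisions = [1 if L > 0 else 0 for L in deinterleave(llr2)]
    return z_next, decisions
```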
V-2 Interleaving

The interleaver uses a square matrix: bits {d_k} are written row by row and read pseudo-randomly. This non-uniform reading rule is able to spread the residual error blocks of rectangular form that may set up in the interleaver located behind the first decoder DEC1, and to give the greatest possible free distance to the concatenated (parallel) code.
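The paper does not spell out its pseudo-random reading rule, so the sketch below stands in with a seeded random permutation of the read addresses; only the write-row-by-row, read-pseudo-randomly structure is taken from the text.

```python
# Sketch of the square-matrix interleaver: bits are written row by row
# and read in a pseudo-random (but fixed, invertible) order.

import random

def make_interleaver(side, seed=42):
    n = side * side
    perm = list(range(n))
    random.Random(seed).shuffle(perm)     # stand-in pseudo-random reading order
    inv = [0] * n
    for read_pos, write_pos in enumerate(perm):
        inv[write_pos] = read_pos
    interleave = lambda bits: [bits[i] for i in perm]
    deinterleave = lambda bits: [bits[i] for i in inv]
    return interleave, deinterleave
```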
VI - RESULTS

For a rate R=1/2 encoder with constraint length K=5, generators G1=37, G2=21 and parallel concatenation (R1=R2=2/3), we have computed the Bit Error Rate (BER) after each decoding step using the Monte Carlo method, as a function of the signal to noise ratio Eb/N0, where Eb is the energy received per information bit d_k and N0 is the monolateral noise power spectral density. The interleaver consists of a 256x256 matrix and the modified BAHL et al. algorithm has been used with a data block length of N=65536 bits. In order to evaluate a BER equal to 10^-5, we have considered 128 data blocks, i.e. approximately 8x10^6 bits d_k. The BER versus Eb/N0, for different values of p, is plotted in Fig. 5. For any given signal to noise ratio greater than 0 dB, the BER decreases as a function of the decoding step p. The coding gain is fairly high for the first values of p (p=1, 2, 3) and carries on increasing for the subsequent values of p. For p=18 for instance, the BER is lower than 10^-5 at Eb/N0 = 0.7 dB. Remember that the Shannon limit for a binary modulation with R=1/2 is Pe = 0 (several authors take Pe = 10^-5 as a reference) for Eb/N0 = 0 dB. With parallel concatenation of RSC convolutional codes and feedback decoding, the performances are at 0.7 dB from Shannon's limit.

The influence of the constraint length on the BER has also been examined. For K greater than 5, at Eb/N0 = 0.7 dB, the BER is slightly worse at the first (p=1) decoding step and the feedback decoding is inefficient to improve the final BER. For K smaller than 5, at Eb/N0 = 0.7 dB, the BER is slightly better at the first decoding step than for K equal to 5, but the correction capacity of encoders C1 and C2 is too weak to improve the BER with feedback decoding. For K=4 (i.e. 8-state elementary decoders) and after iteration 18, a BER of 10^-5 is achieved at Eb/N0 = 0.9 dB. For K equal to 5, we have tested several generators (G1, G2) and the best results were achieved with G1=37, G2=21.
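The Monte Carlo measurement described above can be organized as in the following sketch; decode_block is an assumed callable running one full encode/channel/decode pass for a data block, and the noise formula assumes unit-energy antipodal symbols:

```python
# Hedged sketch of the Monte Carlo BER measurement: count bit errors
# over many simulated blocks at a given Eb/N0 (in dB).

import math

def ber_monte_carlo(decode_block, eb_n0_db, n_blocks=128, rate=0.5):
    # With unit-energy antipodal symbols, sigma^2 = 1 / (2 R Eb/N0).
    sigma = math.sqrt(1.0 / (2.0 * rate * 10.0 ** (eb_n0_db / 10.0)))
    errors = total = 0
    for _ in range(n_blocks):
        sent, decided = decode_block(sigma)
        errors += sum(s != d for s, d in zip(sent, decided))
        total += len(sent)
    return errors / total
```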
Fig. 5 Binary error rate given by iterative decoding (p=1, ..., 18) of the code of Fig. 2 (rate 1/2); interleaving 256x256.
For low signal to noise ratios, we have sometimes noticed that the BER could increase during the iterative decoding process. In order to overcome this effect, we have divided the extrinsic information z_k by [1 + θ|Λ̃1(d_k)|], with θ = 0.15.

In Fig. 6, the histogram of extrinsic information (z)_p has been drawn for several values of iteration p, with all data bits equal to 1 and for a low signal to noise ratio (Eb/N0 = 0.8 dB). For p=1 (first iteration), extrinsic information (z)_p is very poor about bit d_k; furthermore, the gaussian hypothesis made above for extrinsic information (z)_p is not satisfied! Nevertheless, when iteration p increases, the histogram merges towards a gaussian law with a mean equal to 1. For instance, for p=13, extrinsic information (z)_p becomes relevant information concerning data bits.

Fig. 6 Histograms of extrinsic information z after iterations #1, 4, 13 at Eb/N0 = 0.8 dB; all information bits d=1.
VII - CONCLUSION

In this paper, we have presented a new class of convolutional codes called Turbo-codes whose performances in terms of BER are very close to SHANNON's limit. The decoder is made up of P pipelined identical elementary modules, and the rank p elementary module uses the data information coming from the demodulator and the extrinsic information generated by the rank (p-1) module. Each elementary module uses a modified BAHL et al. algorithm which is rather complex. A much simpler algorithm yielding weighted (soft) decisions has also been investigated for Turbo-codes decoding [2], whose complexity is only twice the complexity of the VITERBI algorithm, and with performances which are very close to those of the BAHL et al. algorithm. This new algorithm will enable encoders and decoders to be integrated in silicon with error correcting performances unmatched at the present time.
ANNEX I : EVALUATION OF PROBABILITIES α_k^i(m) AND β_k(m)

From relation (15), probability α_k^i(m) is equal to

$\alpha_k^i(m) = \frac{P_r\{d_k = i, S_k = m, R_1^{k-1}, R_k\}}{P_r\{R_1^{k-1}, R_k\}} = \frac{P_r\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\}}{P_r\{R_k \mid R_1^{k-1}\}}.$    (A1)

The numerator of α_k^i(m) can be expressed from state S_{k-1} and bit d_{k-1}:

$P_r\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} P_r\{d_k = i, d_{k-1} = j, S_k = m, S_{k-1} = m', R_k \mid R_1^{k-1}\}.$    (A2)

By using the BAYES rule, we can write

$P_r\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \frac{P_r\{d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}}{P_r\{R_1^{k-1}\}}\; P_r\{d_k = i, S_k = m, R_k \mid d_{k-1} = j, S_{k-1} = m', R_1^{k-1}\}.$    (A3)

By taking into account that events after time (k-1) are not influenced by observation R_1^{k-1} and bit d_{k-1} if state S_{k-1} is known, and from relation (17), we obtain

$P_r\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\} = \sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m').$    (A4)

The denominator can also be expressed from bit d_k and state S_k:

$P_r\{R_k \mid R_1^{k-1}\} = \sum_m \sum_{i=0}^{1} P_r\{d_k = i, S_k = m, R_k \mid R_1^{k-1}\}$    (A5)

and from relation (A4), we can write

$P_r\{R_k \mid R_1^{k-1}\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m').$    (A6)

Finally, probability α_k^i(m) can be expressed from probability α_{k-1}^j(m') by the following relation:

$\alpha_k^i(m) = \frac{\sum_{m'} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_k, m', m)\, \alpha_{k-1}^j(m')}.$    (A7)

In the same way, probability β_k(m) can be recursively calculated from probability β_{k+1}(m'). From relation (16), we have

$\beta_k(m) = \frac{P_r\{R_{k+1}^N \mid S_k = m\}}{P_r\{R_{k+1}^N \mid R_1^k\}}.$    (A8)

By using the BAYES rule, the numerator is equal to

$P_r\{R_{k+1}^N \mid S_k = m\} = \sum_{m'} \sum_{i=0}^{1} P_r\{R_{k+2}^N \mid S_{k+1} = m'\}\; P_r\{d_{k+1} = i, S_{k+1} = m', R_{k+1} \mid S_k = m\}.$    (A9)

By taking into account the expressions of γ_i(R_{k+1}, m, m') and β_{k+1}(m'), we can write

$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{P_r\{R_{k+1} \mid R_1^k\}}.$    (A10)

By substituting k by (k+1) in relation (A6), the denominator of (A10) is equal to

$P_r\{R_{k+1} \mid R_1^k\} = \sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m').$    (A11)

Finally, probability β_k(m) can be expressed from probability β_{k+1}(m') by the following relation:

$\beta_k(m) = \frac{\sum_{m'} \sum_{i=0}^{1} \gamma_i(R_{k+1}, m, m')\, \beta_{k+1}(m')}{\sum_m \sum_{m'} \sum_{i=0}^{1} \sum_{j=0}^{1} \gamma_i(R_{k+1}, m', m)\, \alpha_k^j(m')}.$    (A12)

REFERENCES

[1] L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, March 1974.

[2] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture", to appear at ICC '93.
