NEAR SHANNON LIMIT ERROR-CORRECTING CODING AND DECODING: TURBO-CODES (1)

Claude Berrou, Alain Glavieux and Punya Thitimajshima

Claude Berrou, Integrated Circuits for Telecommunication Laboratory
Alain Glavieux and Punya Thitimajshima, Digital Communication Laboratory
École Nationale Supérieure des Télécommunications de Bretagne, France

(1) Patents N° 9105279 (France), N° 92460011.7 (Europe), N° 019870.433 (USA)
Pr{a_k = 0 / a_1 = e_1, ..., a_{k-1} = e_{k-1}} = Pr{a_k = ε} = 1/2   (4)

with ε equal to

ε = Σ_{j=1}^{K-1} γ_j e_j   mod 2,   ε = 0, 1.   (5)
Thus the trellis structure is identical for the RSC code and the NSC code, and these two codes have the same free distance d_f. However, the two output sequences {X_k} and {Y_k} do not correspond to the same input sequence {d_k} for RSC and NSC codes. This is the main difference between the two codes.

When a punctured code is considered, some output bits X_k or Y_k are deleted according to a chosen puncturing pattern defined by a matrix P. For instance, starting from a rate R=1/2 code, the matrix P of the rate 2/3 punctured code is

P = [1 1]
    [1 0].
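The puncturing rule can be sketched in a few lines. This is our own illustrative reading of the matrix P, not code from the paper: a 1 in the first row keeps X_k, a 1 in the second row keeps Y_k, and the columns cycle with k.

```python
# Illustrative sketch of puncturing a rate-1/2 stream to rate 2/3 with
# P = [[1, 1], [1, 0]]: every X_k is kept, every other Y_k is deleted.
# (Helper name and example bit values are ours, not the paper's.)
def puncture(X, Y, P=((1, 1), (1, 0))):
    out = []
    for k in range(len(X)):
        if P[0][k % len(P[0])]:   # row 1 of P gates the systematic bit X_k
            out.append(X[k])
        if P[1][k % len(P[1])]:   # row 2 of P gates the redundant bit Y_k
            out.append(Y[k])
    return out

# 4 information bits -> 6 transmitted bits, i.e. rate 4/6 = 2/3
sent = puncture([1, 0, 1, 1], [0, 1, 1, 0])
```

With this pattern, 2 information bits produce 3 transmitted bits, which is exactly the rate-2/3 behaviour described above.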
Abstract - This paper deals with a new class of convolutional codes called Turbo-codes, whose performances in terms of Bit Error Rate (BER) are close to the SHANNON limit. The Turbo-Code encoder is built using a parallel concatenation of two Recursive Systematic Convolutional codes, and the associated decoder, using a feedback decoding rule, is implemented as P pipelined identical elementary decoders.
I - INTRODUCTION
Consider a binary rate R=1/2 convolutional encoder with constraint length K and memory M=K-1. The input to the encoder at time k is a bit d_k and the corresponding codeword C_k is the binary couple (X_k, Y_k) with

X_k = Σ_{i=0}^{K-1} g_{1i} d_{k-i}   mod 2,   g_{1i} = 0, 1   (1a)

Y_k = Σ_{i=0}^{K-1} g_{2i} d_{k-i}   mod 2,   g_{2i} = 0, 1   (1b)

where G1: {g_{1i}}, G2: {g_{2i}} are the two encoder generators, generally expressed in octal form.
It is well known that the BER of a classical Non Systematic Convolutional (NSC) code is lower than that of a classical Systematic code with the same memory M at large SNR. At low SNR, it is in general the other way round. The new class of Recursive Systematic Convolutional (RSC) codes, proposed in this paper, can be better than the best NSC code at any SNR for high code rates.
A binary rate R=1/2 RSC code is obtained from an NSC code by using a feedback loop and setting one of the two outputs X_k or Y_k equal to the input bit d_k. For an RSC code, the shift register (memory) input is no longer the bit d_k but is a new binary variable a_k. If X_k = d_k (respectively Y_k = d_k), the output Y_k (resp. X_k) is equal to equation (1b) (resp. 1a) by substituting a_k for d_k, and the variable a_k is recursively calculated as

a_k = d_k + Σ_{i=1}^{K-1} γ_i a_{k-i}   mod 2   (2)

where γ_i is respectively equal to g_{1i} if X_k = d_k and to g_{2i} if Y_k = d_k. Equation (2) can be rewritten as

d_k = Σ_{i=0}^{K-1} γ_i a_{k-i}   mod 2.   (3)

One RSC encoder with memory M=4 obtained from an NSC encoder defined by generators G1=37, G2=21 is depicted in Fig. 1.
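As a concrete sketch of relations (1)-(3), here is a short encoder in Python (our own illustration, not the circuit of Fig. 1): we take X_k = d_k, so the feedback taps γ_i are those of G1 and the parity Y_k is formed with G2. Reading the octal generators MSB-first is our assumption about bit ordering.

```python
# Hedged sketch of a rate-1/2 RSC encoder, eqs. (1)-(3), with X_k = d_k.
# Generators default to the paper's G1=37, G2=21 (octal), K=5.
def rsc_encode(d_bits, g1=0o37, g2=0o21, K=5):
    """Return (X, Y) for the input bits; X is the systematic output."""
    taps1 = [(g1 >> (K - 1 - i)) & 1 for i in range(K)]  # g1_0 .. g1_{K-1}
    taps2 = [(g2 >> (K - 1 - i)) & 1 for i in range(K)]
    reg = [0] * (K - 1)          # shift register: a_{k-1} ... a_{k-K+1}
    X, Y = [], []
    for d in d_bits:
        a_k = d                  # eq. (2): a_k = d_k + sum g1_i a_{k-i} mod 2
        for i in range(1, K):
            a_k ^= taps1[i] & reg[i - 1]
        y = taps2[0] & a_k       # eq. (1b) with a_k substituted for d_k
        for i in range(1, K):
            y ^= taps2[i] & reg[i - 1]
        X.append(d)              # systematic output X_k = d_k
        Y.append(y)
        reg = [a_k] + reg[:-1]   # shift the register
    return X, Y
```

The recursion makes the impulse response infinite: a single 1 at the input keeps the register active indefinitely, which is the recursive character the modified decoding algorithm must take into account.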
Generally, we assume that the input bit d_k takes values 0 or 1 with the same probability. From equation (2), we can show that the variable a_k exhibits the same statistical property.
0-7803-0950-2/93/$3.00 © 1993 IEEE
Fig. 1b Recursive Systematic code.
II - PARALLEL CONCATENATION OF RSC CODES
With RSC codes, a new concatenation scheme, called parallel concatenation, can be used. In Fig. 2, an example of two identical RSC codes with parallel concatenation is shown. Both elementary encoder (C1 and C2) inputs use the same bit d_k but according to a different sequence due to the presence of an interleaver. For an input bit sequence {d_k}, encoder outputs X_k and Y_k at time k are respectively equal to d_k (systematic encoder) and to encoder C1 output Y_{1k} or to encoder C2 output Y_{2k}. If the coded outputs (Y_{1k}, Y_{2k}) of encoders C1 and C2 are used respectively n1 times and n2 times and so on, the encoder C1 rate R1 and encoder C2 rate R2 are equal to

R1 = (n1 + n2) / (2 n1 + n2),   R2 = (n1 + n2) / (2 n2 + n1).   (6)
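Relation (6) admits a quick numerical check (our own snippet; the function name is ours): with n1 = n2 = 1, both elementary code rates come out as 2/3, the value used in the experiments of section VI.

```python
# Worked example of relation (6): elementary code rates when the parities
# Y1 and Y2 are used n1 and n2 times respectively.
def rates(n1, n2):
    R1 = (n1 + n2) / (2 * n1 + n2)
    R2 = (n1 + n2) / (2 * n2 + n1)
    return R1, R2

R1, R2 = rates(1, 1)   # both parities used equally often -> (2/3, 2/3)
```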
given encoder (C1 or C2) is not emitted, the corresponding decoder input is set to zero. This is performed by the DEMUX/INSERTION block.
It is well known that soft decoding is better than hard decoding, therefore the first decoder DEC1 must deliver to the second decoder DEC2 a weighted (soft) decision. The Logarithm of Likelihood Ratio (LLR), Λ1(d_k), associated with each decoded bit d_k by the first decoder DEC1 is a relevant piece of information for the second decoder DEC2:

Λ1(d_k) = Log [ Pr{d_k = 1 / observation} / Pr{d_k = 0 / observation} ]   (8)

where Pr{d_k = i / observation}, i = 0, 1, is the a posteriori probability (APP) of the data bit d_k.
Fig. 3a Principle of the decoder according to a serial concatenation scheme.
III - OPTIMAL DECODING OF RSC CODES WITH WEIGHTED DECISION

The VITERBI algorithm is an optimal decoding method which minimizes the probability of sequence error for convolutional codes. Unfortunately this algorithm is not able to yield the APP for each decoded bit. A relevant algorithm for this purpose has been proposed by BAHL et al. [1]. This algorithm minimizes the bit error probability in decoding linear block and convolutional codes and yields the APP for each decoded bit. For RSC codes, the BAHL et al. algorithm must be modified in order to take into account their recursive character.
III - 1 Modified BAHL et al. algorithm for RSC codes
Consider an RSC code with constraint length K; at time k the encoder state S_k is represented by a K-uple

S_k = (a_k, a_{k-1}, ..., a_{k-K+1}).   (9)

Also suppose that the information bit sequence {d_k} is made up of N independent bits d_k, taking values 0 and 1 with equal probability, and that the encoder initial state S_0 and final state S_N are both equal to zero, i.e.

S_0 = S_N = (0, 0, ..., 0) = 0.   (10)

The encoder output codeword sequence, noted C_1^N = {C_1, ..., C_k, ..., C_N}, is the input to a discrete gaussian memoryless channel whose output is the sequence R_1^N = {R_1, ..., R_k, ..., R_N}, where R_k = (x_k, y_k) is defined by relations (7a) and (7b).
Fig. 2 Recursive Systematic codes with parallel concatenation.
The decoder DEC, depicted in Fig. 3a, is made up of two elementary decoders (DEC1 and DEC2) in a serial concatenation scheme. The first elementary decoder DEC1 is associated with the lower rate R1 encoder C1 and yields a soft (weighted) decision. The error bursts at the decoder DEC1 output are scattered by the interleaver, and the delay L1 is inserted to take the decoder DEC1 delay into account. Parallel concatenation is a very attractive scheme because both elementary encoder and decoder use a single frequency clock.
For a discrete memoryless gaussian channel and a binary modulation, the decoder DEC input is made up of a couple R_k of two random variables x_k and y_k, at time k:

x_k = (2 d_k - 1) + i_k,   (7a)
y_k = (2 Y_k - 1) + q_k,   (7b)

where i_k and q_k are two independent noises with the same variance σ². The redundant information y_k is demultiplexed and sent to decoder DEC1 when Y_k = Y_{1k} and toward decoder DEC2 when Y_k = Y_{2k}. When the redundant information of a
The APP of a decoded data bit d_k can be derived from the joint probability λ_k^i(m) defined by

λ_k^i(m) = Pr{d_k = i, S_k = m / R_1^N}   (11)

and thus, the APP of a decoded data bit d_k is equal to

Pr{d_k = i / R_1^N} = Σ_m λ_k^i(m),   i = 0, 1.   (12)

From relations (11) and (12), the LLR Λ(d_k) associated with a decoded bit d_k can be written as

Λ(d_k) = Log [ Σ_m λ_k^1(m) / Σ_m λ_k^0(m) ].   (13)

Finally the decoder can make a decision by comparing Λ(d_k) to a threshold equal to zero:

d̂_k = 1 if Λ(d_k) > 0
d̂_k = 0 if Λ(d_k) < 0.   (14)
In order to compute the probability λ_k^i(m), let us introduce the probability functions α_k^i(m), β_k(m) and γ_i(R_k, m', m):

α_k^i(m) = Pr{d_k = i, S_k = m / R_1^k}   (15)

β_k(m) = Pr{R_{k+1}^N / S_k = m} / Pr{R_{k+1}^N / R_1^k}   (16)

γ_i(R_k, m', m) = Pr{d_k = i, R_k, S_k = m / S_{k-1} = m'}.   (17)

The joint probability λ_k^i(m) can be rewritten using the BAYES rule:

λ_k^i(m) = Pr{d_k = i, S_k = m, R_1^k, R_{k+1}^N} / Pr{R_1^N}.   (18)

Thus we obtain

λ_k^i(m) = [ Pr{d_k = i, S_k = m / R_1^k} ] [ Pr{R_{k+1}^N / d_k = i, S_k = m, R_1^k} / Pr{R_{k+1}^N / R_1^k} ].   (19)

Taking into account that events after time k are not influenced by observation R_1^k and bit d_k if state S_k is known, the probability λ_k^i(m) is equal to

λ_k^i(m) = α_k^i(m) β_k(m).   (20)
The probabilities α_k^i(m) and β_k(m) can be recursively calculated from the probability γ_i(R_k, m', m). From annex I, we obtain

α_k^i(m) = Σ_{m'} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m') / [ Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m') ]   (21)

and

β_k(m) = Σ_{m'} Σ_{i=0}^{1} γ_i(R_{k+1}, m, m') β_{k+1}(m') / [ Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_{k+1}, m', m) α_k^j(m') ].   (22)

The probability γ_i(R_k, m', m) can be determined from the transition probabilities of the discrete gaussian memoryless channel and the transition probabilities of the encoder trellis. From relation (17), γ_i(R_k, m', m) is given by
γ_i(R_k, m', m) = p(R_k / d_k = i, S_k = m, S_{k-1} = m') q(d_k = i / S_k = m, S_{k-1} = m') π(S_k = m / S_{k-1} = m')   (23)

where p(./.) is the transition probability of the discrete gaussian memoryless channel. Conditionally to (d_k = i, S_k = m, S_{k-1} = m'), x_k and y_k are two uncorrelated gaussian variables and thus we obtain

p(R_k / d_k = i, S_k = m, S_{k-1} = m') = p(x_k / d_k = i, S_k = m, S_{k-1} = m') p(y_k / d_k = i, S_k = m, S_{k-1} = m').   (24)

Since the convolutional encoder is a deterministic machine, q(d_k = i / S_k = m, S_{k-1} = m') is equal to 0 or 1. The transition state probabilities π(S_k = m / S_{k-1} = m') of the trellis are defined by the encoder input statistic. Generally, Pr{d_k = 1} = Pr{d_k = 0} = 1/2, and since there are two possible transitions from each state, π(S_k = m / S_{k-1} = m') = 1/2 for each of these transitions.
Different steps of the modified BAHL et al. algorithm:
- Step 0: Probabilities α_0^i(m) and β_N(m) are initialized according to relation (10):

α_0^i(0) = 1,   α_0^i(m) = 0 ∀ m ≠ 0,   i = 0, 1   (25a)
β_N(0) = 1,   β_N(m) = 0 ∀ m ≠ 0.   (25b)

- Step 1: For each observation R_k, the probabilities α_k^i(m) and γ_i(R_k, m', m) are computed using relations (21) and (23) respectively.
- Step 2: When the sequence R_1^N has been completely received, probabilities β_k(m) are computed using relation (22), and probabilities α_k^i(m) and β_k(m) are multiplied in order to obtain λ_k^i(m). Finally the LLR associated with each decoded bit d_k is computed from relation (13).
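The three steps can be sketched numerically. The following is our own toy illustration, not the authors' 16-state implementation: a hypothetical 2-state recursive systematic code with a_k = d_k XOR a_{k-1}, X_k = d_k and Y_k = a_k; the superscript i of α_k^i(m) is summed out for brevity, q of relation (23) is implicit in the trellis enumeration, and π = 1/2.

```python
import math

SIGMA2 = 1.0  # assumed channel noise variance sigma^2

def p_gauss(r, c):
    # channel likelihood p(r / transmitted bit c); antipodal mapping
    # c -> 2c - 1 as in relations (7a)-(7b), constant factor dropped
    return math.exp(-((r - (2 * c - 1)) ** 2) / (2 * SIGMA2))

def step(mp, d):
    # hypothetical 2-state RSC trellis: returns (next state a_k, X_k, Y_k)
    a = d ^ mp
    return a, d, a

def bcjr_llr(xs, ys):
    N, M = len(xs), 2
    nxt = [[step(mp, i)[0] for i in range(2)] for mp in range(M)]
    # gamma_i(R_k, m', m) of relation (23), with pi(m / m') = 1/2
    g = [[[0.5 * p_gauss(xs[k], step(mp, i)[1])
               * p_gauss(ys[k], step(mp, i)[2])
           for i in range(2)] for mp in range(M)] for k in range(N)]
    # Step 0 and Step 1: forward recursion of relation (21), S_0 = 0
    alpha = [[1.0, 0.0]] + [[0.0, 0.0] for _ in range(N)]
    for k in range(N):
        for mp in range(M):
            for i in range(2):
                alpha[k + 1][nxt[mp][i]] += g[k][mp][i] * alpha[k][mp]
        s = sum(alpha[k + 1])
        alpha[k + 1] = [a / s for a in alpha[k + 1]]
    # Step 2: backward recursion of relation (22), then LLR of relation (13)
    beta = [[0.0, 0.0] for _ in range(N)] + [[1.0, 1.0]]  # toy code: no termination
    llrs = [0.0] * N
    for k in range(N - 1, -1, -1):
        num = den = 0.0
        for mp in range(M):
            num += g[k][mp][1] * alpha[k][mp] * beta[k + 1][nxt[mp][1]]
            den += g[k][mp][0] * alpha[k][mp] * beta[k + 1][nxt[mp][0]]
            beta[k][mp] = sum(g[k][mp][i] * beta[k + 1][nxt[mp][i]]
                              for i in range(2))
        s = sum(beta[k])
        beta[k] = [b / s for b in beta[k]]
        llrs[k] = math.log(num / den)
    return llrs
```

On a noiseless channel the sign of each returned LLR recovers d_k, which is exactly the decision rule of relation (14).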
IV - THE EXTRINSIC INFORMATION OF THE RSC DECODER

In this chapter, we will show that the LLR Λ(d_k) associated with each decoded bit d_k is the sum of the LLR of d_k at the decoder input and of another information called extrinsic information, generated by the decoder.
Using the LLR Λ(d_k) definition (13) and relations (20) and (21), we obtain

Λ(d_k) = Log [ Σ_m Σ_{m'} Σ_{j=0}^{1} γ_1(R_k, m', m) α_{k-1}^j(m') β_k(m) / Σ_m Σ_{m'} Σ_{j=0}^{1} γ_0(R_k, m', m) α_{k-1}^j(m') β_k(m) ].   (26)

Since the encoder is systematic (X_k = d_k), the transition probability p(x_k / d_k = i, S_k = m, S_{k-1} = m') in expression γ_i(R_k, m', m) is independent of the state values S_k and S_{k-1}. Therefore we can factorize this transition probability in the numerator and in the denominator of relation (26):
Fig. 3b Feedback decoder (under zero internal delay assumption).
Λ(d_k) = Log [ p(x_k / d_k = 1) / p(x_k / d_k = 0) ] + Log [ Σ_m Σ_{m'} Σ_{j=0}^{1} γ_1(y_k, m', m) α_{k-1}^j(m') β_k(m) / Σ_m Σ_{m'} Σ_{j=0}^{1} γ_0(y_k, m', m) α_{k-1}^j(m') β_k(m) ].   (27)
Conditionally to d_k = 1 (resp. d_k = 0), variables x_k are gaussian with mean 1 (resp. -1) and variance σ², thus the LLR Λ(d_k) is still equal to

Λ(d_k) = (2/σ²) x_k + W_k   (28)

where

W_k = Λ(d_k)|_{x_k = 0} = Log [ Σ_m Σ_{m'} Σ_{j=0}^{1} γ_1(y_k, m', m) α_{k-1}^j(m') β_k(m) / Σ_m Σ_{m'} Σ_{j=0}^{1} γ_0(y_k, m', m) α_{k-1}^j(m') β_k(m) ].   (29)

W_k is a function of the redundant information introduced by the encoder. In general, W_k has the same sign as d_k; therefore W_k may improve the LLR associated with each decoded data bit d_k. This quantity represents the extrinsic information supplied by the decoder and does not depend on decoder input x_k. This property will be used for decoding the two parallel concatenated encoders.
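Relation (28) implies that, given the full LLR and the systematic observation, the extrinsic term is recovered by a plain subtraction. A one-line illustration (names are ours, with an assumed σ² = 1):

```python
# Relation (28): W_k = Lambda(d_k) - (2 / sigma^2) x_k
def extrinsic_info(llrs, xs, sigma2=1.0):
    return [l - (2.0 / sigma2) * x for l, x in zip(llrs, xs)]
```

This subtraction is what prevents a decoder's own channel input from circulating back to it in the iterative scheme of section V.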
V - DECODING SCHEME OF PARALLEL CONCATENATION CODES

In the decoding scheme represented in Fig. 3a, decoder DEC1 computes the LLR Λ1(d_k) for each transmitted bit d_k from sequences {x_k} and {y_k}, then the decoder DEC2 performs the decoding of sequence {d_k} from sequences {Λ1(d_k)} and {y_k}. Decoder DEC1 uses the modified BAHL et al. algorithm and decoder DEC2 may use the VITERBI algorithm. The global decoding rule is not optimal because the first decoder uses only a fraction of the available redundant information. Therefore it is possible to improve the performance of this serial decoder by using a feedback loop.
V-1 Decoding with a feedback loop
We consider now that both decoders DEC1 and DEC2 use the modified BAHL et al. algorithm. We have seen in section IV that the LLR at the decoder output can be expressed as a sum of two terms if the decoder inputs were independent. Hence if the decoder DEC2 inputs Λ1(d_k) and y_{2k} are independent, the LLR Λ2(d_k) at the decoder DEC2 output can be written as

Λ2(d_k) = f(Λ1(d_k)) + W_{2k}   (30)

with

Λ1(d_k) = (2/σ²) x_k + W_{1k}.   (31)

From relation (29), we can see that the decoder DEC2 extrinsic information W_{2k} is a function of the sequence {Λ1(d_n)}_{n≠k}. Since Λ1(d_n) depends on observation R_1^N, extrinsic information W_{2k} is correlated with observations x_k and y_{1k}. Nevertheless, from relation (29), the greater |n - k| is, the less correlated are Λ1(d_n) and the observations x_k, y_k. Thus, due to the presence of interleaving between decoders DEC1 and DEC2, extrinsic information W_{2k} and observations x_k, y_{1k} are weakly correlated. Therefore extrinsic information W_{2k} and observations x_k, y_{1k} can be jointly used for carrying out a new decoding of bit d_k, the extrinsic information z_k = W_{2k} acting as a diversity effect in an iterative process.
In Fig. 3b, we have depicted a new decoding scheme using the extrinsic information W_{2k} generated by decoder DEC2 in a feedback loop. This decoder does not take into account the different delays introduced by decoders DEC1 and DEC2, and a more realistic decoding structure will be presented later.
The first decoder DEC1 now has three data inputs (x_k, y_{1k}, z_k), and probabilities α_{1k}^i(m) and β_{1k}(m) are computed in substituting R_k = (x_k, y_{1k}) by R_k = (x_k, y_{1k}, z_k) in relations (21) and (22). Taking into account that z_k is weakly correlated with x_k and y_{1k}, and supposing that z_k can be approximated by a gaussian variable with variance σ_z² ≠ σ², the transition probability of the discrete gaussian memoryless channel can now be factored in three terms:
p(R_k / d_k = i, S_k = m, S_{k-1} = m') = p(x_k /.) p(y_k /.) p(z_k /.).   (32)

The encoder C1, with initial rate R1, through the feedback loop is now equivalent to a rate R'1 encoder with

R'1 = R1 / (1 + R1).   (33)

The first decoder obtains an additional redundant information with z_k that may significantly improve its performances; the term Turbo-codes is given for this iterative decoder scheme with reference to the turbo engine principle.
With the feedback decoder, the LLR Λ1(d_k) generated by decoder DEC1 is now equal to

Λ1(d_k) = (2/σ²) x_k + (2/σ_z²) z_k + W_{1k}   (34)

where W_{1k} depends on the sequence {z_n}_{n≠k}. As indicated above, information z_k has been built by decoder DEC2 at the previous decoding step. Therefore z_k must not be used as input information for decoder DEC2. Thus decoder DEC2 input sequences at step p (p ≥ 2) will be sequences {Λ̃1(d_k)} and {y_{2k}} with

Λ̃1(d_k) = Λ1(d_k)|_{z_k = 0}.   (35)
Finally, from relation (30), the decoder DEC2 extrinsic information z_k = W_{2k}, after deinterleaving, can be written as

z_k = W_{2k} = Λ2(d_k)|_{Λ̃1(d_k) = 0}   (36)

and the decision at the decoder DEC output is

d̂_k = sign[Λ2(d_k)].   (37)
The decoding delays introduced by decoder DEC (DEC = DEC1 + DEC2), the interleaver and the deinterleaver imply that the feedback information z_k must be used through an iterative process as represented in Fig. 4a, 4b. In fact, the global decoder circuit is composed of P pipelined identical elementary decoders (Fig. 4a). The pth decoder DEC (Fig. 4b) input is made up of demodulator output sequences (x)_p and (y)_p through a delay line and of extrinsic information (z)_p generated by the (p-1)th decoder DEC. Note that the variance σ_z² of the extrinsic information and the variance of Λ̃1(d_k) must be estimated at each decoding step p.
V-2 Interleaving
The interleaver uses a square matrix, and bits {d_k} are written row by row and read pseudo-randomly. This non-uniform reading rule is able to spread the residual error blocks of rectangular form that may set up in the interleaver located behind the first decoder DEC1, and to give the greatest possible free distance to the concatenated (parallel) code.
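A minimal sketch of such an interleaver pair (our own illustration; the paper does not specify its pseudo-random reading rule, so a seeded shuffle stands in for it):

```python
import random

def make_interleaver(n, seed=0):
    # bits are written in natural (row-by-row) order; 'perm' is the
    # pseudo-random reading order (a seeded shuffle as a stand-in rule)
    perm = list(range(n))
    random.Random(seed).shuffle(perm)
    return perm

def interleave(seq, perm):
    return [seq[i] for i in perm]

def deinterleave(seq, perm):
    # inverse mapping, used on the extrinsic information of relation (36)
    out = [None] * len(seq)
    for j, i in enumerate(perm):
        out[i] = seq[j]
    return out
```

Interleaving then deinterleaving restores the original order, which is what lets the extrinsic information z_k be realigned with the observations x_k, y_{1k} at each iteration.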
VI - RESULTS
For a rate R=1/2 encoder with constraint length K=5, generators G1=37, G2=21 and parallel concatenation (R1=R2=2/3), we have computed the Bit Error Rate (BER) after each decoding step using the Monte Carlo method, as a function of the signal to noise ratio Eb/N0, where Eb is the energy received per information bit d_k and N0 is the noise monolateral power spectral density. The interleaver consists of a 256x256 matrix and the modified BAHL et al. algorithm has been used with a data block length of N=65536 bits.
Fig. 4a Modular pipelined decoder, corresponding to an iterative process of the feedback decoding.

Fig. 4b Decoding module (level p).
In order to evaluate a BER equal to 10^-5, we have considered 128 data blocks, i.e. approximatively 8x10^6 bits d_k. The BER versus Eb/N0, for different values of p, is plotted in Fig. 5. For any given signal to noise ratio greater than 0 dB, the BER decreases as a function of the decoding step p. The coding gain is fairly high for the first values of p (p=1, 2, 3) and carries on increasing for the subsequent values of p. For p=18 for instance, the BER is lower than 10^-5 at Eb/N0 = 0.7 dB. Remember that the Shannon limit for a binary modulation with R=1/2 is Pe = 0 (several authors take Pe = 10^-5 as a reference) for Eb/N0 = 0 dB. With parallel concatenation of RSC convolutional codes and feedback decoding, the performances are at 0.7 dB from Shannon's limit.
The influence of the constraint length on the BER has also been examined. For K greater than 5, at Eb/N0 = 0.7 dB, the BER is slightly worse at the first decoding step and the feedback decoding is inefficient to improve the final BER. For K smaller than 5, at Eb/N0 = 0.7 dB, the BER is slightly better at the first decoding step than for K equal to 5, but the correction capacity of encoders C1 and C2 is too weak to improve the BER with feedback decoding. For K=4 (i.e. 8-state elementary decoders) and after iteration 18, a BER of 10^-5 is achieved at Eb/N0 = 0.9 dB. For K equal to 5, we have tested several generators (G1, G2) and the best results were achieved with G1=37, G2=21.
Fig. 5 Binary error rate given by iterative decoding (p=1,...,18) of the code of Fig. 2 (rate 1/2); interleaving 256x256.
For low signal to noise ratios, we have sometimes noticed that the BER could increase during the iterative decoding process. In order to overcome this effect, we have divided the extrinsic information z_k by [1 + θ|Λ̃1(d_k)|], with θ = 0.15.
In Fig. 6, the histogram of extrinsic information (z)_p has been drawn for several values of iteration p, with all data bits equal to 1 and for a low signal to noise ratio (Eb/N0 = 0.8 dB). For p=1 (first iteration), extrinsic information (z)_p is very poor about bit d_k; furthermore, the gaussian hypothesis made above for extrinsic information (z)_p is not satisfied! Nevertheless, when iteration p increases, the histogram merges towards a gaussian law with a mean equal to 1. For instance, for p=13, extrinsic information (z)_p becomes relevant information concerning data bits.
VII - CONCLUSION
In this paper, we have presented a new class of convolutional codes called Turbo-codes, whose performances in terms of BER are very close to SHANNON's limit. The decoder is made up of P pipelined identical elementary modules, and the rank p elementary module uses the data information coming from the demodulator and the extrinsic information generated by the rank (p-1) module. Each elementary module uses a modified BAHL et al. algorithm which is rather complex. A much simpler algorithm yielding weighted (soft) decisions has also been investigated for Turbo-codes decoding [2], whose complexity is only twice the complexity of the VITERBI algorithm, and with performances which are very close to those of the BAHL et al. algorithm. This new algorithm will enable encoders and decoders to be integrated in silicon with error correcting performances unmatched at the present time.
Pr{R_k / R_1^{k-1}} = Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m').   (A6)

Finally, probability α_k^i(m) can be expressed from probability α_{k-1}^j(m') by the following relation:

α_k^i(m) = Σ_{m'} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m') / [ Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m') ].   (A7)
In the same way, probability β_k(m) can be recursively calculated from probability β_{k+1}(m'). From relation (16), we have

β_k(m) = Σ_{m'} Σ_{i=0}^{1} Pr{d_{k+1} = i, S_{k+1} = m', R_{k+1}, R_{k+2}^N / S_k = m} / Pr{R_{k+1}^N / R_1^k}.   (A8)
Fig. 6 Histograms of extrinsic information z after iterations #1, #4, #13 at Eb/N0 = 0.8 dB; all information bits d=1.
ANNEX I: EVALUATION OF PROBABILITIES α_k^i(m) AND β_k(m)

From relation (15), probability α_k^i(m) is equal to

α_k^i(m) = Pr{d_k = i, S_k = m, R_1^{k-1}, R_k} / Pr{R_1^{k-1}, R_k} = Pr{d_k = i, S_k = m, R_k / R_1^{k-1}} / Pr{R_k / R_1^{k-1}}.   (A1)

The numerator of α_k^i(m) can be expressed from state S_{k-1} and bit d_{k-1}:

Pr{d_k = i, S_k = m, R_k / R_1^{k-1}} = Σ_{m'} Σ_{j=0}^{1} Pr{d_k = i, d_{k-1} = j, S_k = m, S_{k-1} = m', R_k / R_1^{k-1}}.   (A2)
By using the BAYES rule, we can write

Pr{d_k = i, S_k = m, R_k / R_1^{k-1}} = Σ_{m'} Σ_{j=0}^{1} Pr{d_{k-1} = j, S_{k-1} = m' / R_1^{k-1}} Pr{d_k = i, S_k = m, R_k / d_{k-1} = j, S_{k-1} = m', R_1^{k-1}}.   (A3)
By using the BAYES rule, the numerator is equal to

Pr{R_{k+1}^N / S_k = m} = Σ_{m'} Σ_{i=0}^{1} Pr{R_{k+2}^N / S_{k+1} = m'} Pr{d_{k+1} = i, S_{k+1} = m', R_{k+1} / S_k = m}.   (A9)

By taking into account the expressions of γ_i(R_{k+1}, m, m') and β_{k+1}(m'), we can write

β_k(m) = Σ_{m'} Σ_{i=0}^{1} γ_i(R_{k+1}, m, m') β_{k+1}(m') / Pr{R_{k+1} / R_1^k}.   (A10)

In substituting k by (k+1) in relation (A6), the denominator of (A10) is equal to

Pr{R_{k+1} / R_1^k} = Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_{k+1}, m', m) α_k^j(m').   (A11)
Finally, probability β_k(m) can be expressed from probability β_{k+1}(m') by the following relation:

β_k(m) = Σ_{m'} Σ_{i=0}^{1} γ_i(R_{k+1}, m, m') β_{k+1}(m') / [ Σ_m Σ_{m'} Σ_{i=0}^{1} Σ_{j=0}^{1} γ_i(R_{k+1}, m', m) α_k^j(m') ].   (A12)
REFERENCES

[1] L.R. Bahl, J. Cocke, F. Jelinek and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rate", IEEE Trans. Inform. Theory, vol. IT-20, pp. 284-287, March 1974.

[2] C. Berrou, P. Adde, E. Angui and S. Faudeil, "A low complexity soft-output Viterbi decoder architecture", to appear at ICC'93.
By taking into account that events after time (k-1) are not influenced by observation R_1^{k-1} and bit d_{k-1} if state S_{k-1} is known, and from relation (17), we obtain

Pr{d_k = i, S_k = m, R_k / R_1^{k-1}} = Σ_{m'} Σ_{j=0}^{1} γ_i(R_k, m', m) α_{k-1}^j(m').   (A4)

The denominator can also be expressed from bit d_k and state S_k:

Pr{R_k / R_1^{k-1}} = Σ_m Σ_{i=0}^{1} Pr{d_k = i, S_k = m, R_k / R_1^{k-1}}   (A5)

and from relation (A4), we obtain relation (A6).