Near Shannon limit performance of low density parity check codes

D.J.C. MacKay and R.M. Neal

Indexing terms: Fault tolerant computing, Error correction codes

The authors report the empirical performance of Gallager's low density parity check codes on Gaussian channels. It is shown that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of Turbo codes.
Introduction: A linear code may be described in terms of a generator matrix G, or in terms of a parity check matrix H, which satisfies Hx = 0 for all codewords x. In 1962, Gallager reported work on binary codes defined in terms of low density parity check matrices (abbreviated 'GL codes') [5, 6]. The matrix H was defined in a non-systematic form; each column of H had a small weight (e.g. 3) and the weight per row was also uniform; the matrix H was constructed at random, subject to these constraints. Gallager proved distance properties of these codes and described a probability-based decoding algorithm with promising empirical performance. However, it appears that GL codes have been generally forgotten, the assumption perhaps being that concatenated codes [4] were superior for practical purposes.

During our work on MN codes [8] we realised that it is possible to create 'good' codes from very sparse random matrices, and to decode them (even beyond their minimum distance) using approximate probabilistic algorithms. We eventually reinvented Gallager's decoding algorithm and GL codes. In this Letter we report the empirical performance of these codes on Gaussian channels. The theoretical properties of GL codes have been proven (essentially, that the channel coding theorem holds for them) elsewhere [9]. GL codes can also be defined over GF(q); we are currently implementing this generalisation.

We created sparse random parity check matrices in the following ways (a code sketch of Construction 1A follows the definitions):

Construction 1A: An M by N matrix (M rows, N columns) is created at random with weight per column t (e.g. t = 3), weight per row as uniform as possible, and overlap between any two columns no greater than 1. (The weight of a column is the number of non-zero elements; the overlap between two columns is their inner product.)

Construction 2A: Up to M/2 of the columns are designated 'weight 2 columns', and these are constructed such that there is zero overlap between any pair of them. The remaining columns are made at random with weight 3, with the weight per row as uniform as possible, and overlap between any two columns of the entire matrix no greater than 1.

Constructions 1B and 2B: A small number of columns are deleted from a matrix produced by constructions 1A and 2A, respectively, so that the bipartite graph corresponding to the matrix has no short cycles of length less than some length l.
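For concreteness, the following Python/NumPy sketch shows one way to realise Construction 1A. The greedy, restart-based procedure and all names here are our own illustration of the stated constraints (column weight t, row weights as uniform as possible, pairwise column overlap at most 1); they are not the authors' method.

```python
import numpy as np

def construction_1a(M, N, t=3, seed=0, restarts=200):
    """Sketch: an M x N binary matrix with every column of weight t, row
    weights as uniform as possible, and any two columns overlapping in at
    most one position.  Greedy with restarts; assumes 3*t <= M."""
    rng = np.random.default_rng(seed)
    for _ in range(restarts):
        H = np.zeros((M, N), dtype=np.uint8)
        for j in range(N):
            for _attempt in range(200):
                # choose t distinct rows from among the currently lightest rows,
                # which keeps the row weights close to uniform
                lightest = np.argsort(H.sum(axis=1) + rng.random(M))[:3 * t]
                rows = rng.choice(lightest, size=t, replace=False)
                col = np.zeros(M, dtype=np.uint8)
                col[rows] = 1
                # overlap (inner product) with every earlier column must be <= 1
                overlaps = H[:, :j].T.astype(int) @ col
                if j == 0 or overlaps.max() <= 1:
                    H[:, j] = col
                    break
            else:
                break          # could not place column j under the constraints; restart
        else:
            return H
    raise RuntimeError("no matrix found; loosen the parameters")

H = construction_1a(M=60, N=120, t=3)
assert (H.sum(axis=0) == 3).all()   # every column has weight t
```

Constructions 1B and 2B would additionally delete a few columns whose presence creates short cycles in the bipartite graph.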
The above constructions do not ensure that all the rows of the matrix are linearly independent, so the M x N matrix created is the parity check matrix of a linear code with rate at least R = K/N, where K = N - M. Results are reported here on the assumption that the rate is R. The generator matrix of the code can be created by Gaussian elimination (a sketch is given below).
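The Gaussian-elimination step can be illustrated as follows. This sketch (our own routine, not the authors' code) computes a generator matrix over GF(2) as a basis of the null space of H; the number of rows of G also reveals the true rate when some rows of H are dependent.

```python
import numpy as np

def gf2_nullspace(H):
    """Return G (K' x N over GF(2)) whose rows span the null space of H,
    i.e. every row g satisfies H g^T = 0 (mod 2).  K' = N - rank(H)."""
    H = (np.asarray(H) % 2).astype(np.uint8)
    M, N = H.shape
    pivot_cols = []
    r = 0
    for c in range(N):
        if r == M:
            break
        rows = np.flatnonzero(H[r:, c])
        if rows.size == 0:
            continue
        pivot = r + rows[0]
        H[[r, pivot]] = H[[pivot, r]]          # bring a pivot into row r
        for i in range(M):                     # eliminate column c elsewhere
            if i != r and H[i, c]:
                H[i] ^= H[r]
        pivot_cols.append(c)
        r += 1
    free_cols = [c for c in range(N) if c not in pivot_cols]
    G = np.zeros((len(free_cols), N), dtype=np.uint8)
    for k, fc in enumerate(free_cols):
        G[k, fc] = 1
        for row, pc in enumerate(pivot_cols):
            G[k, pc] = H[row, fc]              # back-substitute pivot values
    return G

# toy example (a random sparse H of our own, not one of the constructions)
rng = np.random.default_rng(1)
H = (rng.random((6, 12)) < 0.25).astype(np.uint8)
G = gf2_nullspace(H)
assert not ((G.astype(int) @ H.T) % 2).any()   # every row of G satisfies the checks
print("rate at least", 1 - H.shape[0] / H.shape[1], "; actual", G.shape[0] / H.shape[1])
```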
We simulated a Gaussian channel with binary input $\pm a$ and additive noise of variance $\sigma^2 = 1$. If a code of rate $R$ is used then it is conventional to describe the signal to noise ratio by $E_b/N_0 = a^2/(2R\sigma^2)$, and to report this number in decibels as $10\log_{10} E_b/N_0$.
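A few lines suffice to simulate this channel and to convert the signal to noise ratio to decibels. The mapping of bit 1 to $+a$ and bit 0 to $-a$ is our assumption, chosen to match the likelihood formula given below; the amplitude, rate and block length are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
a, sigma, R = 1.0, 1.0, 0.5                    # amplitude, noise std dev, code rate
x = rng.integers(0, 2, size=10000)             # stand-in for a transmitted codeword
y = a * (2 * x - 1) + rng.normal(0.0, sigma, size=x.size)   # bit 1 -> +a, bit 0 -> -a

ebn0_db = 10 * np.log10(a**2 / (2 * R * sigma**2))          # E_b/N_0 in decibels
print(f"E_b/N_0 = {ebn0_db:.2f} dB")
```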
Decoding: The process of decoding involves finding the most probable vector x such that $Hx \bmod 2 = 0$, with the likelihood of x given by $\prod_n f_n^{x_n}$, where $f_n^1 = 1/(1 + \exp(-2 a y_n/\sigma^2))$, $f_n^0 = 1 - f_n^1$, and $y_n$ is the channel's output at time n.
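Continuing the sketch above, the channel likelihoods follow directly from the stated formula:

```python
import numpy as np

a, sigma = 1.0, 1.0
y = np.array([0.8, -1.3, 0.1, -0.4])              # example channel outputs y_n
f1 = 1.0 / (1.0 + np.exp(-2 * a * y / sigma**2))  # f_n^1 as defined above
f0 = 1.0 - f1                                     # f_n^0 = 1 - f_n^1
```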
Gallager's algorithm (reviewed in detail in [9]) may be viewed as an approximate belief propagation algorithm [10]. (The Turbo decoding algorithm may also be viewed as a belief propagation algorithm.)

We refer to the elements of x as bits and to the rows of H as checks. We denote the set of bits n that participate in check m by $N(m) \equiv \{n : H_{mn} = 1\}$. Similarly we define the set of checks in which bit n participates, $M(n) \equiv \{m : H_{mn} = 1\}$. We denote a set $N(m)$ with bit n excluded by $N(m)\backslash n$. The algorithm has two alternating parts, in which quantities $q_{mn}$ and $r_{mn}$ associated with each non-zero element in the H matrix are iteratively updated. The quantity $q^x_{mn}$ is meant to be the probability that bit n of x is x, given the information obtained via checks other than check m. The quantity $r^x_{mn}$ is meant to be the probability of check m being satisfied if bit n of x is considered fixed at x and the other bits have a separable distribution given by the probabilities $\{q_{mn'} : n' \in N(m)\backslash n\}$. The algorithm would produce the exact posterior probabilities of all the bits if the bipartite graph defined by the matrix H contained no cycles [10].
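As a small illustration of this notation (using a toy matrix of our own, not one of the constructions above), the index sets can be held as follows:

```python
import numpy as np

# N(m) lists the bits checked by row m; M(n) lists the checks on bit n.
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]], dtype=np.uint8)
N_of_m = {m: set(np.flatnonzero(H[m])) for m in range(H.shape[0])}
M_of_n = {n: set(np.flatnonzero(H[:, n])) for n in range(H.shape[1])}
print(N_of_m[0] - {1})    # N(0)\1, i.e. check 0 with bit 1 excluded
```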
Initialisation: The variables $q^0_{mn}$ and $q^1_{mn}$ are initialised to the values $f^0_n$ and $f^1_n$, respectively.

Horizontal step: We define $\delta q_{mn} \equiv q^0_{mn} - q^1_{mn}$ and compute, for each m, n:

$$\delta r_{mn} = \prod_{n' \in N(m)\backslash n} \delta q_{mn'} \qquad (1)$$

then set $r^0_{mn} = \tfrac{1}{2}(1 + \delta r_{mn})$ and $r^1_{mn} = \tfrac{1}{2}(1 - \delta r_{mn})$.

Vertical step: For each n and m and for x = 0, 1 we update:

$$q^x_{mn} = \alpha_{mn} f^x_n \prod_{m' \in M(n)\backslash m} r^x_{m'n} \qquad (2)$$

where $\alpha_{mn}$ is chosen such that $q^0_{mn} + q^1_{mn} = 1$. We can also update the 'pseudoposterior probabilities' $q^0_n$ and $q^1_n$, given by:

$$q^x_n = \alpha_n f^x_n \prod_{m \in M(n)} r^x_{mn} \qquad (3)$$

These quantities are used to create a tentative bit-by-bit decoding $\hat{x}$; if $H\hat{x} \bmod 2 = 0$ then the decoding algorithm halts. Otherwise, the algorithm repeats from the horizontal step. A failure is declared if some maximum number of iterations (e.g. 100) occurs without a valid decoding.
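The following Python sketch implements one reading of eqns. 1-3 together with the halting test. The dense-array layout, function name and variable names are our own choices for illustration, not the authors' implementation, and a practical decoder would take more care over numerical underflow.

```python
import numpy as np

def decode_gl(H, f1, max_iter=100):
    """Sketch of the decoding iteration described above.  H is an M x N binary
    parity check matrix and f1[n] = P(x_n = 1 | y_n).  Returns (x_hat, success)."""
    H = (np.asarray(H) % 2).astype(int)
    M, N = H.shape
    f1 = np.asarray(f1, dtype=float)
    f0 = 1.0 - f1
    Nm = [np.flatnonzero(H[m]) for m in range(M)]      # N(m): bits in check m
    Mn = [np.flatnonzero(H[:, n]) for n in range(N)]   # M(n): checks on bit n
    # Initialisation: q0_mn, q1_mn start at the channel values f0_n, f1_n
    q0 = np.where(H, f0[None, :], 0.0)
    q1 = np.where(H, f1[None, :], 0.0)
    r0 = np.zeros_like(q0)
    r1 = np.zeros_like(q1)
    for _ in range(max_iter):
        # Horizontal step, eqn (1): dr_mn = product over n' in N(m)\n of (q0 - q1)
        for m in range(M):
            dq = q0[m, Nm[m]] - q1[m, Nm[m]]
            for i, n in enumerate(Nm[m]):
                dr = np.prod(np.delete(dq, i))
                r0[m, n] = 0.5 * (1.0 + dr)
                r1[m, n] = 0.5 * (1.0 - dr)
        # Vertical step, eqn (2), and pseudoposteriors, eqn (3)
        x_hat = np.zeros(N, dtype=np.uint8)
        for n in range(N):
            for m in Mn[n]:
                others = [mp for mp in Mn[n] if mp != m]
                a0 = f0[n] * np.prod(r0[others, n])
                a1 = f1[n] * np.prod(r1[others, n])
                q0[m, n] = a0 / (a0 + a1)              # alpha_mn normalises q0 + q1 to 1
                q1[m, n] = a1 / (a0 + a1)
            p0 = f0[n] * np.prod(r0[Mn[n], n])
            p1 = f1[n] * np.prod(r1[Mn[n], n])
            x_hat[n] = 1 if p1 > p0 else 0             # tentative bit-by-bit decoding
        if not ((H @ x_hat) % 2).any():                # all checks satisfied: halt
            return x_hat, True
    return x_hat, False                                # failure is detected and reported
```

With H from the construction sketch above and f1 from the channel likelihoods, decode_gl(H, f1) returns a tentative codeword together with a flag indicating whether every check was satisfied.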
`
Results: Fig. 1 compares the performance of GL codes with textbook codes and with state of the art codes. The vertical axis shows the empirical bit error probability.

Textbook codes: The curve labelled (7,1/2) shows the performance of a rate 1/2 convolutional code with constraint length 7, known as the de facto standard for satellite communications [7]. The curve (7,1/2)C shows the performance of the concatenated code composed of the same convolutional code and a Reed-Solomon code.

State of the art: The curve (15,1/4)C shows the performance of an extremely expensive and computer intensive concatenated code developed at JPL based on a constraint length 15, rate 1/4 convolutional code (data courtesy of R.J. McEliece). The curve labelled Turbo shows the performance of the rate 1/2 Turbo code described in [2]. (Better Turbo codes have since been reported [3].)

GL codes: From left to right the codes had the following parameters (N, K, R): (29507, 9507, 0.322) (construction 2B); (15000, 5000, 0.333) (2A); (14971, 4971, 0.332) (2B); (65389, 32621, 0.499) (1B); (19839, 9839, 0.496) (1B); (13298, 3296, 0.248) (1B); (29331, 19331, 0.659) (1B). It should be emphasised that all the errors made by the GL codes were detected errors: the decoding algorithm reported the fact that it had failed.

Our results show that performance substantially better than that of standard convolutional and concatenated codes can be achieved; indeed the performance is almost as close to the Shannon limit as that of Turbo codes [2]. It seems that the best results are obtained by making the weight per column as small as possible (construction 2A). Unsurprisingly, codes with larger block length are better. In terms of the value of $E_b/N_0$, the best codes were ones with rates between 1/2 and 1/3.

Cost: In a brute force approach, the time to create the codes scales as $N^3$, where N is the block size. The encoding time scales as $N^2$, but encoding involves only binary arithmetic, so for the block lengths studied here it takes considerably less time than the simulation of the Gaussian channel. It may be possible to reduce encoding time using sparse matrix techniques. Decoding involves approximately $6Nt$ floating point multiplies per iteration, so the total number of operations per decoded bit (assuming 20 iterations) is about $120t/R$, independent of block length. For the codes presented here, this is about 800 operations.
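As a quick check of these operation counts (our own arithmetic, using the figures quoted above):

```python
t, iterations = 3, 20
for R in (1/2, 1/3):
    # 6*N*t multiplies per iteration, times 20 iterations, divided by K = R*N bits
    ops_per_bit = 120 * t / R
    print(f"R = {R:.3f}: about {ops_per_bit:.0f} operations per decoded bit")
# roughly 720 and 1080; a mixture of such rates is consistent with the figure of
# about 800 operations quoted above for the codes presented
```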
This work not only confirms the assertion [1] that good codes can be thought of and even decoded, but also shows that it was possible to think of them, and decode them, thirty-five years ago.
Acknowledgments: D.J.C. MacKay is grateful to R. McEliece and R. Sewell for helpful discussions.
15 July 1996

© IEE 1996
Electronics Letters Online No: 19961141
D.J.C. MacKay (Cavendish Laboratory, Cambridge, CB3 0HE, United Kingdom)
e-mail: mackay@mrao.cam.ac.uk
R.M. Neal (Department of Statistics and Computer Science, University of Toronto, Toronto, M5S 1A4, Canada)
e-mail: radford@stat.toronto.edu
References

1  BATTAIL, G.: 'We can think of good codes, and even decode them'. Eurocode '92, Number 339 in CISM Courses and Lectures, pp. 353-368
2  BERROU, C., GLAVIEUX, A., and THITIMAJSHIMA, P.: 'Near Shannon limit error-correcting coding and decoding: Turbo-codes'. Proc. 1993 IEEE Int. Conf. Commun., 1993, (Geneva, Switzerland), pp. 1064-1070
3  DIVSALAR, D., and POLLARA, F.: 'On the design of turbo codes'. Technical Report TDA 42-123, Jet Propulsion Laboratory, Pasadena, November 1995
4  FORNEY, G.D.: 'Concatenated codes'. Technical Report 37, (MIT, 1966)
5  GALLAGER, R.G.: 'Low density parity check codes', IRE Trans. Info. Theory, 1962, IT-8, pp. 21-28
6  GALLAGER, R.G.: 'Low Density Parity Check Codes'. No. 21 in Research monograph series (MIT Press, Cambridge, Mass., 1963)
7  GOLOMB, S.W., PEILE, R.E., and SCHOLTZ, R.A.: 'Basic Concepts in Information Theory and Coding: The Adventures of Secret Agent 00111' (Plenum Press, New York, 1994)
8  MACKAY, D.J.C., and NEAL, R.M.: 'Good codes based on very sparse matrices', Cryptography and Coding: 5th IMA Conf., 1995, pp. 100-111 (number 1025 in Lecture Notes in Computer Science)
9  MACKAY, D.J.C., and NEAL, R.M.: 'Good error correcting codes based on very sparse matrices'
10 PEARL, J.: 'Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference' (Morgan Kaufmann, San Mateo, 1988)