SHU LIN | DANIEL J. COSTELLO, Jr.
`
`Error Control
`Coding:
`
`Fundamentals and Applications
`
This new book by Shu Lin and Daniel J. Costello, Jr. shows how to achieve
reliable communication over a noisy transmission channel by designing good codes
and efficient decoding methods.
`
It presents the essentials of
this highly complex material in a way that can be understood and applied with only
a minimum of mathematical background. In the 1970s the emphasis in coding
research shifted from theory to practical applications, and in this book there are
three completely up-to-date chapters that concentrate on these applications of
coding to digital transmission and storage systems. In addition, the book includes a
comprehensive treatment of the error-detecting capabilities of block codes and
emphasizes probabilistic decoding methods for convolutional codes.
`
The book:

- develops those concepts from modern algebra which are necessary to
understand the material in later chapters;
- identifies the basic structure and properties of cyclic codes;
- details the important class of BCH codes for multiple-error correction;
- describes majority-logic decoding and majority-logic decodable codes;
- presents various codes and coding schemes for burst-error correction;
- identifies the basic structure and properties of convolutional codes;
- describes various decoding methods for convolutional codes, such as Viterbi
decoding, sequential decoding, and threshold decoding;
- relates a variety of applications of coding to modern day data communication
and storage systems;
- includes codes actually used on many space and satellite systems and a section
on using convolutional codes in a hybrid ARQ system.
`
`ISBN 0-13-283796-X
`
`PRENTICE-HALL SERIES IN COMPUTER APPLICATIONS IN ELECTRICAL ENGINEERING
`Franklin F. Kuo, Editor
`
`Page 1 of 98
`
`SAMSUNG EXHIBIT 1010
`
`
`
`
`
`
`
`
`PRENTICE-HALL COMPUTER APPLICATIONS
`IN ELECTRICAL ENGINEERING SERIES
`
`FRANKLIN F. KUO, editor
`
`ERROR CONTROL CODING
`
`Fundamentals and Applications
`
ABRAMSON and KUO, Computer-Communication Networks
BOWERS and SEDORE, Sceptre: A Computer Program for Circuit and System Analysis
CADZOW, Discrete-Time Systems: An Introduction with Interdisciplinary Applications
CADZOW and MARTENS, Discrete-Time and Computer Control Systems
DAVIS, Computer Data Displays
FRIEDMAN and MENON, Fault Detection in Digital Circuits
HUELSMAN, Basic Circuit Theory
JENSEN and LIEBERMAN, IBM Circuit Analysis Program: Techniques and Applications
JENSEN and WATKINS, Network Analysis: Theory and Computer Methods
KLINE, Digital Computer Design
KOCHENBURGER, Computer Simulation of Dynamic Systems
KUO (ed.), Protocols and Techniques for Data Communication Networks
KUO and MAGNUSON, Computer Oriented Circuit Design
LIN, An Introduction to Error-Correcting Codes
LIN and COSTELLO, Error Control Coding: Fundamentals and Applications
NAGLE, CARROLL, and IRWIN, An Introduction to Computer Logic
RHYNE, Fundamentals of Digital Systems Design
SIFFERLEN and VARTANIAN, Digital Electronics with Engineering Applications
STAUDHAMMER, Circuit Analysis by Digital Computer
STOUTEMYER, PL/I Programming for Engineering and Science
`
`SHU LIN
`University of Hawaii
`Texas A&M University
`
`DANIEL J. COSTELLO, JR.
`Illinois Institute of Technology
`
Prentice-Hall, Inc., Englewood Cliffs, New Jersey 07632
`
`
`
`
`Library of Congress Cataloging in Publication Data
`
LIN, SHU.
Error control coding.

(Prentice-Hall computer applications in
electrical engineering series)
Includes bibliographical references and index.
1. Error-correcting codes (Information theory)
I. Costello, Daniel J. II. Title. III. Series.
QA268.L55    001.53'9    82-5255
ISBN 0-13-283796-X    AACR2
`
`Editorial/production supervision and interior design by Anne Simpson
`
`Cover design by Marvin Warshaw
`Manufacturing buyer: Joyce Levatino
`
`© 1983 by Prentice-Hall, Inc., Englewood Cliffs, N.J. 07632
`
`All rights reserved. No part of this book
`may be reproduced in any form or
`by any means without permission in writing
`from the publisher.
`
`Printed in the United States of America
`
`10 9 8 7 6 5 4 3 2 1
`
ISBN 0-13-283796-X
`
`PRENTICE-HALL INTERNATIONAL, INC., London
`PRENTICE-HALL OF AUSTRALIA PTY. LIMITED, Sydney
EDITORA PRENTICE-HALL DO BRASIL, LTDA., Rio de Janeiro
`PRENTICE-HALL CANADA INC., Toronto
`PRENTICE-HALL OF INDIA PRIVATE LIMITED, New Delhi
`PRENTICE-HALL OF JAPAN, INC., Tokyo
`PRENTICE-HALL OF SOUTHEAST ASIA PTE. LTD., Singapore
`WHITEHALL BOOKS LIMITED, Wellington, New Zealand
`
`With Love and Affection for
`Ivy,
`Julian, Patrick, and Michelle Lin
`and
`Lucretia,
`Kevin, Nick, Daniel, and Anthony Costello
`
`
`
`
Contents

PREFACE  xiii

CHAPTER 1  CODING FOR RELIABLE DIGITAL TRANSMISSION AND STORAGE  1
1.1 Introduction  1
1.2 Types of Codes  3
1.3 Modulation and Demodulation  5
1.4 Maximum Likelihood Decoding  8
1.5 Types of Errors  11
1.6 Error Control Strategies  12
References  14

CHAPTER 2  INTRODUCTION TO ALGEBRA  15
2.1 Groups  15
2.2 Fields  19
2.3 Binary Field Arithmetic  24
2.4 Construction of Galois Field GF(2^m)  29
2.5 Basic Properties of Galois Field GF(2^m)  34
2.6 Computations Using Galois Field GF(2^m) Arithmetic  39
2.7 Vector Spaces  40
2.8 Matrices  46
Problems  48
References  50

CHAPTER 3  LINEAR BLOCK CODES  51
3.1 Introduction to Linear Block Codes  51
3.2 Syndrome and Error Detection  58
3.3 Minimum Distance of a Block Code  63
3.4 Error-Detecting and Error-Correcting Capabilities of a Block Code  65
3.5 Standard Array and Syndrome Decoding  68
3.6 Probability of an Undetected Error for Linear Codes over a BSC  76
3.7 Hamming Codes  79
Problems  82
References  84

CHAPTER 4  CYCLIC CODES  85
4.1 Description of Cyclic Codes  85
4.2 Generator and Parity-Check Matrices of Cyclic Codes  92
4.3 Encoding of Cyclic Codes  95
4.4 Syndrome Computation and Error Detection  98
4.5 Decoding of Cyclic Codes  103
4.6 Cyclic Hamming Codes  111
4.7 Shortened Cyclic Codes  116
Problems  121
References  123

CHAPTER 5  ERROR-TRAPPING DECODING FOR CYCLIC CODES  125
5.1 Error-Trapping Decoding  125
5.2 Improved Error-Trapping Decoding  131
5.3 The Golay Code  134
Problems  139
References  139

CHAPTER 6  BCH CODES  141
6.1 Description of the Codes  142
6.2 Decoding of the BCH Codes  151
6.3 Implementation of Galois Field Arithmetic  161
6.4 Implementation of Error Correction  167
6.5 Nonbinary BCH Codes and Reed-Solomon Codes  170
6.6 Weight Distribution and Error Detection of Binary BCH Codes  177
Problems  180
References  182

CHAPTER 7  MAJORITY-LOGIC DECODING FOR CYCLIC CODES  184
7.1 One-Step Majority-Logic Decoding  184
7.2 Class of One-Step Majority-Logic Decodable Codes  194
7.3 Other One-Step Majority-Logic Decodable Codes  201
7.4 Multiple-Step Majority-Logic Decoding  209
Problems  219
References  221

CHAPTER 8  FINITE GEOMETRY CODES  223
8.1 Euclidean Geometry  223
8.2 Majority-Logic Decodable Cyclic Codes Based on Euclidean Geometry  227
8.3 Projective Geometry and Projective Geometry Codes  240
8.4 Modifications of Majority-Logic Decoding  245
Problems  253
References  255

CHAPTER 9  BURST-ERROR-CORRECTING CODES  257
9.1 Introduction  257
9.2 Decoding of Single-Burst-Error-Correcting Cyclic Codes  259
9.3 Single-Burst-Error-Correcting Codes  261
9.4 Interleaved Codes  272
9.5 Phased-Burst-Error-Correcting Codes  274
9.6 Burst-and-Random-Error-Correcting Codes  277
9.7 Modified Fire Codes for Simultaneous Correction of Burst and Random Errors  280
Problems  282
References  284

CHAPTER 10  CONVOLUTIONAL CODES  287
10.1 Encoding of Convolutional Codes  288
10.2 Structural Properties of Convolutional Codes  295
10.3 Distance Properties of Convolutional Codes  308
Problems  312
References  313

CHAPTER 11  MAXIMUM LIKELIHOOD DECODING OF CONVOLUTIONAL CODES  315
11.1 The Viterbi Algorithm  315
11.2 Performance Bounds for Convolutional Codes  322
11.3 Construction of Good Convolutional Codes  329
11.4 Implementation of the Viterbi Algorithm  337
11.5 Modifications of the Viterbi Algorithm  345
Problems  346
References  348

CHAPTER 12  SEQUENTIAL DECODING OF CONVOLUTIONAL CODES  350
12.1 The Stack Algorithm  351
12.2 The Fano Algorithm  360
12.3 Performance Characteristics of Sequential Decoding  364
12.4 Code Construction for Sequential Decoding  374
12.5 Other Approaches to Sequential Decoding  380
Problems  384
References  386

CHAPTER 13  MAJORITY-LOGIC DECODING OF CONVOLUTIONAL CODES  388
13.1 Feedback Decoding  389
13.2 Error Propagation and Definite Decoding  406
13.3 Distance Properties and Code Performance  408
13.4 Code Construction for Majority-Logic Decoding  414
13.5 Comparison with Probabilistic Decoding  424
Problems  426
References  428

CHAPTER 14  BURST-ERROR-CORRECTING CONVOLUTIONAL CODES  429
14.1 Bounds on Burst-Error-Correcting Capability  430
14.2 Burst-Error-Correcting Convolutional Codes  430
14.3 Interleaved Convolutional Codes  441
14.4 Burst-and-Random-Error-Correcting Convolutional Codes  442
Problems  455
References  456

CHAPTER 15  AUTOMATIC-REPEAT-REQUEST STRATEGIES  458
15.1 Basic ARQ Schemes  459
15.2 Selective-Repeat ARQ System with Finite Receiver Buffer  465
15.3 ARQ Schemes with Mixed Modes of Retransmission  474
15.4 Hybrid ARQ Schemes  477
15.5 Class of Half-Rate Invertible Codes  481
15.6 Type II Hybrid Selective-Repeat ARQ with Finite Receiver Buffer  483
Problems  494
References  495

CHAPTER 16  APPLICATIONS OF BLOCK CODES FOR ERROR CONTROL IN DATA STORAGE SYSTEMS  498
16.1 Error Control for Computer Main Processor and Control Storages  498
16.2 Error Control for Magnetic Tapes  503
16.3 Error Control in IBM 3850 Mass Storage System  516
16.4 Error Control for Magnetic Disks  525
16.5 Error Control in Other Data Storage Systems  531
Problems  532
References  532

CHAPTER 17  PRACTICAL APPLICATIONS OF CONVOLUTIONAL CODES  533
17.1 Applications of Viterbi Decoding  533
17.2 Applications of Sequential Decoding  539
17.3 Applications of Majority-Logic Decoding  543
17.4 Applications to Burst-Error Correction  547
17.5 Applications of Convolutional Codes in ARQ Systems  551
Problems  556
References  557

Appendix A  Tables of Galois Fields  561
Appendix B  Minimal Polynomials of Elements in GF(2^m)  579
Appendix C  Generator Polynomials of Binary Primitive BCH Codes of Length up to 2^10 - 1  583

INDEX  599
`
`
`
`Preface
`
`This book owes its beginnings to the pioneering work of Claude Shannon in 1948
`on achieving reliable communication over a noisy transmission channel. Shannon's
`central theme was that if the signaling rate of the system is less than the channel
`capacity, reliable communication can be achieved if one chooses proper encoding
`and decoding techniques. The design of good codes and of efficient decoding methods,
`initiated by Hamming, Slepian, and others in the early 1950s, has occupied the
`energies of many researchers since then. Much of this work is highly mathematical
`in nature, and requires an extensive background in modern algebra and probability
`theory to understand. This has acted as an impediment to many practicing engineers
and computer scientists, who are interested in applying these techniques to real
systems. One of the purposes of this book is to present the essentials of this highly
`complex material in such a manner that it can be understood and applied with only
`a minimum of mathematical background.
`Work on coding in the 1950s and 1960s was devoted primarily to developing the
`theory of efficient encoders and decoders. In 1970, the first author published a book
`entitled An Introduction to Error-Correcting Codes, which presented the fundamentals
`of the previous two decades of work covering both block and convolutional codes.
The approach was to explain the material in an easily understood manner, with a
minimum of mathematical rigor. The present book takes the same approach to
covering the fundamentals of coding. However, the entire manuscript has been rewritten
`and much new material has been added. In particular, during the 1970s the emphasis
`in coding research shifted from theory to practical applications. Consequently, three
`completely new chapters on the applications of coding to digital transmission and
`storage systems have been added. Other major additions include a comprehensive
`treatment of the error-detecting capabilities of block codes, and an emphasis on
`probabilistic decoding methods for convolutional codes. A brief description of each
`chapter follows.
Chapter 1 presents an overview of coding for error control in data transmission
`
`
`
`
`and storage systems. A brief discussion of modulation and demodulation serves to
`place coding in the context of a complete system. Chapter 2 develops those concepts
`from modern algebra that are necessary to an understanding of the material in later
`chapters. The presentation is at a level that can be understood by students in the
`senior year as well as by practicing engineers and computer scientists.
`Chapters 3 through 8 cover in detail block codes for random-error correction.
`The fundamentals of linear codes are presented in Chapter 3. Also included is an
`extensive section on error detection with linear codes, an important topic which is
`discussed only briefly in most other books on coding. Most linear codes used in
practice are cyclic codes. The basic structure and properties of cyclic codes are
presented in Chapter 4. A simple way of decoding some cyclic codes, known as
error-trapping decoding, is covered in Chapter 5. The important class of BCH codes for
multiple-error correction is presented in detail in Chapter 6. A discussion of hardware
and software implementation of BCH decoders is included, as well as the use of
BCH codes for error detection. Chapters 7 and 8 provide detailed coverage of
majority-logic decoding and majority-logic decodable codes. The material on fundamentals of
`block codes concludes with Chapter 9 on burst-error correction. This discussion
`includes codes for correcting a combination of burst and random errors.
`Chapters 10 through 14 are devoted to the presentation of the fundamentals of
`convolutional codes. Convolutional codes are introduced in Chapter 10, with the
`encoder state diagram serving as the basis for studying code structure and distance
properties. The Viterbi decoding algorithm for both hard and soft demodulator
decisions is covered in Chapter 11. A detailed performance analysis based on code
distance properties is also included. Chapter 12 presents the basics of sequential decoding
`using both the stack and Fano algorithms. The difficult problem of the computational
`performance of sequential decoding is discussed without including detailed proofs.
Chapter 13 covers majority-logic decoding of convolutional codes. The chapter
concludes with a comparison of the three primary decoding methods for convolutional
`codes. Burst-error-correcting convolutional codes are presented in Chapter 14. A
`section is included on convolutional codes that correct a combination of burst and
`random errors. Burst-trapping codes, which embed a block code in a convolutional
`code, are also covered here.
Chapters 15 through 17 cover a variety of applications of coding to modern
`day data communication and storage systems. Although they are not intended to
`be comprehensive, they are representative of the many different ways in which coding
`is used as a method of error control. This emphasis on practical applications makes
the book unique in the coding literature. Chapter 15 is devoted to
automatic-repeat-request (ARQ) error control schemes used for data communications. Both pure ARQ
(error detection with retransmission) and hybrid ARQ (a combination of error
correction and error detection with retransmission) are discussed. Chapter 16 covers
the application of block codes for error control in data storage systems. Coding
techniques for computer memories, magnetic tape, magnetic disk, and optical storage
`systems are included. Finally, Chapter 17 presents a wide range of applications of
`convolutional codes to digital communication systems. Codes actually used on many
`space and satellite systems are included, as well as a section on using convolutional
`codes in a hybrid ARQ system.
`
`
`
`
`Several additional features are included to make the book useful both as a
`classroom text and as a comprehensive reference for engineers and computer scientists
involved in the design of error control systems. Three appendices are given which
include details of the algebraic structures used in the construction of block codes. Many
tables of the best known codes for a given decoding structure are presented
throughout the book. These should prove valuable to designers looking for the best code for
`a particular application. A set of problems is given at the end of each chapter. Most
`of the problems are relatively straightforward applications of material covered in
`the text, although some more advanced problems are also included. There are a total
`of over 250 problems. A solutions manual will be made available to instructors using
`the text. Over 300 references are also included. Although no attempt was made to
`compile a complete bibliography on coding, the references listed serve to provide
`additional detail on topics covered in the book.
`The book can be used as a text for an introductory course on error-correcting
`codes and their applications at the senior or beginning graduate level. It can also be
`used as a self-study guide for engineers and computer scientists in industry who want
`to learn the fundamentals of coding and how they can be applied to the design of
`error control systems.
As a text, the book can be used as the basis for a two-semester sequence in
coding theory and applications, with Chapters 1 through 9 on block codes covered in
one semester and Chapters 10 through 17 on convolutional codes and applications
in a second semester. Alternatively, portions of the book can be covered in a
one-semester course. One possibility is to cover Chapters 1 through 6 and 10 through 12,
which include the basic fundamentals of both block and convolutional codes. A
course on block codes and applications can be comprised of Chapters 1 through 6,
9, 15, and 16, whereas Chapters 1 through 3, 10 through 14, and 17 include
convolutional codes and applications as well as the rudiments of block codes. Preliminary
`versions of the notes on which the book is based have been classroom tested by both
`authors for university courses and for short courses in industry, with very gratifying
`results.
`It is difficult to identify the many individuals who have influenced this work
`over the years. Naturally, we both owe a great deal of thanks to our thesis advisors,
`Professors Paul E. Pfeiffer and James L. Massey. Without their stimulating our
`interest in this exciting field and their constant encouragement and guidance through
the early years of our research, this book would not have been possible.
`Much of the material in the first half of the book on block codes owes a great
`deal to Professors W. Wesley Peterson and Tadao Kasami. Their pioneering work in
`algebraic coding and their valuable discussions and suggestions had a major impact
on the writing of this material. The second half of the book on convolutional codes
was greatly influenced by Professor James L. Massey. His style of clarifying the basic
elements in highly complex subject matter was instrumental throughout the
preparation of this material. In particular, most of Chapter 14 was based on a set of notes
`that he prepared.
We are grateful to the National Science Foundation, and to Mr. Elias
Schutzman, for their continuing support of our research in the coding field. Without this
`assistance, our interest in coding could never have developed to the point of writing
`
`
`
`
`this book. We thank the University of Hawaii and Illinois Institute of Technology
`for their support of our efforts in writing this book and for providing facilities. We
also owe thanks to Professor Franklin F. Kuo for suggesting that we write this book,
`and for his constant encouragement and guidance during the preparation of the
`manuscript. Another major source of stimulation for this effort came from our graduate
`students, who have provided a continuing stream of new ideas and insights. Those
who have made contributions directly reflected in this book include Drs. Pierre
Chevillat, Farhad Hemmati, Alexander Drukarev, and Michael J. Miller.
`We would like to express our special appreciation to Professors Tadao Kasami,
Michael J. Miller, and Yu-ming Wang, who read the first draft very carefully and
`made numerous corrections and suggestions for improvements. We also wish to
`thank our secretaries for their dedication and patience in typing this manuscript.
Deborah Waddy and Michelle Masumoto deserve much credit for their perseverance
`in preparing drafts and redrafts of this work.
`Finally, we would like to give special thanks to our parents, wives, and children
for their continuing love and affection throughout this project.
`
`Shu Lin
Daniel J. Costello, Jr.
`
`
`
`
`
`
`
`1
`
Coding for Reliable Digital
Transmission and Storage
`
1.1 INTRODUCTION
`
In recent years, there has been an increasing demand for efficient and reliable digital
`data transmission and storage systems. This demand has been accelerated by the
`emergence of large-scale, high-speed data networks for the exchange, processing,
`and storage of digital information in the military, governmental, and private spheres.
A merging of communications and computer technology is required in the design of
these systems. A major concern of the designer is the control of errors so that reliable
`reproduction of data can be obtained.
In 1948, Shannon [1] demonstrated in a landmark paper that, by proper encoding
`of the information, errors induced by a noisy channel or storage medium can be
`reduced to any desired level without sacrificing the rate of information transmission
`or storage. Since Shannon's work, a great deal of effort has been expended on the
`problem of devising efficient encoding and decoding methods for error control in a
`noisy environment. Recent developments have contributed toward achieving the
`reliability required by today's high-speed digital systems, and the use of coding for
`error control has become an integral part in the design of modern communication
`systems and digital computers.
`The transmission and storage of digital information have much in common.
`They both transfer data from an information source to a destination (or user). A
`typical transmission (or storage) system may be represented by the block diagram
shown in Figure 1.1. The information source can be either a person or a machine
`(e.g., a digital computer). The source output, which is to be communicated to the
`destination, can be either a continuous waveform or a sequence of discrete symbols.
`
`
`
`
[Figure: Information source → Source encoder → Channel encoder → Modulator (writing unit) → Channel (storage medium) → Demodulator (reading unit) → Channel decoder → Source decoder → Destination, with noise entering at the channel.]

Figure 1.1 Block diagram of a typical data transmission or storage system.
`
`The source encoder transforms the source output into a sequence of binary digits
`(bits) called the information sequence u. In the case of a continuous source, this
`involves analog-to-digital (A/D) conversion. The source encoder is ideally designed
so that (1) the number of bits per unit time required to represent the source output is
`minimized, and (2) the source output can be reconstructed from the information
`sequence u without ambiguity. The subject of source coding is not discussed in this
`book. For a thorough treatment of this important topic, see References 2 and 3.
`The channel encoder transforms the information sequence u into a discrete
`encoded sequence v called a code word. In most instances v is also a binary sequence,
`although in some applications nonbinary codes have been used. The design and
implementation of channel encoders to combat the noisy environment in which code
`words must be transmitted or stored is one of the major topics of this book.
`Discrete symbols are not suitable for transmission over a physical channel or
recording on a digital storage medium. The modulator (or writing unit) transforms
`each output symbol of the channel encoder into a waveform of duration T seconds
`which is suitable for transmission (or recording). This waveform enters the channel
(or storage medium) and is corrupted by noise. Typical transmission channels include
telephone lines, high-frequency radio links, telemetry links, microwave links, satellite
`links, and so on. Typical storage media include core and semiconductor memories,
`magnetic tapes, drums, disk files, optical memory units, and so on. Each of these
`examples is subject to various types of noise disturbances. On a telephone line, the
`disturbance may come from switching impulse noise, thermal noise, crosstalk from
`other lines, or lightning. On magnetic tape, surface defects are regarded as a noise
`disturbance. The demodulator (or reading unit) processes each received waveform of
`duration T and produces an output that may be discrete (quantized) or continuous
`(unquantized). The sequence of demodulator outputs corresponding to the encoded
sequence v is called the received sequence r.
The channel decoder transforms the received sequence r into a binary sequence
û called the estimated sequence. The decoding strategy is based on the rules of channel
encoding and the noise characteristics of the channel (or storage medium). Ideally, û
`
`
`
`
will be a replica of the information sequence u, although the noise may cause some
decoding errors. Another major topic of this book is the design and implementation
of channel decoders that minimize the probability of decoding error.
The source decoder transforms the estimated sequence û into an estimate of the
source output and delivers this estimate to the destination. When the source is
continuous, this involves digital-to-analog (D/A) conversion. In a well-designed
system, the estimate will be a faithful reproduction of the source output except when
the channel (or storage medium) is very noisy.
To focus attention on the channel encoder and channel decoder, (1) the
information source and source encoder are combined into a digital source with output u;
(2) the modulator (or writing unit), the channel (or storage medium), and the
demodulator (or reading unit) are combined into a coding channel with input v and output r;
and (3) the source decoder and destination are combined into a digital sink with input
û. A simplified block diagram is shown in Figure 1.2.
`
[Figure: Digital source → u → Encoder → v → Coding channel (noise enters here) → r → Decoder → û → Digital sink.]

Figure 1.2 Simplified model of a coded system.
`
`The major engineering problem that is addressed in this book is to design and
implement the channel encoder/decoder pair such that (1) information can be
transmitted (or recorded) in a noisy environment as fast as possible, (2) reliable reproduction
`tion of the information can be obtained at the output of the channel decoder, and (3)
`the cost of implementing the encoder and decoder falls within acceptable limits.
`
`1.2 TYPES OF CODES
`
There are two different types of codes in common use today, block codes and
convolutional codes. The encoder for a block code divides the information sequence into
`message blocks of k information bits each. A message block is represented by the
binary k-tuple u = (u1, u2, ..., uk) called a message. (In block coding, the symbol
u is used to denote a k-bit message rather than the entire information sequence.)
There are a total of 2^k different possible messages. The encoder transforms each
message u independently into an n-tuple v = (v1, v2, ..., vn) of discrete symbols
called a code word. (In block coding, the symbol v is used to denote an n-symbol
block rather than the entire encoded sequence.) Therefore, corresponding to the 2^k
different possible messages, there are 2^k different possible code words at the encoder
`
`
`
`
output. This set of 2^k code words of length n is called an (n, k) block code. The ratio
`R = k/n is called the code rate, and can be interpreted as the number of information
`bits entering the encoder per transmitted symbol. Since the n-symbol output code
word depends only on the corresponding k-bit input message, the encoder is
memoryless, and can be implemented with a combinational logic circuit.
In a binary code, each code word v is also binary. Hence, for a binary code to
be useful (i.e., to have a different code word assigned to each message), k < n or
R < 1. When k < n, n - k redundant bits can be added to each message to form a
`code word. These redundant bits provide the code with the capability of combating
`the channel noise. For a fixed code rate R, more redundant bits can be added by
`increasing the block length n of the code while holding the ratio k/n constant. How to
`choose these redundant bits to achieve reliable transmission over a noisy channel is
`the major problem in designing the encoder. An example of a binary block code with
k = 4 and n = 7 is shown in Table 1.1. Chapters 3 through 9 are devoted to the
analysis and design of block codes for controlling errors in a noisy environment.
`
TABLE 1.1 BINARY BLOCK CODE WITH k = 4 AND n = 7

Messages        Code words

(0 0 0 0)       (0 0 0 0 0 0 0)
(1 0 0 0)       (1 1 0 1 0 0 0)
(0 1 0 0)       (0 1 1 0 1 0 0)
(1 1 0 0)       (1 0 1 1 1 0 0)
(0 0 1 0)       (1 1 1 0 0 1 0)
(1 0 1 0)       (0 0 1 1 0 1 0)
(0 1 1 0)       (1 0 0 0 1 1 0)
(1 1 1 0)       (0 1 0 1 1 1 0)
(0 0 0 1)       (1 0 1 0 0 0 1)
(1 0 0 1)       (0 1 1 1 0 0 1)
(0 1 0 1)       (1 1 0 0 1 0 1)
(1 1 0 1)       (0 0 0 1 1 0 1)
(0 0 1 1)       (0 1 0 0 0 1 1)
(1 0 1 1)       (1 0 0 1 0 1 1)
(0 1 1 1)       (0 0 1 0 1 1 1)
(1 1 1 1)       (1 1 1 1 1 1 1)
`
The encoder for a convolutional code also accepts k-bit blocks of the information
sequence u and produces an encoded sequence (code word) v of n-symbol blocks. (In
convolutional coding, the symbols u and v are used to denote sequences of blocks
rather than a single block.) However, each encoded block depends not only on the
corresponding k-bit message block at the same time unit, but also on m previous
message blocks. Hence, the encoder has a memory order of m. The set of encoded
sequences produced by a k-input, n-output encoder of memory order m is called an
(n, k, m) convolutional code. The ratio R = k/n is called the code rate. Since the
encoder contains memory, it must be implemented with a sequential logic circuit.
`In a binary convolutional code, redundant bits for combating the channel noise
`
`
`
`
can be added to the information sequence when k < n or R < 1. Typically, k and n
`are small integers and more redundancy is added by increasing the memory order m
`of the code while holding k and n, and hence the code rate R, fixed. How to use the
`memory to achieve reliable transmission over a noisy channel is the major problem
`in designing the encoder. An example of a binary convolutional encoder with k = 1,
`n = 2, and m = 2 is shown in Figure 1
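The encoder figure itself is cut off in this copy, but the behavior just described, where each n-symbol output block depends on the current message bit and the m previous ones held in a shift register, can be sketched for a k = 1, n = 2, m = 2 encoder. The generator sequences (1 1 1) and (1 0 1) below are a common textbook choice and are an assumption here, not necessarily those of the figure.

```python
def conv_encode(u, g1=(1, 1, 1), g2=(1, 0, 1), flush=True):
    """Encode bits u with a rate-1/2, memory-order-2 convolutional encoder.

    Each output block (v1, v2) is a mod-2 inner product of the generator
    sequences with the current bit followed by the m previous bits.
    """
    m = len(g1) - 1
    # Optionally append m zeros to return the shift register to the all-zero state.
    bits = list(u) + ([0] * m if flush else [])
    reg = [0] * m  # shift register holding the m previous message bits
    out = []
    for b in bits:
        window = [b] + reg
        out.append(sum(g * x for g, x in zip(g1, window)) % 2)  # v1
        out.append(sum(g * x for g, x in zip(g2, window)) % 2)  # v2
        reg = [b] + reg[:-1]
    return out

# conv_encode([1, 0, 1]) → [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

Note that the same input bit influences three consecutive output blocks, which is exactly the memory (m = 2) that distinguishes this encoder from the memoryless block encoder of Table 1.1.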