ERROR-CONTROL TECHNIQUES
FOR DIGITAL COMMUNICATION

ARNOLD M. MICHELSON
ALLEN H. LEVESQUE

GTE Government Systems Corporation
Needham Heights, Massachusetts

A Wiley-Interscience Publication
JOHN WILEY & SONS
New York • Chichester • Brisbane • Toronto • Singapore

IPR2018-01474
Apple v. INVT
INVT Exhibit 2002 - Page 1
`
`
`
`Copyright © 1985 by John Wiley & Sons, Inc.
`
`All rights reserved. Published simultaneously in Canada.
`
`Reproduction or translation of any part of this work
`beyond that permitted by Section 107 or 108 of the
`1976 United States Copyright Act without the permission
`of the copyright owner is unlawful. Requests for
`permission or further information should be addressed to
`the Permissions Department, John Wiley & Sons, Inc.
`
`Library of Congress Cataloging in Publication Data:
`Michelson, Arnold M.
`Error-control techniques for digital communication.
`"A Wiley-Interscience publication."
`Bibliography: p.
`Includes index.
`1. Error-correcting codes (Information theory)
`2. Digital communications. I. Levesque, Allen H.
`II. Title.
TK5103.7.M53 1984    621.38    84-15327
ISBN 0-471-88074-4
`
`Printed in the United States of America
`
`10 9 8 7 6 5 4 3 2
`
`
`
`
`CONTENTS
`
CHAPTER 1. RELIABLE TRANSMISSION OF DIGITAL INFORMATION

1.1. The Communication System Design Problem, 3
1.2. Elements of a Digital Communication System, 4
1.2.1. Information Source, 4
1.2.2. Channel Encoder, 5
1.2.3. Digital Modulator, 6
1.2.4. Transmission Channel, 7
1.2.5. Digital Demodulator, 10
1.2.6. Channel Decoder, 11
1.2.7. Source Decoder, 12
1.3. Important Channel Models, 13
1.3.1. The Discrete-Time Channel, 13
1.3.2. The Binary Symmetric Channel, 14
1.3.3. The Binary Symmetric Erasure Channel, 15
1.3.4. The Additive White Gaussian Noise Channel, 15
1.4. Information Theory and Channel Capacity, 16
1.4.1. Logarithmic Measures of Information, 16
1.4.2. Transfer of Information Through a Channel, 19
1.4.3. Capacity of the Continuous AWGN Channel, 24
1.4.4. Channel Coding Theorem, 27
1.5. Modulation Performance on the AWGN Channel, 28
1.5.1. Phase-Shift Keying, 28
1.5.2. Differential PSK, 29
1.5.3. Coherent Frequency Shift Keying, 30
1.5.4. Noncoherent Binary FSK, 31
1.5.5. M-ary Signaling on the Gaussian Noise Channel, 32
1.5.6. Comparison of Binary Modulation Techniques, 37
1.6. Combined Modulation and Coding for Efficient Signal Design, 38
1.6.1. Implications of the Capacity Formula, 38
1.6.2. The R0 Criterion for Modulation and Coding Design, 42
1.7. Summary and Conclusions, 46
1.8. Notes, 48
`
`CHAPTER 2. SOME FUNDAMENTALS AND SIMPLE BLOCK CODES
`
2.1. Parity-Check Codes, 49
2.2. Modulo-2 Arithmetic, 51
2.3. Single-Parity-Check Codes, 52
2.3.1. Error-Detection Decoding, 53
2.3.2. Erasure Filling, 55
2.4. Product Codes, 55
2.4.1. Single-Error Correction, 56
2.4.2. Soft-Decision Decoding, 58
2.5. Binary Repetition Codes, 60
2.5.1. The Repetition Code as a Parity-Check Code, 61
2.5.2. Performance of Binary Repetition Codes, 64
2.6. Properties of the Syndrome, 66
2.7. Binary Hamming Codes, 69
2.8. Notes, 71
`
`CHAPTER 3. ALGEBRA OF LINEAR BLOCK CODES
`
3.1. Groups, 73
3.2. Fields, 75
3.3. Vector Spaces, 78
3.3.1. Linear Operations in a Vector Space over a Field, 80
3.3.2. Matrix Representation of a Vector Space, 82
3.4. Binary Linear Block Codes, 83
3.5. The Parity-Check Matrix Revisited, 85
3.6. Dual Codes, 87
3.7. Hamming Distance and the Weight Distribution, 88
3.8. Code Geometry and Error-Correction Capability, 90
3.8.1. Complete and Incomplete Decoding, 93
3.8.2. Code Design and Sphere Packing, 94
3.9. Notes, 96
`
`CHAPTER 4. BINARY CYCLIC CODES AND BCH CODES
`
4.1. Representations of Finite Fields, 98
4.1.1. The Primitive Element of a Finite Field, 99
4.1.2. Vectors of Field Elements, 101
4.1.3. Extension Fields and Primitive Polynomials, 103
4.1.4. Relationship to Maximum-Length Sequences, 108
4.2. The Structure of Binary Cyclic Codes, 109
4.2.1. Key Properties of Irreducible Polynomials, 110
4.2.2. Minimal Polynomials, 111
4.2.3. A Heuristic Description of Binary Cyclic Codes, 113
4.2.4. A Polynomial Description of Cyclic Codes, 115
4.3. Binary BCH Codes, 121
4.3.1. Primitive BCH Codes, 121
4.3.2. BCH Codes with m0 = 0, 127
4.3.3. Nonprimitive BCH Codes, 129
4.3.4. Shortening and Extending BCH Codes, 131
4.4. Encoding Binary BCH Codes, 133
4.5. Notes, 135
`
`CHAPTER 5. DECODING TECHNIQUES FOR BINARY BCH CODES
`
5.1. The Parity-Check Matrix for a BCH Code, 138
5.2. The Syndrome Equations, 140
5.3. Peterson's Direct Solution Method, 142
5.4. The Berlekamp Algorithm, 149
5.5. The Kasami Algorithm, 152
5.5.1. Decoding the (23,12) Golay Code, 157
5.5.2. Decoding the (24,12) Extended Golay Code, 159
5.6. Errors-and-Erasures Decoding, 160
5.7. Soft-Decision Decoding Techniques, 162
5.8. Notes, 169
`
CHAPTER 6. NONBINARY BCH CODES AND REED-SOLOMON CODES

6.1. Algebra for Nonbinary Codes, 171
6.2. Minimal Polynomials over GF(q), 175
6.3. Nonbinary BCH Codes, 177
6.3.1. Some Examples of Primitive Codes, 178
6.3.2. Nonprimitive Codes, 183
6.4. Reed-Solomon Codes, 185
6.5. Encoding Nonbinary BCH Codes and RS Codes, 189
6.6. Decoding Algorithms for BCH and RS Codes, 190
6.6.1. Direct Solution for a Distance-7 RS Code, 193
6.6.2. The Massey-Berlekamp Algorithm, 196
6.6.3. Errors-and-Erasures Decoding, 204
6.7. Fourier Transform Techniques for RS Codes, 208
6.7.1. The Finite Field Fourier Transform, 208
6.7.2. Transform Decoding for Errors Only, 210
6.7.3. Errors-Only Decoding with Frequency-Domain Encoding, 212
6.7.4. Transform Decoding for Errors and Erasures, 214
6.7.5. An Example: Fast-Transform Decoding in GF(64), 215
6.8. Modifications of BCH and RS Codes, 217
6.8.1. Simple Code Shortening, 218
6.8.2. Adding Information Symbols to an RS Code, 218
6.8.3. Designing Codes for Non-Field Alphabets, 222
6.9. Notes, 225
`
CHAPTER 7. THE PERFORMANCE OF LINEAR BLOCK CODES WITH BOUNDED-DISTANCE DECODING

7.1. Binary Block Codes Used for Error Detection, 228
7.2. Binary Block Codes Used for Error Detection and Correction, 234
7.3. Generalization to Nonbinary Codes, 243
7.4. Selected Performance Results, 249
7.5. Notes, 269
CHAPTER 8. INTRODUCTION TO CONVOLUTIONAL CODES

8.1. Systematic Rate-1/2 Codes and the Tree Diagram, 271
8.2. The Trellis and the State Diagram, 275
8.3. Rate-b/V Codes and a View of Encoding as Linear Filtering, 277
8.4. Minimum Distance, Decoding Distance, and Minimum Free Distance, 282
8.5. Feedback Decoding, 284
8.5.1. Syndrome Feedback Decoding of Systematic Codes, 285
8.5.2. A Feedback Decoder That Uses a Majority-Logic Circuit and Threshold Decoding, 288
8.6. The Design of Convolutional Codes, 291
8.6.1. Infinite Error Propagation and Code Design, 291
8.6.2. Code Generators for Some Systematic Codes, 295
8.7. Performance Results for Syndrome Feedback Decoding, 297
8.8. Notes, 298
`
CHAPTER 9. MAXIMUM LIKELIHOOD DECODING OF CONVOLUTIONAL CODES

9.1. The Viterbi Decoding Algorithm—Hard-Decision Decoding, 300
9.2. Viterbi Decoding for the AWGN Channel, 303
9.3. The Generating Function of a Convolutional Code, 305
9.4. Performance Bounds for Viterbi Decoding, 309
9.4.1. The Binary Symmetric Channel, 310
9.4.2. The AWGN Channel, 313
9.5. Some Practical Design Considerations, 314
9.5.1. Path-History Storage, 316
9.5.2. Quantization and Metrics, 319
9.5.3. Other Design Issues, 323
9.5.4. Other Features, 325
9.6. Performance Results for Viterbi Decoding, 327
9.7. Good Convolutional Codes for Use with Viterbi Decoding, 330
9.8. Notes, 336
`
`
`
CHAPTER 10. SEQUENTIAL DECODING

10.1. A Qualitative Description of Sequential Decoding, 338
10.2. The Computational Problem, 342
10.3. Effects of Code Rate and Quantization, 345
10.3.1. Selection of Code Rate, 346
10.3.2. Design of the Decoder Quantizer, 347
10.4. The Fano Sequential Decoder, 353
10.5. Some Further Design Issues and Performance Results, 360
10.6. Performance as a Function of SNR, 365
10.7. A Brief Description of a Hard-Decision Fano Decoder Design, 368
10.8. The Stack Algorithm for Sequential Decoding, 369
10.9. Notes, 371
`
CHAPTER 11. APPLICATIONS OF ERROR-CONTROL CODING

11.1. Coherent Reception on the AWGN Channel, 374
11.1.1. High-Performance Techniques, Eb/N0 ≈ 2 to 3 dB, 375
11.1.1.1. Concatenated Block Codes, 375
11.1.1.2. Concatenated Block and Convolutional Codes, 382
11.1.1.3. Soft-Decision Sequential Decoding of Long-Constraint-Length, Low-Rate Convolutional Codes, 384
11.1.2. Techniques That Provide Moderate Coding Gain, 387
11.1.2.1. Binary BCH Codes and Hard-Decision Decoding, 387
11.1.2.2. Short-Constraint-Length Convolutional Codes and Viterbi Decoding, 395
11.1.3. Techniques Providing Modest Coding Gain, 395
11.2. Noncoherent Reception on the AWGN Channel, 396
11.2.1. M-ary Orthogonal Signaling and Reed-Solomon Coding, 399
11.2.2. Convolutional Codes on Noncoherent Channels, 399
11.3. Coding for Compound-Error Channels, 406
11.3.1. Automatic Repeat Request (ARQ), 407
11.3.2. Interleaving, 410
11.4. Concluding Remarks, 412
`
APPENDIX A. MATRIX NOTATION AND TERMINOLOGY

APPENDIX B. TABLES OF IRREDUCIBLE POLYNOMIALS OVER GF(2)

REFERENCES

INDEX
`
`
`
`
`CHAPTER ONE
`
`Reliable Transmission
`of Digital Information
`
`The purpose of this book is to provide the communication systems
`engineer with a basic understanding of error-control coding techniques and the
`role that coding plays in the design of efficient digital communication systems.
`Described in its simplest terms, error-control coding involves the addition of
`redundancy to transmitted data so as to provide the means for detecting and
`correcting errors that inevitably occur in any real communication process.
`Thus coding can be used to provide a desired level of accuracy in the digital
`data delivered to a user. There are, however, other ways to achieve accurate
`transmission of digital data, and this book is intended to aid the communication
`system designer in deciding when it makes sense to use coding and when it
`does not, in choosing a coding technique appropriate to the application and
`performance requirements at hand, and in evaluating the performance
`achievable with the chosen technique.
`For example, in many communication systems, an alternative to the use of
`coding is simply to provide sufficient signal energy per unit of information to
`ensure that uncoded information is delivered with the required accuracy. The
`energy needed might be provided by setting signal power at a sufficiently high
`level or, if power limitations prevail, by using some form of diversity
`transmission and reception. However, in many cases, error-control coding can
`provide the required accuracy with less energy than uncoded operation and
`may be the economically preferred solution in spite of an increase in system
`complexity. Cost savings through the use of coding techniques can be dramatic
`when very high accuracy is needed and power is expensive. Furthermore, in
`some applications the savings in signal power are accompanied by important
reductions in size and weight of the communication equipment.
`
`
`
`
`
`The levels of performance that can ultimately be achieved with coded
`communication systems are given by the remarkable theorems of Claude
`Shannon, who in 1948 laid the foundation of the science of information theory
`in a famous paper entitled "A Mathematical Theory of Communication" [157].
`The basic theorems of information theory not only mark the limits of efficiency
`in communication performance but also define the role that coding plays in
`achieving these limits. That is, digital codes are shown to be an efficient way of
`constructing the waveforms to be transmitted in order to achieve optimum
`communication performance for some applications.
`Shannon's 1948 paper presented a statistical formulation of the
communication problem, unifying earlier work by Hartley [61], Wiener [178],
Rice [148], and Kotel'nikov [86]. Shannon's work sharply contradicted the
`long-standing intuitive but erroneous notion that noise places an inescapable
`limitation on the accuracy of communication. Shannon proved that the
`characteristics of a communication channel, namely the noise level, bandwidth,
`and signal power, can be used to derive a parameter C, called channel capacity,
`that gives the upper limit on the rate at which information can be transmitted
`through the channel and received reliably. Shannon's results showed that as
`long as the information transmission rate is below C, the probability of error in
`the information delivered can in principle be made arbitrarily low by using
`sufficiently long coded transmission signals. Thus noise limits the achievable
`information communication rate, but not the achievable accuracy. Much of the
`research in communication theory since the appearance of Shannon's early
`work has been concerned with extending and refining his basic results and with
finding ways of approaching the full realization of these results in practical
`communication system designs. The development of error-control coding
`techniques has been a central element in this research.
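For the band-limited AWGN channel, the capacity parameter C takes the form C = W log2(1 + S/N), a result developed in Section 1.4.3. As an illustrative sketch of how the formula is evaluated (the bandwidth, power, and noise-density values below are hypothetical, chosen only for the arithmetic):

```python
import math

def awgn_capacity_bps(bandwidth_hz, signal_power_w, noise_psd_w_per_hz):
    """Shannon capacity of a band-limited AWGN channel:
    C = W * log2(1 + S / (N0 * W)) bits per second."""
    noise_power_w = noise_psd_w_per_hz * bandwidth_hz
    return bandwidth_hz * math.log2(1.0 + signal_power_w / noise_power_w)

# A 3-kHz telephone-style channel at 30-dB SNR (S/N = 1000):
# C = 3000 * log2(1001), roughly 29.9 kbit/s.
c = awgn_capacity_bps(3000.0, 1000.0, 1.0 / 3000.0)
```

Note how capacity grows only logarithmically with signal power but linearly with bandwidth, which is the quantitative heart of the power-bandwidth trade-offs discussed in Section 1.1.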
`In this book we present the most important of the error-control coding
`techniques that have been developed since Shannon's pioneering work. That is,
`we consider those techniques that have actually been used effectively in real
`communication systems. In this introductory chapter we begin with a
`description of the key elements of a modern digital communication system as
`well as the channel models that are used throughout the book. A heuristic
`discussion of information theory follows, concluding with a presentation of the
`key result, the channel coding theorem. It is not necessary to have a detailed
`understanding of information theory in order to make effective use of error-
`control coding techniques. However, a familiarity with the underlying principles
`and the meaning of the channel coding theorem is important. The fundamental
`limit on the efficiency of a digital communication system is given by the
`channel coding theorem, and this provides the gauge for measuring the overall
`efficiency of any given system design.
`We then review the basic digital modulation and demodulation techniques.
`Performance curves are included that show that even the best of the practical
`signaling schemes fall far short of the performance limit given by the channel
`coding theorem. It will be seen in subsequent chapters that judicious choice of
`
`modulation and coding techniques can, in many applications, provide
`significant improvements in communication efficiency. The chapter concludes
`with a discussion of the proper way to design a digital communication system.
`Specifically, an analytical approach is given that permits joint optimization of
`the key communication functions.
`
`1.1. THE COMMUNICATION SYSTEM DESIGN PROBLEM
`
`We can state the task of the digital communication system designer as that
`of providing a cost-effective system for transmitting information from a sender
`to a user at the rate and level of accuracy that the user requires. The key
`parameters of the design are transmission bandwidth, signal power, and the
`complexity of the implementation chosen. The information transmission rate
`and accuracy of the delivered information are typically determined by the
`user's requirements.
`The transmission bandwidth is often constrained by factors specific to the
`particular transmission medium used. For example, telephone circuits are
`separated into nominal 3-kHz bandwidth segments by longstanding engineering
`practice in the telephone industry. Similarly, there are standard bandwidths for
`individual channels on terrestrial radio circuits and satellite links due to
`established government regulations on spectrum utilization. In other cases,
`however, bandwidth constraints are not a critical issue. Examples include links
`to and from vehicles in deep space, where wide transmission bandwidths for a
`few individual links can be chosen freely without concern about possible
`interference with other users of the spectrum.
`Finally, signal power and implementation complexity are system
`characteristics that are usually very much under the designer's control, and
`possible trade-offs between power and complexity are central issues in the
`design task. Both characteristics represent cost factors for the designer to
`consider. For example, in most systems a desired level of accuracy in the
`information delivered can be achieved by supplying enough power in the
`transmitted signal to overcome channel disturbances that produce errors. An
`alternative to increasing signal power is to add systematic redundancy to the
`transmitted information in the form of error-control coding. However, the use
`of coding adds complexity to the system, particularly for the implementation
`of the decoding operations. Since the addition of redundancy also implies the
`need to increase transmission bandwidth, the design trade-off must include
`considerations of bandwidth. In fact, in applications where bandwidth is
`strictly limited or very costly and when we are not permitted to lengthen
`message transmission time, it is difficult to use coding effectively as a means of
`improving information accuracy, and increasing signal power may be the only
`means available. These issues will be discussed in detail later in this chapter,
`when the fundamental results in information theory are reviewed and the
`implications for the design of efficient communication systems are presented.
`
`
`
`
It is clear from the outset that with any real communication system we
cannot expect to receive exactly what is transmitted. At the very least, we can
`expect noise to be added to the transmission, causing random errors. As was
`stated earlier, the work of Shannon showed that channel noise limits the rate at
`which we can communicate reliably but not the achievable level of accuracy.
`The purpose of this introductory chapter is to present and illuminate this
`important result. It is necessary first to review the key elements of a digital
`communication system and the limitations imposed by the physical world.
`Then the role that error-control coding plays in the design of an energy-efficient
`communication system can be discussed in detail.
`
`1.2. ELEMENTS OF A DIGITAL COMMUNICATION SYSTEM
`
`The basic elements of a one-way communication system are illustrated with
`the general block diagram shown in Fig. 1.1. We now examine each of these
`elements in detail.
`
`1.2.1. Information Source
`
`The source of information to be transmitted may be either analog or
`discrete. An analog source produces time-continuous signals, while a discrete
`source produces sequences of discrete symbols. Examples of signals from an
`analog source are speech signals, radar outputs, and photographic scan data.
`Typical discrete sources are computer data files and messages generated at
`teleprinter terminals.
`For analog sources, techniques are needed to efficiently represent the analog
`signals by sequences of digital symbols. Ordinarily this is done simply with a
`sampler and analog-to-digital converter. Ideally, we would like to represent the
`source information by the smallest number of digital samples to make the most
`efficient use of the digital communication system. In other words, we would
`like to remove as much redundancy as possible from the source information
prior to transmission. Techniques for converting arbitrary information sources
into digital messages efficiently are described broadly as source coding
`
[Figure 1.1: Information Source → Source Encoder → Channel Encoder → Digital Modulator → Transmission Channel → Digital Demodulator → Channel Decoder → Source Decoder → Information User.]

FIGURE 1.1. Block diagram of a digital communication system.
`
`techniques. There is now a considerable body of literature on the subject of
`source coding, dealing with the theoretical limits on performance as well as
`with some very effective techniques for coding specific types of sources. To
`treat this subject in depth is outside the scope of this book, and we refer the
`interested reader to the literature on source coding [6, 175].
`For our purposes in this book, we assume we are given a digital information
`source and our job is to provide an effective and efficient system for
`communicating the information it produces from one place to another. Further,
`we assume that the source generates a stream of statistically independent,
`equally likely discrete symbols. In particular, we assume that it produces
`binary digits, which we call bits, each bit occurring with equally likely binary
`values independent of all other bits generated. We shall say that the source
produces data at a constant rate of Rs bits per second, which is defined as the
`information rate of the source. The purpose of the communication system is to
deliver the source data to a user at the rate Rs and at some required level of
`accuracy, which is typically stated as an upper limit on acceptable probability
`of bit error or bit-error rate in the delivered information. It will be seen in a
`later discussion that the overall efficiency of the communication system is
`directly related to the amount of signal energy needed to deliver each
`information bit with the required accuracy. We denote this amount of energy
`as Eb and call it the required energy per information bit. The required signal
power S is therefore given by S = EbRs.
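The relation S = EbRs can be exercised numerically; the energy and rate values in this sketch are hypothetical, chosen only to show the arithmetic:

```python
def required_signal_power(energy_per_bit_j, info_rate_bps):
    """S = Eb * Rs: the average signal power needed to deliver each
    information bit with energy Eb at an information rate of Rs."""
    return energy_per_bit_j * info_rate_bps

# Hypothetical values: Eb = 2e-9 joules per bit at Rs = 9600 bit/s
# gives S = 1.92e-5 watts.
s = required_signal_power(2e-9, 9600.0)
```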
`
`1.2.2. Channel Encoder
`
`The channel encoder performs all the digital operations needed to prepare
`the source data for modulation. We define the encoder here in a general way to
`encompass a variety of possible operations. In the simplest case, where no
`redundancy is to be added and the transmission in the physical channel is to
`use a binary signaling alphabet, the encoder has no function. If no redundancy
`is to be used but the transmission alphabet is to be nonbinary, the encoder
`performs the necessary binary-to-nonbinary symbol conversion. For example,
`if an 8-ary signaling alphabet is to be used, the encoder accepts source bits in
`successive blocks of three bits each and produces 8-ary symbols at a rate that is
one-third of Rs.
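The binary-to-8-ary symbol conversion just described can be sketched as follows; the function name and list-of-bits representation are illustrative assumptions, not anything defined in the text:

```python
def bits_to_mary_symbols(bits, bits_per_symbol=3):
    """Group a bit stream into M-ary symbols, M = 2**bits_per_symbol.
    For 8-ary signaling, each group of three source bits maps to one
    symbol, so symbols appear at one-third of the source bit rate."""
    assert len(bits) % bits_per_symbol == 0
    symbols = []
    for i in range(0, len(bits), bits_per_symbol):
        value = 0
        for b in bits[i:i + bits_per_symbol]:
            value = (value << 1) | b  # most significant bit first
        symbols.append(value)
    return symbols

# Six bits become two 8-ary symbols: 101 -> 5, 011 -> 3.
syms = bits_to_mary_symbols([1, 0, 1, 0, 1, 1])
```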
`If binary error-control coding is to be used, the encoder accepts information
bits at the rate Rs and adds redundancy, producing encoded data at a higher
rate Rc. For encoding with a block code, the encoder accepts information in
successive k-bit blocks and for each k bits generates a block of n bits, where
n ≥ k. The n-bit block is called a code block or a codeword. Thus the encoder
releases bits at a rate Rc = Rs(n/k). We define the dimensionless ratio
R = k/n as the rate of the code, or simply the code rate. Note that we let n
have values n ≥ k rather than simply n > k, to include uncoded operation as a
special case. If the error-control coding is nonbinary, say M-ary where M = 2^m
`and m is an integer greater than 1, the encoder accepts information bits in
`
`
`
`
`
`blocks of km bits each and produces encoded blocks of n M-ary symbols each.
`Again, the code rate is R = k/n.
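As one concrete illustration of block encoding, the sketch below uses a systematic (7,4) Hamming code (a code family introduced in Chapter 2, not in this section, and these particular parity equations are our own assumed construction): k = 4 information bits yield an n = 7 bit codeword, so R = 4/7 and Rc = (7/4)Rs.

```python
def hamming74_encode(d):
    """Systematic (7,4) Hamming encoder: 4 information bits in,
    7-bit codeword out. Code rate R = k/n = 4/7, so the encoded
    bit rate is Rc = Rs * (n/k) = (7/4) * Rs."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4   # parity checks formed by modulo-2 sums
    p2 = d1 ^ d3 ^ d4
    p3 = d2 ^ d3 ^ d4
    return [d1, d2, d3, d4, p1, p2, p3]

# The information bits appear unchanged, followed by three parity bits.
cw = hamming74_encode([1, 0, 1, 1])
```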
`For encoding with a convolutional code, the encoder accepts information
`bits as a continuous stream and, for a binary code, generates a continuous
`stream of encoded bits at a higher rate. The information stream is fed to the
`encoder b bits at a time, where b is typically in the range 1 to 6. The encoder
`operates on the current b-bit input and some number of immediately preceding
b-bit inputs to produce V output bits, with V > b. Thus the code rate is
`R = b/V. The number of successive b-bit groups of information bits over
`which each encoding step operates is called the constraint length of the code,
which we also denote by k. The encoder for a convolutional code might be
thought of as a form of digital filter with memory extending k − 1 symbols
into the past. A typical binary convolutional code will have b = 1, V = 2 or 3,
`and k in the range 4 to 7, although in special applications constraint lengths in
`the range 30 to 70 might be used.
`To use a convolutional code with nonbinary transmission, each b-bit input
to the encoder results in the generation of V M-ary coded symbols, where
usually M = 2^m and mV > b. A typical rate-1/2 encoder for a 16-ary (m = 4)
`transmission alphabet might have b = 4, V = 2, and k = 2 (four-bit symbols).
`We shall not delve any further into the details of code design at this point,
`our immediate purpose having been served by the introduction of the concepts
`of code rate, block length, and constraint length. As we shall see in later
`discussions, these are the key parameters of a code design, since the reciprocal
`of the code rate gives us a measure of the required bandwidth expansion and
`the code block length or constraint length and rate provide a measure of the
`complexity of the required encoding and (more important) decoding operations.
`In Section 1.4 we shall see that much can be said about the communication
`performance achievable with well-designed codes by dealing with only these
`design parameters.
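The convolutional encoding operation described above can be sketched for the simplest case, b = 1 and V = 2. The particular generators used here (7 and 5 in octal, constraint length k = 3) are a common textbook choice and an assumption on our part, not a code taken from this chapter:

```python
def conv_encode_half_rate(bits):
    """Rate-1/2 convolutional encoder (b = 1, V = 2) with constraint
    length k = 3: each input bit and the k - 1 = 2 preceding bits
    produce two output bits via the octal generators (7, 5):
        c1 = u[t] ^ u[t-1] ^ u[t-2]    (generator 7 = 111)
        c2 = u[t] ^ u[t-2]             (generator 5 = 101)
    """
    s1 = s2 = 0          # shift-register memory (the two past bits)
    out = []
    for u in bits:
        out.append(u ^ s1 ^ s2)   # first coded bit
        out.append(u ^ s2)        # second coded bit
        s1, s2 = u, s1            # shift the register
    return out

coded = conv_encode_half_rate([1, 0, 1])  # 3 bits in, 6 bits out
```

The doubled output length makes the rate R = b/V = 1/2 and the bandwidth expansion 1/R = 2 directly visible.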
`
`1.2.3. Digital Modulator
`
`The function of the modulator is to match the encoder output to the
`transmission channel. The modulator accepts binary or M-ary encoded symbols
`and produces waveforms appropriate to the physical transmission medium,
`which is always analog. In many systems where coding is to be applied, the
`modulation and demodulation techniques and equipment are difficult or
`impossible to modify or replace. In other cases, the modulation technique is
`fixed, but changes in the method of demodulation are feasible. In yet other
`applications, it is possible to design the modulation and demodulation system
`along with the coding technique, and greatly increased latitude is provided for
`overall optimization of the design.
`It has been conventional in much of the communication literature to define
`"the channel" as representing that portion of the communication system that
`the designer is unable or unwilling to change. In following this convention, if
`
`the modulation and demodulation equipment (usually shortened to modem for
`convenience) is not available for modification, those functions would be
`incorporated into the definition of the channel. However, we prefer to depart
`from this convention to the extent that while we shall treat the modem as part
`of the channel for purposes of analysis, we ask the reader to bear in mind that
`the design of an efficient system is best done by designing the modem functions
`in conjunction with the encoding and decoding functions. Section 1.6 addresses
`this point in more detail.
For binary modulation, the modulator simply converts a binary digit, 0 or 1,
to a waveform, say s0(t) or s1(t), respectively, of equal duration T. For M-ary
modulation, the M possible encoded symbols are converted to a corresponding
set of M waveforms s0(t), s1(t), ..., sM−1(t). It is assumed that the reader is
`familiar with the common forms of digital modulation. For binary signaling,
`conventional modulation types include phase shift keying (PSK), differentially
`encoded PSK (DPSK), and frequency shift keying (FSK). Nonbinary forms of
`these basic modulation types are M-ary PSK (MPSK), M-ary DPSK (MDPSK),
`and M-ary FSK (MFSK). With the conventional forms of these modulation
types, the nominal bandwidth of each waveform si(t), i = 0, 1, 2, ..., M − 1, is
approximately 1/T. However, for spread spectrum signaling, as is implied by
the name, the bandwidth of each waveform can be much wider than 1/T,
perhaps by as much as several orders of magnitude. For example, a spread
spectrum version of binary PSK might utilize waveforms s0(t) and s1(t) in
which s0(t) is a sequence of much shorter binary PSK pulses, usually called
chips, and s1(t) is the complement of the chip sequence in s0(t). Spread
`spectrum signaling is used as a multiple-access technique and also as a means
`of protecting a communication system against jamming. For further discussion
`of spread spectrum systems, the reader can refer to a special IEEE Transactions
`issue [70] as well as books on the subject [31, 68].
`We shall return to the modulation and demodulation functions in Section
`1.5.
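To make the chip-sequence description concrete, the sketch below maps encoded bits to antipodal BPSK amplitudes and then spreads each symbol with a short chip code. The 7-chip sequence is a hypothetical example; it is not a design from this chapter:

```python
def bpsk_symbols(bits):
    """Map each encoded bit to an antipodal BPSK amplitude,
    0 -> +1 and 1 -> -1, one value per signaling interval T."""
    return [1 - 2 * b for b in bits]

def spread(symbols, chip_code):
    """Direct-sequence spreading sketch: each BPSK symbol is replaced
    by the chip sequence (symbol +1) or its complement (symbol -1),
    widening the bandwidth by the number of chips per symbol."""
    return [s * c for s in symbols for c in chip_code]

chips = [1, 1, 1, -1, -1, 1, -1]   # a hypothetical length-7 chip code
tx = spread(bpsk_symbols([0, 1]), chips)  # 2 bits -> 14 chips
```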
`
`1.2.4. Transmission Channel
`
`We include in the term transmission channel all the operations required to
`prepare the baseband (low-pass) modulated waveforms for transmission in the
`physical channel, the transmission medium itself, and the receiving operations
`required to bring the signals to the point just prior to demodulation. In this
`way, we incorporate into the transmission channel any practical limitations or
`impairments in the equipment. As a practical matter we are primarily concerned
`here with power and bandwidth limitations, which are reflected in the design of
`the transmitting and receiving equipment.
`Transmitted signal power provides the obvious means in many systems for
`providing a required level of accuracy in received information. However, signal
`power cannot be increased arbitrarily. In telephone networks, for example,
`signal levels are fixed by established industry standards. In radio
`