`
[Figure residue from the exhibit's first page: the block diagram of a typical digital communication system. Recoverable labels include "Digital input," "Information source," "Information sink," "From other sources," "To other destinations," "Bit stream," and "Digital waveform," with a legend marking optional and essential blocks.]
`ERICSSON v. UNILOC
`Ex. 1022 / Page 1 of 64
`
`
`
`
`
`DIGITAL
`
`COMMUNICATIONS
`
`Fundamentals and Applications
`
`
`
`
`BERNARD SKLAR
`
`The Aerospace Corporation, El Segundo, California
`and
`University of California, Los Angeles
`
`
`
`
`
`
`
P T R Prentice Hall
`Englewood Cliffs, New Jersey 07632
`
`
`
`
`
`
`
Library of Congress Cataloging-in-Publication Data

SKLAR, BERNARD (date)
Digital communications.

Bibliography: p.
Includes index.
1. Digital communications. I. Title.
TK5103.S55 1988    621.38'0413    87-4315
ISBN 0-13-211939-0

Editorial/production supervision and
interior design: Reynold Ringer
Cover design: Wanda Lubelska Design
Manufacturing buyers: Gordon Osbourne and Paula Benevento

© 1988 by P T R Prentice-Hall, Inc.
A Simon & Schuster Company
Englewood Cliffs, New Jersey 07632

Prentice-Hall International (UK) Limited, London
Prentice-Hall of Australia Pty. Limited, Sydney
Prentice-Hall Canada Inc., Toronto
Prentice-Hall Hispanoamericana, S.A., Mexico
Prentice-Hall of India Private Limited, New Delhi
Prentice-Hall of Japan, Inc., Tokyo
Simon & Schuster Asia Pte. Ltd., Singapore
Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro

All rights reserved. No part of this book may be
reproduced, in any form or by any means,
without permission in writing from the publisher.

Printed in the United States of America

10

ISBN 0-13-211939-0
`
`
`
`
`
`
Contents

PREFACE xxi

1 SIGNALS AND SPECTRA 1
`49
`
`t!
`
`17
`
`3
`
`ll
`12
`
`1.1
`
`1.2
`
`1.3
`
`1.4
`
`1.5
`
`Digital Communication Signal Processing,
`1.}.1 Why DigitaIF'.
`3
`1.1 .2
`Typical Block Diagram and Transformations.
`1.1.3
`Basic Digital Communication Nomenclature,
`1.1.4 Digital verna- Analog Performance Criteria.
`Clasaification of Signals,
`ll
`i.2.1 Deterministic and Random Signals,
`I22 Periodic and Nonperiodic Signals,
`1.2.3 Analog and Diocrote Signals,
`12
`1.2.4
`Energy and Power Signals,
`1.2.5
`The Unit Impulse Function,
`Spectral Density,
`14
`1.3.}
`Energy Spectral Density,
`1.3.2
`Power Spectral Dem-fry,
`Aulocorrclation,
`17
`17
`1.4.1 Antocorreiatioa afar; Energy Signal,
`L42 Aataaorrelation of a Periodic (Power) Signal,
`Random Signals,
`18
`!.5..'
`Random Variablox,
`1.5 .2
`Random Procesaes,
`
`12
`13
`
`14
`15
`
`18
`20
`
`
`
`
`
`
`23
`
`30
`
`22
`Time Averaging and Ergoaleity,
`1.5.3
`Power Spectral Density of a Random Process.
`1.5.4
`Noise in Communication Systems,
`27
`1.5.5
`Signal Transmission through Linear Systems,
`1.6.1
`Impulse Response,
`31
`1.6.2
`Frequency Transfer Function, 3]
`1.6.3 Dirtortionless Transmission.
`32
`1.6.4‘
`Signals, Circuits, and Spectra.
`Bandwidth of Digital Data,
`41
`l .711
`Basebana’ versus Bandpass,
`I 5'12
`The Bandwidth Dilemma.
`Conclusion,
`46
`References,
`46
`Problems,
`47
`
`3'3
`
`41
`
`43
`
`
`54
`Baseband Systems,
`Formatting Textual Data (Character Coding),
`Messages, Characters, and Symbols,
`55
`2.3.l
`Example of Messages, Characters, and
`Symbols,
`55
`Formatting Analog. Information,
`2.4.1
`The Sampling Theorem,
`2.42 Aliasing.
`66
`2.4 .3
`Signal Interface for a Digital System,
`Sources of Corruption,
`70
`2.5.l
`Sampling and Qaantizing Effects.
`2.5.2
`Channel Efl'ects,
`71
`2.5 .3
`Signal-to-Noise Ratio for Qaantized Pulses,
`Pulse Code Modulation,
`73
`Uniform and Nonuniform Quantization,
`2.7.1
`Statistics of Speech Amplitudes,
`74
`2.7.2 Nonaniform Quantization,
`76
`2.7.3
`Compancling Characteristic-5,
`Baseband Transmission,
`78
`2.8.1 Waveform Representation of Binary Digits.
`2.8.2
`PCM Waveform Types,
`78
`8.?
`2.8.3
`Spectral Attributes of PCM Waveforms,
`Detection of Binary Signals in Gaussian Noise,
`2.9.1 Maximum Likelihood Receiver Structure.
`2.9.2
`The Matched Filter,
`8:?
`2.9.3
`Correlation Realization of the Matched Filter,
`2.9.4 Application of the Matched Filter,
`91
`2.9.5
`Error Probability Performance offltnary
`Signaling,
`92
`
`FORMATTING AND BASEBAND TRANSMISSION
`
`59
`
`59
`
`69
`
`?0
`
`74
`
`9'7
`
`55
`
`72
`
`78
`
`33
`
`8.5
`
`90
`
`
`
`
`95
`
`
`Multilevel Baseband Transmission,
`2.10.1 PCM Ward Size,
`97
`
`98
`IntersymboiInterference,
`2.1]
`
`2.11.1
`Puiee Shaping to Reduce ISL
`2.11.2
`Equalization,
`104
`2.12 Partial Response Signaling,
`2.12.1
`Duobinrzry Signaling,
`2.12.2
`Duobinary Decoding,
`2.12.3
`Preceding.
`108
`2.12.4 Duobinury Equivaient Transfer Function,
`2.12.5 Comparison othnory with Duobinnry
`Signaling,
`111
`2.12.6 Potybtnm'y Signaling,
`2.13 Conclusion,
`112
`References,
`113
`Problems,
`113
`
`
`
`100
`
`106
`106
`1117
`
`112
`
`109
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
3 BANDPASS MODULATION AND DEMODULATION 117

3.1 Why Modulate?, 118
3.2 Signals and Noise, 119
    3.2.1 Noise in Radio Communication Systems, 119
    3.2.2 A Geometric View of Signals and Noise, 120
3.3 Digital Bandpass Modulation Techniques, 127
    3.3.1 Phase Shift Keying, 130
    3.3.2 Frequency Shift Keying, 130
    3.3.3 Amplitude Shift Keying, 131
    3.3.4 Amplitude Phase Keying, 131
    3.3.5 Waveform Amplitude Coefficient, 132
3.4 Detection of Signals in Gaussian Noise, 132
    3.4.1 Decision Regions, 132
    3.4.2 Correlation Receiver, 133
3.5 Coherent Detection, 138
    3.5.1 Coherent Detection of PSK, 138
    3.5.2 Sampled Matched Filter, 139
    3.5.3 Coherent Detection of Multiple Phase Shift Keying, 142
    3.5.4 Coherent Detection of FSK, 145
3.6 Noncoherent Detection, 146
    3.6.1 Detection of Differential PSK, 146
    3.6.2 Binary Differential PSK Example, 148
    3.6.3 Noncoherent Detection of FSK, 150
    3.6.4 Minimum Required Tone Spacing for Noncoherent Orthogonal FSK Signaling, 152
`
`
`
`
`
`
`3.7
`
`155
`Error Performance for Binary Systems,
`3.7.]
`Probability ofBit Errorfor Coherently Detected
`BPSK,
`155
`Probability ofBit Errorfor Coherently Detected
`Dficrentially Encoded PSK,
`160
`Probability of Hit Error for Coherently Detected
`FSK.
`I6}
`
`3.7.2
`
`3.2.3
`
`3.7.4
`
`3.7.5
`3.7.6
`
`3.9
`
`167
`
`176
`
`182
`
`4 COMMUNICATIONS LINK ANALYSIS
`
`Probability ofBit Error for .Noncoherently Detected
`FSK,
`162
`164
`Probability ofBit Error for DPSK,
`Comparison of Bit Error Performance for Various
`Modulation Types,
`166
`167
`3.8 M-ary Signaling and Performance,
`3.8.1
`Ideal Probability ofliit Error Performance,
`3.8.2 M-ary Signaling,
`167
`170
`3.8.3
`Vectorial View of MI-‘SK Signaling,
`3.8.4
`BPSK and QPSK Have the Some Bit Error
`Probability,
`I 71
`172
`Vectorlol View ofMFSK Signaling,
`3.8.5
`Symbol Error Performance for M-ary Systems (M :- 2),
`3.9.!
`Probability ofSymbol Error for MPSK,
`126
`3.9.2
`Probability ofSymbol Error for MFSK,
`177
`3.9.3 Bit Error Probability versus Symbol Error Probability
`for Orthogonal Signals,
`180
`Bit Error Probability versus Symbol Error Probability
`3 .94
`for Multiple Pnase Signaling,
`16]
`_
`3.9.5 Wests ofIntcrsymbol Interference,
`3.10 Conclusion,
`182
`References,
`182
`Problems,
`.183
`
`
`
4.1 What the System Link Budget Tells the System Engineer, 188
4.2 The Channel, 189
    4.2.1 The Concept of Free Space, 189
    4.2.2 Signal-to-Noise Ratio Degradation, 190
    4.2.3 Sources of Signal Loss and Noise, 190
4.3 Received Signal Power and Noise Power, 195
    4.3.1 The Range Equation, 195
    4.3.2 Received Signal Power as a Function of Frequency, 199
    4.3.3 Path Loss Is Frequency Dependent, 200
    4.3.4 Thermal Noise Power, 202
`
`4.4
`
`204
`Link Budget Analysis,
`205
`4.4.1
`Two lib/Nu Values ofinterert,
`4.4.2
`Link Budgets Are Typically Calculated in
`Decibels,
`206
`4.4.3 How Much Little Margin Is Enougit?,
`4.4.4
`Link Availability,
`209
`
`207'
`
`
`
`4.5 Noise Figure, Noise Temperature, and System
`Temperature,
`213
`215
`4.5.l Noise Figure,
`4 .52 Noise Temperature,
`4.5.3
`Line Lot's, 2l6
`4.5.4
`Composite Noise Figure and Composite Noise
`Temperature,
`213
`System Effective Temperature,
`4.5 .5
`Sky Noise Temperature,
`224
`4.5.6
`Sample Link Analysis,
`228
`228
`4.6.}
`Link Budget Details,
`4.6.2
`Receiver Figure-oanerit,
`4.6.5
`Received isotropic Power;
`Satellite Repeaters,
`232
`232
`4.7.] Nonregerterative Repeaters,
`4.7.2 Nonlinear Repeater Amplifiers,
`System TradeAOffs,
`233
`Conclusion,
`239
`References,
`239
`Problems,
`240
`
`215
`
`250
`251
`
`220
`
`236
`
`4.6
`
`4.7
`
`4.8
`4.9
`
5 CHANNEL CODING: PART 1 245

5.1 Waveform Coding, 246
    5.1.1 Antipodal and Orthogonal Signals, 247
    5.1.2 M-ary Signaling, 249
    5.1.3 Waveform Coding with Correlation Detection, 249
    5.1.4 Orthogonal Codes, 251
    5.1.5 Biorthogonal Codes, 255
    5.1.6 Transorthogonal (Simplex) Codes, 257
5.2 Types of Error Control, 258
    5.2.1 Terminal Connectivity, 258
    5.2.2 Automatic Repeat Request, 259
5.3 Structured Sequences, 260
    5.3.1 Channel Models, 261
    5.3.2 Code Rate and Redundancy, 263
    5.3.3 Parity-Check Codes, 265
    5.3.4 Coding Gain, 266
`
`
`
`
`274'
`
`273
`
`280
`28)
`
`288
`
`5.6.6
`
`298
`
`30]
`
`304
`
`269
`Linear Block Codes,
`269
`5.4.1
`Vector Spaces,
`270
`5.4.2
`Vector Subspaces,
`5.4.3
`A (6. 3) Linear Biock Code Example,
`5.4.4 Generator Matrix,
`272
`5.4.5
`Systematic Linear Block Codes,
`5.4.6
`Parity—Check Matrix,
`275
`5.4.7
`Syndrome Testing,
`276
`5 .4 .8
`Error Correction,
`277
`Coding Strength,
`280
`5 .5 .1 Weight and Distance of Binary Vectors.
`5.5.2 Minimum Distance of a Linear Code,
`5.5.3
`Error Detection and Correction,
`281
`5.5.4
`Visualization ofa 6—T'upie Space,
`285
`5.5.5
`Erasure Correction,
`287
`Cyclic Codes,
`288
`5.6.1 Algebraic Structure of Cyclic Codes,
`5.6.2
`Binary Cyclic Code Properties,
`290
`5.6.3
`Encoding in Systematic Form, 290
`5.6.4
`Circuit for Dividing Polynomials,
`292
`5.6.5
`Systematic Encoding with an {n -- ichStage Shift
`Register,
`294
`Error Detection with an (n — k)-Stage Shift
`Register,
`296
`Well-Known Block Codes,
`5.7.1 Hamming Codes,
`298
`5.7.1.7
`Extended Gaiay Code,
`5.7.3
`BC}! Codes,
`301
`5 .74
`Reed-Soiornort Codes.
`Conclusion,
`308
`References,
`308
`Problems,
`309
`
6 CHANNEL CODING: PART 2

6.1 Convolutional Encoding, 315
6.2 Convolutional Encoder Representation, 317
    6.2.1 Connection Representation, 318
    6.2.2 State Representation and the State Diagram, 322
    6.2.3 The Tree Diagram, 324
    6.2.4 The Trellis Diagram, 326
6.3 Formulation of the Convolutional Decoding Problem, 327
    6.3.1 Maximum Likelihood Decoding, 327
    6.3.2 Channel Models: Hard versus Soft Decisions, 329
    6.3.3 The Viterbi Convolutional Decoding Algorithm, 333
`
`
`
`
`
`
`6.4.3
`
`357
`
`344
`
`347
`348
`350
`
`6.3.4 An Example of Viterbi meolutional
`Decoding,
`333
`Peri: Memory and Synchronization,
`6.3.5
`Properties of Convolutional Codes,
`338
`6.4.1 Distance Properties of Convolutionai Codes. 338
`6.4.2
`Systematic and Nonsystentotic Convolutionol
`Codes,
`342
`Catastrophic Error Propagation in Convolutionol
`Codes.
`342
`Performance Bounds for Convolutionai Codes,
`6 .4 .4
`Coding Gain.
`345
`6.4.5
`Henri Known Convolutionat Codes,
`6,46
`Convolutiomtl Code Rate TradenUfl'.
`6.4.7
`Other Convolutional Decoding Algorithms,
`6.5 .1
`Sequential Decoding.
`350
`6.5.2
`Comparisons and Limitations of Vilét'bl' and
`Sequential Decoding.
`354'
`Feedback Decoding,
`355
`6.5.3
`Interleaving and Coneatenated Codes,
`6 .15.!
`Block [met-leaving,
`366
`6.6.2
`Convolutionol Inierienving.
`6.6.3
`Conootenoted Codes,
`365
`
`357
`
`362
`
`.
`
`Coding and Interleeving Applied to the Compact Disc Digital
`Audio System,
`366
`367
`6.7.i
`CIRC Encoding,
`369
`6.7.2
`CIRC Decoding,
`6.7.3
`Interpolation and Mining,
`Conclusion,
`374
`References,
`374
`Problems.
`376
`
`37!
`
`6.4
`
`6.5
`
`6.6
`
`6.7
`
`6.8
`
7 MODULATION AND CODING TRADE-OFFS 381

7.1 Goals of the Communications System Designer, 382
7.2 Error Probability Plane, 383
7.3 Nyquist Minimum Bandwidth, 385
7.4 Shannon-Hartley Capacity Theorem, 385
    7.4.1 Shannon Limit, 387
    7.4.2 Entropy, 389
    7.4.3 Equivocation and Effective Transmission Rate, 391
7.5 Bandwidth-Efficiency Plane, 393
    7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation, 395
    7.5.2 Analogies between Bandwidth-Efficiency and Error Probability Planes, 396
7.6 Power-Limited Systems, 396
`
`
`
`
`399
`
`410
`
`412
`
`397
`Bandwidth-Limited Systems,
`397
`Modulation and Coding Trade-Offs,
`399
`Bandwidth-Efficient Modulations,
`7.9.1
`QPSK and Offset QPSK Signaling,
`7.9.2 Minimum Shift Keying,
`403
`407
`7.9.3 Quadrature Amplitude Modulation,
`Modulation and Coding for Bandlimitecl Channels,
`7.10.1 Commercial Telephone Modems,
`41?
`7.10.2 Signal Constellation Boundaries,
`412
`7.10.3 Higher-Dimensional Signal Constellationx,
`7.10.4 Higher-Density Lattlce Structures.
`415
`7.10.5 Combined-Gain: NwSphere Mapping and Dense
`Lattice,
`416
`7.10.6 Trellis-Coded Modulation. 417
`7.10.? Trellis—Coding Example.
`420
`7.11 Conclusion,
`424
`References,
`425
`Problems,
`426
`
8 SYNCHRONIZATION
Maurice A. King, Jr.

8.1 Synchronization in the Context of Digital Communications, 430
    8.1.1 What It Means to Be Synchronized, 430
    8.1.2 Costs versus Benefits of Synchronization Levels, 432
8.2 Receiver Synchronization, 434
    8.2.1 Coherent Systems: Phase-Locked Loops, 434
    8.2.2 Symbol Synchronization, 453
    8.2.3 Frame Synchronization, 460
8.3 Network Synchronization, 464
    8.3.1 Open-Loop Transmitter Synchronization, 465
    8.3.2 Closed-Loop Transmitter Synchronization, 468
8.4 Conclusion, 470
References, 471
Problems, 472
`
9 MULTIPLEXING AND MULTIPLE ACCESS

9.1 Allocation of the Communications Resource, 476
    9.1.1 Frequency-Division Multiplexing/Multiple Access, 478
    9.1.2 Time-Division Multiplexing/Multiple Access, 484
    9.1.3 Communications Resource Channelization, 487
    9.1.4 Performance Comparison of FDMA and TDMA
    9.1.5 Code-Division Multiple Access, 491
    9.1.6 Space-Division and Polarization-Division Multiple Access, 493
9.2 Multiple Access Communications System and Architecture, 495
    9.2.1 Multiple Access Information Flow, 496
    9.2.2 Demand-Assignment Multiple Access, 497
9.3 Access Algorithms, 498
    9.3.1 ALOHA, 498
    9.3.2 Slotted ALOHA, 500
    9.3.3 Reservation-ALOHA, 502
    9.3.4 Performance Comparison of S-ALOHA and R-ALOHA, 503
    9.3.5 Polling Techniques, 505
9.4 Multiple Access Techniques Employed with INTELSAT, 507
    9.4.1 Preassigned FDM/FM/FDMA or MCPC Operation, 508
    9.4.2 MCPC Modes of Accessing an INTELSAT Satellite, 510
    9.4.3 SPADE Operation, 511
    9.4.4 TDMA in INTELSAT, 516
    9.4.5 Satellite-Switched TDMA in INTELSAT, 523
9.5 Multiple Access Techniques for Local Area Networks, 526
    9.5.1 Carrier-Sense Multiple Access Networks, 526
    9.5.2 Token-Ring Networks, 528
    9.5.3 Performance Comparison of CSMA/CD and Token-Ring Networks, 530
9.6 Conclusion, 531
References, 532
Problems, 533
`
10 SPREAD-SPECTRUM TECHNIQUES 536

10.1 Spread-Spectrum Overview, 537
    10.1.1 The Beneficial Attributes of Spread-Spectrum Systems, 538
    10.1.2 Model for Spread-Spectrum Interference Rejection, 542
    10.1.3 A Catalog of Spreading Techniques, 543
    10.1.4 Historical Background, 544
`
`
`
`
`
`
`549
`
`557
`
`552
`
`559
`560
`
`546
`Pseudonoise Sequences,
`546
`10.2.1
`Randomness Properties,
`547
`10.2.2
`Shift Register Sequences,
`548
`10.2.3
`PN Aarocorreiarion Function,
`Direct-Sequence Spread—Spectrum Systems,
`10.3.1
`Example of Direct Sequencing,
`550
`10.3.2 . Processing Gain and Performance,
`Frequency Hopping Systems,
`555
`10.4.1
`Frequency Hopping Example.
`10.4.2 Robustness,
`553
`10.4 .3
`Frequency Hopping with Diversity,
`10.4.4
`Fast Hopping versus Slow Hopping,
`10.4.5
`FFH/MFSK Demodaiaror,
`562‘
`Synchronization,
`562
`10.5.1 Acquisition,
`563
`10.5.2 Hacking,
`.568
`571
`Spread-Spectrum Applications,
`10.6.1
`Code-Division Mniiipic Access,
`10.6.2 Mnitipath Channels,
`573
`10.6.3
`The Jamming Game,
`574
`Further Jamming Considerations,
`10.7.1
`Eroadband Noise Jamming,
`10.7.2
`Partial-Band Noise Jamming,
`10.7.3 Multiple-Tone Jamming,
`583
`10.7.4
`Pulse Jamming,
`584
`10.7.5 Repeat-Back Jamming,
`10.7.6
`BLADES Syxiem,
`538
`Conclmion,
`589
`References,
`589
`Problems,
`591
`
`Contents
`
`571
`
`579
`579
`.581
`
`586
`
11 SOURCE CODING 595
Fredric J. Harris

11.1 Sources, 596
    11.1.1 Discrete Sources, 596
    11.1.2 Waveform Sources, 601
11.2 Amplitude Quantizing, 603
    11.2.1 Quantizing Noise, 605
    11.2.2 Uniform Quantizing, 608
    11.2.3 Saturation, 611
    11.2.4 Dithering, 614
    11.2.5 Nonuniform Quantizing, 617
11.3 Differential Pulse Code Modulation, 627
    11.3.1 One-Tap Prediction, 630
    11.3.2 N-Tap Prediction, 631
`
`
`
`
`11.3.3
`633
`Delta Moduiotion.
`
`
`11.3.4
`Adaptive Prediction,
`639
`
`
`11.4
`Block Coding,
`643
`
`
`11.4.1
`643
`Vector Quentizing,
`11.4.2
`64,5
`Transform Coding,
`11.4.3
`Quantization for Transform Coding,
`11.4.4
`Subbanti Coding,
`647
`SynthesisiAnalysis Coding,
`11.5.1
`Vocoders,
`650
`11.5.2
`Linear Predictive Coding,
`Redundancy-Reducing Coding,
`11.6.1
`Properties of Codes,
`655
`11.6.2
`Huffman Code,
`65 7
`11.6.3
`Run-Length Codes,
`Conclusion,
`663
`References,
`663
`Problems,
`664
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`“.5
`
`11.6
`
`11.7
`
`647
`
`649
`
`65,3
`653
`
`660
`
`
`
12 ENCRYPTION AND DECRYPTION 668

12.1 Models, Goals, and Early Cipher Systems, 669
    12.1.1 A Model of the Encryption and Decryption Process, 669
    12.1.2 System Goals, 671
    12.1.3 Classic Threats, 671
    12.1.4 Classic Ciphers, 672
12.2 The Secrecy of a Cipher System, 675
    12.2.1 Perfect Secrecy, 675
    12.2.2 Entropy and Equivocation, 678
    12.2.3 Rate of a Language and Redundancy, 680
    12.2.4 Unicity Distance and Ideal Secrecy, 680
12.3 Practical Security, 683
    12.3.1 Confusion and Diffusion, 683
    12.3.2 Substitution, 683
    12.3.3 Permutation, 685
    12.3.4 Product Cipher System, 686
    12.3.5 The Data Encryption Standard, 687
12.4 Stream Encryption, 694
    12.4.1 Example of Key Generation Using a Linear Feedback Shift Register, 694
    12.4.2 Vulnerabilities of Linear Feedback Shift Registers, 695
    12.4.3 Synchronous and Self-Synchronous Stream Encryption Systems, 697
`
`
`
`
`12.5.2
`12.5.3
`1.31.5.4r
`12.5.5
`
`698
`Public Key Cryptosystems,
`12.5.1
`Signature Amhenzicorion Using :1 I’ubiie Key
`Cryptosyrrem,
`699
`700
`A Trapdoor Ono—Way id‘anozirm,
`The Rivesr—Shomir-Adeimon Scheme,
`The Knopsack Problem,
`703
`A Pubiic Key Cryptosysi'em Based on a Trapdoor
`Knapsock,
`705
`.
`Conclusion,
`707
`References,
`707
`Problems,
`708
`
`701
`
A A REVIEW OF FOURIER TECHNIQUES 710

A.1 Signals, Spectra, and Linear Systems, 710
A.2 Fourier Techniques for Linear System Analysis, 711
    A.2.1 Fourier Series Transform, 713
    A.2.2 Spectrum of a Pulse Train, 716
    A.2.3 Fourier Integral Transform, 719
A.3 Fourier Transform Properties, 720
    A.3.1 Time Shifting Property, 720
    A.3.2 Frequency Shifting Property, 720
A.4 Useful Functions, 721
    A.4.1 Unit Impulse Function, 721
    A.4.2 Spectrum of a Sinusoid, 721
A.5 Convolution, 722
    A.5.1 Graphical Illustration of Convolution
    A.5.2 Time Convolution Property, 726
    A.5.3 Frequency Convolution Property, 726
    A.5.4 Convolution of a Function with a Unit Impulse, 728
    A.5.5 Demodulation Application of Convolution, 729
A.6 Tables of Fourier Transforms and Operations, 731
References, 732
`
B FUNDAMENTALS OF STATISTICAL DECISION THEORY 733

B.1 Bayes' Theorem, 733
    B.1.1 Discrete Form of Bayes' Theorem, 734
    B.1.2 Mixed Form of Bayes' Theorem
B.2 Decision Theory, 738
    B.2.1 Components of the Decision Theory Problem, 738
    B.2.2 The Likelihood Ratio Test and the Maximum A Posteriori Criterion, 739
    B.2.3 The Maximum Likelihood Criterion, 739
B.3 Signal Detection Example, 740
    B.3.1 The Maximum Likelihood Binary Decision, 740
    B.3.2 Probability of Bit Error, 741
References, 743

C RESPONSE OF CORRELATORS TO WHITE NOISE

D OFTEN USED IDENTITIES

E A CONVOLUTIONAL ENCODER/DECODER COMPUTER PROGRAM

LIST OF SYMBOLS
`
`
`
`
CHAPTER 7

Modulation and Coding
Trade-Offs

[Chapter-opener figure: the block diagram of a typical digital communication system, with this chapter's topics highlighted. Recoverable labels include "From other sources," "Multiple access," "Synchronization," "Bit stream," "Digital waveform," "Information sink," and "To other destinations," with a legend marking optional and essential blocks.]
`
`
7.1 GOALS OF THE COMMUNICATIONS SYSTEM DESIGNER

System trade-offs are fundamental to all digital communication designs. The goals
of the designer are (1) to maximize transmission bit rate, R; (2) to minimize prob-
ability of bit error, PB; (3) to minimize required power, or equivalently, to min-
imize required bit energy to noise power spectral density, Eb/N0; (4) to minimize
required system bandwidth, W; (5) to maximize system utilization, that is, to
provide reliable service for a maximum number of users with minimum delay and
with maximum resistance to interference; and (6) to minimize system complexity,
computational load, and system cost. A good system designer seeks to achieve
all these goals simultaneously. However, goals 1 and 2 are clearly in conflict with
goals 3 and 4; they call for simultaneously maximizing R, while minimizing PB,
Eb/N0, and W. There are several constraints and theoretical limitations that ne-
cessitate the trading off of any one system requirement with each of the others.
Some of the constraints are:

The Nyquist theoretical minimum bandwidth requirement
The Shannon-Hartley capacity theorem (and the Shannon limit)
Government regulations (e.g., frequency allocations)
Technological limitations (e.g., state-of-the-art components)
Other system requirements (e.g., satellite orbits)

Some of the realizable modulation and coding trade-offs can best be viewed
382    Modulation and Coding Trade-Offs    Chap. 7
`
`
`
`
`
`
`
`
`as a change in operating point on one of two performance planes. These planes
`will be referred to as the error probability plane and the bandwidth efficiency
`plane; they are described in the following sections.
`
`
`
`
`
`
`7.2 ERROR PROBABILITY PLANE
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
Figure 7.1 illustrates the family of PB versus Eb/N0 curves for the coherent de-
tection of orthogonal signaling (Figure 7.1a) and multiple phase signaling (Figure
7.1b). For signaling schemes that process k bits at a time, the signaling is called
M-ary (see Section 3.8). The modulator uses one of its M = 2^k waveforms to
represent each k-bit sequence, where M is the size of the symbol set. Figure 7.1a
illustrates the potential bit error improvement with orthogonal signaling as k (or
M) is increased. For orthogonal signal sets, such as frequency shift keying (FSK)
modulation, increasing the size of the symbol set can provide an improvement in
PB, or a reduction in the Eb/N0 required, at the cost of increased bandwidth.
Figure 7.1b illustrates potential bit error degradation with nonorthogonal signaling
as k (or M) increases. For nonorthogonal signal sets, such as multiple phase shift
keying (MPSK) modulation, increasing the size of the symbol set can reduce the
bandwidth requirement, but at the cost of a degraded PB, or an increased Eb/N0
requirement. We shall refer to these families of curves (Figure 7.1a or b) as error
probability performance curves, and to the plane on which they are plotted as an
error probability plane. Such a plane describes the locus of operating points avail-
able for a particular type of modulation and coding. For a given system information
rate, each curve in the plane can be associated with a different fixed minimum
required bandwidth; therefore, the set of curves can be termed equibandwidth
curves. As the curves move in the direction of the ordinate, the required trans-
mission bandwidth increases; as the curves move in the opposite direction, the
required bandwidth decreases. Once a modulation and coding scheme and an
available Eb/N0 are determined, system operation is characterized by a particular
point in the error probability plane. Possible trade-offs can be viewed as changes
in the operating point on one of the curves or as changes in the operating point
from one curve to another curve of the family. These trade-offs are seen in Figure
7.1a and b as changes in the system operating point in the direction shown by the
arrows. Movement of the operating point along line 1, between points a and b,
can be viewed as trading off PB for Eb/N0 performance (with W fixed). Similarly,
movement along line 2, between points c and d, is seen as trading PB for W
performance (with Eb/N0 fixed). Finally, movement along line 3, between points
e and f, illustrates trading W for Eb/N0 performance (with PB fixed). Movement
along line 1 is effected by increasing or decreasing the available Eb/N0. This can
be achieved, for example, by increasing transmitter power, which means that the
trade-off might be accomplished simply by "turning a knob," even after the sys-
tem is configured. However, the other trade-offs (movement along line 2 or line
3) involve some change in the system modulation or coding scheme, and therefore
need to be accomplished during the system design phase.
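The opposite behaviors of orthogonal and MPSK signaling described above can be sketched numerically. The snippet below is ours, not the book's: it uses the standard union bound for coherently detected orthogonal M-ary signaling and the usual Gray-coded approximation for MPSK (both developed in Chapter 3), and the function names are our own.

```python
import math

def q(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_orthogonal(k, ebno_db):
    """Union-bound bit error probability for coherently detected
    orthogonal M-ary signaling (e.g., MFSK), with M = 2**k."""
    ebno = 10 ** (ebno_db / 10)
    m = 2 ** k
    pe = (m - 1) * q(math.sqrt(k * ebno))        # symbol error union bound
    return (2 ** (k - 1) / (m - 1)) * pe          # symbol-to-bit conversion

def pb_mpsk(k, ebno_db):
    """Approximate bit error probability for coherent Gray-coded MPSK,
    with M = 2**k and k >= 2."""
    ebno = 10 ** (ebno_db / 10)
    m = 2 ** k
    pe = 2 * q(math.sqrt(2 * k * ebno) * math.sin(math.pi / m))
    return pe / k
```

At a fixed Eb/N0 of, say, 8 dB, `pb_orthogonal` falls as k grows while `pb_mpsk` rises, mirroring the movement between the equibandwidth curves of Figures 7.1a and b.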
`
`
`
Sec. 7.2    Error Probability Plane
`
`
`
`
`
`
`
`
`
`
[Figure 7.1 Bit error probability PB versus Eb/N0 (dB) for coherently detected M-ary signaling: (a) orthogonal signaling; (b) multiple phase signaling.]
`
`
`
7.3 NYQUIST MINIMUM BANDWIDTH

Every realizable system having some nonideal filtering will suffer from intersym-
bol interference (ISI): the tail of one pulse spilling over into adjacent symbol
intervals so as to interfere with correct detection. Nyquist [1] showed that, in
theory, Rs symbols per second could be detected without ISI in an Rs/2 hertz
minimum bandwidth (Nyquist bandwidth); this is a basic theoretical constraint,
limiting the designer's goal to expend as little bandwidth as possible (see Section
2.11). In practice, Rs hertz is typically required for the transmission of Rs symbols
per second. In other words, typical digital communication throughput, without
ISI, is limited to 1 symbol/s per hertz. The modulation or coding system assigns
to each symbol, of its set of M symbols, a k-bit meaning, where M = 2^k. For a
signaling scheme with a fixed bandwidth, such as MPSK, as k increases, the
allowable data rate, R, increases, and hence the bandwidth efficiency, R/W, mea-
sured in bits per second per hertz, also increases. For example, movement along
line 3, from point e to point f in Figure 7.1b, represents trading Eb/N0 for a reduced
bandwidth requirement. In other words, with the same system bandwidth one can
transmit at an increased data rate, hence at an increased R/W.

7.4 SHANNON-HARTLEY CAPACITY THEOREM

Shannon [2] showed that the system capacity, C, of a channel perturbed by ad-
ditive white Gaussian noise (AWGN) is a function of the average received signal
power, S, the average noise power, N, and the bandwidth, W. The capacity re-
lationship (Shannon-Hartley theorem) can be stated as

    C = W log2 (1 + S/N)    (7.1)
`
`
`
`
`
When W is in hertz and the logarithm is taken to the base 2, as shown, the capacity
is given in bits/s. It is theoretically possible to transmit information over such a
channel at any rate, R, where R ≤ C, with an arbitrarily small error probability
by using a sufficiently complicated coding scheme. For an information rate
R > C, it is not possible to find a code that can achieve an arbitrarily small error
probability. Shannon's work showed that the values of S, N, and W set a limit
on transmission rate, not on error probability. Shannon [3] used Equation (7.1)
to graphically exhibit a bound for the achievable performance of practical systems.
This plot, shown in Figure 7.2, gives the normalized channel capacity C/W in
bits/s/Hz as a function of the channel signal-to-noise ratio (SNR). A related plot,
shown in Figure 7.3, indicates the normalized channel bandwidth W/C in Hz/bits/s
as a function of SNR in the channel. Figure 7.3 is sometimes used to illustrate
the power-bandwidth trade-off inherent in the ideal channel. However, it is not
a pure trade-off [4] because the detected noise power is proportional to bandwidth:

    N = N0W    (7.2)

Sec. 7.4    Shannon-Hartley Capacity Theorem    385
`
`
`
`
`
`
[Figure 7.2 Normalized channel capacity versus channel SNR. The plot of C/W in bits/s/Hz against SNR in dB separates an unattainable region from the region of practical systems.]
`
Substituting Equation (7.2) into Equation (7.1) and rearranging terms yields

    C/W = log2 [1 + S/(N0W)]    (7.3)

For the case where transmission bit rate is equal to channel capacity, R = C,
we can use the identity presented in Equation (3.94) to write

    S/(N0W) = (Eb/N0)(C/W)    (7.4)

Hence we can modify Equation (7.3) as follows:

    C/W = log2 [1 + (Eb/N0)(C/W)]    (7.5)

    2^(C/W) = 1 + (Eb/N0)(C/W)

    Eb/N0 = (W/C)(2^(C/W) - 1)    (7.6)

Figure 7.4 is a plot of W/C versus Eb/N0 in accordance with Equation (7.6).

386    Modulation and Coding Trade-Offs    Chap. 7
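Equation (7.6) can be evaluated directly to see how the required Eb/N0 grows with spectral efficiency C/W. A small sketch (function names are ours):

```python
import math

def ebno_required(c_over_w):
    """Equation (7.6) with R = C: Eb/N0 = (W/C)(2**(C/W) - 1), linear."""
    return (2 ** c_over_w - 1) / c_over_w

def to_db(x):
    """Convert a linear power ratio to decibels."""
    return 10 * math.log10(x)

# Required Eb/N0 in dB for several spectral efficiencies (bits/s/Hz).
for eff in (8, 4, 1, 0.5, 0.01):
    print(eff, round(to_db(ebno_required(eff)), 2))
```

As C/W shrinks toward zero the required Eb/N0 approaches -1.59 dB, the limiting value discussed in Section 7.4.1.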
`
`'
`
`'
`
`l
`Ii
`
`
`
`
`
`
`
`
`
`
`
[Figure 7.3 Normalized channel bandwidth versus channel SNR. The plot of W/C in Hz/bits/s against SNR in dB separates an unattainable region from the region of practical systems.]

The asymptotic behavior of this curve as C/W → 0 (or W/C → ∞) is discussed in
the next section.

7.4.1 Shannon Limit

There exists a limiting value of Eb/N0 below which there can be no error-free
communication at any information rate. Using the identity

    lim (x → 0) (1 + x)^(1/x) = e

we can calculate the limiting value of Eb/N0 as follows. Let

    x = (Eb/N0)(C/W)

Then, from Equation (7.5),

    C/W = x log2 (1 + x)^(1/x)

    1 = (Eb/N0) log2 (1 + x)^(1/x)

In the limit, as C/W → 0, we get

    Eb/N0 = 1/(log2 e) = log_e 2 = 0.693    (7.7)

or, in decibels, -1.59 dB.

[Figure 7.4 Normalized channel bandwidth versus channel Eb/N0. The plot of W/C in Hz/bits/s against Eb/N0 in dB approaches the asymptote log_e 2 = -1.59 dB, separating the unattainable region from the region of practical systems.]

This value of Eb/N0 is called the Shannon limit. On Figure 7.1a the Shannon limit
is the PB versus Eb/N0 curve corresponding to k → ∞. The curve is discontinuous,
going from a value of PB = 1/2 to PB = 0 at Eb/N0 = -1.59 dB. It is not possible
in practice to reach the Shannon limit, because as k increases without bound, the
bandwidth requirement and the implementation complexity increase without
bound. Shannon's work provided a theoretical proof for the existence of codes
that could improve the PB performance, or reduce the Eb/N0 required, from the
levels of the uncoded binary modulation schemes to levels approaching the limiting
curve. For a bit error probability of 10^-5, binary phase shift keying (BPSK) mod-
ulation requires an Eb/N0 of 9.6 dB (the optimum uncoded binary modulation).
Therefore, Shannon's work promised the existence of a theoretical performance
`
`
`
`
`
`
`
`
`
improvement of 11.2 dB over the performance of optimum uncoded binary mod-
ulation, through the use of coding techniques. Today, most of that promised im-
provement (approximately 7 dB) is realizable [5]. Optimum system design can
best be described as a search for rational compromises or trade-offs among the
various constraints and conflicting goals. The modulation and coding trade-off,
that is, the selection of modulation and coding techniques to make the best use
of transmitter power and channel bandwidth, is important, since there are strong
incentives to reduce the cost of generating power and to conserve the radio
spectrum.
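The limiting value in Equation (7.7) can be checked numerically. The snippet below (ours) evaluates Eb/N0 = 1/log2((1 + x)^(1/x)) for shrinking x and converts the limit log_e 2 to decibels:

```python
import math

def ebno_at(x):
    """Eb/N0 = 1 / log2((1 + x)**(1/x)); as x = (Eb/N0)(C/W) -> 0
    this tends to 1/log2(e) = ln 2."""
    return 1.0 / math.log2((1.0 + x) ** (1.0 / x))

for x in (1.0, 0.1, 0.001):
    print(x, round(ebno_at(x), 4))       # tends toward 0.6931

shannon_limit_db = 10 * math.log10(math.log(2))
print(round(shannon_limit_db, 2))        # -1.59 dB, the Shannon limit
```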
`
`
`
`
`
7.4.2 Entropy

To design a communications system with a specified message handling capability,
we need a metric for measuring the information content to be transmitted. Shan-
non [2] developed such a metric, H, called the entropy of the message source
(having n possible outputs). Entropy is defined as the average amount of infor-
mation per source output and is expressed by

    H = - Σ (i = 1 to n) p_i log2 p_i    bits/source output    (7.8)

where p_i is the probability of the ith output and Σ p_i = 1. In the case of a binary
message or a source having only two possible outputs, with probabilities p and
q = (1 - p), the entropy is written

    H = -(p log2 p + q log2 q)    (7.9)

and is plotted versus p in Figure 7.5.
The
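Equations (7.8) and (7.9) translate directly into code; a short sketch (ours):

```python
import math

def entropy(probs):
    """Equation (7.8): H = -sum(p_i log2 p_i), in bits per source output.
    probs holds the source output probabilities (summing to 1)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def binary_entropy(p):
    """Equation (7.9): H = -(p log2 p + q log2 q), with q = 1 - p."""
    return entropy([p, 1 - p])

print(binary_entropy(0.5))                # 1.0 bit, the maximum
print(round(binary_entropy(0.1), 3))      # 0.469
print(entropy([0.25, 0.25, 0.25, 0.25]))  # 2.0 bits for four equiprobable outputs
```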