`
DIGITAL COMMUNICATIONS
Fundamentals and Applications

BERNARD SKLAR
The Aerospace Corporation, El Segundo, California
and
University of California, Los Angeles
`
`‘
`
`
`
`PRENTICE HALL
`
`Englewood Cliffs, New Jersey 07632
`
`. ....
`
`__
`
`001
`
`___
`
`ONE-E-WAY 2004
`Apple v. One-E-Way
`IPR2021-00283
`
`001
`
`ONE-E-WAY 2004
`Apple v. One-E-Way
`IPR2021-00283
`
`
`
`T____
`
Library of Congress Cataloging-in-Publication Data

SKLAR, BERNARD (date)
Digital communications.

Bibliography: p.
Includes index.
1. Digital communications. I. Title.
TK5103.7.S55 1988 621.38'0413 87-1316
ISBN 0-13-211939-0
`
`
`
`Editorial/production supervision and
`interior design: Reynold Rieger
`Cover design: Wanda Lubelska Design
`Manufacturing buyers: Gordon Osbourne and Paula Benevento
`
`
`© 1988 by Prentice Hall
`A Division of Simon & Schuster
`Englewood Cliffs, New Jersey 07632
`
`All rights reserved. No part of this book may be
`reproduced, in any form or by any means,
`without permission in writing from the publisher.
`
`Printed in the United States of America
`
`10987654321
`
ISBN 0-13-211939-0
`
`Prentice-Hall International (UK) Limited, London
`Prentice-Hall of Australia Pty. Limited, Sydney
`Prentice-Hall Canada Inc., Toronto
`Prentice-Hall Hispanoamericana, S.A., Mexico
`Prentice-Hall of India Private Limited, New Delhi
`Prentice-Hall of Japan, Inc., Tokyo
`Simon & Schuster Asia Pte. Ltd., Singapore
`Editora Prentice-Hall do Brasil, Ltda., Rio de Janeiro
`
`
`
`
`
`
`
`
`
`
`Contents
`
PREFACE, xxi

1 SIGNALS AND SPECTRA, 1

1.1 Digital Communication Signal Processing, 3
    1.1.1 Why Digital?, 3
    1.1.2 Typical Block Diagram and Transformations, 4
    1.1.3 Basic Digital Communication Nomenclature, 9
    1.1.4 Digital versus Analog Performance Criteria, 11
1.2 Classification of Signals, 11
    1.2.1 Deterministic and Random Signals, 11
    1.2.2 Periodic and Nonperiodic Signals, 12
    1.2.3 Analog and Discrete Signals, 12
    1.2.4 Energy and Power Signals, 12
    1.2.5 The Unit Impulse Function, 13
1.3 Spectral Density, 14
    1.3.1 Energy Spectral Density, 14
    1.3.2 Power Spectral Density, 15
1.4 Autocorrelation, 17
    1.4.1 Autocorrelation of an Energy Signal, 17
    1.4.2 Autocorrelation of a Periodic (Power) Signal, 17
1.5 Random Signals, 18
    1.5.1 Random Variables, 18
    1.5.2 Random Processes, 20
    1.5.3 Time Averaging and Ergodicity, 22
    1.5.4 Power Spectral Density of a Random Process, 23
    1.5.5 Noise in Communication Systems, 27
1.6 Signal Transmission through Linear Systems, 30
    1.6.1 Impulse Response, 31
    1.6.2 Frequency Transfer Function, 31
    1.6.3 Distortionless Transmission, 32
    1.6.4 Signals, Circuits, and Spectra, 38
1.7 Bandwidth of Digital Data, 41
    1.7.1 Baseband versus Bandpass, 41
    1.7.2 The Bandwidth Dilemma, 43
1.8 Conclusion, 46
References, 46
Problems, 47
`
2 FORMATTING AND BASEBAND TRANSMISSION, 51

2.1 Baseband Systems, 54
2.2 Formatting Textual Data (Character Coding), 55
2.3 Messages, Characters, and Symbols, 55
    2.3.1 Example of Messages, Characters, and Symbols, 55
2.4 Formatting Analog Information, 59
    2.4.1 The Sampling Theorem, 59
    2.4.2 Aliasing, 66
    2.4.3 Signal Interface for a Digital System, 69
2.5 Sources of Corruption, 70
    2.5.1 Sampling and Quantizing Effects, 70
    2.5.2 Channel Effects, 71
    2.5.3 Signal-to-Noise Ratio for Quantized Pulses, 72
2.6 Pulse Code Modulation, 73
2.7 Uniform and Nonuniform Quantization, 74
    2.7.1 Statistics of Speech Amplitudes, 74
    2.7.2 Nonuniform Quantization, 76
    2.7.3 Companding Characteristics, 77
2.8 Baseband Transmission, 78
    2.8.1 Waveform Representation of Binary Digits, 78
    2.8.2 PCM Waveform Types, 78
    2.8.3 Spectral Attributes of PCM Waveforms, 82
2.9 Detection of Binary Signals in Gaussian Noise, 83
    2.9.1 Maximum Likelihood Receiver Structure, 85
    2.9.2 The Matched Filter, 88
    2.9.3 Correlation Realization of the Matched Filter, 90
    2.9.4 Application of the Matched Filter, 91
    2.9.5 Error Probability Performance of Binary Signaling, 92
`
`
`
`2.10
`
`2.11
`
`2.12
`
`2.13
`
`Multilevel Baseband Transmission,
`2.10.1 PCM Word Size,
`97
`
`95
`
`100
`
`98
`Intersymbol Interference,
`2.11.1 Pulse Shaping to Reduce ISI,
`2.11.2 Equalization,
`104
`Partial Response Signaling,
`2.12.1 Duobinary Signaling,
`2.12.2 Duobinary Decoding,
`2.12.3 Precoding,
`108
`2.12.4 Duobinary Equivalent Transfer Function,
`2.12.5 Comparison ofBinary with Duobinary
`Signaling,
`111
`2.12.6 Polybinary Signaling,
`Conclusion,
`112
`References,
`113
`Problems,
`113
`
`106
`106
`107
`
`112
`
`109
`
3 BANDPASS MODULATION AND DEMODULATION, 117

3.1 Why Modulate?, 118
3.2 Signals and Noise, 119
    3.2.1 Noise in Radio Communication Systems, 119
    3.2.2 A Geometric View of Signals and Noise, 120
3.3 Digital Bandpass Modulation Techniques, 127
    3.3.1 Phase Shift Keying, 130
    3.3.2 Frequency Shift Keying, 130
    3.3.3 Amplitude Shift Keying, 131
    3.3.4 Amplitude Phase Keying, 131
    3.3.5 Waveform Amplitude Coefficient, 132
3.4 Detection of Signals in Gaussian Noise, 132
    3.4.1 Decision Regions, 132
    3.4.2 Correlation Receiver, 133
3.5 Coherent Detection, 138
    3.5.1 Coherent Detection of PSK, 138
    3.5.2 Sampled Matched Filter, 139
    3.5.3 Coherent Detection of Multiple Phase Shift Keying, 142
    3.5.4 Coherent Detection of FSK, 145
3.6 Noncoherent Detection, 146
    3.6.1 Detection of Differential PSK, 146
    3.6.2 Binary Differential PSK Example, 148
    3.6.3 Noncoherent Detection of FSK, 150
    3.6.4 Minimum Required Tone Spacing for Noncoherent Orthogonal FSK Signaling, 152
`
`
`
`3.7
`
`3.8
`
`3.9
`
`3.7.2
`
`3.7.3
`
`3.7.4
`
`3.7.5
`3.7.6
`
`3.8.5
`
`3.9.4
`
`3.9.5
`
`155
`Error Performance for Binary Systems,
`3.7.1
`Probability of Bit Error for Coherently Detected
`BPSK,
`155
`Probability of Bit Error for Coherently Detected
`Differentially Encoded PSK,
`160
`Probability of Bit Error for Coherently Detected
`FSK,
`16]
`Probability of Bit Error for Noncoherently Detected
`FSK,
`162
`I64
`Probability of Bit Error for DPSK,
`Comparison of Bit Error Performance for Various
`Modulation Types,
`166
`167
`M—ary Signaling and Performance,
`3.8.1
`Ideal Probability of Bit Error Performance,
`3.8.2
`M-ary Signaling,
`167
`3.8.3
`170
`Vectorial View ofMPSK Signaling,
`3.8.4
`BPSK and QPSK Have the Same Bit Error
`Probability,
`I71
`172
`Vectorial View of MFSK Signaling,
`Symbol Error Performance for M-ary Systems (M > 2),
`3.9.1
`Probability of Symbol Error for MPSK,
`176
`3.9.2
`Probability of Symbol Error for MFSK,
`177
`3.9.3
`Bit Error Probability versus Symbol Error Probability
`for Orthogonal Signals,
`180
`Bit Error Probability versus Symbol Error Probability
`for Multiple Phase Signaling,
`I81
`Efiects of Intersymbol Interference,
`3.10 Conclusion,
`182
`References,
`182
`Problems,
`183
`
`167
`
`176
`
`182
`
4 COMMUNICATIONS LINK ANALYSIS, 187

4.1 What the System Link Budget Tells the System Engineer, 188
4.2 The Channel, 189
    4.2.1 The Concept of Free Space, 189
    4.2.2 Signal-to-Noise Ratio Degradation, 190
    4.2.3 Sources of Signal Loss and Noise, 190
4.3 Received Signal Power and Noise Power, 195
    4.3.1 The Range Equation, 195
    4.3.2 Received Signal Power as a Function of Frequency, 199
    4.3.3 Path Loss Is Frequency Dependent, 200
    4.3.4 Thermal Noise Power, 202
4.4 Link Budget Analysis, 204
    4.4.1 Two Eb/N0 Values of Interest, 205
    4.4.2 Link Budgets Are Typically Calculated in Decibels, 206
    4.4.3 How Much Link Margin Is Enough?, 207
    4.4.4 Link Availability, 209
4.5 Noise Figure, Noise Temperature, and System Temperature, 213
    4.5.1 Noise Figure, 213
    4.5.2 Noise Temperature, 215
    4.5.3 Line Loss, 216
    4.5.4 Composite Noise Figure and Composite Noise Temperature, 218
    4.5.5 System Effective Temperature, 220
    4.5.6 Sky Noise Temperature, 224
4.6 Sample Link Analysis, 228
    4.6.1 Link Budget Details, 228
    4.6.2 Receiver Figure-of-Merit, 230
    4.6.3 Received Isotropic Power, 231
4.7 Satellite Repeaters, 232
    4.7.1 Nonregenerative Repeaters, 232
    4.7.2 Nonlinear Repeater Amplifiers, 236
4.8 System Trade-Offs, 238
4.9 Conclusion, 239
References, 239
Problems, 240
`
`
`
5 CHANNEL CODING: PART 1, 245

5.1 Waveform Coding, 246
    5.1.1 Antipodal and Orthogonal Signals, 247
    5.1.2 M-ary Signaling, 249
    5.1.3 Waveform Coding with Correlation Detection, 249
    5.1.4 Orthogonal Codes, 251
    5.1.5 Biorthogonal Codes, 255
    5.1.6 Transorthogonal (Simplex) Codes, 257
5.2 Types of Error Control, 258
    5.2.1 Terminal Connectivity, 258
    5.2.2 Automatic Repeat Request, 259
5.3 Structured Sequences, 260
    5.3.1 Channel Models, 261
    5.3.2 Code Rate and Redundancy, 263
    5.3.3 Parity-Check Codes, 263
    5.3.4 Coding Gain, 266
5.4 Linear Block Codes, 269
    5.4.1 Vector Spaces, 269
    5.4.2 Vector Subspaces, 270
    5.4.3 A (6, 3) Linear Block Code Example, 271
    5.4.4 Generator Matrix, 272
    5.4.5 Systematic Linear Block Codes, 273
    5.4.6 Parity-Check Matrix, 275
    5.4.7 Syndrome Testing, 276
    5.4.8 Error Correction, 277
5.5 Coding Strength, 280
    5.5.1 Weight and Distance of Binary Vectors, 280
    5.5.2 Minimum Distance of a Linear Code, 281
    5.5.3 Error Detection and Correction, 281
    5.5.4 Visualization of a 6-Tuple Space, 285
    5.5.5 Erasure Correction, 287
5.6 Cyclic Codes, 288
    5.6.1 Algebraic Structure of Cyclic Codes, 288
    5.6.2 Binary Cyclic Code Properties, 290
    5.6.3 Encoding in Systematic Form, 290
    5.6.4 Circuit for Dividing Polynomials, 292
    5.6.5 Systematic Encoding with an (n - k)-Stage Shift Register, 294
    5.6.6 Error Detection with an (n - k)-Stage Shift Register, 296
5.7 Well-Known Block Codes, 298
    5.7.1 Hamming Codes, 298
    5.7.2 Extended Golay Code, 301
    5.7.3 BCH Codes, 301
    5.7.4 Reed–Solomon Codes, 304
5.8 Conclusion, 308
References, 308
Problems, 309
`
`
`
6 CHANNEL CODING: PART 2, 314

6.1 Convolutional Encoding, 315
6.2 Convolutional Encoder Representation, 317
    6.2.1 Connection Representation, 318
    6.2.2 State Representation and the State Diagram, 322
    6.2.3 The Tree Diagram, 324
    6.2.4 The Trellis Diagram, 326
6.3 Formulation of the Convolutional Decoding Problem, 327
    6.3.1 Maximum Likelihood Decoding, 327
    6.3.2 Channel Models: Hard versus Soft Decisions, 329
    6.3.3 The Viterbi Convolutional Decoding Algorithm, 333
    6.3.4 An Example of Viterbi Convolutional Decoding, 333
    6.3.5 Path Memory and Synchronization, 337
6.4 Properties of Convolutional Codes, 338
    6.4.1 Distance Properties of Convolutional Codes, 338
    6.4.2 Systematic and Nonsystematic Convolutional Codes, 342
    6.4.3 Catastrophic Error Propagation in Convolutional Codes, 342
    6.4.4 Performance Bounds for Convolutional Codes, 344
    6.4.5 Coding Gain, 345
    6.4.6 Best Known Convolutional Codes, 347
    6.4.7 Convolutional Code Rate Trade-Off, 348
6.5 Other Convolutional Decoding Algorithms, 350
    6.5.1 Sequential Decoding, 350
    6.5.2 Comparisons and Limitations of Viterbi and Sequential Decoding, 354
    6.5.3 Feedback Decoding, 355
6.6 Interleaving and Concatenated Codes, 357
    6.6.1 Block Interleaving, 360
    6.6.2 Convolutional Interleaving, 362
    6.6.3 Concatenated Codes, 365
6.7 Coding and Interleaving Applied to the Compact Disc Digital Audio System, 366
    6.7.1 CIRC Encoding, 367
    6.7.2 CIRC Decoding, 369
    6.7.3 Interpolation and Muting, 371
6.8 Conclusion, 374
References, 374
Problems, 376
`
7 MODULATION AND CODING TRADE-OFFS, 381

7.1 Goals of the Communications System Designer, 382
7.2 Error Probability Plane, 383
7.3 Nyquist Minimum Bandwidth, 385
7.4 Shannon–Hartley Capacity Theorem, 385
    7.4.1 Shannon Limit, 387
    7.4.2 Entropy, 389
    7.4.3 Equivocation and Effective Transmission Rate, 391
7.5 Bandwidth-Efficiency Plane, 393
    7.5.1 Bandwidth Efficiency of MPSK and MFSK Modulation, 395
    7.5.2 Analogies between Bandwidth-Efficiency and Error Probability Planes, 396
7.6 Power-Limited Systems, 396
7.7 Bandwidth-Limited Systems, 397
7.8 Modulation and Coding Trade-Offs, 397
7.9 Bandwidth-Efficient Modulations, 399
    7.9.1 QPSK and Offset QPSK Signaling, 399
    7.9.2 Minimum Shift Keying, 403
    7.9.3 Quadrature Amplitude Modulation, 407
7.10 Modulation and Coding for Bandlimited Channels, 410
    7.10.1 Commercial Telephone Modems, 411
    7.10.2 Signal Constellation Boundaries, 412
    7.10.3 Higher-Dimensional Signal Constellations, 412
    7.10.4 Higher-Density Lattice Structures, 415
    7.10.5 Combined Gain: N-Sphere Mapping and Dense Lattice, 416
    7.10.6 Trellis-Coded Modulation, 417
    7.10.7 Trellis-Coding Example, 420
7.11 Conclusion, 424
References, 425
Problems, 426
`
8 SYNCHRONIZATION, 429
Maurice A. King, Jr.

8.1 Synchronization in the Context of Digital Communications, 430
    8.1.1 What It Means to Be Synchronized, 430
    8.1.2 Costs versus Benefits of Synchronization Levels, 432
8.2 Receiver Synchronization, 434
    8.2.1 Coherent Systems: Phase-Locked Loops, 434
    8.2.2 Symbol Synchronization, 453
    8.2.3 Frame Synchronization, 460
8.3 Network Synchronization, 464
    8.3.1 Open-Loop Transmitter Synchronization, 465
    8.3.2 Closed-Loop Transmitter Synchronization, 468
8.4 Conclusion, 470
References, 471
Problems, 472
`
9 MULTIPLEXING AND MULTIPLE ACCESS, 475

9.1 Allocation of the Communications Resource, 476
    9.1.1 Frequency-Division Multiplexing/Multiple Access, 478
    9.1.2 Time-Division Multiplexing/Multiple Access, 484
    9.1.3 Communications Resource Channelization, 487
    9.1.4 Performance Comparison of FDMA and TDMA, 488
    9.1.5 Code-Division Multiple Access, 491
    9.1.6 Space-Division and Polarization-Division Multiple Access, 493
9.2 Multiple Access Communications System and Architecture, 495
    9.2.1 Multiple Access Information Flow, 496
    9.2.2 Demand-Assignment Multiple Access, 497
9.3 Access Algorithms, 498
    9.3.1 ALOHA, 498
    9.3.2 Slotted ALOHA, 500
    9.3.3 Reservation-ALOHA, 502
    9.3.4 Performance Comparison of S-ALOHA and R-ALOHA, 503
    9.3.5 Polling Techniques, 505
9.4 Multiple Access Techniques Employed with INTELSAT, 507
    9.4.1 Preassigned FDM/FM/FDMA or MCPC Operation, 508
    9.4.2 MCPC Modes of Accessing an INTELSAT Satellite, 510
    9.4.3 SPADE Operation, 511
    9.4.4 TDMA in INTELSAT, 516
    9.4.5 Satellite-Switched TDMA in INTELSAT, 523
9.5 Multiple Access Techniques for Local Area Networks, 526
    9.5.1 Carrier-Sense Multiple Access Networks, 526
    9.5.2 Token-Ring Networks, 528
    9.5.3 Performance Comparison of CSMA/CD and Token-Ring Networks, 530
9.6 Conclusion, 531
References, 532
Problems, 533
`
10 SPREAD-SPECTRUM TECHNIQUES, 536

10.1 Spread-Spectrum Overview, 537
    10.1.1 The Beneficial Attributes of Spread-Spectrum Systems, 538
    10.1.2 Model for Spread-Spectrum Interference Rejection, 542
    10.1.3 A Catalog of Spreading Techniques, 543
    10.1.4 Historical Background, 544
10.2 Pseudonoise Sequences, 546
    10.2.1 Randomness Properties, 546
    10.2.2 Shift Register Sequences, 547
    10.2.3 PN Autocorrelation Function, 548
10.3 Direct-Sequence Spread-Spectrum Systems, 549
    10.3.1 Example of Direct Sequencing, 550
    10.3.2 Processing Gain and Performance, 552
10.4 Frequency Hopping Systems, 555
    10.4.1 Frequency Hopping Example, 557
    10.4.2 Robustness, 558
    10.4.3 Frequency Hopping with Diversity, 559
    10.4.4 Fast Hopping versus Slow Hopping, 560
    10.4.5 FFH/MFSK Demodulator, 562
10.5 Synchronization, 562
    10.5.1 Acquisition, 563
    10.5.2 Tracking, 568
10.6 Spread-Spectrum Applications, 571
    10.6.1 Code-Division Multiple Access, 571
    10.6.2 Multipath Channels, 573
    10.6.3 The Jamming Game, 574
10.7 Further Jamming Considerations, 579
    10.7.1 Broadband Noise Jamming, 579
    10.7.2 Partial-Band Noise Jamming, 581
    10.7.3 Multiple-Tone Jamming, 583
    10.7.4 Pulse Jamming, 584
    10.7.5 Repeat-Back Jamming, 586
    10.7.6 BLADES System, 588
10.8 Conclusion, 589
References, 589
Problems, 591
`
11 SOURCE CODING, 595
Fredric J. Harris

11.1 Sources, 596
    11.1.1 Discrete Sources, 596
    11.1.2 Waveform Sources, 601
11.2 Amplitude Quantizing, 603
    11.2.1 Quantizing Noise, 605
    11.2.2 Uniform Quantizing, 608
    11.2.3 Saturation, 611
    11.2.4 Dithering, 614
    11.2.5 Nonuniform Quantizing, 617
11.3 Differential Pulse Code Modulation, 627
    11.3.1 One-Tap Prediction, 630
    11.3.2 N-Tap Prediction, 631
    11.3.3 Delta Modulation, 633
    11.3.4 Adaptive Prediction, 639
11.4 Block Coding, 643
    11.4.1 Vector Quantizing, 643
    11.4.2 Transform Coding, 645
    11.4.3 Quantization for Transform Coding, 647
    11.4.4 Subband Coding, 647
11.5 Synthesis/Analysis Coding, 649
    11.5.1 Vocoders, 650
    11.5.2 Linear Predictive Coding, 653
11.6 Redundancy-Reducing Coding, 653
    11.6.1 Properties of Codes, 655
    11.6.2 Huffman Code, 657
    11.6.3 Run-Length Codes, 660
11.7 Conclusion, 663
References, 663
Problems, 664
`
`
`
12 ENCRYPTION AND DECRYPTION, 668

12.1 Models, Goals, and Early Cipher Systems, 669
    12.1.1 A Model of the Encryption and Decryption Process, 669
    12.1.2 System Goals, 671
    12.1.3 Classic Threats, 671
    12.1.4 Classic Ciphers, 672
12.2 The Secrecy of a Cipher System, 675
    12.2.1 Perfect Secrecy, 675
    12.2.2 Entropy and Equivocation, 678
    12.2.3 Rate of a Language and Redundancy, 680
    12.2.4 Unicity Distance and Ideal Secrecy, 680
12.3 Practical Security, 683
    12.3.1 Confusion and Diffusion, 683
    12.3.2 Substitution, 683
    12.3.3 Permutation, 685
    12.3.4 Product Cipher System, 686
    12.3.5 The Data Encryption Standard, 687
12.4 Stream Encryption, 694
    12.4.1 Example of Key Generation Using a Linear Feedback Shift Register, 694
    12.4.2 Vulnerabilities of Linear Feedback Shift Registers, 695
    12.4.3 Synchronous and Self-Synchronous Stream Encryption Systems, 697
12.5 Public Key Cryptosystems, 698
    12.5.1 Signature Authentication Using a Public Key Cryptosystem, 699
    12.5.2 A Trapdoor One-Way Function, 700
    12.5.3 The Rivest–Shamir–Adelman Scheme, 701
    12.5.4 The Knapsack Problem, 703
    12.5.5 A Public Key Cryptosystem Based on a Trapdoor Knapsack, 705
12.6 Conclusion, 707
References, 707
Problems, 708
`
A A REVIEW OF FOURIER TECHNIQUES, 710

A.1 Signals, Spectra, and Linear Systems, 710
A.2 Fourier Techniques for Linear System Analysis, 711
    A.2.1 Fourier Series Transform, 713
    A.2.2 Spectrum of a Pulse Train, 716
    A.2.3 Fourier Integral Transform, 719
A.3 Fourier Transform Properties, 720
    A.3.1 Time Shifting Property, 720
    A.3.2 Frequency Shifting Property, 720
A.4 Useful Functions, 721
    A.4.1 Unit Impulse Function, 721
    A.4.2 Spectrum of a Sinusoid, 721
A.5 Convolution, 722
    A.5.1 Graphical Illustration of Convolution, 726
    A.5.2 Time Convolution Property, 726
    A.5.3 Frequency Convolution Property, 726
    A.5.4 Convolution of a Function with a Unit Impulse, 728
    A.5.5 Demodulation Application of Convolution, 729
A.6 Tables of Fourier Transforms and Operations, 731
References, 732

B FUNDAMENTALS OF STATISTICAL DECISION THEORY, 733

B.1 Bayes' Theorem, 733
    B.1.1 Discrete Form of Bayes' Theorem, 734
    B.1.2 Mixed Form of Bayes' Theorem, 736
B.2 Decision Theory, 738
    B.2.1 Components of the Decision Theory Problem, 738
    B.2.2 The Likelihood Ratio Test and the Maximum A Posteriori Criterion, 739
    B.2.3 The Maximum Likelihood Criterion, 739
B.3 Signal Detection Example, 740
    B.3.1 The Maximum Likelihood Binary Decision, 740
    B.3.2 Probability of Bit Error, 741
References, 743

C RESPONSE OF CORRELATORS TO WHITE NOISE, 744

D OFTEN-USED IDENTITIES, 746

E A CONVOLUTIONAL ENCODER/DECODER COMPUTER PROGRAM, 748

F LIST OF SYMBOLS, 759

INDEX, 765
`
`
`
`
`
CHAPTER 6

Channel Coding:
Part 2

[Chapter-opening figure: the digital communication system block diagram of Figure 1.2, with the channel-coding blocks marked "Essential" for this chapter; the diagram runs from the information source (with inputs from other sources and synchronization) through the digital waveform and channel to the information sink (and on to other destinations).]
`
`
`
`
`
`There are two major categories of channel codes: block and convolutional. Chap-
`ter 5 deals mainly with block coding. This chapter deals mainly with convolutional
`coding. A linear block code is described by two integers, n and k, and a generator
`matrix or polynomial. The integer k is the number of data bits that form an input
`to a block encoder. The integer n is the total number of bits in the associated
codeword out of the encoder. A characteristic of linear block codes is that each codeword n-tuple is uniquely determined by the input message k-tuple. The ratio
`k/n is called the rate of the code—a measure of the amount of added redundancy.
`A convolutional code is described by three integers, n, k, and K, where the ratio
`k/n has the same code rate significance (information per coded bit) that it has for
`block codes; however, n does not define a block or codeword length as it does
`for block codes. The integer K is a parameter known as the constraint length; it
`represents the number of k-tuple stages in the encoding shift register. An important
`characteristic of convolutional codes, different from block codes, is that the en-
`coder has memory—the n-tuple emitted by the convolutional encoding procedure
`is not only a function of an input k-tuple, but is also a function of the previous
`K — 1 input k-tuples. In practice, n and k are small integers and K is varied to
`control the redundancy.
`
`6.1 CONVOLUTIONAL ENCODING
`
In Figure 1.2 we presented a typical block diagram of a digital communication system. A version of this functional diagram, focusing primarily on the convolutional encode/decode and modulate/demodulate portions of the communication link, is shown in Figure 6.1. The input message source is denoted by the sequence m = m1, m2, . . . , mi, . . . , where each mi represents a binary digit (bit). We shall assume that each mi is equally likely to be a one or a zero, and independent from digit to digit. Being independent, the bit sequence lacks any redundancy; that is, knowledge about bit mi gives no information about mj (i ≠ j). The encoder transforms each sequence m into a unique codeword sequence U = G(m). Even though the sequence m uniquely defines the sequence U, a key feature of convolutional codes is that a given k-tuple within m does not uniquely define its associated n-tuple within U since the encoding of each k-tuple is not only a function of that k-tuple but is also a function of the K - 1 input k-tuples that precede it. The sequence U can be partitioned into a sequence of branch words: U = U1, U2, . . . , Ui, . . . . Each branch word Ui is made up of binary code symbols, often called channel symbols, channel bits, or coded bits; unlike the input message bits, the code symbols are not independent.
In a typical communication application, the codeword sequence U modulates a waveform s(t). During transmission, the waveform s(t) is corrupted by noise, resulting in a received waveform ŝ(t) and a demodulated sequence Z = Z1, Z2, . . . , Zi, . . . , as indicated in Figure 6.1. The task of the decoder is to produce an estimate m̂ = m̂1, m̂2, . . . , m̂i, . . . , of the original message sequence, using the received sequence Z together with a priori knowledge of the encoding procedure.
[Figure 6.1 Encode/decode and modulate/demodulate portions of a communication link. The information source emits the input sequence m = m1, m2, . . . , mi, . . . ; the convolutional encoder produces the codeword sequence U = G(m) = U1, U2, . . . , Ui, . . . , where Ui = u1i, . . . , uji, . . . , uni; the modulated waveform passes through an AWGN channel; the demodulator delivers Z = Z1, Z2, . . . , Zi, . . . , where Zi = z1i, . . . , zji, . . . , zni and zji is the jth demodulator output symbol of branch word Zi; the decoder passes the estimate m̂ to the information sink.]

[Figure 6.2 Convolutional encoder with constraint length K and rate k/n. The input sequence m (shifted in k bits at a time) drives a kK-stage shift register; n modulo-2 adders tap the register stages, and their sampled outputs form the codeword sequence U = U1, U2, . . . , Ui, . . . , where Ui = u1i, . . . , uji, . . . , uni is the ith codeword branch and uji is the jth binary code symbol of branch word Ui.]

A general convolutional encoder, shown in Figure 6.2, is mechanized with a kK-stage shift register and n modulo-2 adders, where K is the constraint length. The constraint length represents the number of k-bit shifts over which a single information bit can influence the encoder output. At each unit of time, k bits are shifted into the first k stages of the register; all bits in the register are shifted k stages to the right, and the outputs of the n adders are sequentially sampled to
`
`yield the binary code symbols or coded bits. These code symbols are then used
`by the modulator to specify the waveforms to be transmitted over the channel.
`Since there are n coded bits for each input group of k message bits, the code rate
`is k/n message bit per coded bit, where k < n.
We shall consider only the most commonly used binary convolutional encoders for which k = 1, that is, those encoders in which the message bits are shifted into the encoder one bit at a time, although generalization to higher-order alphabets is straightforward [1, 2]. For the k = 1 encoder, at the ith unit of time, message bit mi is shifted into the first shift register stage; all previous bits in the register are shifted one stage to the right, and as in the more general case, the outputs of the n adders are sequentially sampled and transmitted. Since there are n coded bits for each message bit, the code rate is 1/n. The n code symbols occurring at time ti comprise the ith branch word, Ui = u1i, u2i, . . . , uni, where uji (j = 1, 2, . . . , n) is the jth code symbol belonging to the ith branch word. Note that for the rate 1/n encoder, the kK-stage shift register can be referred to simply as a K-stage register, and the constraint length K, which was expressed in units of k-tuple stages, can be referred to as constraint length in units of bits.
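The mechanics of this rate 1/n, k = 1 encoder can be sketched in a few lines of Python. This is our own illustration rather than code from the text: the function names are invented, and the adder connections are written as tap vectors (a 1 meaning the stage feeds the adder), anticipating the connection vectors of Section 6.2.

```python
def branch_word(register, taps_per_adder):
    """Sample each modulo-2 adder once, yielding one n-symbol branch word.

    register[0] holds the newest message bit; each tap vector has a 1
    wherever the corresponding register stage feeds that adder.
    """
    return [sum(t * r for t, r in zip(taps, register)) % 2
            for taps in taps_per_adder]

def conv_encode(message_bits, taps_per_adder):
    """Rate 1/n convolutional encoder; K = length of each tap vector.

    K - 1 zeros are appended so the tail of the message is flushed
    through the full length of the register.
    """
    K = len(taps_per_adder[0])
    register = [0] * K
    output = []
    for bit in message_bits + [0] * (K - 1):
        register = [bit] + register[:-1]   # shift the newest bit into stage 1
        output.extend(branch_word(register, taps_per_adder))
    return output
```

For the K = 3, rate-1/2 encoder of the next section (taps 1 1 1 and 1 0 1), conv_encode([1, 0, 1], [[1, 1, 1], [1, 0, 1]]) returns the ten code symbols 1 1 1 0 0 0 1 0 1 1.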
`
`6.2 CONVOLUTIONAL ENCODER REPRESENTATION
`
To describe a convolutional code, one needs to characterize the encoding function G(m), so that given an input sequence m, one can readily compute the output sequence U. Several methods are used for representing a convolutional encoder, the most popular being the connection pictorial, connection vectors or polynomials, the state diagram, the tree diagram, and the trellis diagram. They are each described below.
`
`
`
`
`6.2.1 Connection Representation
`
We shall use the convolutional encoder, shown in Figure 6.3, as a model for discussing convolutional encoders. The figure illustrates a (2, 1) convolutional encoder with constraint length K = 3. There are n = 2 modulo-2 adders; thus the code rate k/n is 1/2. At each input bit time, a bit is shifted into the leftmost stage and the bits in the register are shifted one position to the right. Next, the output switch samples the output of each modulo-2 adder (i.e., first the upper adder, then the lower adder), thus forming the code symbol pair making up the branch word associated with the bit just inputted. The sampling is repeated for each inputted bit. The choice of connections between the adders and the stages of the register gives rise to the characteristics of the code. Any change in the choice of connections results in a different code. The connections are, of course, not chosen or changed arbitrarily. The problem of choosing connections to yield good distance properties is complicated and has not been solved in general; however, good codes have been found by computer search for all constraint lengths less than about 20 [3-5].
Unlike a block code that has a fixed word length n, a convolutional code has no particular block size. However, convolutional codes are often forced into a block structure by periodic truncation. This requires a number of zero bits to be appended to the end of the input data sequence, for the purpose of clearing or flushing the encoding shift register of the data bits. Since the added zeros carry no information, the effective code rate falls below k/n. To keep the code rate close to k/n, the truncation period is generally made as long as practical.
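The rate loss from flushing is easy to quantify. A message of L input k-tuples produces L + K - 1 branch words of n symbols each, so the effective rate is kL/[n(L + K - 1)]. A back-of-the-envelope sketch (the function name is ours; the symbols follow the text):

```python
def effective_rate(L, n, K, k=1):
    """Effective code rate when K - 1 flushing k-tuples of zeros are
    appended to a message of L input k-tuples (nominal rate is k/n)."""
    return (k * L) / (n * (L + K - 1))

# The longer the truncation period L, the closer the rate stays to k/n = 1/2:
print(effective_rate(L=3, n=2, K=3))     # 0.3
print(effective_rate(L=100, n=2, K=3))   # 0.49019...
```

With the short three-bit message used in the example below, flushing costs a full 40% of the nominal rate; with a truncation period of 100 bits the loss is under 2%.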
One way to represent the encoder is to specify a set of n connection vectors, one for each of the n modulo-2 adders. Each vector has dimension K and describes the connection of the encoding shift register to that modulo-2 adder. A one in the ith position of the vector indicates that the corresponding stage in the shift register is connected to the modulo-2 adder, and a zero in a given position indicates that no connection exists between the stage and the modulo-2 adder. For the encoder example in Figure 6.3, we can write the connection vector g1 for the upper connections and g2 for the lower connections as follows:

g1 = 1 1 1
g2 = 1 0 1
`
Consider that a message vector m = 1 0 1 is convolutionally encoded with the encoder shown in Figure 6.3. The three message bits are inputted, one at a time, at times t1, t2, and t3, as shown in Figure 6.4. Subsequently, (K - 1) = 2 zeros are inputted at times t4 and t5 to flush the register and thus ensure that the tail end of the message is shifted the full length of the register. The output sequence is seen to be 1 1 1 0 0 0 1 0 1 1, where the leftmost symbol represents the earliest transmission. The entire output sequence, including the code symbols that result from flushing, is needed to decode the message. To flush the message from the encoder requires one less zero than the number of stages in the register, or K - 1 flush bits. Another zero input is shown at time t6, for the reader to verify that the corresponding branch word output is then 00.
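The Figure 6.4 sequence of register states can be reproduced step by step. This short Python trace (our own, with the taps written as the connection vectors g1 and g2 introduced above) lists the branch words for the three message bits and the two flushing zeros:

```python
g1, g2 = (1, 1, 1), (1, 0, 1)      # upper- and lower-adder connections
register = [0, 0, 0]               # K = 3 stages, initially cleared
output = []
for bit in [1, 0, 1, 0, 0]:        # m = 1 0 1, then K - 1 = 2 flush zeros
    register = [bit] + register[:-1]                            # shift in at the left
    u1 = g1[0]*register[0] ^ g1[1]*register[1] ^ g1[2]*register[2]
    u2 = g2[0]*register[0] ^ g2[1]*register[1] ^ g2[2]*register[2]
    output += [u1, u2]             # one branch word per input bit
print(output)    # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1]
```

Feeding one more zero at t6 leaves the register at 0 0 0, so the next branch word is 00, as the text states.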
`
`318
`
`Channel Coding: Part 2
`
`Chap. 6
`
`
`
`020
`
`ONE-E-WAY 2004
`Apple v. One-E-Way
`IPR2021-00283
`
`020
`
`ONE-E-WAY 2004
`Apple v. One-E-Way
`IPR2021-00283
`
`
`
`
`
`
`
[Figure 6.3 Convolutional encoder (rate 1/2, K = 3). The input bit m enters a three-stage shift register; the upper modulo-2 adder produces the first code symbol u1, and the lower adder produces the second code symbol u2, of each output branch word.]
`
`6.2.1.1 Impulse Response of the Encoder
`
`We can approach the encoder in terms of its impulse response—that is, the
`response of the encoder to a single “one” bit that moves through it. Consider
`the contents of the register in Figure 6.3 as a one moves through it.
`
Register contents      Branch word
                       u1    u2
1 0 0                  1     1
0 1 0                  1     0
0 0 1                  1     1

Input sequence:  1 0 0
Output sequence: 11 10 11
`
`The output sequence for the input “one” is called the impulse response of the
`encoder. Then for the input sequence m = 1 0 1, the output may be found by
`the superposition or the linear addition of the time-shifted input “impulses” as
`follows:
`
Input m          Output
1                11  10  11
0                    00  00  00
1                        11  10  11
-----------------------------------
Modulo-2 sum:    11  10  00  10  11
`
`Observe that this is the same output as that obtained in Figure 6.4, demonstrating
`that convolutional codes are linear—just as the linear block codes of Chapter 5.
`It is from this property of generating the output by the linear addition of time-
`shifted impulses, or the convolution of the input sequence with the impulse re-
`sponse of the encoder, that we derive the name convolutional encoder. Often,
`
`