Introduction to Trellis-Coded Modulation with Applications

Ezio Biglieri
Politecnico di Torino, Italy

Dariush Divsalar
Jet Propulsion Laboratory
California Institute of Technology

Peter J. McLane
Queen's University, Canada

Marvin K. Simon
Jet Propulsion Laboratory
California Institute of Technology

Macmillan Publishing Company
New York

Maxwell Macmillan Canada, Inc.
Toronto

Maxwell Macmillan International
New York  Oxford  Singapore  Sydney

Editor: John Griffin
Production Supervisor: Elaine W. Wetterau
Production Manager: Pamela Kennedy Oborski
Text Designer: Eileen Burke
Cover Designer: Doris Chen
Cover Illustration: Marvin Simon
Illustrations: Publication Services

This book was set in 10/12 Times Roman by Publication Services, printed and bound by Quinn-Woodbine, Inc. The jacket was printed by Phoenix Color Corporation.

Copyright © 1991 by Macmillan Publishing Company, a division of Macmillan, Inc.

Printed in the United States of America

All rights reserved. No part of this book may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher.

Macmillan Publishing Company
866 Third Avenue, New York, New York 10022

Maxwell Macmillan Canada, Inc.
1200 Eglinton Avenue E.
Suite 200
Don Mills, Ontario M3C 3N1

Library of Congress Cataloging in Publication Data

Introduction to trellis-coded modulation, with applications / Ezio Biglieri ... [et al.].
    p. cm.
    Includes bibliographical references and index.
    ISBN 0-02-309965-3
    1. Digital modulation.  2. Modulation (Electronics)  3. Coding theory.  I. Biglieri, Ezio.
    TK5106.7.I58  1991
    621.381'536-dc20          91-10991
                              CIP

Printing: 1 2 3 4 5 6 7 8 9    Year: 1 2 3 4 5 6 7 8 9 0

Contents

CHAPTER 1
Introduction
1.1   Digital communications structure
      1.1.1  Source encoder
      1.1.2  Channel encoder
      1.1.3  Modulator
      1.1.4  The communications channel
      1.1.5  The receiver
1.2   Discrete memoryless channels
      1.2.1  Uncoded baseband communication
      1.2.2  Bit error probability: Binary signals
1.3   Entropy
      1.3.1  Discrete sources
      1.3.2  Continuous source
1.4   Channel capacity
      1.4.1  Related information theory measures
      1.4.2  Channel capacity
      1.4.3  Symmetric channels
      1.4.4  Kuhn-Tucker conditions
      1.4.5  Bandlimited Gaussian channel
1.5   Shannon's two theorems
1.6   Computational cutoff rate

CHAPTER 2
Error-correcting codes
2.1   Introduction
2.2   Parity check codes
2.3   Matrix description: Error-correcting codes
2.4   Algebraic concepts
      2.4.1  Primitive element
      2.4.2  Extension field GF(q)
2.5   Cyclic codes
2.6   BCH codes
      2.6.1  Binary BCH codes
      2.6.2  Nonbinary BCH codes
      2.6.3  Reed-Solomon codes
2.7   Convolutional codes
      2.7.1  FSM and trellis representation
      2.7.2  G(D) and H(D) matrices
      2.7.3  Viterbi algorithm
      2.7.4  Error-state diagrams

CHAPTER 3
TCM: Combined modulation and coding
3.1   Introducing TCM
3.2   The need for TCM
3.3   Fundamentals of TCM
      3.3.1  Uncoded transmission
3.4   The concept of TCM
3.5   Trellis representation
3.6   Some examples of TCM schemes
3.7   Set partitioning
3.8   Representations for TCM
      3.8.1  Ungerboeck representation
      3.8.2  Analytical representation
3.9   Decoding TCM
      3.9.1  Definition of branch metric
      3.9.2  The Viterbi algorithm
3.10  Bibliographical notes
Appendix 3A: Orthogonal expansion of the function f

CHAPTER 4
Performance evaluation
4.1   Upper bound to error probability
      4.1.1  The error-state diagram
      4.1.2  The transfer function bound
      4.1.3  Consideration of different channels
4.2   Lower bound to error probability
4.3   Examples
4.4   Computation of d_free
      4.4.1  Using the error-state diagram
      4.4.2  A computational algorithm
      4.4.3  The product-trellis algorithm
4.5   Lower bounds to the achievable d_free
      4.5.1  A simple lower bound
4.6   Sphere-packing upper bounds
      4.6.1  A universal upper bound
4.7   Other sphere-packing bounds
      4.7.1  Constant-energy signal constellations
      4.7.2  Rectangular signal constellations
      4.7.3  An upper bound for PSK signals
      4.7.4  An asymptotic upper bound
4.8   Power density spectrum

CHAPTER 5
One- and two-dimensional modulations for TCM
5.1   Introduction
5.2   Step-by-step design procedure
      5.2.1  Derivation of the analytic description
      5.2.2  Design rules and procedure
5.3   One-dimensional examples
5.4   Two-dimensional examples
5.5   Trellis code performance and realization
5.6   Trellis coding with asymmetric modulations
      5.6.1  Analysis and design
      5.6.2  Best rate 1/2 codes combined with asymmetric 4-PSK (A4-PSK)
      5.6.3  Best rate 2/3 codes combined with asymmetric 8-PSK (A8-PSK)
      5.6.4  Best rate 3/4 codes combined with asymmetric 16-PSK (A16-PSK)
      5.6.5  Best rate 1/2 codes combined with asymmetric 4-AM
      5.6.6  Trellis-coded asymmetric 16-QAM

CHAPTER 6
Multidimensional modulations
6.1   Lattices
      6.1.1  Some examples of lattices
      6.1.2  Structural characteristics of lattices
      6.1.3  Examples of lattice constellations for TCM
      6.1.4  Partition of lattices
      6.1.5  Calderbank-Sloane TCM schemes based on lattices
6.2   Group alphabets
      6.2.1  Set partitioning of a GA
6.3   Ginzburg construction
      6.3.1  Set partitioning
      6.3.2  Designing a TCM scheme
6.4   Wei construction
      6.4.1  A design example
6.5   Trellis-encoded CPM
      6.5.1  Review of CPM
      6.5.2  Parameter selection
      6.5.3  Designing the TCM scheme
      6.5.4  Performance examples
Appendix 6A: Examples of group alphabets
      6A.1  Permutation alphabets
      6A.2  Cyclic-group alphabets
Appendix 6B: Decomposition of the CPM modulator
      6B.1  Phase trellis and tilted phase trellis
      6B.2  Decomposing the CPM modulator

CHAPTER 7
Multiple TCM
7.1   Two-state MTCM
      7.1.1  Mapping procedure for two-state MTCM
      7.1.2  Evaluation of minimum squared free distance
7.2   Generalized MTCM
      7.2.1  Set-partitioning method for generalized MTCM
      7.2.2  Set mapping and evaluation of squared free distance
7.3   Analytical representation of MTCM
7.4   Bit error probability performance
7.5   Computational cutoff rate and MTCM performance
7.6   Complexity considerations
7.7   Concluding remarks

CHAPTER 8
Rotationally invariant trellis codes
8.1   Introduction
      8.1.1  Rotational invariance
      8.1.2  Rotational invariant code
      8.1.3  Design rules
      8.1.4  Design procedure
      8.1.5  16-point examples
8.2   Generation: Rotationally invariant codes
      8.2.1  Eight-point example
      8.2.2  Sixteen-point examples
8.3   Multidimensional RIC
      8.3.1  Linear examples
      8.3.2  Nonlinear example
8.4   Bit error rate performance
      8.4.1  Nonlinear codes
      8.4.2  Linear codes

CHAPTER 9
Analysis and performance of TCM for fading channels
9.1   Coherent detection of trellis-coded M-PSK on a slow-fading Rician channel
      9.1.1  Channel model
      9.1.2  System model
      9.1.3  Upper bound on pairwise error probability
      9.1.4  Upper bound on bit error probability
      9.1.5  Simulation results
9.2   Differentially coherent detection of trellis-coded M-PSK on a slow-fading Rician channel
      9.2.1  System model
      9.2.2  Analysis model
      9.2.3  The maximum-likelihood metric for trellis-coded M-DPSK
      9.2.4  Upper bound on pairwise error probability
      9.2.5  Upper bound on average bit error probability
      9.2.6  Simulation results
9.3   Differentially coherent detection of trellis-coded M-PSK on a fast-fading Rician channel
      9.3.1  Analysis model
      9.3.2  Upper bound on pairwise error probability
      9.3.3  Upper bound on average bit error probability
      9.3.4  Characterization of the autocorrelation and power spectral density of the fading process
      9.3.5  Simulation results
9.4   Asymptotic results
      9.4.1  An example
      9.4.2  No interleaving/deinterleaving
9.5   Further discussion
Appendix 9A: Proof that d, whose square is defined in (9.19), satisfies the conditions for a distance metric
Appendix 9B: Derivation of the maximum-likelihood branch metric for trellis-coded M-DPSK with ideal channel state information

CHAPTER 10
Design of TCM for fading channels
10.1  Multiple trellis-code design for fading channels
10.2  Set partitioning for multiple trellis-coded M-PSK on the fading channel
      10.2.1  The first approach
      10.2.2  The second approach
10.3  Design of Ungerboeck-type codes (unit multiplicity) for fading channels
10.4  Comparison of error probability performance with computational cutoff rate
10.5  Simulation results
10.6  Further discussion

CHAPTER 11
Analysis and design of TCM for other practical channels
11.1  Intersymbol interference channels
      11.1.1  Model
      11.1.2  LMS equalization
      11.1.3  Trellis-code performance
11.2  Channels with phase offset
      11.2.1  Upper bound on the average bit error probability performance of TCM
      11.2.2  Carrier synchronization loop statistical model and average pairwise error probability evaluation
      11.2.3  The case of binary convolutional coded BPSK modulation
      11.2.4  A TCM example
      11.2.5  Concluding remarks
11.3  TCM over satellite channels
      11.3.1  A modem for land mobile satellite communications
      11.3.2  An SCPC modem
11.4  Trellis codes for partial response channels
      11.4.1  Trellis codes for the binary (1 - D) channel
      11.4.2  Convolutional codes with precoder for (1 - D) channels
11.5  Trellis coding for optical channels
      11.5.1  Signal sets with amplitude and pulse-width constraints
      11.5.2  Trellis-coded modulation for optical channels
11.6  TCM with prescribed convolutional codes
      11.6.1  Application to M-PSK modulation
      11.6.2  Application to M-AM and QAM modulations

APPENDIX A
Fading channel models

APPENDIX B
Computational techniques for transfer functions

APPENDIX C
Computer programs: Design technique

Index

Introduction

We begin by outlining the structure of digital communication systems and end the first chapter with an outline of Shannon's information theory, which includes some aspects of elementary channel and source coding. The second chapter contains the rest of our material on traditional coding theory: a consideration of BCH codes and some material on convolutional codes. The main intent of the first two chapters is to supply the reader with the necessary background in information theory and error correction coding to understand the theory of trellis-coded modulation. Another goal is to provide the essentials of information theory and coding where the book is used for a single course on these subjects that contains a significant component on trellis-coded modulation.

1.1 Digital Communications Structure

Digital communications systems have a definite structure, and knowledge of this structure is helpful in understanding the role of coding and modulation systems. The simplest structure, shown in Fig. 1.1, is for a point-to-point communication system, not for a communications network or a point-to-multipoint system. We have a transmitter, Tx, a receiver, Rx, and a channel that links the transmitter and receiver.

The transmitter, channel, and receiver shown in Fig. 1.1 can be further subdivided. Let us begin by considering a subdivision of the transmitter structure (Fig. 1.2). We have an information source that we will take as binary, which means that its output is a sequence in which the only elements are 1s and 0s. The source is followed by a source encoder, a channel encoder, and a modulator. We now describe the function of each of these entities. Note that if the source is analog (for example, a speech or video source), we shall assume that it has been digitized.

FIGURE 1.1  Simplest digital communications structure.

FIGURE 1.2  Transmitter block diagram. (Source encoder, channel encoder, and modulator, feeding the channel.)

1.1.1 Source Encoder

A good, or desirable, source is random; such sources have maximum information. Clearly, if a 1 and a 0 have an equal probability of occurrence, knowledge of the source output provides a maximum amount of information. For instance, if the source nearly always outputs a 1, its output can be predicted and knowledge of the output provides very little information. Usually, sources are not random and contain significant amounts of redundancy. For example, in a video image, neighboring picture elements are usually strongly related. The role of a source encoder is to randomize the source. A measure of randomness is entropy, a concept borrowed from thermodynamics. The function of a source encoder, then, is as illustrated in Fig. 1.3.

Why do we want the source to be encoded to a disordered state? The answer lies in the utilization of one of the scarce resources of the telecommunications problem: the channel. We should not waste the scarce resources of the channel by sending predictable quantities over this link between the receiver and the transmitter. The channel should only be used to carry the unpredictable information from the source, that is, the output from the source encoder.
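
As a rough numerical illustration of entropy as a measure of randomness (the formal definitions come in Section 1.3), the following sketch computes the entropy of a binary source for a few values of P(1); the function name and the sample probabilities are illustrative choices, not from the text.

```python
import math

def binary_entropy(p1: float) -> float:
    """Entropy, in bits per source symbol, of a binary source with P(1) = p1."""
    if p1 in (0.0, 1.0):          # a perfectly predictable source carries no information
        return 0.0
    p0 = 1.0 - p1
    return -(p1 * math.log2(p1) + p0 * math.log2(p0))

# A balanced source is maximally random; a heavily biased one is highly predictable.
for p1 in (0.5, 0.9, 0.99):
    print(f"P(1) = {p1:4.2f}  ->  H = {binary_entropy(p1):.3f} bits/symbol")
```

A source with P(1) = 0.5 attains the maximum of 1 bit per symbol, while a heavily biased source carries far less information per output and is the kind of source a source encoder tries to avoid presenting to the channel.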

1.1.2 Channel Encoder

The goal of the channel encoder is to introduce an error correction capability into the source encoder output to combat channel transmission errors. To achieve this goal, some redundancy must be added to the source encoder output. This may seem confusing at first because we have just argued that all redundancy must be stripped from the source outputs for efficient channel transmission. Indeed, this book is about a technique, trellis coding, for adding redundancy to the source outputs so that the channel is utilized from a very efficient point of view. However, the first two chapters are about the traditional way of adding redundancy, through parity checks, and then transmitting the information plus parity bits across the channel in a time-serial manner.

FIGURE 1.3  Function of a source encoder.

FIGURE 1.4  BSC plus (3, 1) repetition code. (Crossover probability p; repetition code: 0 → 000, 1 → 111.)

Note that the same parity bits will be appended to a unique collection of source output bits called the message. In this way, the redundancy we add to the message is controlled and the receiver will have knowledge of the structure of this redundancy. This is the difference between the original redundancy in the source symbols, which is not controlled, and the redundancy added in channel coding, which is controlled. Let us consider a simple example of channel coding. The binary symmetric channel (BSC) plus a (3, 1) repetition code are illustrated in Fig. 1.4.

The transmission diagram is a summarizing diagram for transmission over the channel, illustrating the fact that transmission errors occur with a prescribed probability of p. The channel coder appends two identical parity bits to the source symbol, and the resulting 3-bit word is transmitted over the channel one bit at a time (in Section 1.1.3 we show how this could be done). We can regard channel transmission as three uses of the BSC shown in Fig. 1.4. If (0, 1, 0) is the 3-bit output from three uses of the BSC, we should declare (0, 0, 0) as transmitted (denoted as c1), since if 0 was the source bit sent, we have corrected a channel transmission error in position 2. Thus our decoder for the BSC's 3-bit outputs is based on majority rule and so will always result in one channel error being corrected no matter where it appears in the 3-bit word.
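
A minimal sketch of this encode/decode cycle, assuming a software model of the BSC with crossover probability p = 0.1; the function names, seed, and trial count are our own choices, not the book's.

```python
import random

rng = random.Random(1)  # fixed seed so the sketch is reproducible

def bsc(bits, p):
    """Pass a bit sequence through a binary symmetric channel with crossover probability p."""
    return [b ^ (rng.random() < p) for b in bits]

def repeat3_encode(bit):
    return [bit, bit, bit]             # 0 -> 000, 1 -> 111

def majority_decode(word):
    return 1 if sum(word) >= 2 else 0  # corrects any single error in the 3-bit word

# The received word (0, 1, 0) is decoded as 0, correcting the error in position 2.
print(majority_decode([0, 1, 0]))      # -> 0

# Monte Carlo estimate of the decoded bit error rate for p = 0.1.
p, trials, errors = 0.1, 100_000, 0
for _ in range(trials):
    bit = rng.randint(0, 1)
    if majority_decode(bsc(repeat3_encode(bit), p)) != bit:
        errors += 1
print(f"simulated P(error) ~ {errors/trials:.4f}; theory 3p^2(1-p) + p^3 = {3*p**2*(1-p) + p**3:.4f}")
```

The decoder fails only when two or three of the three channel uses are in error, which is why the simulated figure sits near 3p²(1 − p) + p³ rather than p.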

The situation described above can be represented using the cube shown in Fig. 1.5. Two possible code words transmitted over the BSC are separated by a Hamming distance, d_H, of 3, and nearest-neighbor decoding results in a single error being corrected. In general, if two code words differ in a component position, we add one to their component distance, and examining all components gives the Hamming distance between the two code words. Here we have d_H = 3. In general, for more than two code words the greatest chance for error comes in comparing two code words of least distance, d_min. In addition, the number of errors that can be corrected by a code with the shortest Hamming distance, d_min, is t = ⌊(d_min − 1)/2⌋, where ⌊x⌋ is the largest integer less than or equal to x. In the present example we have t = 1 correctable channel transmission errors per code word received, as d_min = 3.
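
The two quantities just defined can be sketched directly; the helper names below are our own, and the (3, 1) repetition code serves as the example code book.

```python
from itertools import combinations

def hamming_distance(a, b):
    """Number of component positions in which two code words differ."""
    return sum(x != y for x, y in zip(a, b))

def correction_capability(codebook):
    """Return (d_min, t) with t = floor((d_min - 1) / 2) errors correctable."""
    d_min = min(hamming_distance(a, b) for a, b in combinations(codebook, 2))
    return d_min, (d_min - 1) // 2

# (3, 1) repetition code: d_min = 3, so t = 1 error corrected per received word.
print(correction_capability([(0, 0, 0), (1, 1, 1)]))   # -> (3, 1)
```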

The detection of errors is also a key item in channel coding because we could always request, through a feedback channel, retransmission of a code word detected to contain errors. In the present example only one error can be detected: for instance, (0, 1, 1) could be received (denoted as r1) when (0, 0, 0) was transmitted, but this error pattern is not detectable since the decoder must also consider (1, 1, 1) as a candidate transmitted code word. Clearly, a single channel error can always be detected for the present example.

FIGURE 1.5  Decoding represented as points on a cube.

In general, ⌊d_min/2⌋ transmission errors can always be detected, and for the present example we have only one error detected. However, if we used the code 1 → (1, 1, 1, 1) and 0 → (0, 0, 0, 0), we would have d_min = 4, and thus two errors detected but still only one error corrected. Note here that only 1 out of 4 bits sent is an information bit, and we say that the rate of the code is 1/4. The earlier code had rate 1/3 and thus less error detection capability than given in the rate 1/4 case. Our concept of error detection here is different than in most textbooks on coding theory, where the number of errors detected is taken as d_min − 1. In traditional coding the rate 1/3 code is transmitted by using the BSC three times for each information bit. To realize this in practice, we must either speed up the rate of symbol transmission by a factor of 3 or keep the same rate of symbol transmission and be content with one-third the information transmission rate relative to when no channel coding is used. Thus, in either case, an increased channel bandwidth is required per information bit transmitted. This book is about an alternative to this approach that involves no change in information transmission rate; rather, the number of points in the signal constellation for modulation is increased to achieve the required redundancy.
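
A small sketch of the counts discussed above, using this book's convention that ⌊d_min/2⌋ errors are detectable; the function is our own illustration.

```python
def capabilities(d_min):
    """Errors corrected and detected (book's convention) for minimum distance d_min."""
    return {"corrected": (d_min - 1) // 2, "detected": d_min // 2}

print(capabilities(3))   # rate-1/3 repetition code -> {'corrected': 1, 'detected': 1}
print(capabilities(4))   # rate-1/4 repetition code -> {'corrected': 1, 'detected': 2}
```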

1.1.3 Modulator

In Fig. 1.2 the modulator interfaces the channel encoder to the channel. The source, source encoder, and channel encoder taken together can be viewed as a modified binary source that feeds the modulator, and as such, the modulator can be regarded as interfacing the source to the channel. Physical channels can require electrical signals, radio signals, or optical signals. The modulator takes in the source outputs and outputs waveforms that suit the physical nature of the channel and are also chosen to yield either system simplicity or optimal detection performance. A baseband binary modulator is shown in Fig. 1.6. We call this a baseband modulator because no sinusoidal carrier signal is involved.

FIGURE 1.6  Baseband modulator. (Modulation rule: 1 → p(t), 0 → −p(t).)

On a channel that has symmetric interference, the signal selection in Fig. 1.6 is optimal in that it will yield, for a fixed transmitted power, the least number of errors in detection in the receiver. In the quaternary modulator shown in Fig. 1.7, two source bits per symbol interval T are required, whereas before, a single bit will do. In the quaternary case the transmission rate is 2/T bits per second (bits/s). This is an example of pulse amplitude modulation; the amount of channel bandwidth such signals require is related only to the rate at which the modulator signals are changed, that is, to the rate 1/T.

FIGURE 1.7  Quaternary modulator. (Gray code mapping: 11 → 3p(t), 10 → p(t), 00 → −p(t), 01 → −3p(t).)

Thus the quaternary case has twice the throughput of the binary case and clearly cannot have the same error performance, because the receiver must sort out which of four signals was transmitted for the quaternary case, whereas a signal selection over two possibilities suffices in the binary case.
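
A minimal sketch of the quaternary (four-level pulse amplitude) mapping of Fig. 1.7, assuming the Gray labeling 11 → +3, 10 → +1, 00 → −1, 01 → −3 for the amplitudes that multiply p(t); the names and unit level spacing are our own choices.

```python
# Gray-labeled 4-level PAM: adjacent amplitudes differ in exactly one bit, so the most
# likely detection errors (to a neighboring level) cause only a single bit error.
GRAY_4PAM = {(1, 1): +3, (1, 0): +1, (0, 0): -1, (0, 1): -3}

def modulate(bits):
    """Map a bit sequence of even length to pulse amplitudes, two bits per symbol interval T."""
    return [GRAY_4PAM[pair] for pair in zip(bits[0::2], bits[1::2])]

print(modulate([1, 1, 0, 0, 1, 0, 0, 1]))   # -> [3, -1, 1, -3]
```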

To consider carrier modulation, consider the sinusoidal signal

    s(t) = A(t) cos[ω_c t + θ(t)]                                   (1.1)

where A(t) is the amplitude; ω_c is the frequency in radians per second and equals 2πf_c, with f_c the frequency in hertz; and θ(t) is the phase. In carrier modulation we can vary any or all of the parameters (A, ω_c, θ): varying A is called amplitude modulation, varying θ is called phase modulation, and varying ω_c is called frequency modulation; in all cases the variation is (hopefully) linearly related to the message to be transmitted. Some examples are given in Figs. 1.8 and 1.9. Note that binary phase modulation (called binary phase shift keying, BPSK) is equivalent to binary amplitude shift keying (BASK). The type of quadrature amplitude modulation (QAM) shown in Fig. 1.9 is called 64-QAM in that the signal constellation contains 64 points. Thus 6 bits per symbol interval T, or 6/T bits/s, can be transmitted over the channel. Inherent in the use of this modulation is the fact that two carrier signals that differ in phase by ±90° can be separated in the receiver to recover the signals X(t) and Y(t), known as the in-phase and quadrature signals, respectively. This can be done in a coherent receiver, which is a receiver that must acquire and track any nonmodulation phases that exist in the received signal.
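
As a sketch of how a 64-QAM symbol carries 6 bits on the in-phase and quadrature signals X(t) and Y(t), assuming a natural-binary mapping of 3 bits to each of 8 amplitude levels; the mapping and names are illustrative, not the book's.

```python
import math

# Each 64-QAM symbol carries 6 bits: 3 bits select X from 8 levels, 3 bits select Y from 8 levels.
LEVELS = [-7, -5, -3, -1, 1, 3, 5, 7]

def qam64_symbol(bits6):
    """Map 6 bits to an (X, Y) pair of in-phase and quadrature amplitudes."""
    i = bits6[0] * 4 + bits6[1] * 2 + bits6[2]
    q = bits6[3] * 4 + bits6[4] * 2 + bits6[5]
    return LEVELS[i], LEVELS[q]

def qam_waveform(X, Y, fc, t):
    """One sample of X cos(2*pi*fc*t) + Y sin(2*pi*fc*t), the quadrature-carrier form of Fig. 1.9."""
    return X * math.cos(2 * math.pi * fc * t) + Y * math.sin(2 * math.pi * fc * t)

X, Y = qam64_symbol([1, 0, 1, 0, 1, 1])
print(X, Y, qam_waveform(X, Y, fc=1000.0, t=1e-4))
```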

1.1.4 The Communications Channel

The simplest channel is the additive noise channel; here the signal is received with no distortion except additive noise. That is, if r(t) is the received signal,

    r(t) = s(t) + n(t)                                              (1.2)

where s(t) is the transmitted signal and n(t) is the additive noise. The classical theory of communication over the additive noise channel is given in reference [1]. A channel where the received signal is distorted, or at least can be distorted, is shown in Fig. 1.10. This phenomenon is called the intersymbol interference channel, as modulation symbols spill over into other symbol intervals, thus causing distortion. Additive noise is also present in the received signal but is not shown on the waveforms in Fig. 1.10.

Define the distribution of signal power as a function of frequency as the power spectrum of a signal. A power spectrum for a QPSK signal is displayed in Fig. 1.11. A QPSK signal involves modulation with a discontinuous phase angle. The other signal spectra in Fig. 1.11, those of minimum shift keying (MSK), duobinary minimum shift keying (DuMSK), and tamed frequency modulation (TFM), involve phase modulation with increasing smoothness [2]. This smoothness produces a more compact spectrum. Call the bandwidth of a signal the set of frequencies that contain 98% of its power; that is, the area under the curve over this set of frequencies in Fig. 1.11 that contains 98% of the total area.

FIGURE 1.8  Digital phase and frequency modulation: (a) MPSK, s_i(t) = A cos(ω_c t + θ + 2πi/M), i = 0, 1, ..., M − 1, with θ an offset phase; (b) FSK, s_i(t) = A cos(ω_i t + θ), f_i = f_1 + iΔ, i = 0, 1, ..., M − 1.

FIGURE 1.9  Quadrature amplitude modulation: s(t) = X(t) cos(ω_c t + θ) + Y(t) sin(ω_c t + θ), with a 64-point constellation of (X, Y) values.

FIGURE 1.10  Intersymbol interference channel: the transmitted signal s(t) passes through an RC low-pass filter H(f) = 1/(jf/f0 + 1), with f0 the 3-dB bandwidth, to give y(t).

If the 3-dB bandwidth of the linear filter in Fig. 1.10 is significantly less than this bandwidth, intersymbol interference (ISI) results. The classical theory of communication over the ISI channel is given in [3]; a recent textbook [2] considers additive noise, ISI, and some nonlinear channels.
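
A rough numerical sketch of the 98%-power bandwidth definition above, applied to an ideal rectangular-pulse signal whose baseband power spectrum is proportional to sinc²(fT); the discretization step, truncation frequency, and function names are our own choices.

```python
import math

def psd_sinc2(f, T=1.0):
    """Power spectral density of rectangular pulses of duration T (up to a constant factor)."""
    x = math.pi * f * T
    return T if x == 0 else T * (math.sin(x) / x) ** 2

def containment_bandwidth(psd, fraction=0.98, f_max=50.0, df=1e-3):
    """Smallest one-sided bandwidth B such that [-B, B] holds `fraction` of the (truncated) power."""
    freqs = [k * df for k in range(int(f_max / df))]
    weights = [psd(f) * (1 if f == 0 else 2) for f in freqs]   # count f > 0 twice (two-sided spectrum)
    total = sum(weights) * df
    acc = 0.0
    for f, w in zip(freqs, weights):
        acc += w * df
        if acc >= fraction * total:
            return f
    return f_max

print(f"98% bandwidth ~ {containment_bandwidth(psd_sinc2):.2f} x 1/T")
```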

Much of this book is written for modulation and channel coding for the additive noise channel where the additive noise is white Gaussian noise, that is, the noise power spectrum is flat over the bandwidth of all signals sent over the channel. Very little is considered for the ISI channel because the application of trellis codes to this channel is in the early stages of research. A channel that will receive some attention is the nonfrequency-selective fading channel (Fig. 1.12). Indeed, the greatest gains in performance that trellis codes have attained are for this channel. Let the input signal be s(t) in equation (1.1); then the output or faded signal is

    y(t) = G(t)A(t) cos[ω_c t + θ(t) + η(t)]                        (1.3)

In (1.3) the shape of s(t) is not changed; only its amplitude and phase are altered. A typical fading function, G(t), for the model developed in [4] for the Canadian Mobile Satellite Communications System is shown in Fig. 1.13. The classical theory of fading channels in mobile radio systems is given in Jakes's textbook [5], and a good section on fading channels appears in [6]. In addition, material on fading channel models can be found in Appendix A. It should be noted that fading channels represent an example of a multiplicative noise process rather than the additive noise case considered earlier.
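
A minimal sketch of the multiplicative fading operation in (1.3), holding a Rayleigh-distributed amplitude G and a uniform phase η fixed over the block as a stand-in for slow fading; these statistics and the names are simplifying assumptions of ours, not the model of [4].

```python
import math
import random

rng = random.Random(0)

def fade(samples, fc, dt):
    """Scale the carrier amplitude by G and rotate its phase by eta, as in y(t) of (1.3)."""
    g = math.sqrt(rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2)   # Rayleigh amplitude G
    eta = rng.uniform(0, 2 * math.pi)                            # uniform phase shift eta
    return [g * a * math.cos(2 * math.pi * fc * (k * dt) + theta + eta)
            for k, (a, theta) in enumerate(samples)]

# samples: (A(t), theta(t)) pairs describing the unfaded carrier of (1.1), here an unmodulated tone.
carrier = [(1.0, 0.0)] * 8
print(fade(carrier, fc=1000.0, dt=1e-4))
```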

In frequency-selective fading, s(t) in (1.2) is distorted as well as attenuated in a time-varying manner. The channel in this case is a combination of the fading channel under consideration and the ISI channel. We do not consider such challenging channels in this book.

FIGURE 1.11  Power spectrum of various signals. (Mean power in dB versus frequency in units of 1/T.)

1.1.5 The Receiver

The receiver follows the channel in the block diagram in Fig. 1.1. Now the transmitter represents an operation on the source, and the function of the receiver is to invert this operation and recover the source symbols. Figure 1.2 represents this operation by showing the transmitter in block diagram form. The inverse of this block diagram, the receiver, is shown in Fig. 1.14. Indeed, each block in Fig. 1.14 is the inverse of a corresponding block in Fig. 1.2. The demodulator is the inverse of the modulation process, the channel decoder inverts the channel encoder process, and so on. The various blocks are viewed independently, much as in the case of multilevel data protocols.

FIGURE 1.12  Nonfrequency-selective fading channel: the input s(t) = A(t) cos[ω_c t + θ(t)] is multiplied by a random phasor and additive noise N(t) is added, giving y(t) = G(t)A(t) cos[ω_c t + θ(t) + η(t)].

FIGURE 1.13  Amplitude fading function. (20 log G(t) in dB versus time in symbol lengths, over 0 to 500 symbols.)

FIGURE 1.14  Receiver block diagram.

Trellis coding serves to merge the processes of modulation and coding, and recent work [7] is aimed at merging the roles of source coding, channel coding, and modulation. We consider an example of a demodulator in the next section. An example of a channel decoder was treated earlier: the majority rule or nearest-neighbor decoder discussed in relation to Fig. 1.4.

1.2 Discrete Memoryless Channels

An example of a discrete memoryless channel (DMC), the BSC, is given in Fig. 1.4. This channel has two inputs and two outputs. In general, a DMC can have a finite number of inputs and outputs. In any case, all of the channels described in Section 1.1.4 involved a continuous-time variable. In this section we show how to derive a DMC from a continuous-time channel description. The latter is a physical channel description, whereas the former is an abstract version of the channel.
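
As a small sketch of the abstraction, a DMC can be summarized by a matrix of transition probabilities P(output | input); the BSC of Fig. 1.4 is the 2 × 2 case. The representation below is our own illustration.

```python
from math import comb

def bsc_matrix(p):
    """Transition probabilities P(j received | i sent) for a binary symmetric channel."""
    return [[1 - p, p],      # input 0: correct with probability 1 - p, flipped with probability p
            [p, 1 - p]]      # input 1

p = 0.1
print(bsc_matrix(p))

# Probability that three independent uses of the BSC corrupt a 3-bit repetition code word
# in exactly k positions, k = 0..3.
print([comb(3, k) * p**k * (1 - p)**(3 - k) for k in range(4)])
```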

1.2.1 Uncoded Baseband Communication

Consider the case of the transmitter, additive noise channel, and receiver shown in Fig. 1.15. The modulation will be as shown in Fig. 1.6, namely, 1 → p(t) and 0 → −p(t), where p(t) is the rectangular pulse. The receiver is shown in Fig. 1.16; this is a matched filter and it is optimum in the sense of having the smallest error probability among all receivers [1]. A typical output is shown in Fig. 1.16, together with a sampler that is synchronous with the end of a pulse. These samples are quantized into two levels with a threshold at zero. If the sample is positive, a binary 1 is declared to have been transmitted; otherwise, a binary 0 is declared. The BSC channel is shown in Fig. 1.4. This channel represents a summary of binary data transmission over the continuous-time channel shown in Fig. 1.15. The BSC is completely described by p, the probability of error per binary digit sent over the channel. Thus, to determine the BSC, we must find p.
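
A rough Monte Carlo sketch of finding p for this system, assuming antipodal pulses of energy E_b in white Gaussian noise so that the matched-filter output is a single Gaussian decision statistic compared with a zero threshold; the parameter values and names are our own, and the estimate can be checked against p = Q(√(2E_b/N_0)).

```python
import math
import random

rng = random.Random(7)

def q_function(x):
    """Gaussian tail probability Q(x)."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))

def estimate_p(eb_over_n0_db, trials=200_000):
    """Estimate the BSC crossover probability p for antipodal signaling with a matched filter."""
    eb_over_n0 = 10 ** (eb_over_n0_db / 10)
    eb, n0 = 1.0, 1.0 / eb_over_n0
    sigma = math.sqrt(eb * n0 / 2)          # matched-filter noise standard deviation
    errors = 0
    for _ in range(trials):
        bit = rng.randint(0, 1)
        sample = (eb if bit else -eb) + rng.gauss(0.0, sigma)   # filter output at the sampling instant
        if (sample > 0) != bool(bit):       # threshold at zero
            errors += 1
    return errors / trials

for snr_db in (0, 4, 8):
    print(snr_db, "dB:", estimate_p(snr_db), "theory:", q_function(math.sqrt(2 * 10 ** (snr_db / 10))))
```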

FIGURE 1.15  Baseband system with no coding. (Binary source; modulation rule 1 → p(t), 0 → −p(t); additive noise n(t).)

THE FIRST BOOK DEVOTED COMPLETELY TO TCM!

Appropriate for students and professionals alike, this book provides a thorough introduction to trellis-coded modulation (TCM), helping readers grasp its theory as well as the techniques needed for its analysis. It offers both a conceptual and practical perspective by applying TCM theory to real-world problems and evaluating the results; examples include fading channels and commercial modems.

Introduction to Trellis-Coded Modulation with Applications contains most of the results of TCM research that have occurred since its invention. In addition, numerical illustrations are included throughout to help describe results from the application of TCM theory.