`
Introduction to
Trellis-Coded
Modulation
with Applications

Ezio Biglieri
Dariush Divsalar
Peter J. McLane
Marvin K. Simon
`IPR2018-01474
`Apple Inc. EX1021 Page 1
`
`
`
`
`
`
`
`
`Introduction to
`Trellis-Coded
`Modulation
`with Applications
`
`Ezio Biglieri
`
`Politecnico di Torino, Italy
`
`Dariush Divsalar
`Jet Propulsion Laboratory
`California Institute of Technology
`Peter J. McLane
`Queen’s University, Canada
`Marvin K. Simon
`Jet Propulsion Laboratory
`California Institute of Technology
`
`Macmillan Publishing Company
`New York
`Maxwell Macmillan Canada, Inc.
`Toronto
`Maxwell Macmillan International
`New York Oxford Singapore Sydney
`
`
`
`
Editor: John Griffin
Production Supervisor: Elaine W. Wetterau
Production Manager: Pamela Kennedy Oborski
Text Designer: Eileen Burke
Cover Designer: Doris Chen
Cover Illustration: Marvin Simon
Illustrations: Publication Services
`
`This book was set in 10/12 Times Roman by Publication Services, printed
`and bound by Quinn Woodbine, Inc. The jacket was
`printed by Phoenix Color Corporation.
`
`Copyright ©1991 by Macmillan Publishing Company, a division of Macmillan, Inc.
`
`Printed in the United States of America
`
`All rights reserved. No part of this book may be reproduced or
`transmitted in any form or by any means, electronic or mechanical,
`including photocopying, recording, or any information storage and
retrieval system, without permission in writing from the publisher.
`
`Macmillan Publishing Company
`866 Third Avenue, New York, New York 10022
`
`Maxwell Macmillan Canada, Inc.
`1200 Eglinton Avenue, E.
`Suite 200
`Don Mills, Ontario M3C 3N1
`
`Library of Congress Cataloging in Publication Data
`
Introduction to trellis-coded modulation, with applications / Ezio
Biglieri . . . [et al.].
p. cm.
Includes bibliographical references and index.
ISBN 0-02-309965-8
1. Digital modulation. 2. Modulation (Electronics) 3. Coding theory.
I. Biglieri, Ezio.
TK5106.7.I58 1991
621.381'536 dc20
91-10991
CIP
`
`Printing: 123456789
`
`Year: 1234567890
`
`
`
`
Contents

CHAPTER 1
Introduction 1
1.1 Digital communications structure 2
1.1.1 Source encoder 2
1.1.2 Channel encoder 2
1.1.3 Modulator 4
1.1.4 The communications channel 6
1.1.5 The receiver 8
1.2 Discrete memoryless channels 11
1.2.1 Uncoded baseband communication 11
1.2.2 Bit error probability: Binary signals 12
1.3 Entropy 15
1.3.1 Discrete sources 16
1.3.2 Continuous source 17
1.4 Channel capacity 18
1.4.1 Related information theory measures 19
1.4.2 Channel capacity 21
1.4.3 Symmetric channels 21
1.4.4 Kuhn-Tucker conditions 23
1.4.5 Bandlimited Gaussian channel 25
1.5 Shannon's two theorems 26
1.6 Computational cutoff rate 27
`
CHAPTER 2
Error-correcting codes 33
2.1 Introduction 33
2.2 Parity check codes 34
2.3 Matrix description: Error-correcting codes 36
2.4 Algebraic concepts 40
2.4.1 Primitive elements 42
2.4.2 Extension field GF(q) 42
2.5 Cyclic codes 48
2.6 BCH codes 48
2.6.1 Binary BCH codes 49
2.6.2 Nonbinary BCH codes 52
2.6.3 Reed-Solomon codes 54
2.7 Convolutional codes 56
2.7.1 FSM and trellis representation 56
2.7.2 G(D) and H(D) matrices 58
2.7.3 Viterbi algorithm 59
2.7.4 Error-state diagrams 61
`
CHAPTER 3
TCM: Combined modulation and coding 67
3.1 Introducing TCM 67
3.2 The need for TCM 69
3.3 Fundamentals of TCM 70
3.3.1 Uncoded transmission 70
3.4 The concept of TCM 71
3.5 Trellis representation 73
3.6 Some examples of TCM schemes 74
3.7 Set partitioning 77
3.8 Representations for TCM 79
3.8.1 Ungerboeck representation 79
3.8.2 Analytical representation 82
3.9 Decoding TCM 87
3.9.1 Definition of branch metric 88
3.9.2 The Viterbi algorithm 90
3.10 Bibliographical notes 93
Appendix 3A: Orthogonal expansion of the function f 94
`
CHAPTER 4
Performance evaluation 99
4.1 Upper bound to error probability 99
4.1.1 The error-state diagram 102
4.1.2 The transfer function bound 102
4.1.3 Consideration of different channels 113
4.2 Lower bound to error probability 114
4.3 Examples 116
4.4 Computation of d_free 125
4.4.1 Using the error-state diagram 125
4.4.2 A computational algorithm 128
4.4.3 The product-trellis algorithm 131
4.5 Lower bounds to the achievable d_free 133
4.5.1 A simple lower bound 135
4.6 Sphere-packing upper bounds 135
4.6.1 A universal upper bound 137
4.7 Other sphere-packing bounds 138
4.7.1 Constant-energy signal constellations 138
4.7.2 Rectangular signal constellations 138
4.7.3 An upper bound for PSK signals 138
4.7.4 An asymptotic upper bound 145
4.8 Power density spectrum 145
`
CHAPTER 5
One- and two-dimensional modulations for TCM 149
5.1 Introduction 149
5.2 Step-by-step design procedure 149
5.2.1 Derivation of the analytic description 150
5.2.2 Design rules and procedure 152
5.3 One-dimensional examples 153
5.4 Two-dimensional examples 163
5.5 Trellis code performance and realization 169
5.6 Trellis coding with asymmetric modulations 174
5.6.1 Analysis and design 175
5.6.2 Best rate 1/2 codes combined with asymmetric 4-PSK (A4-PSK) 179
5.6.3 Best rate 2/3 codes combined with asymmetric 8-PSK (A8-PSK) 187
5.6.4 Best rate 3/4 codes combined with asymmetric 16-PSK (A16-PSK) 195
5.6.5 Best rate 1/2 codes combined with asymmetric 4-AM 201
5.6.6 Trellis-coded asymmetric 16-QAM 203
`
CHAPTER 6
Multidimensional modulations 207
6.1 Lattices 209
6.1.1 Some examples of lattices 210
6.1.2 Structural characteristics of lattices 212
6.1.3 Example of lattice constellations for TCM 212
6.1.4 Partition of lattices 216
6.1.5 Calderbank-Sloane TCM schemes based on lattices 217
6.2 Group alphabets 220
6.2.1 Set partitioning of a GA 223
6.3 Ginzburg construction 224
6.3.1 Set partitioning 229
6.3.2 Designing a TCM scheme 231
6.4 Wei construction 232
6.4.1 A design example 233
6.5 Trellis-encoded CPM 240
6.5.1 Review of CPM 240
6.5.2 Parameter selection 242
6.5.3 Designing the TCM scheme 243
6.5.4 Performance examples 246
Appendix 6A: Examples of group alphabets 246
6A.1 Permutation alphabets 247
6A.2 Cyclic-group alphabets 250
Appendix 6B: Decomposition of the CPM modulator 250
6B.1 Phase trellis and tilted phase trellis 251
6B.2 Decomposing the CPM modulator 252
`
CHAPTER 7
Multiple TCM 259
7.1 Two-state MTCM 261
7.1.1 Mapping procedure for two-state MTCM 264
7.1.2 Evaluation of minimum squared free distance 265
7.2 Generalized MTCM 268
7.2.1 Set-partitioning method for generalized MTCM 269
7.2.2 Set mapping and evaluation of squared free distance 273
7.3 Analytical representation of MTCM 282
7.4 Bit error probability performance 287
7.5 Computational cutoff rate and MTCM performance 290
7.6 Complexity considerations 292
7.7 Concluding remarks 293
`
CHAPTER 8
Rotationally invariant trellis codes 295
8.1 Introduction 295
8.1.1 Rotational invariance 296
8.1.2 Rotationally invariant codes 297
8.1.3 Design rules 302
8.1.4 Design procedure 303
8.1.5 16-point examples 304
8.2 Generation: Rotationally invariant codes 310
8.2.1 Eight-point example 310
8.2.2 Sixteen-point examples 313
8.3 Multidimensional RIC 322
8.3.1 Linear examples 323
8.3.2 Nonlinear example 329
8.4 Bit error rate performance 335
8.4.1 Nonlinear codes 335
8.4.2 Linear codes 336
`
CHAPTER 9
Analysis and performance of TCM for fading channels 343
9.1 Coherent detection of trellis-coded M-PSK on a slow-fading Rician channel 344
9.1.1 Channel model 344
9.1.2 System model 344
9.1.3 Upper bound on pairwise error probability 346
9.1.4 Upper bound on bit error probability 350
9.1.5 Simulation results 366
9.2 Differentially coherent detection of trellis-coded M-PSK on a slow-fading Rician channel 371
9.2.1 System model 371
9.2.2 Analysis model 372
9.2.3 The maximum-likelihood metric for trellis-coded M-DPSK 373
9.2.4 Upper bound on pairwise error probability 374
9.2.5 Upper bound on average bit error probability 378
9.2.6 Simulation results 385
9.3 Differentially coherent detection of trellis-coded M-PSK on a fast-fading Rician channel 387
9.3.1 Analysis model 388
9.3.2 Upper bound on pairwise error probability 388
9.3.3 Upper bound on average bit error probability 389
9.3.4 Characterization of the autocorrelation and power spectral density of the fading process 390
9.3.5 Simulation results 392
9.4 Asymptotic results 393
9.4.1 An example 398
9.4.2 No interleaving/deinterleaving 399
9.5 Further discussion 401
Appendix 9A: Proof that d whose square is defined in (9.19) satisfies the conditions for a distance metric 402
Appendix 9B: Derivation of the maximum-likelihood branch metric for trellis-coded M-DPSK with ideal channel state information 405
`
CHAPTER 10
Design of TCM for fading channels 411
10.1 Multiple trellis-code design for fading channels 412
10.2 Set partitioning for multiple trellis-coded M-PSK on the fading channel 417
10.2.1 The first approach 417
10.2.2 The second approach 427
10.3 Design of Ungerboeck-type codes (unit multiplicity) for fading channels 430
10.4 Comparison of error probability performance with computational cutoff rate 432
10.5 Simulational results 433
10.6 Further discussion 435
`
`437
`
`447
`
`ast
`
`476
`
`CHAPTER 1 1
`Analysis and design of TCM for other practical
`channels
`11.1
`Intersymbol interference channels
`11.1.1 Model
`438
`11.1.2
`LMSequalization 444
`11.1.3 Trellis-code performance
`11.2 Channels with phase offset
`453
`11.2.1 Upper bound on the average bit error probability
`performance of TCM 454
`11.2.2 Carrier synchronization loopstatistical model and average
`pairwise error probability evaluation
`461
`11.2.3 The case of binary convolutional coded BPSK
`modulation
`464
`470
`11.2.4 ATCMexample
`11.2.5 Concluding remarks
`474
`11.3 TCM oversatellite channels
`475
`11.3.1 A modem for land mobile satellite communications
`11.3.2 An SCPC modem 478
`479
`11.4 Trellis codes for partial response channels
`485
`11.4.1 Trellis codes for the binary (1 — D) channel
`11.4.2 Convolutional codes with precoder for (1 — D)
`channels
`487
`490
`11.5 Trellis coding for optical channels
`11.5.1
`Signal sets with amplitude and pulse-width
`constraints
`492
`11.5.2 Trellis-coded modulation for optical channels
`11.6 TCM with prescribed convolutional codes
`502
`11.6.1 Application to M-PSK modulation
`503
`11.6.2 Application to M-AM and QAM modulations
`
`494
`
`506
`
`
`
`
APPENDIX A
Fading channel models 511

APPENDIX B
Computational techniques for transfer functions 521

APPENDIX C
Computer programs: Design technique 527

Index 541
`
`
`
`
`Introduction
`
We begin by outlining the structure of digital communication systems and end the
first chapter with an outline of Shannon's information theory, which includes some
aspects of elementary channel and source coding. The second chapter contains the
rest of our material on traditional coding theory: a consideration of BCH codes and
some material on convolutional codes. The main intent of the first two chapters is
to supply the reader with the necessary background in information theory and error-correction
coding to understand the theory of trellis-coded modulation. Another
goal is to provide the essentials of information theory and coding where the book
is used for a single course on these subjects that contains a significant component
on trellis-coded modulation.
`
`1.1 Digital Communications Structure
`
Digital communications systems have a definite structure, and knowledge of this
structure is helpful in understanding the role of coding and modulation systems.
The simplest structure, shown in Fig. 1.1, is for a point-to-point communication
system, not for a communications network or a point-to-multipoint system.
We have a transmitter, T_x, a receiver, R_x, and a channel that links the transmitter
and receiver.
The transmitter, channel, and receiver shown in Fig. 1.1 can be further subdivided.
Let us begin by considering a subdivision of the transmitter structure (Fig.
1.2). We have an information source that we will take as binary, which means that
its output is a sequence in which the only elements are 1s and 0s. The source is
followed by a source encoder, a channel encoder, and a modulator. We now describe
the function of each of these entities. Note that if the source is analog (for
example, a speech or video source) we shall assume that it has been digitized.
`
`
`
FIGURE 1.1 Simplest digital communications structure.
`
`
`
`
FIGURE 1.2 Transmitter block diagram: the source feeds a source encoder, a channel encoder, and a modulator, whose output goes to the channel.
`
`1.1.1 Source Encoder
`
A good, or desirable, source is random: such sources have maximum information.
Clearly, if a 1 and a 0 have an equal probability of occurrence, knowledge
of the source output provides a maximum amount of information. For instance, if
the source nearly always outputs a 1, its output can be predicted and knowledge
of the output provides very little information. Usually, sources are not random and
contain significant amounts of redundancy. For example, in a video image, neighboring
picture elements are usually strongly related. The role of a source encoder is
to randomize the source. A measure of randomness is entropy, a concept borrowed
from thermodynamics. The function of a source encoder, then, is as illustrated in
Fig. 1.3.
Why do we want the source to be encoded into a disordered state? The answer
lies in the utilization of one of the scarce resources of the telecommunications
problem: the channel. We should not waste the scarce resources of the channel by
sending predictable quantities over this link between the transmitter and the receiver.
The channel should be used only to carry the unpredictable information from the
source, that is, the output of the source encoder.
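The notion that a balanced source carries the most information can be made concrete with the binary entropy function, H(p) = −p log₂ p − (1 − p) log₂(1 − p). As an illustrative sketch (ours, not part of the original text):

```python
import math

def binary_entropy(p):
    """Entropy in bits/symbol of a binary source that emits 1 with probability p."""
    if p in (0.0, 1.0):
        return 0.0  # a deterministic source carries no information
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

# A balanced (fully random) source attains the maximum of 1 bit/symbol;
# a source that nearly always outputs a 1 carries far less information.
print(binary_entropy(0.5))   # 1.0 bit/symbol
print(binary_entropy(0.99))  # about 0.08 bit/symbol
```

The source encoder's job, in these terms, is to raise the entropy per output symbol toward this maximum.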
`
`1.1.2 Channel Encoder
`
The goal of the channel encoder is to introduce an error-correction capability into
the source encoder output to combat channel transmission errors. To achieve this
goal, some redundancy must be added to the source encoder output. This may seem
confusing at first, because we have just argued that all redundancy must be stripped
from the source outputs for efficient channel transmission. Indeed, this book is
about a technique, trellis coding, for adding redundancy to the source outputs so
that the channel is utilized very efficiently. However, the first
two chapters are about the traditional way of adding redundancy, through parity
checks, and then transmitting the information plus parity bits across the channel in
`
`
FIGURE 1.3 Function of a source encoder: a source with entropy E_1 enters the encoder; the output has entropy E_2 ≥ E_1.
`
`
`
`
`
`0
`
`l
`
`0
`
`|
`
`Pp
`
`P
`
`l-p
`
`(3, 1) Repetition code
`0—000
`l—e111
`
`FIGURE 1.4 BSC plus (3, 1) repetition code.
`
a time-serial manner. Note that the same parity bits will be appended to a unique
collection of source output bits called the message. In this way, the redundancy
we add to the message is controlled, and the receiver will have knowledge of the
structure of this redundancy. This is the difference between the original redundancy
in the source symbols, which is not controlled, and the redundancy added in channel
coding, which is controlled. Let us consider a simple example of channel coding.
The binary symmetric channel (BSC) plus a (3, 1) repetition code are illustrated in
Fig. 1.4.
The transmission diagram is a summarizing diagram for transmission over the
channel, illustrating the fact that transmission errors occur with a prescribed probability
p. The channel coder appends two identical parity bits to the source
symbol, and the resulting 3-bit word is transmitted over the channel one bit at a
time (in Section 1.1.3 we show how this could be done). We can regard channel
transmission as three uses of the BSC shown in Fig. 1.4. If (0, 1, 0) is the 3-bit output
from three uses of the BSC, we should declare (0, 0, 0) as transmitted (denoted
as t_1), since if 0 was the source bit sent, we have corrected a channel transmission
error in position 2. Thus our decoder for the BSC's 3-bit outputs is based on majority
rule and so will always result in one channel error being corrected no matter
where it appears in the 3-bit word.
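The majority-rule decoder just described can be sketched in a few lines. The simulation below (an illustrative sketch; the crossover probability and trial count are chosen arbitrarily) confirms that decoding fails only when two or three of the three channel bits are flipped, an event of probability 3p²(1 − p) + p³:

```python
import random

def encode(bit):
    # (3, 1) repetition code: 0 -> 000, 1 -> 111
    return [bit] * 3

def bsc(word, p, rng):
    # Binary symmetric channel: each bit is flipped independently with probability p.
    return [b ^ (rng.random() < p) for b in word]

def decode(word):
    # Majority rule: any single channel error is corrected.
    return 1 if sum(word) >= 2 else 0

rng = random.Random(1)
p = 0.1
trials = 100_000
errors = 0
for _ in range(trials):
    bit = rng.randint(0, 1)
    if decode(bsc(encode(bit), p, rng)) != bit:
        errors += 1

# Decoding fails only on 2 or 3 flips: 3*p**2*(1-p) + p**3 = 0.028 for p = 0.1
print(errors / trials)
```

Note how the decoded error rate (about 0.028) is well below the raw channel error rate p = 0.1, bought at the cost of three channel uses per information bit.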
The situation described above can be represented using the cube shown in Fig.
1.5. Two possible code words transmitted over the BSC are separated by a Hamming
distance, d_H, of 3, and nearest-neighbor decoding results in a single error being
corrected. In general, if two code words differ in a component position, we add
one to their component distance, and examining all components gives the Hamming
distance between the two code words. Here we have d_H = 3. In general, for more
than two code words the greatest chance for error comes in comparing two code
words of least distance, d_H,min. In addition, the number of errors that can be corrected
by a code with the shortest Hamming distance, d_H,min, is t = ⌊(d_H,min − 1)/2⌋, where
⌊x⌋ is the largest integer less than or equal to x. In the present example we have
t = 1 correctable channel transmission error per code word received (as d_H,min = 3).
The detection of errors is also a key item in channel coding, because we could
always request, through a feedback channel, retransmission of a code word detected
to contain errors. In the present example only one error can be detected: for
instance, (0, 1, 1) could be received (denoted as r_1) when (0, 0, 0) was transmitted,
but this error pattern is not detectable, since the decoder must also consider (1, 1, 1)
as a candidate transmitted code word. Clearly, a single channel error can always
`
`
`
`
`
`
`
`FIGURE 1.5 Decoding represented as points on a cube.
`
be detected for the present example. In general, ⌊d_H,min/2⌋ transmission errors can
always be detected, and for the present example we have only one error detected.
However, if we used the code 1 → (1, 1, 1, 1) and 0 → (0, 0, 0, 0), we would have
d_H,min = 4, and thus two errors detected but still only one error corrected. Note here
that only 1 out of 4 bits sent is an information bit, and we say that the rate of the
code is 1/4. The earlier code had rate 1/3 and thus less error detection capability
than given in the rate 1/4 case. Our concept of error detection here is different from that
in most textbooks on coding theory, where the number of errors detected is taken
as d_H,min − 1. In traditional coding the rate 1/3 code is transmitted by using the BSC
three times for each information bit. To realize this in practice, we must either
speed up the rate of symbol transmission by a factor of 3 or keep the same rate
of symbol transmission and be content with one-third the information transmission
rate relative to when no channel coding is used. Thus, in either case, an increased
channel bandwidth is required per information bit transmitted. This book is about
an alternative to this approach that involves no change in information transmission
rate; rather, the number of points in the signal constellation for modulation is
increased to achieve the required redundancy.
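The distance bookkeeping above is easy to mechanize. The helper functions below (an illustrative sketch; the function names are ours, not the book's) compute the Hamming distance and the correction and detection counts t = ⌊(d_H,min − 1)/2⌋ and ⌊d_H,min/2⌋ used in this section:

```python
def hamming(u, v):
    # Each component position in which the two words differ adds one.
    return sum(a != b for a, b in zip(u, v))

def correctable(d_min):
    # Number of errors always correctable by nearest-neighbor decoding.
    return (d_min - 1) // 2

def detectable(d_min):
    # Convention used in this chapter: floor(d_min / 2) errors always detected.
    return d_min // 2

# Rate-1/3 repetition code: d_H,min = 3 -> corrects 1 error, detects 1.
print(hamming("000", "111"), correctable(3), detectable(3))
# Rate-1/4 repetition code: d_H,min = 4 -> still corrects 1, but detects 2.
print(correctable(4), detectable(4))
```

This reproduces the comparison in the text: lengthening the repetition code from 3 to 4 buys extra detection capability but no extra correction capability.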
`
`1.1.3 Modulator
`
In Fig. 1.2 the modulator interfaces the channel encoder to the channel. The
source, source encoder, and channel encoder taken together can be viewed as a
modified binary source that feeds the modulator, and as such, the modulator can
be regarded as interfacing the source to the channel. Physical channels can require
electrical signals, radio signals, or optical signals. The modulator takes in the source
outputs and outputs waveforms that suit the physical nature of the channel and are
also chosen to yield either system simplicity or optimal detection performance. A
baseband binary modulator is shown in Fig. 1.6. We call this a baseband modulator
because no sinusoidal carrier signal is involved. On a channel that has symmetric
`
`
`
`
FIGURE 1.6 Baseband modulator. Modulation rule: 1 → p(t), 0 → −p(t), where p(t) is a rectangular pulse of amplitude A; the output is the line signal.
`
interference, the signal selection in Fig. 1.6 is optimal in that, for a fixed transmitted
power, it will yield the least number of errors in detection in the receiver. In
the quaternary modulator shown in Fig. 1.7, two source bits per symbol interval
T are required, whereas before, a single bit will do. In the quaternary case
the transmission rate is 2/T bits per second (bits/s). This is an example
of pulse amplitude modulation; the amount of channel bandwidth such signals
`
FIGURE 1.7 Quaternary modulator with Gray-coded mapping: 11 → 3p(t), 10 → p(t), 00 → −p(t), 01 → −3p(t).
`
`
`
`
require is related only to the rate at which the modulator signals are changed, that is,
to the rate 1/T. Thus the quaternary case has twice the throughput of the binary case
and clearly cannot have the same error performance, because the receiver must sort
out which of four signals was transmitted in the quaternary case, whereas a signal
selection over two possibilities suffices in the binary case.
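The Gray-coded mapping of Fig. 1.7 can be written out directly. In the sketch below (ours, for illustration; the amplitudes ±1 and ±3 are in units of the pulse amplitude), adjacent levels differ in exactly one bit, so the most likely symbol error, a slip to a neighboring level, costs only one bit error:

```python
# Gray-coded 4-level PAM mapping from Fig. 1.7: adjacent amplitudes
# differ in exactly one input bit.
GRAY_4PAM = {(1, 1): 3, (1, 0): 1, (0, 0): -1, (0, 1): -3}

def modulate(bits):
    """Map an even-length bit sequence to 4-PAM amplitudes, two bits per symbol."""
    pairs = zip(bits[::2], bits[1::2])
    return [GRAY_4PAM[p] for p in pairs]

print(modulate([1, 1, 0, 0, 1, 0]))  # [3, -1, 1]
```

Each symbol interval T thus carries two bits using a single pulse, which is exactly why the quaternary scheme doubles the throughput of the binary modulator of Fig. 1.6.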
To consider carrier modulation, consider the sinusoidal signal

s(t) = A(t) cos[ω_c t + θ(t)]     (1.1)

where A(t) is the amplitude; ω_c is the frequency in radians per second and equals
2πf_c, with f_c the frequency in hertz; and θ(t) is the phase. In carrier modulation
we can vary any or all of the parameters (A, ω_c, θ): varying A is called amplitude
modulation, varying θ is called phase modulation, and varying ω_c is called frequency
modulation; in all cases the variation is (hopefully) linearly related to the
message to be transmitted. Some examples are given in Figs. 1.8 and 1.9. Note
that binary phase modulation (called binary phase shift keying, BPSK) is equivalent
to binary amplitude shift keying (BASK). The type of quadrature amplitude
modulation (QAM) shown in Fig. 1.9 is called 64-QAM in that the signal constellation
contains 64 points. Thus 6 bits per symbol interval T (a rate of 6/T bits/s) can be transmitted
over the channel. Inherent in the use of this modulation is the fact that two carrier
signals that differ in phase by 90° can be separated in the receiver to recover the
signals X(t) and Y(t), known as the in-phase and quadrature signals, respectively.
This can be done in a coherent receiver, which is a receiver that must acquire and
track any nonmodulation phases that exist in the received signal.
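The phase-modulation case of (1.1) is conveniently viewed in complex baseband, where the M-PSK signal set of Fig. 1.8(a) reduces to M unit-magnitude points. A short sketch (ours, for illustration):

```python
import cmath
import math

def mpsk_points(M, offset=0.0):
    # Complex-baseband M-PSK constellation: phases offset + 2*pi*i/M, i = 0..M-1.
    return [cmath.exp(1j * (offset + 2 * math.pi * i / M)) for i in range(M)]

# For M = 2 the two points are +1 and -1: binary phase modulation (BPSK)
# is the same signal set as binary amplitude shift keying (BASK).
bpsk = mpsk_points(2)
qpsk = mpsk_points(4, offset=math.pi / 4)
print([round(p.real, 3) for p in bpsk])  # [1.0, -1.0]
```

All M points lie on the unit circle, so phase modulation keeps the envelope constant; information is carried entirely in the carrier phase.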
`
`1.1.4 The Communications Channel
`
The simplest channel is the additive noise channel: here the signal is received
with no distortion except additive noise. That is, if r(t) is the received signal,

r(t) = s(t) + n(t)     (1.2)

where s(t) is the transmitted signal and n(t) is the additive noise. The classical
theory of communication over the additive noise channel is given in reference
[1]. A channel where the received signal is distorted, or at least can be distorted,
is shown in Fig. 1.10. This phenomenon is called the intersymbol interference
channel, as modulation symbols spill over into other symbol intervals, thus causing
distortion. Additive noise is also present in the received signal but is not shown on
the waveforms in Fig. 1.10.
Define the distribution of signal power as a function of frequency as the power
spectrum of a signal. A power spectrum for a QPSK signal is displayed in Fig.
1.11. A QPSK signal involves modulation with a discontinuous phase angle. The
other signal spectra in Fig. 1.11, minimum shift keying (MSK), duobinary minimum
shift keying (DuMSK), and tamed frequency modulation (TFM), involve
phase modulation with increasing smoothness [2]. This smoothness produces a more
compact spectrum. Call the bandwidth of a signal the set of frequencies that contain
98% of its power; that is, the area under the curve over this set of frequencies
in Fig. 1.11 contains 98% of the total area. If the 3-dB bandwidth of the linear
`
`
`
`
`
`
FIGURE 1.8 Digital phase and frequency modulation: (a) M-PSK: s_i(t) = A cos(ω_c t + θ + 2πi/M), i = 0, 1, . . . , M − 1, where θ is an offset phase; (b) FSK: s_i(t) = A cos(2πf_i t + θ), f_i = f_c + iΔ, i = 0, 1, . . . , M − 1.
`
FIGURE 1.9 Quadrature amplitude modulation: s(t) = X(t) cos(ω_c t + θ) + Y(t) sin(ω_c t + θ).
`
`
`
`
FIGURE 1.10 Intersymbol interference channel: the transmitted signal s(t) passes through an RC low-pass filter, H(f) = 1/(jf/f_N + 1), where f_N is the 3-dB bandwidth, producing y(t).
`
filter in Fig. 1.10 is significantly less than this bandwidth, intersymbol interference
(ISI) results. The classical theory of communication over the ISI channel is given
in [3]; a recent textbook [2] considers additive noise, ISI, and some nonlinear
channels.
Much of this book is written for modulation and channel coding for the additive
noise channel where the additive noise is white Gaussian noise; that is, the noise
power spectrum is flat over the bandwidth of all signals sent over the channel. Very
little is considered for the ISI channel, because the application of trellis codes to this
channel is in the early stages of research. A channel that will receive some attention
is the nonfrequency-selective fading channel (Fig. 1.12). Indeed, the greatest gains
in performance that trellis codes have attained are for this channel. Let the input
signal be s(t) in equation (1.1); then the output, or faded, signal is

y(t) = G(t)A(t) cos[ω_c t + θ(t) + ψ(t)].     (1.3)

In (1.3) the shape of s(t) is not changed; only its amplitude and phase are altered.
A typical fading function, G(t), for the model developed in [4] for the Canadian
Mobile Satellite Communications System is shown in Fig. 1.13. The classical
theory of fading channels in mobile radio systems is given in Jakes's textbook
[5], and a good section on fading channels appears in [6]. In addition, material
on fading channel models can be found in Appendix A. It should be noted that
fading channels represent an example of a multiplicative noise process rather than
the additive noise case considered earlier.
In frequency-selective fading, s(t) in (1.2) is distorted as well as attenuated in a
time-varying manner. The channel in this case is a combination of the fading channel
under consideration and the ISI channel. We do not consider such challenging
channels in this book.
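A multiplicative fading gain like G(t) in (1.3) is easy to sample. The sketch below (our illustration; the unit-average-power normalization is an assumption) draws Rician gains with line-of-sight-to-scatter power ratio K; K → 0 gives Rayleigh fading, while large K approaches the unfaded channel:

```python
import math
import random

def rician_gain(K, rng):
    """One sample of a flat (nonfrequency-selective) Rician fading gain G.

    K is the ratio of line-of-sight power to scattered power; the gain is
    normalized so that the average power E[G^2] equals 1.
    """
    los = math.sqrt(K / (K + 1))          # fixed line-of-sight component
    sigma = math.sqrt(1 / (2 * (K + 1)))  # std of each scattered component
    x = los + rng.gauss(0, sigma)
    y = rng.gauss(0, sigma)
    return math.hypot(x, y)

rng = random.Random(7)
gains = [rician_gain(10.0, rng) for _ in range(50_000)]
print(sum(g * g for g in gains) / len(gains))  # average power, close to 1.0
```

Multiplying a transmitted symbol stream by such gains (plus additive noise) gives a simple discrete-time version of the channel in Fig. 1.12.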
`
`1.1.5 The Receiver
`
The receiver follows the channel in the block diagram in Fig. 1.1. Now the
transmitter represents an operation on the source, and the function of the receiver
`
`
`
`
FIGURE 1.11 Power spectrum of various signals (QPSK, MSK, DuMSK, TFM): magnitude in dB versus frequency in units of 1/T.
`
is to invert this operation and recover the source symbols. Figure 1.2 represents
this operation by showing the transmitter in block diagram form. The inverse of
this block diagram, the receiver, is shown in Fig. 1.14. Indeed, each block in Fig.
1.14 is the inverse of a corresponding block in Fig. 1.2. The demodulator is the
inverse of the modulation process, the channel decoder inverts the channel encoder
process, and so on. The various blocks are viewed independently, much as in
`
`
`
`
`
FIGURE 1.12 Nonfrequency-selective fading channel: the input s(t) = A(t) cos[ω_c t + θ(t)] is multiplied by a random phasor, and additive noise n(t) is present, giving y(t) = G(t)A(t) cos[ω_c t + θ(t) + ψ(t)].
`
`—40
`
`100
`
`200
`300
`Time (symbol length)
`FIGURE 1.13 Amplitude fading function.
`
`400
`
`500
`
FIGURE 1.14 Receiver block diagram: the received signal r(t) passes through the demodulator, the channel decoder, and the source decoder.
`
`
`
`
`
the case of multilevel data protocols. Trellis coding serves to merge the processes
of modulation and coding, and recent work [7] is aimed at merging the roles of
source coding, channel coding, and modulation. We consider an example of a
demodulator in the next section. An example of a channel decoder was treated
earlier: the majority-rule, or nearest-neighbor, decoder discussed in relation to
Fig. 1.4.
`
`1.2 Discrete Memoryless Channels
`
`An example of a discrete memoryless channel (DMC), the BSC, is given in Fig.
`1.4. This channel has two inputs and two outputs. In general, a DMC can have a
`finite number of inputs and outputs. In any case, all of the channels described in
`Section 1.1.4 involved a continuous-time variable. In this section we show how to
`derive a DMC from a continuous-time channel description. The latter is a physical
`channel description, whereas the former is an abstract version of the channel.
`
`1.2.1 Uncoded Baseband Communication
`
Consider the case of the transmitter, additive noise channel, and receiver shown
in Fig. 1.15. The modulation will be as shown in Fig. 1.6, namely, 1 → p(t) and
0 → −p(t), where p(t) is the rectangular pulse. The receiver is shown in Fig.
1.16; this is a matched filter, and it is optimum in the sense of having the smallest
error probability among all receivers [1]. A typical output is shown in Fig. 1.16,
together with a sampler that is synchronous with the end of a pulse. These samples
are quantized into two levels with a threshold at zero. If the sample is positive, a
binary 1 is declared to have been transmitted; otherwise, a binary 0 is declared.
The BSC is shown in Fig. 1.4. This channel represents a summary of
binary data transmission over the continuous-time channel shown in Fig. 1.15. The
BSC is completely described by p, the probability of error per binary digit sent
over the channel. Thus, to determine the BSC, we must find p.
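For the matched-filter receiver just described, p can be found in closed form for antipodal signaling in white Gaussian noise: p = Q(√(2E_b/N_0)), the standard result consistent with [1]. The sketch below (ours, for illustration; the E_b/N_0 value is arbitrary) checks the formula against a direct simulation of the quantized matched-filter samples:

```python
import math
import random

def q_func(x):
    # Gaussian tail probability Q(x), via the complementary error function.
    return 0.5 * math.erfc(x / math.sqrt(2))

def simulate_p(ebn0_db, trials, rng):
    """Crossover probability of the BSC induced by antipodal signaling
    (1 -> +1, 0 -> -1 matched-filter samples) with a zero threshold."""
    ebn0 = 10 ** (ebn0_db / 10)
    sigma = math.sqrt(1 / (2 * ebn0))  # noise std for unit-energy pulses
    errors = 0
    for _ in range(trials):
        bit = rng.choice((-1, 1))
        sample = bit + rng.gauss(0, sigma)  # matched-filter output sample
        if (sample > 0) != (bit > 0):
            errors += 1
    return errors / trials

rng = random.Random(3)
# At Eb/N0 = 4 dB, Q(sqrt(2 * Eb/N0)) is about 0.0125.
print(simulate_p(4.0, 200_000, rng), q_func(math.sqrt(2 * 10 ** 0.4)))
```

The simulated fraction of sign errors matches the Q-function value, which is exactly the crossover probability p that summarizes this continuous-time link as a BSC.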
`
`
`
`
`
`
`
FIGURE 1.15 Baseband system with no coding: a binary source drives a modulator with the rule 1 → p(t), 0 → −p(t).
`
`
`
`
THE FIRST BOOK DEVOTED COMPLETELY TO TCM!

Appropriate for students and professionals alike, this book provides a thorough
introduction to trellis-coded modulation (TCM), helping readers grasp
its theory as well as the techniques needed for its analysis. It offers both a
conceptual and practical perspective by applying TCM theory to real-world
problems and evaluating the results; examples include fading channels and
commercial modems.

Introduction to Trellis-Coded Modulation with Applications contains most
of the results of TCM research that have occurred since its invention. In
addition, numerical illustrations are included throughout to help describe
results from the application of TCM theory.
`
`
`