`
`
`
`
`COMMUNICATION
`
`SYSTEMS
`
`
`——
`
`SIMON HAYKIN
`McMaster University
`
JOHN WILEY & SONS, INC.
New York · Chichester · Brisbane · Toronto · Singapore
`
`
`
`
ACQUISITIONS EDITOR          Steven Elliot
MARKETING MANAGER            Susan Elbe
SENIOR PRODUCTION EDITOR     Richard Blander
DESIGNER                     David Levy
MANUFACTURING MANAGER        Andrea Price
ILLUSTRATION COORDINATOR     Jaime Perea
`
`This book was set in New Baskerville by CRWaldman Graphic Communications and
`printed and bound by Hamilton Printing Company. The cover was printed by Lehigh.
`
Recognizing the importance of preserving what has been written, it is a policy of John Wiley & Sons, Inc.
to have books of enduring value published in the United States printed on acid-free paper,
and we exert our best efforts to that end.

Copyright © 1978, 1983, 1994, by John Wiley & Sons, Inc.
`
`All rights reserved. Published simultaneously in Canada.
`
`Reproduction or translation of any part of
`this work beyond that permitted by Sections
`107 and 108 of the 1976 United States Copyright
`Act without the permission of the copyright
`owner is unlawful. Requests for permission
`or further information should be addressed to
`
the Permissions Department, John Wiley & Sons, Inc.
`
Library of Congress Cataloging in Publication Data:
Haykin, Simon, 1931-
    Communication systems / Simon Haykin. - 3rd ed.
        p.  cm.
    Includes bibliographical references and index.
    ISBN 0-471-57176-8
    1. Telecommunication.  2. Signal theory (Telecommunication)
    I. Title.
    TK5101.H37   1994
    621.382-dc20                                    93-47663
                                                    CIP
ISBN 0-471-57176-8
ISBN 0-471-XXXXx-X (pbk)
`
`
`
`
Digital Passband Transmission

8.1 INTRODUCTION
`
`In baseband pulse transmission, which we studied in the previous chapter, a data
`stream represented in the form of a discrete pulse-amplitude modulated (PAM)
signal is transmitted directly over a low-pass channel. An issue of particular con-
`cern in baseband pulse transmission is that of pulse shaping designed to bring
`the intersymbol interference (ISI) problem under control. In digital passband
`transmission, on the other hand, the incoming data stream is modulated onto a
`carrier (usually sinusoidal) with fixed frequency limits imposed by a band-pass
`channel of interest; digital passband transmission is studied in the present chap-
`ter. The major issue of concern here is the optimum design of the receiver so
`as to minimize the average probability of symbol error in the presence of noise.
`This does not mean, of course, that noise is of no concern in baseband pulse
`transmission, nor does it mean that ISI is of no concern in digital passband
`transmission; it merely points out the issues that are of high priority in these two
`different domains of data transmission.
`
`The communication channel used for passband data transmission may be a
`microwave radio link, a satellite channel, or the like. In any event, the modula-
`tion process making the transmission possible involves switching (keying) the
`
amplitude, frequency, or phase of a sinusoidal carrier in some fashion in ac-
cordance with the incoming data. Thus there are three basic signaling schemes
known as amplitude-shift keying (ASK), frequency-shift keying (FSK), and phase-shift
`keying (PSK), which may be viewed as special cases of amplitude modulation,
`frequency modulation, and phase modulation, respectively. A distinguishing fea-
`ture of FSK and PSK signals is that ideally they both have a constant envelope.
`This feature makes them impervious to amplitude nonlinearities, commonly en-
`countered in microwave radio links and satellite channels. It is for this reason
`that we find that, in practice, FSK and PSK signals are preferred to ASK signals
`for digital passband transmission over nonlinear channels.
`In this chapter we study digital passband transmission techniques with em-
`phasis on the following issues: (1) optimum design of the receiver in the sense that
`it will make fewer errors in the long run than any other receiver, (2) calculation
`of the average probability of symbol error of the receiver, and (3) spectral properties of
`the modulated signals. Two different cases are considered in the study: coherent
`receivers and noncoherent receivers. In a coherent receiver the receiver is phase locked
`to the transmitter, whereas in a noncoherent receiver there is no phase synchro-
`nization between the local oscillator used in the receiver for demodulation and
`the oscillator supplying the sinusoidal carrier in the transmitter for modulation.
`
8.2 PASSBAND TRANSMISSION MODEL
`
`We may model a digital passband transmission system as shown in Fig. 8.1. First,
`there is assumed to exist a message source that emits one symbol every T seconds,
with the symbols belonging to an alphabet of M symbols, which we denote by
m_1, m_2, ..., m_M. Consider, for example, the remote connection of two digital
`computers, with one computer acting as an information source that calculates
digital outputs based on observations and inputs fed into it. The resulting com-
puter output is expressed as a sequence of 0s and 1s, which are transmitted to a
`second computer. In this example, the alphabet consists simply of the two binary
`symbols 0 and 1. A second example is that of a quaternary PCM encoder with
`an alphabet consisting of four possible symbols: 00, 01, 10, and 11. In any event,
the a priori probabilities P(m_1), P(m_2), ..., P(m_M) specify the message source
output. In the absence of prior information to the contrary, we assume that the M
`symbols of the alphabet are equally likely. Then we may write
`
p_i = P(m_i) = \frac{1}{M} \qquad \text{for all } i    (8.1)
`
The M-ary output of the message source is presented to a signal transmission
encoder, producing a corresponding vector s_i made up of N real elements, one
such set for each of the M symbols of the source alphabet; the dimension N is
less than or equal to M. With the vector s_i as input, the modulator then constructs
a distinct signal s_i(t) of duration T seconds as the representation of the symbol
m_i generated by the message source. The signal s_i(t) is necessarily of finite energy;
that is,

E_i = \int_0^T s_i^2(t)\,dt, \qquad i = 1, 2, \ldots, M    (8.2)
`
[Figure 8.1 Model of a digital passband transmission system: a message source feeds a signal transmission encoder and modulator (the transmitter); the modulated signal passes over the communication channel to a detector and signal transmission decoder (the receiver).]
`
Note that s_i(t) is real valued. One such signal is transmitted every T seconds. The
particular signal chosen for transmission depends in some fashion on the incom-
ing message and possibly on the signals transmitted in preceding time slots. With
a sinusoidal carrier, the feature that is used by the modulator to distinguish one
signal from another is a step change in the amplitude, frequency, or phase of the
carrier. (Sometimes, a hybrid form of modulation is used, combining changes
in both amplitude and phase or amplitude and frequency.) The result of the
modulation process is amplitude-shift keying (ASK), frequency-shift keying
(FSK), or phase-shift keying (PSK), respectively, as illustrated in Fig. 8.2 for the
special case of a source of binary data for which the symbol duration T is the same
as the bit duration T_b. It is of interest to note that although in general it is not
easy to distinguish between frequency-modulated and phase-modulated signals
(on an oscilloscope, say), this is not so in the case of FSK and PSK signals; for
example, compare the waveforms in Figs. 8.2b and 8.2c.
`Returning to the model of Fig. 8.1, the bandpass communication channel,
`coupling the transmitter to the receiver, is assumed to have two characteristics:
1. The channel is linear, with a bandwidth that is wide enough to accommodate
`the transmission of the modulated signal si(t) with negligible or no
`distortion.
`2. The transmitted signal si(t) is perturbed by an additive, zero-mean, stationary,
`white, Gaussian noise process, a sample function of which is denoted by w(t).
`The reasons for this assumption are that it makes calculations tractable, and
`also it is a reasonable description of the type of noise present in many prac-
`tical communication systems.
`
[Figure 8.2 The three basic signaling schemes for binary data: amplitude-shift keying, phase-shift keying, and frequency-shift keying, shown for the binary data sequence 0 1 1 0 1 0 0 1.]

Accordingly, the received signal x(t) may be expressed as
x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M    (8.3)
`
We may thus model the channel as in Fig. 8.3.
The receiver has the task of observing the received signal x(t) for a duration
of T seconds and making a best estimate of the transmitted signal s_i(t) or, equiv-
alently, the symbol m_i. This task is accomplished in two stages. The first stage is
a detector that operates on the received signal x(t) to produce an observation
vector x. By using the observation vector x, prior knowledge of the modulation
format used in the transmitter, and the a priori probabilities P(m_i), the signal
transmission decoder constituting the second stage of the receiver produces an
estimate m̂. However, owing to the presence of additive noise at the receiver
input, this decision-making process is statistical in nature, with the result that the
receiver will make occasional errors. The requirement is to design the receiver
so as to minimize the average probability of symbol error defined as
P_e = \sum_{i=1}^{M} P(\hat{m} \ne m_i)\,P(m_i)    (8.4)
`
where m_i is the transmitted symbol, m̂ is the estimate produced by the decision
device, and P(m̂ ≠ m_i) is the conditional error probability given that the ith
`symbol was sent. The resulting receiver is said to be optimum in the minimum
`probability of error sense.
It is customary to assume that the receiver is time synchronized with the trans-
`mitter, which means that the receiver knows the instants of time when the mod-
`ulation changes state. Sometimes, it is also assumed that the receiver is phase
`locked to the transmitter. In such a case, we speak of coherent detection, and we
`refer to the receiver as a coherent receiver. On the other hand, there may be no
`phase synchronism between transmitter and receiver. In this second case, we
`speak of noncoherent detection, and we refer to the receiver as a noncoherent receiver.
`In this chapter, we assume the existence of time synchronism; however, we shall
`distinguish between coherent and noncoherent detection.
`
[Figure 8.3 Model of additive white Gaussian noise channel: the transmitted signal s_i(t) and the white noise w(t) sum to give the received signal x(t).]
`
`The model described above provides a basis for the design of the optimum
receiver, for which we will use geometric representation of the known set of trans-
mitted signals, {s_i(t)}. This method provides a great deal of insight, with consid-
`erable simplification of detail.
`
8.3 GRAM-SCHMIDT ORTHOGONALIZATION PROCEDURE
`
`According to the model of Fig. 8.1, the task of transforming an incoming message
m_i, i = 1, 2, ..., M, into a modulated wave s_i(t) may be divided into separate
discrete-time and continuous-time operations. The justification for this separa-
tion lies in the Gram-Schmidt orthogonalization procedure, which permits the rep-
resentation of any set of M energy signals, {s_i(t)}, as linear combinations of
N orthonormal basis functions, where N ≤ M. That is to say, we may represent the
given set of real-valued energy signals s_1(t), s_2(t), ..., s_M(t), each of duration
T seconds, in the form
`
s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M    (8.5)

where the coefficients of the expansion are defined by

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad i = 1, 2, \ldots, M, \quad j = 1, 2, \ldots, N    (8.6)
`
The real-valued basis functions φ_1(t), φ_2(t), ..., φ_N(t) are orthonormal, by which
we mean

\int_0^T \phi_i(t)\,\phi_j(t)\,dt = \begin{cases} 1 & \text{if } i = j \\ 0 & \text{if } i \ne j \end{cases}    (8.7)
`
`The first condition of Eq. (8.7) states that each basis function is normalized to
have unit energy. The second condition states that the basis functions φ_1(t),
φ_2(t), ..., φ_N(t) are orthogonal with respect to each other over the interval
0 ≤ t ≤ T.
The coefficient s_{ij} may be viewed as the jth element of the N-dimensional
vector s_i in Fig. 8.1. Given the N elements of the vector s_i, that is, s_{i1}, s_{i2}, ..., s_{iN},
operating as input, we may use the scheme shown in Fig. 8.4a to generate the
signal s_i(t), which follows directly from Eq. (8.5). It consists of a bank of N
multipliers, with each multiplier supplied with its own basis function, followed
by a summer. This scheme may be viewed as performing a similar role to that of
the second stage or modulator in the transmitter of Fig. 8.1. Conversely, given
the signals s_i(t), i = 1, 2, ..., M, operating as input, we may use the scheme
shown in Fig. 8.4b to calculate the coefficients s_{i1}, s_{i2}, ..., s_{iN}, which follows directly
from Eq. (8.6). This second scheme consists of a bank of N product-integrators or
correlators with a common input, and with each one supplied with its own basis
function.
`
`
`
`
`
`(b)
`
`Figure 8.4 (a) Scheme for generating
`the signal s,-(t). (b) Scheme for generat-
`ing the set of coefficients {5,}.
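The synthesizer/analyzer pair of Fig. 8.4 can be checked numerically. The following minimal sketch is illustrative only: the rectangular basis functions, the symbol duration T, and the coefficient values are assumptions made for the example, and the integrals of Eqs. (8.5) and (8.6) are approximated by sums over a sample grid.

```python
# A minimal numerical sketch of the synthesizer/analyzer pair of Fig. 8.4.
# The two rectangular basis functions, T, and the coefficients are hypothetical
# choices for illustration; any orthonormal set would serve equally well.
import numpy as np

T = 1.0                      # assumed symbol duration
n = 1000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

# Hypothetical orthonormal basis: phi_1 on [0, T/2), phi_2 on [T/2, T)
phi1 = np.where(t < T / 2, np.sqrt(2.0 / T), 0.0)
phi2 = np.where(t >= T / 2, np.sqrt(2.0 / T), 0.0)
basis = [phi1, phi2]

# Synthesis (Fig. 8.4a): build s_i(t) from its coefficients, as in Eq. (8.5)
s_coeffs = np.array([3.0, -2.0])
s_t = sum(c * phi for c, phi in zip(s_coeffs, basis))

# Analysis (Fig. 8.4b): bank of correlators, s_ij = integral of s_i(t) phi_j(t) dt, Eq. (8.6)
recovered = np.array([np.sum(s_t * phi) * dt for phi in basis])
print(recovered)             # approximately [ 3. -2.]
```

As expected from the orthonormality condition (8.7), the bank of correlators returns the coefficient vector used by the synthesizer.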
`
To prove the Gram-Schmidt orthogonalization procedure, we may proceed
by defining the first basis function as

\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}}    (8.8)
`
where E_1 is the energy of the signal s_1(t). Then, clearly, we have

s_1(t) = \sqrt{E_1}\,\phi_1(t) = s_{11}\,\phi_1(t)    (8.9)

where the coefficient s_{11} = \sqrt{E_1} and \phi_1(t) has unit energy, as required.
Next, using the signal s_2(t), we define the coefficient s_{21} as

s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt    (8.10)

We may thus introduce a new intermediate function

g_2(t) = s_2(t) - s_{21}\,\phi_1(t)    (8.11)
`
which is orthogonal to \phi_1(t) over the interval 0 \le t \le T. Now, we are ready to
define the second basis function as

\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^T g_2^2(t)\,dt}}    (8.12)

Substituting Eq. (8.11) in (8.12) and simplifying, we get the desired result

\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}}    (8.13)
`
where E_2 is the energy of the signal s_2(t). It is clear from Eq. (8.12) that

\int_0^T \phi_2^2(t)\,dt = 1

and from Eq. (8.13) that

\int_0^T \phi_1(t)\,\phi_2(t)\,dt = 0

That is to say, \phi_1(t) and \phi_2(t) form an orthonormal set, as required.
Continuing in this fashion, we may in general define

g_i(t) = s_i(t) - \sum_{j=1}^{i-1} s_{ij}\,\phi_j(t)    (8.14)

where the coefficients s_{ij} are themselves defined by

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad j = 1, 2, \ldots, i-1    (8.15)

Given the g_i(t), we may then define the set of basis functions

\phi_i(t) = \frac{g_i(t)}{\sqrt{\int_0^T g_i^2(t)\,dt}}, \qquad i = 1, 2, \ldots, N    (8.16)
`0
`
`which form an orthonormal set. The dimension N is less than or equal to the
`number of given signals, M, depending on one of two possibilities:
`
- The signals s_1(t), s_2(t), ..., s_M(t) form a linearly independent set, in which case
  N = M.
- The signals s_1(t), s_2(t), ..., s_M(t) are not linearly independent, in which case
  N < M, and the intermediate function g_i(t) is zero for i > N.
`
`Note that the conventional Fourier series expansion of a periodic signal is
`an example of a particular expansion of this type. Also, the representation of a
`band-limited signal in terms of its samples taken at the Nyquist rate may be
viewed as another example of a particular expansion of this type. There are,
`however, two important distinctions that should be made:
`
1. The form of the basis functions φ_1(t), φ_2(t), ..., φ_N(t) has not been speci-
fied. That is to say, unlike the Fourier series expansion of a periodic signal
or the sampled representation of a band-limited signal, we have not re-
stricted the Gram-Schmidt orthogonalization procedure to be in terms of
sinusoidal functions or sinc functions of time.
`
2. The expansion of the signal s_i(t) in terms of a finite number of terms is not
`an approximation wherein only the first N terms are significant but rather
`an exact expression where N and only N terms are significant.
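The recursion of Eqs. (8.14) through (8.16) translates directly into code. The sketch below works on sampled versions of the signals, approximating every integral by a sum over the sample grid; the function name, the tolerance used to decide when g_i(t) is effectively zero, and the sampling details are assumptions made for illustration.

```python
# A sketch of the Gram-Schmidt procedure of Eqs. (8.14)-(8.16) for sampled
# energy signals. Integrals are approximated by sums over the sample grid;
# "tol" decides when the intermediate function g_i(t) is effectively zero.
import numpy as np

def gram_schmidt(signals, dt, tol=1e-9):
    """Return orthonormal basis waveforms for the sampled signals s_1(t), ..., s_M(t)."""
    basis = []
    for s in signals:
        # g_i(t) = s_i(t) - sum_j s_ij phi_j(t), with s_ij = integral of s_i phi_j dt
        g = np.asarray(s, dtype=float).copy()
        for phi in basis:
            s_ij = np.sum(g * 0 + np.asarray(s, dtype=float) * phi) * dt
            g -= s_ij * phi
        energy = np.sum(g * g) * dt
        if energy > tol:                       # g_i(t) = 0 when s_i is linearly dependent
            basis.append(g / np.sqrt(energy))  # phi_i(t), Eq. (8.16)
    return basis
```

The number of basis waveforms returned, N = len(basis), is at most M, in agreement with the two possibilities listed above.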
`
EXAMPLE 1

Consider the signals s_1(t), s_2(t), s_3(t), and s_4(t) shown in Fig. 8.5a. We wish to
use the Gram-Schmidt orthogonalization procedure to find an orthonormal
`basis for this set of signals.
`
Step 1  We note that the energy of signal s_1(t) is

E_1 = \int_0^T s_1^2(t)\,dt = \int_0^{T/3} (1)^2\,dt = \frac{T}{3}

The first basis function \phi_1(t) is therefore [see Eq. (8.8)]

\phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} \sqrt{3/T}, & 0 \le t \le T/3 \\ 0, & \text{elsewhere} \end{cases}
`
`
`
[Figure 8.5 (a) The set of signals s_1(t), s_2(t), s_3(t), s_4(t) to be orthonormalized. (b) The resulting set of orthonormal functions \phi_1(t), \phi_2(t), \phi_3(t).]
`
Step 2  From Eq. (8.10), we find that

s_{21} = \int_0^T s_2(t)\,\phi_1(t)\,dt = \int_0^{T/3} (1)\sqrt{\frac{3}{T}}\,dt = \sqrt{\frac{T}{3}}

The energy of signal s_2(t) is

E_2 = \int_0^T s_2^2(t)\,dt = \int_0^{2T/3} (1)^2\,dt = \frac{2T}{3}

The second basis function \phi_2(t) is therefore [see Eq. (8.13)]

\phi_2(t) = \frac{s_2(t) - s_{21}\,\phi_1(t)}{\sqrt{E_2 - s_{21}^2}} = \begin{cases} \sqrt{3/T}, & T/3 \le t \le 2T/3 \\ 0, & \text{elsewhere} \end{cases}
`
`
`
Step 3  Using Eq. (8.15) with i = 3, the coefficient s_{31} equals

s_{31} = \int_0^T s_3(t)\,\phi_1(t)\,dt = 0

and the coefficient s_{32} equals

s_{32} = \int_0^T s_3(t)\,\phi_2(t)\,dt = \int_{T/3}^{2T/3} (1)\sqrt{\frac{3}{T}}\,dt = \sqrt{\frac{T}{3}}
`
The corresponding value of the intermediate function g_i(t), with i = 3, is there-
fore [see Eq. (8.14)]

g_3(t) = s_3(t) - s_{31}\,\phi_1(t) - s_{32}\,\phi_2(t) = \begin{cases} 1, & 2T/3 \le t \le T \\ 0, & \text{elsewhere} \end{cases}

Using Eq. (8.16), we find that the third basis function \phi_3(t) is

\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^T g_3^2(t)\,dt}} = \begin{cases} \sqrt{3/T}, & 2T/3 \le t \le T \\ 0, & \text{elsewhere} \end{cases}
`
Finally, using Eq. (8.14) with i = 4, we find that g_4(t) = 0 and the orthogonal-
ization process is completed.
The three basis functions \phi_1(t), \phi_2(t), and \phi_3(t) form an orthonormal set,
as shown in Fig. 8.5b. In this example, we thus have M = 4 and N = 3, which
means that the four signals s_1(t), s_2(t), s_3(t), and s_4(t) described in Fig. 8.5a do
not form a linearly independent set. This is readily confirmed by noting that
s_4(t) = s_1(t) + s_3(t). Moreover, we note that any of these four signals can be
expressed as a linear combination of the three basis functions, which is the es-
sence of the Gram-Schmidt orthogonalization procedure.
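The result of Example 1 can be reproduced numerically with the gram_schmidt sketch given earlier. The grid size and the value T = 1 below are arbitrary choices for the illustration; the code simply confirms that only three orthonormal functions survive, each of height \sqrt{3/T}.

```python
# Applying the gram_schmidt sketch above to the four rectangular pulses of
# Fig. 8.5a (unit amplitude on one- or two-thirds of the interval).
import numpy as np

T = 1.0
n = 3000
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

s1 = np.where(t < T / 3, 1.0, 0.0)
s2 = np.where(t < 2 * T / 3, 1.0, 0.0)
s3 = np.where(t >= T / 3, 1.0, 0.0)
s4 = np.ones_like(t)                     # s4(t) = s1(t) + s3(t)

basis = gram_schmidt([s1, s2, s3, s4], dt)
print(len(basis))                        # 3, i.e., N = 3 while M = 4

# Each basis function equals sqrt(3/T) on one third of the interval:
print([round(float(b.max()), 3) for b in basis])   # about [1.732, 1.732, 1.732]
```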
`
8.4 GEOMETRIC INTERPRETATION OF SIGNALS
`
Once we have adopted a convenient set of orthonormal basis functions
{φ_j(t) | j = 1, 2, ..., N}, then each signal in the set {s_i(t) | i = 1, 2, ..., M} may
be expanded as in Eq. (8.5), reproduced here for convenience:
`
s_i(t) = \sum_{j=1}^{N} s_{ij}\,\phi_j(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M    (8.17)
`
The coefficients of the expansion s_{ij} are themselves defined by Eq. (8.6), also
reproduced here for convenience:

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt, \qquad i = 1, 2, \ldots, M, \quad j = 1, 2, \ldots, N    (8.18)
`
Accordingly, we may state that each signal in the set {s_i(t)} is completely deter-
mined by the vector of its coefficients

\mathbf{s}_i = \begin{bmatrix} s_{i1} \\ s_{i2} \\ \vdots \\ s_{iN} \end{bmatrix}, \qquad i = 1, 2, \ldots, M    (8.19)
`
The vector s_i is called the signal vector. Furthermore, if we conceptually extend
our conventional notion of two- and three-dimensional Euclidean spaces to
an N-dimensional Euclidean space, we may visualize the set of signal vectors
{s_i | i = 1, 2, ..., M} as defining a corresponding set of M points in an N-dimen-
sional Euclidean space, with N mutually perpendicular axes labeled φ_1, φ_2, ...,
φ_N. This N-dimensional Euclidean space is called the signal space.
The idea of visualizing a set of energy signals geometrically, as described
above, is of profound importance. It provides the mathematical basis for the
geometric representation of energy signals, thereby paving the way for the noise
analysis of digital passband transmission systems in a conceptually satisfying man-
ner. This form of representation is illustrated in Fig. 8.6 for the case of a two-
dimensional signal space with three signals, that is, N = 2 and M = 3.
In an N-dimensional Euclidean space, we may define lengths of vectors and
angles between vectors. It is customary to denote the length (also called the
absolute value or norm) of a signal vector s_i by the symbol ||s_i||. The squared length
of any signal vector s_i is defined to be the inner product or dot product of s_i with
itself, as shown by

\|\mathbf{s}_i\|^2 = \mathbf{s}_i^T \mathbf{s}_i = \sum_{j=1}^{N} s_{ij}^2    (8.20)

where s_{ij} is the jth element of s_i, and the superscript T denotes matrix trans-
position.
`
[Figure 8.6 Illustrating the geometric representation of signals for the case when N = 2 and M = 3.]

Let θ_{ij} denote the angle between the vectors s_i and s_j. The cosine of this angle is defined by

\cos\theta_{ij} = \frac{\mathbf{s}_i^T \mathbf{s}_j}{\|\mathbf{s}_i\|\,\|\mathbf{s}_j\|}    (8.21)
The two vectors s_i and s_j are thus orthogonal or perpendicular to each other if their
inner product is zero, in which case θ_{ij} = 90 degrees.
There is an interesting relationship between the energy content of a signal
and its representation as a vector. By definition, the energy of a signal s_i(t) of
duration T seconds is equal to

E_i = \int_0^T s_i^2(t)\,dt    (8.22)
`
Therefore, substituting Eq. (8.17) in (8.22), we get

E_i = \int_0^T \left[\sum_{j=1}^{N} s_{ij}\,\phi_j(t)\right]\left[\sum_{k=1}^{N} s_{ik}\,\phi_k(t)\right] dt

Interchanging the order of summation and integration, and rearranging terms:

E_i = \sum_{j=1}^{N}\sum_{k=1}^{N} s_{ij}\,s_{ik} \int_0^T \phi_j(t)\,\phi_k(t)\,dt    (8.23)
`
`
But, since the φ_j(t) form an orthonormal set, then, in accordance with the two
conditions of Eq. (8.7), we find that Eq. (8.23) reduces simply to

E_i = \sum_{j=1}^{N} s_{ij}^2    (8.24)

Thus Eqs. (8.20) and (8.24) show that the energy of a signal s_i(t) is equal to the
squared length of the signal vector s_i representing it.
In the case of a pair of signals s_i(t) and s_k(t), represented by the signal vectors
s_i and s_k, respectively, we may similarly show that

\|\mathbf{s}_i - \mathbf{s}_k\|^2 = \sum_{j=1}^{N} (s_{ij} - s_{kj})^2 = \int_0^T [s_i(t) - s_k(t)]^2\,dt    (8.25)

where \|\mathbf{s}_i - \mathbf{s}_k\| is the Euclidean distance d_{ik} between the points represented by the
signal vectors s_i and s_k.
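The vector relations of Eqs. (8.20) through (8.25) are easy to exercise numerically. In the sketch below the three signal vectors are arbitrary points in a two-dimensional signal space (N = 2, M = 3), chosen only to illustrate the computations of energy, angle, and Euclidean distance.

```python
# A small sketch of Eqs. (8.20)-(8.25): lengths, angles, and Euclidean
# distances computed directly from (made-up) signal vectors.
import numpy as np

s = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])

for i, si in enumerate(s, start=1):
    print(f"E_{i} = ||s_{i}||^2 =", si @ si)       # signal energy, Eqs. (8.20)/(8.24)

# Angle between s_1 and s_2, Eq. (8.21): orthogonal vectors give cosine 0
cos_12 = (s[0] @ s[1]) / (np.linalg.norm(s[0]) * np.linalg.norm(s[1]))
print("cos(theta_12) =", cos_12)

# Euclidean distance between two message points, Eq. (8.25)
d_13 = np.linalg.norm(s[0] - s[2])
print("d_13 =", d_13)
```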
`
8.5 RESPONSE OF BANK OF CORRELATORS TO NOISY INPUT
`
Suppose that the input to the bank of N product integrators or correlators in
Fig. 8.4b is not the transmitted signal s_i(t) but rather the received signal x(t)
defined in accordance with the idealized AWGN channel of Fig. 8.3. That is
to say,

x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M    (8.26)
`
where w(t) is a sample function of a white Gaussian noise process W(t) of zero
mean and power spectral density N_0/2. Correspondingly, we find that the output
of correlator j, say, is the sample value of a random variable X_j, as shown by

x_j = \int_0^T x(t)\,\phi_j(t)\,dt = s_{ij} + w_j, \qquad j = 1, 2, \ldots, N    (8.27)

The first component, s_{ij}, is a deterministic quantity contributed by the transmit-
ted signal s_i(t); it is defined by

s_{ij} = \int_0^T s_i(t)\,\phi_j(t)\,dt    (8.28)

The second component, w_j, is a sample value of a random variable W_j that arises
because of the channel noise w(t); it is defined by

w_j = \int_0^T w(t)\,\phi_j(t)\,dt    (8.29)
`
`
`
`
Consider next a new random process X'(t) whose sample function x'(t) is
related to the received signal x(t) as follows:

x'(t) = x(t) - \sum_{j=1}^{N} x_j\,\phi_j(t)    (8.30)

Substituting Eqs. (8.26) and (8.27) in (8.30), and then using the expansion of
Eq. (8.17), we get

x'(t) = s_i(t) + w(t) - \sum_{j=1}^{N} (s_{ij} + w_j)\,\phi_j(t) = w(t) - \sum_{j=1}^{N} w_j\,\phi_j(t) = w'(t)    (8.31)

The sample function x'(t) therefore depends only on the noise w(t) at the front
end of the receiver, but not at all on the transmitted signal s_i(t). On the basis of
Eqs. (8.30) and (8.31), we may thus express the received signal as

x(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + x'(t) = \sum_{j=1}^{N} x_j\,\phi_j(t) + w'(t)    (8.32)
`
Accordingly, we may view w'(t) as a sort of remainder term that must be included
on the right in order to preserve the equality in Eq. (8.32). It is informative to
contrast the expansion of the received signal x(t) given in Eq. (8.32) with the
corresponding expansion of the transmitted signal s_i(t) given in Eq. (8.17): The
`latter expansion is entirely deterministic, whereas that of Eq. (8.32) is entirely
`random (stochastic).
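The decomposition of Eqs. (8.30) through (8.32) can be visualized with a short numerical sketch. The basis functions, the transmitted coefficients, and the noise model below are illustrative assumptions; the point of the sketch is only that the remainder x'(t) carries no component along any basis function.

```python
# A sketch of Eqs. (8.30)-(8.32): the received waveform is split into its
# projection onto the basis, sum_j x_j phi_j(t), plus a remainder x'(t) that
# is orthogonal to every phi_j(t). All waveforms here are made up.
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 600
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n

phi = np.array([np.where(t < T / 2, np.sqrt(2 / T), 0.0),   # hypothetical basis
                np.where(t >= T / 2, np.sqrt(2 / T), 0.0)])
s_t = 1.0 * phi[0] - 0.5 * phi[1]                           # transmitted s_i(t)
w_t = rng.normal(0.0, 1.0, size=n)                          # illustrative noise samples
x_t = s_t + w_t                                             # received x(t)

x_coeffs = phi @ x_t * dt                # correlator outputs x_j, Eq. (8.27)
projection = x_coeffs @ phi              # sum_j x_j phi_j(t)
remainder = x_t - projection             # x'(t) = w'(t), Eq. (8.31)

# The remainder has no component along any basis function:
print(phi @ remainder * dt)              # approximately [0, 0]
```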
`
`Statistical Characterization of the Correlator Outputs
`
We now wish to develop a statistical characterization of the set of N correlator
outputs. Let X(t) denote the random process a sample function of which
is represented by the received signal x(t). Correspondingly, let X_j denote the
random variable whose sample value is represented by the correlator output
x_j, j = 1, 2, ..., N. According to the AWGN model of Fig. 8.3, the random
process X(t) is a Gaussian process. It follows therefore that X_j is a Gaussian random
variable for all j (see Property 1 of a Gaussian process, Section 4.12). Hence, X_j
is characterized completely by its mean and variance, which are determined next.
Let W_j denote the random variable represented by the sample value w_j pro-
duced by the jth correlator in response to the white Gaussian noise component
w(t). The random variable W_j has zero mean, because the noise process W(t)
represented by w(t) in the AWGN model of Fig. 8.3 has zero mean by definition.
Consequently, the mean of X_j depends only on s_{ij}, as shown by

\mu_{X_j} = E[X_j] = E[s_{ij} + W_j] = s_{ij} + E[W_j] = s_{ij}    (8.33)

To find the variance of X_j, we note that

\sigma_{X_j}^2 = \mathrm{var}[X_j] = E[(X_j - s_{ij})^2] = E[W_j^2]    (8.34)
According to Eq. (8.29), the random variable W_j is defined by

W_j = \int_0^T W(t)\,\phi_j(t)\,dt    (8.35)

We may therefore expand Eq. (8.34) as follows:

\sigma_{X_j}^2 = E\left[\int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_j(u)\,du\right]
= E\left[\int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,W(t)\,W(u)\,dt\,du\right]    (8.36)
`
Interchanging the order of integration and expectation:

\sigma_{X_j}^2 = \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,E[W(t)W(u)]\,dt\,du
= \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_j(u)\,R_W(t,u)\,dt\,du

where R_W(t,u) is the autocorrelation function of the noise process W(t). Since
this noise is stationary, R_W(t,u) depends only on the time difference t - u.
Furthermore, since the noise W(t) is white with a constant power spectral density
N_0/2, we may express R_W(t,u) as follows [see Eq. (4.150)]:

R_W(t,u) = \frac{N_0}{2}\,\delta(t - u)    (8.37)
`
Substituting Eq. (8.37) in the preceding expression and using the sifting property of the delta function, we get

\sigma_{X_j}^2 = \frac{N_0}{2} \int_0^T \phi_j^2(t)\,dt    (8.38)
Since the φ_j(t) have unit energy, by definition, we finally get the simple result

\sigma_{X_j}^2 = \frac{N_0}{2} \qquad \text{for all } j    (8.39)

This important result shows that all the correlator outputs denoted by X_j with
j = 1, 2, ..., N, have a variance equal to the power spectral density N_0/2 of the
noise process W(t).
Moreover, since the φ_j(t) form an orthogonal set, we find that the X_j are
mutually uncorrelated, as shown by

\mathrm{cov}[X_j X_k] = E[(X_j - \mu_{X_j})(X_k - \mu_{X_k})]
= E[(X_j - s_{ij})(X_k - s_{ik})]
= E[W_j W_k]
= E\left[\int_0^T W(t)\,\phi_j(t)\,dt \int_0^T W(u)\,\phi_k(u)\,du\right]
= \int_0^T\!\!\int_0^T \phi_j(t)\,\phi_k(u)\,R_W(t,u)\,dt\,du
= \frac{N_0}{2}\int_0^T\!\!\int_0^T \phi_j(t)\,\phi_k(u)\,\delta(t - u)\,dt\,du
= \frac{N_0}{2}\int_0^T \phi_j(t)\,\phi_k(t)\,dt
= 0, \qquad j \ne k    (8.40)
`
Since the X_j are Gaussian random variables, Eq. (8.40) implies that they are also
`statistically independent (see Property 4 of a Gaussian Process, Section 4.12).
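These statistics are easy to check by simulation. The sketch below is illustrative only: the basis, the signal vector, and the value of N_0 are assumptions, and sampled white noise of two-sided power spectral density N_0/2 is approximated by independent Gaussian samples of variance N_0/(2 dt), so that each correlator output comes out with mean s_{ij}, variance N_0/2, and essentially zero covariance with the other output.

```python
# A Monte Carlo sketch of the correlator-output statistics: mean s_ij
# (Eq. 8.33), variance N_0/2, and zero covariance between outputs (Eq. 8.40).
import numpy as np

rng = np.random.default_rng(0)
T, n = 1.0, 200
t = np.linspace(0.0, T, n, endpoint=False)
dt = T / n
N0 = 2.0

# Hypothetical orthonormal basis (two rectangular pulses) and signal vector
phi = np.array([np.where(t < T / 2, np.sqrt(2 / T), 0.0),
                np.where(t >= T / 2, np.sqrt(2 / T), 0.0)])
s_i = np.array([1.0, -0.5])
s_t = s_i @ phi                                    # s_i(t) = sum_j s_ij phi_j(t)

trials = 50_000
w = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=(trials, n))   # white-noise samples
X = (s_t + w) @ phi.T * dt                                      # correlator outputs

print(X.mean(axis=0))       # approx. [ 1.0, -0.5 ]          (mean s_ij)
print(X.var(axis=0))        # approx. [ 1.0,  1.0 ] = N_0/2  (variance)
print(np.cov(X.T)[0, 1])    # approx. 0                      (uncorrelated outputs)
```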
Define the vector of N random variables

\mathbf{X} = \begin{bmatrix} X_1 \\ X_2 \\ \vdots \\ X_N \end{bmatrix}    (8.41)

whose elements are independent Gaussian random variables with mean values
equal to s_{ij} and variances equal to N_0/2. Since the elements of the vector X are
statistically independent, we may express the conditional probability density
function of the vector X, given that the signal s_i(t) or correspondingly the symbol
m_i was transmitted, as the product of the conditional probability density functions
of its individual elements, as shown by
`
f_{\mathbf{X}}(\mathbf{x}\,|\,m_i) = \prod_{j=1}^{N} f_{X_j}(x_j\,|\,m_i), \qquad i = 1, 2, \ldots, M    (8.42)
`
where the vector x and scalar x_j are sample values of the random vector X and
random variable X_j, respectively. The vector x is called the observation vector; cor-
respondingly, x_j is called an observable element. The conditional probability density
functions f_X(x|m_i), for each transmitted message m_i, i = 1, 2, ..., M, are called
likelihood functions. These likelihood functions, which are in fact the channel
characterization, are also called channel transition probabilities. Any channel whose
likelihood functions satisfy Eq. (8.42) is called a memoryless channel.
Since each X_j is a Gaussian random variable with mean s_{ij} and variance N_0/2,
we have

f_{X_j}(x_j\,|\,m_i) = \frac{1}{\sqrt{\pi N_0}} \exp\left[-\frac{1}{N_0}(x_j - s_{ij})^2\right], \qquad j = 1, 2, \ldots, N, \quad i = 1, 2, \ldots, M    (8.43)
Therefore, substituting Eq. (8.43) in (8.42), we find that the likelihood functions
of an AWGN channel are defined by

f_{\mathbf{X}}(\mathbf{x}\,|\,m_i) = (\pi N_0)^{-N/2} \exp\left[-\frac{1}{N_0}\sum_{j=1}^{N}(x_j - s_{ij})^2\right], \qquad i = 1, 2, \ldots, M    (8.44)
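Equation (8.44) is straightforward to evaluate in code. The sketch below uses made-up numbers for the observation vector and the candidate signal vectors; it works with the log-likelihood, whose only data-dependent part is a scaled negative squared Euclidean distance, so the largest likelihood always belongs to the candidate nearest to x.

```python
# A sketch evaluating the AWGN likelihood functions of Eq. (8.44).
# The observation vector and candidate signal vectors are illustrative.
import numpy as np

def awgn_log_likelihood(x, s, N0):
    """log f_X(x | m_i) for each candidate signal vector s[i], per Eq. (8.44)."""
    x = np.asarray(x, dtype=float)
    s = np.asarray(s, dtype=float)
    N = x.size
    sq_dist = np.sum((x - s) ** 2, axis=1)        # sum_j (x_j - s_ij)^2
    return -0.5 * N * np.log(np.pi * N0) - sq_dist / N0

x = [0.9, -0.2]                                         # observed correlator outputs
s = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]  # candidate signal vectors
ll = awgn_log_likelihood(x, s, N0=0.5)
print(ll)
print(int(np.argmax(ll)))   # 0: the most likely candidate is the nearest one
```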
`
It is now clear that the elements of the random vector X completely char-
acterize the summation term \sum_j X_j\phi_j(t), whose sample value is represented by the
first term in Eq. (8.32). However, there remains the noise term w'(t) in this
equation, which depends only on the original noise w(t). Since the noise process
W(t) represented by w(t) is Gaussian with zero mean, it follows that the noise
process W'(t) represented by the sample function w'(t) is also a zero-mean Gaus-
sian process. Finally, we note that any random variable W'(t_k), say, derived from
the noise process W'(t) by sampling it at time t_k, is in fact statistically inde-
pendent of the set of random variables {X_j}; that is to say (see Problem 8.4),

E[X_j\,W'(t_k)] = 0, \qquad j = 1, 2, \ldots, N, \quad 0 \le t_k \le T    (8.45)
`
Since any random variable based on the remainder noise process W'(t) is in-
dependent of the set of random variables {X_j} and the set of transmitted signals
{s_i(t)}, we conclude that such a random variable is irrelevant to the decision as to
which signal was transmitted. In other words, the correlator outputs determined
by the received signal x(t) are the only data that are useful for the decision-making
process.
`Page 22 of 109
`
`
`
Assume that, in each time slot of duration T seconds, one of the M possible
signals s_1(t), s_2(t), ..., s_M(t) is transmitted with equal probability, namely 1/M.
Then, for the AWGN channel model of Fig. 8.3, the received signal x(t) is de-
fined by Eq. (8.26), reproduced here for convenience of presentation:

x(t) = s_i(t) + w(t), \qquad 0 \le t \le T, \quad i = 1, 2, \ldots, M    (8.46)
where w(t) is a sample function of a white Gaussian noise process of zero mean
and power spectral density N_0/2. Given the received signal x(t), the receiver has
to make a "best estimate" of the transmitted signal s_i(t) or equivalently the
symbol m_i.
We note that when the transmitted signal s_i(t), i = 1, 2, ..., M, is applied
to a bank of correlators, with a common input and supplied with an appropriate
set of N orthonormal basis functions, the resulting correlator outputs define the
signal vector s_i [see Eq. (8.19)]. Since knowledge of the signal vector s_i is as good
as knowing the transmitted signal s_i(t) itself, and vice versa, we may represent
s_i(t) by a point in a Euclidean space of dimension N ≤ M. We refer to this point
as the transmitted signal point or message point. The set of message points corre-
sponding to the set of transmitted signals {s_i(t) | i = 1, 2, ..., M} is called a signal
constellation.
`
However, the representation of the received signal x(t) is complicated by
the presence of the additive noise w(t). We note that when the received signal
x(t) is applied to the bank of N correlators, the correlator outputs define the
observation vector x. The vector x differs from the signal vector s_i by the noise
vector w whose orientation is completely random. In particular, in light of Eq.
(8.27) we have

\mathbf{x} = \mathbf{s}_i + \mathbf{w}, \qquad i = 1, 2, \ldots, M    (8.47)
`
which may be viewed as the vector counterpart to Eq. (8.46). The noise vector
`w is completely characterized by the noise w(t); the converse of this statement,
`however, is not true. The noise vector w represents that portion of the noise w(t)
`that will interfere with the detection process; the remaining portion of this noise,
`denoted by w’(t), is tuned out by the bank of correlators.
Now, based on the observation vector x, we may represent the received signal
x(t) by a point in the same Euclidean space used to represent the transmitted
signal. We refer to this second point as the received signal point. The received
signal point wanders about the message point in a completely random fashion,
in the sense that it may lie anywhere inside a Gaussian-distributed "cloud" cen-
tered on the message point. This is illustrated in Fig. 8.7a for the case of a three-
dimensional signal space. For a particular realization of the noise vector w (i.e.,
a particular point inside the random cloud of Fig. 8.7a), the relationship between
the observation vector x and the signal vector s_i is as illustrated in Fig. 8.7b.
We are now ready to state the detection problem: Given the observation vector
x, we have to perform a mapping from x to an estimate m̂ of the transmitted symbol, m_i,
in a way that would minimize the probability of error in the decision-making process.
[Figure 8.7 Illustrating the effect of (a) noise perturbation on (b) the location of the received signal point: the observation vector x is the sum of the signal vector s_i (message point) and the noise vector w, giving the received signal point.]
`
`Assuming that all the M transmitted symbols are equally likely, the maximum-
`likelihood decoder, discussed next, provides the solution to this basic signal proc-
`essing problem.
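Before turning to the decoder itself, it may help to preview in code the decision rule this setting leads to. Because the exponent in Eq. (8.44) is a scaled negative squared Euclidean distance, choosing the most likely symbol among equally likely candidates amounts to choosing the message point closest to the observation vector. The constellation and observation below are made-up numbers for illustration only.

```python
# A sketch of the decision rule for equally likely symbols over an AWGN
# channel: by Eq. (8.44), picking the m_i with the largest likelihood is the
# same as picking the message point s_i nearest to the observation vector x.
import numpy as np

def ml_decide(x, constellation):
    """Index i of the message point nearest to the observation vector x."""
    x = np.asarray(x, dtype=float)
    constellation = np.asarray(constellation, dtype=float)
    distances = np.linalg.norm(constellation - x, axis=1)   # ||x - s_i||
    return int(np.argmin(distances))

constellation = [[1, 1], [-1, 1], [-1, -1], [1, -1]]   # hypothetical M = 4 points
x = [0.7, -1.3]                                        # noisy observation
print(ml_decide(x, constellation))                     # 3, i.e., message point [1, -1]
```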
`
`Maximum-Likelihood Decoder
`
Suppose that given the observation vector x, we make the decision m̂ = m_i. The
probability of error in this decision, which we denote by P_e(m_i, x), is sim