THE OXFORD SERIES IN ELECTRICAL AND COMPUTER ENGINEERING

SERIES EDITORS

Adel S. Sedra, Electrical Engineering
Michael R. Lightner, Computer Engineering

SERIES TITLES

Allen and Holberg, CMOS Analog Circuit Design
Bobrow, Elementary Linear Circuit Analysis, 2nd Ed.
Bobrow, Fundamentals of Electrical Engineering, 2nd Ed.
Campbell, The Science and Engineering of Microelectronic Fabrication
Chen, Linear System Theory and Design, 3rd Ed.
Chen, System and Signal Analysis, 2nd Ed.
Comer, Digital Logic and State Machine Design, 3rd Ed.
Cooper and McGillem, Probabilistic Methods of Signal and System Analysis, 3rd Ed.
Franco, Electric Circuits Fundamentals
Jones, Introduction to Optical Fiber Communication Systems
Krein, Elements of Power Electronics
Kuo, Digital Control Systems, 3rd Ed.
Lathi, Modern Digital and Analog Communication Systems, 3rd Ed.
McGillem and Cooper, Continuous and Discrete Signal and System Analysis, 3rd Ed.
Miner, Lines and Electromagnetic Fields for Engineers
Roberts, SPICE, 2nd Ed.
Santina, Stubberud, and Hostetter, Digital Control System Design, 2nd Ed.
Schwarz, Electromagnetics for Engineers
Schwarz and Oldham, Electrical Engineering: An Introduction, 2nd Ed.
Sedra and Smith, Microelectronic Circuits, 4th Ed.
Stefani, Savant, and Hostetter, Design of Feedback Control Systems, 3rd Ed.
Van Valkenburg, Analog Filter Design
Warner and Grung, Semiconductor Device Electronics
Wolovich, Automatic Control Systems
Yariv, Optical Electronics in Modern Communications, 5th Ed.
MODERN DIGITAL AND ANALOG COMMUNICATION SYSTEMS

Third Edition

B. P. LATHI

ANALYSIS AND TRANSMISSION OF SIGNALS

Although we have proved these results for a real g(t), Eqs. (3.79), (3.80), (3.81), and (3.84) are equally valid for a complex g(t).
The concept and relationships for signal power are parallel to those for signal energy. This is brought out in Table 3.3.

Signal Power Is Its Mean Square Value
A glance at Eq. (3.75) shows that the signal power is the time average, or mean, of its squared value. In other words, $P_g$ is the mean square value of g(t). We must remember, however, that this is a time mean, not a statistical mean (to be discussed in later chapters). Statistical means are denoted by overbars. Thus, the (statistical) mean square of a variable x is denoted by $\overline{x^2}$. To distinguish from this kind of mean, we shall use a wavy overline to denote a time average. Thus, the time mean square value of g(t) will be denoted by $\widetilde{g^2(t)}$. Time averages are conventionally denoted by pointed brackets, such as $\langle g^2(t) \rangle$. We shall, however, use the wavy overline notation because it is much easier to associate a mean with a bar on top than with brackets. Using this notation, we see that
$$P_g = \widetilde{g^2(t)} = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} g^2(t)\,dt \qquad (3.85a)$$
Note that the rms value of a signal is the square root of its mean square value. Therefore,

$$[g(t)]_{\mathrm{rms}} = \sqrt{P_g} \qquad (3.85b)$$
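As a quick numerical check of Eqs. (3.85a) and (3.85b), the following sketch (an illustration added here, not from the text; the sinusoid, sampling rate, and 1-second record are arbitrary choices) approximates the limiting time average by an average over a finite sampled record.

```python
import numpy as np

# Illustrative signal: g(t) = A cos(2*pi*f0*t), whose power is A**2 / 2 = 2.0.
A, f0, fs = 2.0, 50.0, 10_000.0     # amplitude, frequency (Hz), sampling rate (Hz); arbitrary
t = np.arange(10_000) / fs          # a 1-second record stands in for T -> infinity
g = A * np.cos(2 * np.pi * f0 * t)

# Eq. (3.85a): P_g is the time mean of g^2(t); a sample average approximates the limit.
Pg = np.mean(g ** 2)

# Eq. (3.85b): the rms value is the square root of the mean square value.
g_rms = np.sqrt(Pg)

print(Pg, A ** 2 / 2)               # ~2.0 and 2.0
print(g_rms, A / np.sqrt(2))        # ~1.414 and ~1.414
```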
From Eqs. (3.82), it is clear that for a real signal g(t), the time autocorrelation function $\mathcal{R}_g(\tau)$ is the time mean of $g(t)g(t+\tau)$. Thus,

$$\mathcal{R}_g(\tau) = \widetilde{g(t)\,g(t+\tau)} \qquad (3.86)$$
This discussion also explains why we have been using the term time autocorrelation rather than just autocorrelation. This is to distinguish clearly the present autocorrelation function (a time average) from the statistical autocorrelation function (a statistical average) to be introduced in a future chapter.
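A minimal numerical sketch of Eq. (3.86) follows (again an added illustration; the helper time_autocorrelation and the test sinusoid are arbitrary). It estimates $\mathcal{R}_g(\tau)$ from a finite record by averaging the lagged product over the available overlap.

```python
import numpy as np

def time_autocorrelation(g, max_lag):
    """Estimate R_g(tau), the time mean of g(t) g(t + tau), for lags 0..max_lag (in samples).

    For a finite record, the 1/T averaging of Eq. (3.86) becomes division by the
    number of overlapping samples at each lag.
    """
    N = len(g)
    return np.array([np.dot(g[: N - k], g[k:]) / (N - k) for k in range(max_lag + 1)])

# Check against a known case: for g(t) = A cos(w0 t), R_g(tau) = (A**2 / 2) cos(w0 tau).
fs, f0, A = 10_000.0, 50.0, 2.0
t = np.arange(10_000) / fs
g = A * np.cos(2 * np.pi * f0 * t)

R = time_autocorrelation(g, max_lag=200)
print(R[0])                                                      # ~2.0, i.e., P_g = A**2 / 2
print(R[100], (A**2 / 2) * np.cos(2 * np.pi * f0 * 100 / fs))    # half a period: both ~ -2.0
```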

Table 3.3

Energy signals | Power signals
$E_g = \int_{-\infty}^{\infty} g^2(t)\,dt$ | $P_g = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} g^2(t)\,dt = \lim_{T\to\infty} \frac{E_{g_T}}{T}$
$\psi_g(\tau) = \int_{-\infty}^{\infty} g(t)\,g(t+\tau)\,dt$ | $\mathcal{R}_g(\tau) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} g(t)\,g(t+\tau)\,dt = \lim_{T\to\infty} \frac{\psi_{g_T}(\tau)}{T}$
$\Psi_g(\omega) = |G(\omega)|^2$ | $S_g(\omega) = \lim_{T\to\infty} \frac{|G_T(\omega)|^2}{T} = \lim_{T\to\infty} \frac{\Psi_{g_T}(\omega)}{T}$
$\psi_g(\tau) \Longleftrightarrow \Psi_g(\omega)$ | $\mathcal{R}_g(\tau) \Longleftrightarrow S_g(\omega)$
$E_g = \frac{1}{2\pi}\int_{-\infty}^{\infty} \Psi_g(\omega)\,d\omega$ | $P_g = \frac{1}{2\pi}\int_{-\infty}^{\infty} S_g(\omega)\,d\omega$

Interpretation of Power Spectral Density
Because the PSD is a time average of the ESD of g(t), we can argue along the lines used in the interpretation of ESD. We can readily show that the PSD $S_g(\omega)$ represents the power per unit
bandwidth (in hertz) of the spectral components at the frequency $\omega$. The power contributed by the spectral components within the band $\omega_1$ to $\omega_2$ is given by

$$\Delta P_g = \frac{1}{\pi}\int_{\omega_1}^{\omega_2} S_g(\omega)\,d\omega \qquad (3.87)$$
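To make Eq. (3.87) concrete, the following sketch (an added illustration with an arbitrary test signal and band edges) estimates the two-sided PSD as $|G_T(\omega)|^2/T$, as in Table 3.3, and approximates the band integral by a sum.

```python
import numpy as np

# Illustrative signal: a 50 Hz sinusoid of amplitude 2 observed for T = 1 s (arbitrary choices).
fs, A, f0 = 10_000.0, 2.0, 50.0
dt = 1 / fs
t = np.arange(10_000) * dt
g = A * np.cos(2 * np.pi * f0 * t)
T = len(g) * dt

# Two-sided PSD estimate S_g(w) ~ |G_T(w)|**2 / T (see Table 3.3), at non-negative frequencies.
G = dt * np.fft.rfft(g)                      # approximates the Fourier transform G_T(w)
w = 2 * np.pi * np.fft.rfftfreq(len(g), dt)  # frequency axis in rad/s
Sg = np.abs(G) ** 2 / T

# Eq. (3.87): dP_g = (1/pi) * integral from w1 to w2 of S_g(w) dw
w1, w2 = 2 * np.pi * 40.0, 2 * np.pi * 60.0  # band of interest (rad/s), i.e., 40 to 60 Hz
band = (w >= w1) & (w <= w2)
dP = Sg[band].sum() * (w[1] - w[0]) / np.pi

print(dP, A ** 2 / 2)                        # nearly all of the signal power (2.0) lies in this band
```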

Autocorrelation Method: A Powerful Tool

For a signal g(t), the ESD, which is equal to $|G(\omega)|^2$, can also be found by taking the Fourier transform of its autocorrelation function. If the Fourier transform of a signal is enough to determine its ESD, then why do we needlessly complicate our lives by talking about autocorrelation functions? The reason for following this alternate route is to lay a foundation for dealing with power signals and random signals. The Fourier transform of a power signal generally does not exist. Moreover, the luxury of finding the Fourier transform is available only for deterministic signals, which can be described as functions of time. The random message signals that occur in communication problems (e.g., a random binary pulse train) cannot be described as functions of time, and it is impossible to find their Fourier transforms. However, the autocorrelation function for such signals can be determined from their statistical information. This allows us to determine the PSD (the spectral information) of such a signal. Indeed, we may consider the autocorrelation approach as the generalization of Fourier techniques to power signals and random signals. The following example of a random binary pulse train dramatically illustrates the power of this technique.
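Before turning to the analytical treatment in Example 3.23, a short simulation sketch (an added illustration; the bit count, bit interval, and sampling rate are arbitrary) builds such a random binary pulse train and estimates its time autocorrelation directly from the samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random binary pulse train in the spirit of Fig. 3.42a: N bits, bit interval Tb,
# pulse width Tb/2, equally likely amplitudes +1 and -1 (all values arbitrary).
N, Tb, fs = 5_000, 1e-3, 100_000.0
spb = int(Tb * fs)                       # samples per bit interval
bits = rng.choice([-1.0, 1.0], size=N)

g = np.zeros(N * spb)
for n, b in enumerate(bits):             # each pulse occupies the first half of its bit slot
    g[n * spb : n * spb + spb // 2] = b

# Estimate R_g(tau) as the time average of the lagged product over the whole record.
lags = np.arange(0, spb + 1)
R = np.array([np.dot(g[: len(g) - k], g[k:]) / (len(g) - k) for k in lags])

print(R[0])            # ~0.5: unit-amplitude pulses are present half of the time
print(R[spb // 2])     # ~0: pulses no longer overlap once the shift reaches Tb/2
```

For lags between 0 and $T_b/2$ the estimate falls off roughly linearly toward zero, consistent with the shrinking pulse overlap that the example analyzes.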

EXAMPLE 3.23

Figure 3.42a shows a random binary pulse train g(t). The pulse width is $T_b/2$, and one binary digit is transmitted every $T_b$ seconds. A binary 1 is transmitted by the positive pulse, and a binary 0 is transmitted by the negative pulse. The two symbols are equally likely and occur randomly. We shall determine the autocorrelation function, the PSD, and the essential bandwidth of this signal.

We cannot describe this signal as a function of time because the precise waveform is not known due to its random nature. We do, however, know its behavior in terms of averages (the statistical information). The autocorrelation function, being an average parameter (a time average) of the signal, is determinable from the given statistical (average) information. We have [Eq. (3.82b)]

$$\mathcal{R}_g(\tau) = \lim_{T\to\infty} \frac{1}{T}\int_{-T/2}^{T/2} g(t)\,g(t-\tau)\,dt$$

Figure 3.42b shows g(t) by solid lines and $g(t-\tau)$, which is g(t) delayed by $\tau$, by dashed lines. To determine the integrand on the right-hand side of the above equation, we multiply g(t) with $g(t-\tau)$, find the area under the product $g(t)g(t-\tau)$, and divide it by the averaging interval T. Let there be N bits (pulses) during this interval T, so that $T = NT_b$, and as $T \to \infty$, $N \to \infty$. Thus,

$$\mathcal{R}_g(\tau) = \lim_{N\to\infty} \frac{1}{NT_b}\int_{-NT_b/2}^{NT_b/2} g(t)\,g(t-\tau)\,dt$$
Let us first consider the case of $\tau < T_b/2$. In this case there is an overlap (shown by the shaded region) between each pulse of g(t) and that of $g(t-\tau)$. The area under