IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. IT-15, NO. 2, MARCH 1969

On Optimum Quantization

ROGER C. WOOD, MEMBER, IEEE
Abstract - The problem of minimizing mean-square quantization error is considered, and simple closed-form approximations based on the work of Max and Roe are derived for the quantization error and entropy of signals quantized by the optimum fixed-N quantizer. These approximations are then used to show that, when N is moderately large, it is better to use equi-interval quantizing than the optimum fixed-N quantizer if the signal is to be subsequently buffered and transmitted at a fixed bit rate. Finally, the problem of optimum quantizing in the presence of buffering is examined, and the numerical results presented for Gaussian signals indicate that equilevel quantizing yields nearly optimum results.
THE REDUCTION of quantization error by tailoring the structure of the quantizer to the signal to be processed has received considerable theoretical attention in the past. We shall consider this concept for the special case of stochastic signals whose samples are independently and identically distributed. The problem of quantizing a signal of known probability density p(x) for minimum distortion was first considered in detail by Max [1] in 1960. By assuming the number of levels N to be fixed, Max derived equations for the optimum intervals (y_{k-1}, y_k) and levels x_k. When the criterion is minimum mean-square error, the appropriate equations are

$$x_k = \frac{\int_{y_{k-1}}^{y_k} x\,p(x)\,dx}{\int_{y_{k-1}}^{y_k} p(x)\,dx} \qquad (1)$$

and

$$y_k = \tfrac{1}{2}\,(x_k + x_{k+1}). \qquad (2)$$
Hence, the representative levels are the conditional means on the given intervals, and the interval boundaries lie halfway between the levels. The analytical solution of these equations is impossible for all but trivial cases, but a numerical solution is straightforward. Moreover, Roe [2] has derived excellent approximate formulas based on Max's results.

The above equations will, in general, require an iterative technique for their solution. One such technique is given by Max, and many others are also feasible.

The purpose of this paper is to derive a simple estimate of the error saving to be obtained by Max's quantizer, which shall be labeled the optimum fixed-N quantizer; to examine the effect on signal entropy of such quantizing; and, finally, to examine the problem of optimum quantization in the presence of buffering (i.e., for fixed transmission rate and, therefore, fixed entropy).
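To make the iteration concrete, the following sketch (in Python, not part of the original paper) solves (1) and (2) by simple fixed-point iteration for a unit-variance Gaussian density. The function names and the starting levels are our own choices, and the scheme is only one of the many feasible techniques, not necessarily Max's specific procedure.

```python
# A minimal sketch of one iterative solution of (1) and (2) for a
# unit-variance Gaussian density; the closed form for (1) below uses the
# identity that the integral of x p(x) over (a, b) is phi(a) - phi(b).
import math

def phi(x):   # standard Gaussian density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def Phi(x):   # standard Gaussian distribution function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def cond_mean(a, b):   # E[x | a < x < b], i.e., (1) in closed form
    return (phi(a) - phi(b)) / (Phi(b) - Phi(a))

def lloyd_max(N, iters=500):
    x = [-4.0 + 8.0 * (k + 0.5) / N for k in range(N)]    # crude start
    for _ in range(iters):
        y = [-math.inf] + [0.5 * (x[k] + x[k + 1])
                           for k in range(N - 1)] + [math.inf]   # (2)
        x = [cond_mean(y[k], y[k + 1]) for k in range(N)]        # (1)
    return x, y

levels, bounds = lloyd_max(5)
print([round(v, 4) for v in levels])
# approaches 0, +/-0.7646, +/-1.724, the 5-level optimum tabulated by Max [1]
```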
Manuscript received November 17, 1967; revised August 9, 1968. This research was supported in part by the National Science Foundation under Grant GK 144 and by NASA under Research Grant G237-62.
The author is with the Department of Electrical Engineering, University of California, Santa Barbara, Calif. 93106.
A CONVENIENT APPROXIMATION

Since all forms of the optimizing equations depend on the conditional mean μ(η₁, η₂), we shall generate an approximation for that function. To do so, we note that if p(x) is sufficiently well behaved¹ on the interval (ξ − Δ/2, ξ + Δ/2), we can generate Taylor's series expansions about ξ for both the numerator and the denominator, and therefore, formally, we can write

$$\mu\!\left(\xi - \frac{\Delta}{2},\; \xi + \frac{\Delta}{2}\right) \approx \frac{\xi\,p(\xi) + \dfrac{\Delta^2}{24}\left[\xi\,p''(\xi) + 2p'(\xi)\right]}{p(\xi) + \dfrac{\Delta^2}{24}\,p''(\xi)} \qquad (3)$$

$$= \xi + \frac{\dfrac{\Delta^2}{12}\,p'(\xi)}{p(\xi) + \dfrac{\Delta^2}{24}\,p''(\xi)} \qquad (4)$$

$$\approx \xi + \frac{\Delta^2}{12}\,\frac{p'(\xi)}{p(\xi)} \qquad (5)$$

for Δ small enough.

Thus we have derived, for small intervals, an approximate expression for the conditional mean in terms of the midpoint and length of the given interval.
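As a check on (5), the sketch below (our illustration, assuming a unit Gaussian for p(x)) compares the exact conditional mean over (ξ − Δ/2, ξ + Δ/2), computed by numerical integration, with the midpoint correction ξ + (Δ²/12)p′(ξ)/p(ξ).

```python
# Numerical check of (5) for a unit Gaussian: exact conditional mean over
# (xi - D/2, xi + D/2) by trapezoidal integration versus the approximation.
import math

def p(x):  return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
def dp(x): return -x * p(x)    # p'(x) for the Gaussian

def exact_cond_mean(xi, D, n=2000):
    h = D / n
    num = den = 0.0
    for i in range(n + 1):
        x = xi - D / 2 + i * h
        w = 0.5 if i in (0, n) else 1.0    # trapezoidal weights
        num += w * x * p(x)
        den += w * p(x)
    return num / den

xi, D = 1.0, 0.2
print(exact_cond_mean(xi, D), xi + D * D / 12 * dp(xi) / p(xi))
# both are near 0.996667; the discrepancy is of order D**4
```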
THE SECOND-ORDER MOMENTS OF THE QUANTIZED DISTRIBUTION

We can give a considerable amount of information about the behavior of the first two moments of the quantized variable. In particular, we have the following theorem.

Theorem 1

When the optimum fixed-N quantizer is employed, the first moment of the quantized variable is given by μ, and the second moment can be approximated by

$$V(x^*) \approx \sigma^2 - \frac{1}{12}\sum_k \Delta_k^3\,p(\xi_k),$$

¹ Since in all cases we truncate the Taylor's series after several terms, it will suffice for our purposes that the first few (at most, five) derivatives exist and are continuous. Moreover, for the approximations which will be developed to be close, the number of levels N must be large enough (i.e., the interval lengths Δ small enough) so that

$$p(x) \gg \Delta\,p'(x) \gg \Delta^2\,p''(x) \gg \cdots.$$

Thus the critical interval size, for application of the approximations, is seen to depend closely upon the nature of the probability density p(x).
where Δ_k is the length and ξ_k the midpoint of the kth quantizer interval.

Proof: For the first moment, E(x*) = E[E(x | y_{k-1} < x < y_k)] = μ, so that the quantized variable has the same mean as the original continuous variable. Considering the second moment of x*, we note first that we can write

$$E(x^2) = \int_{-\infty}^{\infty} x^2 p(x)\,dx = \sum_k \int_{\xi_k - \Delta_k/2}^{\xi_k + \Delta_k/2} x^2 p(x)\,dx. \qquad (6)$$

Expanding x²p(x) about x = ξ_k yields, after integrating over the intervals (ξ_k − ½Δ_k, ξ_k + ½Δ_k),

$$E(x^2) = \sum_k \left( \Delta_k \xi_k^2\,p(\xi_k) + \frac{\Delta_k^3}{24}\left[\xi_k^2\,p''(\xi_k) + 4\xi_k\,p'(\xi_k) + 2p(\xi_k)\right] + O(\Delta_k^5) \right). \qquad (7)$$

Therefore, letting μ_k = E[x | y_{k-1} < x < y_k] and P_k = P[y_{k-1} < x < y_k], we can write, using (5),

$$E[(x^*)^2] = \sum_k \mu_k^2 P_k \approx \sum_k \left( \Delta_k \xi_k^2\,p(\xi_k) + \frac{\Delta_k^3}{24}\left[4\xi_k\,p'(\xi_k) + \xi_k^2\,p''(\xi_k)\right] \right) \approx E(x^2) - \frac{1}{12}\sum_k \Delta_k^3\,p(\xi_k) \qquad (8)$$

for Δ_k small enough. Hence, the variance of the quantized variable is less than that of the continuous variable and is given by

$$V(x^*) = \sigma^2 - \frac{1}{12}\sum_k \Delta_k^3\,p(\xi_k) + O(\Delta_k^4), \qquad (9)$$

which completes the proof.

The significance of this result is that the variance of the quantized variable is less than that of the original signal. Hence, the signal and noise are dependent, and no pseudo-independence of the sort considered by Widrow [3] is possible. Thus, the common additive-noise model is not appropriate for the case of optimum fixed-N quantizing.
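Theorem 1 can be checked directly on the 5-level Gaussian quantizer. In the sketch below (our illustration), the moments of x* are computed exactly from Max's tabulated levels [1], and the correction term of (9) is evaluated over the interior (bounded) intervals; the two unbounded end cells fall outside the small-Δ approximation, so that partial sum is only indicative.

```python
# Moments of the quantized output for Max's optimum 5-level unit-Gaussian
# quantizer [1] (levels and boundaries rounded): E(x*) stays at the signal
# mean while the second moment drops below sigma^2 = 1.
import math

def phi(x): return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

x = [-1.724, -0.7646, 0.0, 0.7646, 1.724]
y = [-math.inf, -1.2443, -0.3823, 0.3823, 1.2443, math.inf]

P  = [Phi(y[k + 1]) - Phi(y[k]) for k in range(5)]
m1 = sum(xk * Pk for xk, Pk in zip(x, P))        # first moment of x*
m2 = sum(xk * xk * Pk for xk, Pk in zip(x, P))   # second moment of x*
print(m1, m2)   # ~0 and ~0.920, i.e., sigma^2 - eps^2 with eps^2 ~ 0.080

# Correction term of (9) over the three interior intervals only.
corr = sum((y[k + 1] - y[k]) ** 3 * phi(0.5 * (y[k] + y[k + 1])) / 12
           for k in range(1, 4))
print(1 - corr)  # ~0.954: the interior cells account for part of the gap
```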
CLOSED-FORM APPROXIMATIONS FOR THE MEAN-SQUARE ERROR AND THE ENTROPY OF THE QUANTIZED SIGNAL

Although the correction term for the second moment derived above does not possess a convenient closed form, it enabled us to demonstrate the lack of independence between signal and noise. We now develop a general technique for deriving a closed-form approximation to the error, and therefore also to the correction term, for signals with well-behaved probability density functions. In addition, we derive an approximation to the entropy of the quantized sample.

For a mean-square error criterion, Roe [2] has shown that the interval points of the optimum fixed-N quantizer can be approximated by

$$\int^{y_k} [p(x)]^{1/3}\,dx \approx 2C_1 k + C_2, \qquad (10)$$

where C₁ and C₂ are constants, provided only that, in the sense described previously, N is large and p(x) is sufficiently differentiable. Clearly, if (y₀, y_N) spans the domain of definition of p(x), the quantity

$$2C_1 N = \int^{y_N} [p(x)]^{1/3}\,dx - \int^{y_0} [p(x)]^{1/3}\,dx$$

depends only on p(x), and C₁ = O(1/N).

Theorem 2

For any signal with probability density p(x) well enough behaved for Roe's approximation (10) to be applicable, the mean-square quantization error of the optimum fixed-N quantizer can be approximated, for large N (Δ small), by

$$\epsilon^2 \approx \frac{(2C_1)^3 N}{12}, \qquad (11)$$

where C₁ is given by evaluating (10) and is of the order N⁻¹.

Proof: For any p(x) well enough behaved,

$$\int^{y_k} [p(x)]^{1/3}\,dx \approx 2C_1 k + C_2.$$

If we now define z(x) to be

$$z = \int_{y_0}^{x} [p(t)]^{1/3}\,dt,$$
then

$$\frac{dx}{dz} = \frac{1}{dz/dx} = [p(x)]^{-1/3}.$$
In the variable z, the interval points are equally spaced with spacing 2C₁, so that the interval lengths satisfy Δ_k ≈ 2C₁(dx/dz) = 2C₁[p(ξ_k)]^{-1/3}. Hence, we can write

$$\epsilon^2 \approx \frac{1}{12}(2C_1)^2 \int_{y_0}^{y_N} \left[[p(x)]^{-1/3}\right]^2 p(x)\,dx = \frac{1}{12}(2C_1)^2\left[2C_1 N + C_2 - 2C_1\cdot 0 - C_2\right] = \frac{(2C_1)^3 N}{12},$$

which concludes the proof.

For the case of a Gaussian signal, this procedure yields

$$\epsilon^2 = \frac{2.73\,N}{(N + 0.853)^3} \approx \frac{2.73}{(N + 0.853)^2}, \qquad (12)$$

since C₁ ≈ 1.6/(N + 0.853) (see [2]).
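The content of (10)-(12) is easy to exercise numerically. The sketch below (our illustration) places the interval points of a 20-level unit-Gaussian quantizer so that the integral of [p(x)]^{1/3} is divided into equal parts, per (10), and compares the resulting mean-square error with the closed form of (12); the truncation of the range at ±6 and the grid size are arbitrary numerical choices.

```python
# Companding construction of (10) for the unit Gaussian, N = 20: boundaries
# y_k equalize the integral of p^(1/3); the resulting MSE is compared to (12).
import math

def p(x): return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

N, M, A = 20, 100000, 6.0            # levels, grid points, range truncation
h = 2 * A / M
xs = [-A + (i + 0.5) * h for i in range(M)]
cum, c = [], 0.0
for x in xs:                          # cumulative integral of p^(1/3)
    c += p(x) ** (1.0 / 3.0) * h
    cum.append(c)

cuts, k = [0], 1                      # indices where cum crosses k/N of total
for i, cv in enumerate(cum):
    if k < N and cv >= k * cum[-1] / N:
        cuts.append(i)
        k += 1
cuts.append(M)

mse = 0.0
for a, b in zip(cuts[:-1], cuts[1:]): # conditional mean and error per cell
    seg = xs[a:b]
    Pk = sum(p(x) for x in seg) * h
    mu = sum(x * p(x) for x in seg) * h / Pk
    mse += sum((x - mu) ** 2 * p(x) for x in seg) * h
print(mse, 2.73 / (N + 0.853) ** 2)   # both near 0.0063
```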
TABLE I
COMPARISON OF EXACT AND APPROXIMATE VALUES FOR MEAN-SQUARE ERROR AND ENTROPY FOR TWO QUANTIZATION METHODS (ENTROPY IN BITS)

| N  | Opt. fixed-N: MSE exact | Opt. fixed-N: MSE approx. | Opt. fixed-N: entropy exact | Opt. fixed-N: entropy approx. | Equilevel: MSE exact | Equilevel: MSE approx. | Equilevel: entropy exact | Equilevel: entropy approx. |
|----|---------|---------|------|------|---------|---------|------|------|
| 5  | 0.0799  | 0.0797  | 2.20 | 2.24 | 0.176   | 0.213   | 1.50 | 1.37 |
| 10 | 0.0229  | 0.0232  | 3.13 | 3.12 | 0.0507  | 0.0533  | 2.40 | 2.37 |
| 15 | 0.0107  | 0.0109  | 3.68 | 3.67 | 0.0240  | 0.0237  | 2.97 | 2.95 |
| 20 | 0.00620 | 0.00628 | 4.07 | 4.07 | 0.0132  | 0.0133  | 3.37 | 3.36 |
| 25 | 0.00404 | 0.00408 | 4.38 | 4.38 | 0.00847 | 0.00852 | 3.69 | 3.69 |
| 30 | 0.00283 | 0.00287 | 4.64 | 4.64 | 0.00590 | 0.00592 | 3.95 | 3.95 |
| 35 | 0.00210 | 0.00212 | 4.86 | 4.86 | 0.00434 | 0.00435 | 4.17 | 4.17 |
We now derive asymptotic expressions for the entropy of the quantized signal that depend only upon the properties of the probability density p(x), for both the optimum fixed-N and the equilevel quantizer.

Theorem 3

For signals of finite range R and such that the entropy H(x) of the continuous signal is finite, the entropy of the quantized signal, when the optimum fixed-N quantizer is employed, can be approximated by

$$H(x_o^*) \approx \frac{2}{3}\,H(x) - \frac{1}{3}\log\frac{12\epsilon^2}{N} \qquad (13)$$

for large N. If equilevel quantizing is performed, the entropy of the quantized signal approaches

$$H(x_e^*) = H(x) - \frac{1}{2}\log\left(12\epsilon_e^2\right). \qquad (14)$$

Proof: We note that

$$P_k \log P_k \approx p(\xi_k)\,\Delta_k\left[\log p(\xi_k) + \log \Delta_k\right]$$

for small Δ_k, so that, with Δ_k ≈ 2C₁[p(ξ_k)]^{-1/3},

$$H(x_o^*) = -\sum_k P_k \log P_k \approx H(x) - \log 2C_1 + \frac{1}{3}\int_{y_0}^{y_N} p(\xi)\log p(\xi)\,d\xi = \frac{2}{3}H(x) - \log 2C_1 \approx \frac{2}{3}H(x) - \frac{1}{3}\log\frac{12\epsilon^2}{N},$$

since

$$\epsilon^2 \approx \frac{(2C_1)^3 N}{12} \quad\text{and therefore}\quad 2C_1 \approx \left(\frac{12\epsilon^2}{N}\right)^{1/3}.$$
Thus, we have derived an expression for the entropy of the quantized signal which depends only on the properties of the probability density p(x). We can, in a similar fashion, derive an expression for H(x_e^*) when equi-interval quantizing is employed. To do this, we note that

$$H(x_e^*) = -\sum_k P_k \log P_k \approx -\sum_k p(\xi_k)\,\Delta\left[\log p(\xi_k) + \log \Delta\right] \approx H(x) - \log\Delta = H(x) + \log N - \log R \approx H(x) - \frac{1}{2}\log\left(12\epsilon_e^2\right),$$

since Δ = R/N and ε_e² = Δ²/12 for N large enough.

The rapid convergence of these approximations, for the case of Gaussian signals, is readily apparent from the data of Table I, which contains exact and approximate computations of entropy and mean-square error for equilevel and optimum fixed-N quantization. In performing the equilevel computations, R was taken to be 8 in the design of the quantizer and in the approximations. The exact results, however, are based on the true (infinite) range.
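The equilevel entries of Table I are easy to reproduce. The following sketch (our illustration) computes the exact output entropy of the N = 10 equilevel quantizer over the true infinite range, with R = 8 fixing only the interval width, and compares it with (14); logarithms are base 2, so entropies are in bits.

```python
# Exact versus approximate entropy, (14), for the N = 10 equilevel quantizer
# of a unit Gaussian with interval width Delta = R/N, R = 8.
import math

def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))

N, R = 10, 8.0
D = R / N                                        # interval width Delta
y = [-math.inf] + [-R / 2 + k * D for k in range(1, N)] + [math.inf]
P = [Phi(y[k + 1]) - Phi(y[k]) for k in range(N)]
H_exact = -sum(Pk * math.log2(Pk) for Pk in P)

Hx = 0.5 * math.log2(2 * math.pi * math.e)       # H(x) of the unit Gaussian
H_approx = Hx - 0.5 * math.log2(12 * (D * D / 12))   # (14), eps^2 = D^2/12
print(round(H_exact, 3), round(H_approx, 3))
# ~2.406 versus ~2.369; Table I lists 2.40 and 2.37
```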
THE APPLICATION OF BUFFERING AND ENCODING TO THE QUANTIZED SIGNAL

In the previous paragraphs, we derived expressions for the mean-square error and the entropy of the quantized signal for both the optimum fixed-N quantizer and for simple equi-interval quantizing. Those estimates are now employed to evaluate the effect of encoding the quantized signals and buffering so that the average bit rate is fixed. We assume, for purposes of comparison, that the mean-square error is fixed, and examine the difference in entropy between signals quantized by the above two devices. For this case, again under the assumption that the probability density p(x) is well behaved, we are able to prove a quite startling and significant theorem about the relative asymptotic behavior of the two types of quantizers.

Theorem 4

Within the limits of our approximation, and therefore asymptotically for large N (given p(x) well behaved and H(x) finite), the output of the optimum fixed-N quantizer has entropy greater than or equal to that of the output of an equilevel quantizer yielding the same mean-square
error, provided the range can be assumed to be finite. Therefore, assuming N is large, it is always better² to quantize with an equilevel quantizer than with an optimum fixed-N quantizer, if the output signal is to be encoded and transmitted at a fixed average bit rate.
Proof: We note that, from (13) and (14), we can write

$$H(x_o^*) - H(x_e^*) \approx \frac{1}{3}\log R - \frac{1}{3}H(x) + \frac{1}{3}\log\frac{N_o}{N_e}, \qquad (15)$$

where the subscripts o and e represent the optimum fixed-N and equilevel quantizers, respectively. Since the errors are assumed to be equal,

$$\frac{(2C_1)^3 N_o}{12} = \frac{R^2}{12\,N_e^2}. \qquad (16)$$

Now, applying (10) for y_N and y₀, we can write

$$2C_1 = \frac{1}{N_o}\int_{y_0}^{y_N} [p(x)]^{1/3}\,dx. \qquad (17)$$

Thus, by combining (16) and (17), we can solve for the ratio N_o/N_e (namely, N_o/N_e = [∫_{y_0}^{y_N} [p(x)]^{1/3} dx]^{3/2}/R), and (15) becomes

$$H(x_o^*) - H(x_e^*) \approx -\frac{1}{3}H(x) + \frac{1}{2}\log\left[\int_{y_0}^{y_N}[p(x)]^{1/3}\,dx\right] = \frac{1}{2}\left[\frac{2}{3}\int_{y_0}^{y_N} p(x)\log p(x)\,dx + \log\left(\int_{y_0}^{y_N}[p(x)]^{1/3}\,dx\right)\right] \qquad (18)$$

$$= \frac{1}{2}\left[E\!\left(\log [p(x)]^{2/3}\right) + \log E\!\left([p(x)]^{-2/3}\right)\right] \ge \frac{1}{2}\,E\!\left[\log[p(x)]^{2/3} + \log[p(x)]^{-2/3}\right] = 0,$$

since log x is a concave function. Moreover, equality is achieved if and only if [p(x)]^{2/3} is a constant, that is, for the uniform distribution. For this case, however, there is no difference between the two devices, for the optimum fixed-N quantizer is, in fact, equi-interval. Thus we conclude that, for fixed mean-square error, the entropy of a signal quantized by the optimum quantizer is not less than that which obtains if the same signal is quantized by an equi-interval device, at least to the order of our approximation. Since a signal can, by means of encoding, be transmitted at an average bit rate approaching the signal entropy, this implies that an optimum fixed-N quantizer should never be employed if encoding is also to be performed, and the theorem is proved.
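A spot check of the Jensen bound in (18): for the unit Gaussian, the integral of [p(x)]^{1/3} is √(6π)/(2π)^{1/6} ≈ 3.196 and H(x) ≈ 2.047 bits, so the entropy gap is positive, while the uniform density gives exactly zero. The short sketch below (our illustration, working in bits) evaluates both.

```python
# Entropy gap -(1/3) H(x) + (1/2) log2( integral of p^(1/3) ) from (18),
# evaluated for two densities where the quantities are known in closed form.
import math

def gap(H_bits, I_third):
    return -H_bits / 3 + 0.5 * math.log2(I_third)

H_gauss = 0.5 * math.log2(2 * math.pi * math.e)            # ~2.047 bits
I_gauss = (6 * math.pi) ** 0.5 / (2 * math.pi) ** (1 / 6)  # ~3.196
print(gap(H_gauss, I_gauss))   # ~0.156 bits: fixed-N costs rate at equal error
print(gap(0.0, 1.0))           # uniform on (0,1): gap is 0, the equality case
```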
To illustrate more fully the significance of these remarks, we will consider the case of unit-variance Gaussian signals in detail. For this case,

$$H(x) = \tfrac{1}{2}\log 2\pi e$$

and

$$H(x_o^*) = -0.3115 + \log\,(N + 0.853). \qquad (19)$$

If the range is taken to be 8 (i.e., ±4σ), we have for the equi-interval case

$$H(x_e^*) \approx -0.953 + \log N. \qquad (20)$$

² From the viewpoint of minimum error. The practical questions of implementation are discussed later.
The mean-square error for each case is

$$\epsilon_o^2 \approx \frac{2.73}{(N_o + 0.853)^2} \qquad (21)$$

for the optimum fixed-N quantizer and

$$\epsilon_e^2 \approx \frac{5.33}{N_e^2} \qquad (22)$$

for the equi-interval quantizer. If the entropies of the two methods are equated, i.e., if a transmission rate is fixed,

$$-0.3115 + \log\,(N_o + 0.853) \approx -0.953 + \log N_e,$$

so that, as N_o and N_e become large,

$$0.64 + \log N_o \approx \log N_e,$$

and N_e ≈ 1.559 N_o. Thus, expressing the mean-square errors for each case in terms of N_o, we have

$$\epsilon_o^2 \approx \frac{2.73}{N_o^2} \qquad (23)$$

and

$$\epsilon_e^2 \approx \frac{5.33}{(1.559\,N_o)^2} = \frac{2.19}{N_o^2}. \qquad (24)$$

Hence, for N_o moderately large, the error using equi-interval quantizing is less than that using the optimum fixed-N quantizer, if the signals are to be subsequently encoded and transmitted at a fixed bit rate. The reason for this apparent anomaly is that, for a given N, the entropy of the equi-interval quantized signal is considerably less than that of the optimum fixed-N quantized signal; and, by employing more levels, this smaller entropy can be converted into lower mean-square error. Thus it is apparent that the optimum fixed-N quantizer loses more in terms of increased entropy than it gains in reduction of mean-square error, if encoding is to be practiced. It should be pointed out that it is necessary to use a buffer to achieve an advantage from any encoding scheme which involves words of variable length. There is therefore an apparent tradeoff between fixing N and using the more complex optimum quantizer but no buffer, and using an ordinary quantizer with a buffer.

It should be noted that as N becomes very large, so also does the quantized entropy. Thus, the indicated difference may be trivial compared to H(x*). However, for the case of Gaussian signals (treated as an example in the following sections), there does exist a wide range of values over which the difference is appreciable.
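The fixed-rate comparison (19)-(24) can be traced numerically. The sketch below (our illustration) equates the two entropy approximations for a given N_o, solves (20) for N_e, and forms the error ratio; as N_o grows, N_e/N_o approaches 1.56 and ε_o²/ε_e² approaches 2.73/2.19 ≈ 1.25.

```python
# Matching the entropies (19) and (20) for the unit Gaussian, then comparing
# the mean-square errors (21) and (22) at the common transmission rate.
import math

for No in (10, 20, 40, 80):
    H_opt = -0.3115 + math.log2(No + 0.853)   # (19)
    Ne = 2 ** (H_opt + 0.953)                 # invert (20) for N_e
    e_opt = 2.73 / (No + 0.853) ** 2          # (21)
    e_eq  = 5.33 / Ne ** 2                    # (22)
    print(No, round(Ne / No, 3), round(e_opt / e_eq, 3))
# Ne/No tends to ~1.56 and e_opt/e_eq to ~1.25, as in (23) and (24)
```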
THE OPTIMIZING EQUATIONS FOR THE CASE OF ENCODED SIGNALS

It was shown in the previous section that if the quantized signal is to be encoded, buffered, and transmitted at a fixed average bit rate, the use of the optimum fixed-N quantizer yields suboptimum results; results which are worse, in fact, than for a simple equilevel quantizer. We derive, in this section, conditions upon the optimum quantizer subject to the constraint of fixed average bit rate rather than fixed N.
For simplicity, we assume that the encoding will be performed efficiently enough that the entropy of the quantized signal is an adequate measure of the average output bit rate. We again take the mean-square error as our optimization criterion, although other criteria might be more desirable for certain applications.³ Under these assumptions, our optimization problem is to find the values of y_k, x_k, and N that minimize the mean-square quantization error

$$\epsilon^2 = \sum_k \int_{y_{k-1}}^{y_k} (x - x_k)^2\,p(x)\,dx \qquad (25)$$

subject to

$$-\sum_k P_k \log P_k = C, \qquad \text{where}\quad P_k = \int_{y_{k-1}}^{y_k} p(x)\,dx, \qquad (26)$$

i.e., so that the entropy of x* is constant.

It is easily shown that the x_k must be given by

$$x_k = \mu(y_{k-1}, y_k),$$

so that the problem is expressible solely in terms of the y_k and N.

As was the case for the optimum fixed-N quantizer, an analytic solution to these equations can be found only for trivial cases. Therefore, for the particular case of Gaussian signals, the optimizing equations were converted to steepest-descent equations, which were implemented and solved on the System Development Corporation Q-32 computer under the control of the TSS time-sharing system. The correctness of the computer program was checked by suppressing the entropy constraint, which gave results that agreed with Max's. Next, the entropy term was made dominant and the constraint set to that for maximum entropy; the results again agreed with the theoretical (i.e., equal P_k).

The convergence of the steepest-descent equations is very slow, demonstrating that, as is also true for the fixed-N optimum, the mean-square quantization error is very insensitive to moderately small deviations from the optimum interval structure. A descent of this general kind is sketched below.
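As a rough analogue of that program, the following sketch (our illustration, not the original Q-32 code) attacks (25) and (26) for a unit Gaussian with a quadratic penalty in place of the exact constraint and numerically differenced gradients; the penalty weight, step size, and iteration count are arbitrary tuning choices, and entropies are in bits.

```python
# Penalized steepest descent for the fixed-entropy problem (25)-(26), unit
# Gaussian, N = 10 levels, target entropy C = 3.0 bits. The x_k are always
# the conditional means mu(y_{k-1}, y_k), so only the y_k are adjusted.
import math

def phi(x): return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)
def Phi(x): return 0.5 * (1 + math.erf(x / math.sqrt(2)))
def xphi(x):  # x * phi(x), with its limit 0 at +/- infinity
    return 0.0 if math.isinf(x) else x * phi(x)

def stats(y):   # mean-square error (25) and entropy of x*, in bits
    e2 = H = 0.0
    for a, b in zip(y[:-1], y[1:]):
        P  = Phi(b) - Phi(a)
        mu = (phi(a) - phi(b)) / P               # conditional mean, as in (1)
        e2 += P + xphi(a) - xphi(b) - mu * mu * P
        H  -= P * math.log2(P)
    return e2, H

def obj(y, C, w=20.0):                           # quadratic-penalty objective
    e2, H = stats(y)
    return e2 + w * (H - C) ** 2

N, C = 10, 3.0
y = [-math.inf] + [-4.0 + 8.0 * k / N for k in range(1, N)] + [math.inf]
for _ in range(2000):                            # crude fixed-step descent
    for k in range(1, N):
        yp = y[:]
        yp[k] += 1e-4
        y[k] -= 0.005 * (obj(yp, C) - obj(y, C)) / 1e-4
print(stats(y))   # error at entropy ~C; compare the curves of Fig. 1
```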
The numerical results for Gaussian signals are rather surprising, in that the optimum fixed-entropy quantizer yields an error almost negligibly different from that of simple equilevel quantizing, except for very small N. These results are displayed in Fig. 1, from which we can see that the optimum fixed-entropy quantizer displays a marked improvement over the optimum fixed-N quantizer, but not over equilevel quantizing. In fact, the equilevel quantizer suffers in the comparison given because the range was assumed to be 8. For very small N, this obviously gives a poor choice of levels, and a better measure for comparison would be the optimum equilevel quantizer [1].

[Fig. 1. Comparison of entropy versus mean-square error for equilevel, optimum fixed-N, and optimum fixed-entropy quantization. Mean-square quantization error (logarithmic ordinate) is plotted against the entropy of the quantized signal (0 to 5.0); curves are labeled "Equilevel, R = 8" and "Optimum Fixed-N".]

Thus, equilevel quantizing, currently employed because of its ease of implementation, is superior to optimum fixed-N quantizing if the output signal is to be buffered, for all but very small values of N. Moreover, because of the insensitivity of the mean-square quantization error to moderate changes in interval structure, equilevel quantizing gives nearly optimum results for the special case of Gaussian signals.⁴

³ Prof. Leo Breiman, of the University of California, Los Angeles, has suggested as an alternative formulation that the mutual information I(x, x*) be maximized subject to H(x*) = C.

⁴ It has been brought to my attention by Dr. G. M. Roe, in private communication, that it is possible to obtain an analytical derivation of these computer-based conclusions, and to extend them to well-behaved density functions other than the Gaussian. Specifically, for the fixed-entropy quantizer discussed above, the optimum level spacing approaches the equal-interval case, with

$$\Delta_k(y_k) \approx \lambda\left\{1 + \frac{\lambda^2}{24}\left[\left(\frac{p'}{p}\right)^2 - \frac{6}{5}\,\frac{p''}{p}\right] + o(\lambda^2)\right\},$$

where λ (a Lagrange multiplier) is determined by the entropy constraint H(x*) = C.
REFERENCES

[1] J. Max, "Quantizing for minimum distortion," IRE Trans. Information Theory, vol. IT-6, pp. 7-12, March 1960.
[2] G. M. Roe, "Quantizing for minimum distortion," IEEE Trans. Information Theory, vol. IT-10, pp. 384-385, October 1964.
[3] B. Widrow, "A study of rough amplitude quantization by means of Nyquist sampling theory," IRE Trans. Circuit Theory, vol. CT-3, pp. 266-276, December 1956.
[4] R. C. Wood, "Optimum quantizing in hybrid computation," Ph.D. dissertation, Dept. of Engrg., University of California, Los Angeles, August 1966.