`____________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`_____________________
`
`LG Electronics, Inc.
`Petitioner,
`
`v.
`FastVDO LLC
`Patent Owner.
`
`
`
`Patent No. 5,850,482
`________________________
`
Inter Partes Review No. ____________
`
`___________________________________________________________
`
`U.S. Patent No. 5,850,482 to Meany et al.
`
`Exhibit 1001
`
`
`
`
`
US005850482A

United States Patent [19]
Meany et al.

[11] Patent Number: 5,850,482
[45] Date of Patent: Dec. 15, 1998
`
`[54] ERROR RESILIENT METHOD AND
`APPARATUS FOR ENTROPY CODING
`
`[75] Inventors: James J. Meany, Des Peres;
`Christopher J. Martens, Creve Coeur,
`both of Mo.
`
`[73] Assignee: McDonnell Douglas Corporation, St.
`Louis, Mo.
`
[21] Appl. No.: 633,896

[22] Filed: Apr. 17, 1996
`
[51] Int. Cl.6 ........................ G06K 9/00
[52] U.S. Cl. ......................... 382/232; 382/275
[58] Field of Search .................. 382/232, 233, 234, 235, 236, 237, 238,
239, 240, 244, 245, 246, 248, 249, 250, 251, 252, 253, 270, 274, 275, 254,
305, 309, 310, 311, 224, 190, 166, 172, 306; 375/286, 298, 254, 264;
371/377, 43.1, 37.08, 37.5; 380/20, 10; 370/314; 348/476, 5.5; 340/825.44
`
`[56]
`
`References Cited
`
`U.S. PATENT DOCUMENTS
`
`3/1989 Adelson et al. ........................ .. 382/56
`4,817,182
`5/1991 Lawton et al. ..... ..
`358/2613
`5,014,134
`3/1992 Resnikoff et al.
`.. 382/56
`5,101,446
`5,168,493 12/1992 Nelson et al. ..... ..
`370/84
`5,289,501
`2/1994 Seshadri et al. .
`375/17
`
`5,315,670
`
`5/1994 Shapiro . . . . . . . . . . . .
`
`. . . . .. 382/56
`
`5,321,750
`
`6/1994 Nadan . . . . . .
`
`. . . . .. 380/20
`
`6/1994 Shapiro ................................... .. 382/56
`5,321,776
`5,471,486 11/1995 Baggen et al. ....................... .. 371/37.1
`
`OTHER PUBLICATIONS
`
Edward R. Fiala and Daniel H. Greene, Data Compression
with Finite Windows, Communications of the ACM, vol. 32,
No. 4, Apr. 1989, pp. 490-505.
Timothy C. Bell, John G. Cleary and Ian H. Witten, Text
Compression, Prentice Hall, Copyright 1990, pp. 290-295.
Imran Ali Shah, Olu Akiwumi-Assani and Brian Johnson, A
Chip Set for Lossless Image Compression, IEEE Journal of
Solid-State Circuits, vol. 26, No. 3, Mar. 1991, pp. 237-244.
Gregory K. Wallace, The JPEG Still Picture Compression
Standard, Communications of the ACM, vol. 34, No. 4, Apr.
1991, pp. 30-45.
Olivier Rioul and Martin Vetterli, Wavelets and Signal
Processing, IEEE SP Magazine, Oct. 1991, pp. 14-38.
Albert Cohen, Biorthogonal Wavelets, Wavelets: A Tutorial
in Theory and Applications, Copyright 1992, pp. 123-152.
A. S. Lewis and G. Knowles, Image Compression Using the
2-D Wavelet Transform, IEEE Transactions on Image Processing,
vol. 1, No. 2, Apr. 1992, pp. 244-250.
`
`
Primary Examiner—Leo H. Boudreau
Assistant Examiner—Bijan Tadayon
Attorney, Agent, or Firm—Bell Seltzer Intellectual Property
Group of Alston & Bird, LLP
`
`[57]
`
`ABSTRACT
`
The error resilient method and apparatus for encoding data
includes an encoder including a code word generator for
generating a plurality of code words representative of
respective portions of the data. The code word generator
encodes data pursuant to split field coding in which each
code word includes a prefix field and an associated suffix
field. The prefix field includes information representative of
a predetermined characteristic of the associated suffix field,
such as the predetermined number of characters which form
the associated suffix field. In addition, the suffix fields
include information representative of at least some of the
original data. Consequently, if the prefix field of a code word
is decoded correctly, i.e., without the occurrence of a bit error,
the error resilient method and apparatus can correctly determine
the length of the associated suffix field and the range
of coefficient values to be represented by the associated
suffix field such that the associated suffix field is resilient to
errors. In order to increase the probability that the prefix
field will be correctly decoded, the method and apparatus
protects the prefix and suffix fields of the encoded data to
greater and lesser degrees, respectively, such that the data
can be more efficiently compressed.
`
`30 Claims, 6 Drawing Sheets
`
`12
`
`DRTSTNAL
`14
`DATA
`_DATA ENCODER
`DATA
`DATA
`
`TRANSFORMER OUANTIZER v_0D——- WORD /26
`EN
`
`/29
`
`UNEQUAL
`ERROR
`RROTECTlON
`MEANS
`
`Apple Inc. Exhibit 1001 Page 1
`
`
`
`5,850,482
`Page 2
`
`OTHER PUBLICATIONS
`
Mladen Victor Wickerhauser, High-Resolution Still Picture
Compression, Apr. 19, 1992, pp. 1-33. (No Title of Publication).
Marc Antonini, Michel Barlaud, Pierre Mathieu and Ingrid
Daubechies, Image Coding Using Wavelet Transform, IEEE
Transactions on Image Processing, vol. 1, No. 2, Apr. 1992,
pp. 205-220.
Naoto Tanabe and Nariman Farvardin, Subband Image Coding
Using Entropy-Coded Quantization over Noisy Channels,
IEEE Journal on Selected Areas in Communications,
vol. 10, No. 5, Jun. 1992, pp. 926-943.
Amir Said and William A. Pearlman, Reversible image
compression via multiresolution representation and predictive
coding, SPIE, vol. 2094, 1993, pp. 664-674.
Jerome M. Shapiro, Embedded Image Coding Using
Zerotrees of Wavelet Coefficients, IEEE Transactions on
Signal Processing, vol. 41, No. 12, Dec. 1993, pp. 3445-3462.
Michael L. Hilton, Bjorn D. Jawerth and Ayan Sengupta,
Compressing Still and Moving Images with Wavelets,
Multimedia Systems, vol. 2, No. 3, Apr. 18, 1994, pp. 1-20.
`
Keith A. Birney and Thomas R. Fischer, On the Modeling of
DCT and Subband Image Data for Compression, IEEE
Transactions on Image Processing, vol. 4, No. 2, Feb. 1995,
pp. 186-193.
Parthasarathy Sriram and Michael W. Marcellin, Image
Coding Using Wavelet Transforms and Entropy-Constrained
Trellis-Coded Quantization, IEEE Transactions on
Image Processing, vol. 4, No. 6, Jun. 1995, pp. 725-733.
Bjorn Jawerth and Wim Sweldens, An Overview of Wavelet
Based Multiresolution Analyses, pp. 1-39. (No Title of
Publication), (No Date of Publication).
Wim Sweldens, The Lifting Scheme: A New Philosophy in
Biorthogonal Wavelet Constructions. (No Place or Date of
Publication & No Page #).
Ahmad Zandi, James D. Allen, Edward L. Schwartz and
Martin Boliek, CREW: Compression with Reversible
Embedded Wavelets. (No Place or Date) & (No Page #).
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 1 of 6          5,850,482
`
[FIG. 1: block diagram of the encoder, in which original data passes through
a data transformer and a data quantizer to a code word encoder and unequal
error protection means, producing compressed data for a transmitter.]

[FIG. 2: encoding flowchart (step numbers partly illegible in the OCR):
perform a wavelet transform on the original image; collect coefficient
statistics in a histogram; separate significant and insignificant
coefficients according to the clipping threshold; quantize the significant
coefficients; run length encode insignificant coefficients to represent
positions of significant coefficients; entropy encode run length values;
apply unequal error protection to the encoded data, using higher error
protection for encoded run lengths and for prefix fields of encoded
coefficients, and lower error protection or no error protection for suffix
fields of encoded coefficients.]
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 2 of 6          5,850,482

[Drawing sheet 2 (presumably FIG. 3): content not recoverable from the OCR.]
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 3 of 6          5,850,482

[FIG. 4: flowchart beginning at START; remaining content not recoverable
from the OCR.]
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 4 of 6          5,850,482

[FIG. 5 (rotated in the OCR): appears to plot number of occurrences against
quantized coefficient values.]
`
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 5 of 6          5,850,482

[FIG. 6: storage medium (66) holding data block 1 containing PREFIX1,
PREFIX2 and PREFIX3, and data block 2 (68) containing SUFFIX1, SUFFIX2 and
SUFFIX3.]
`
`
`
`
U.S. Patent          Dec. 15, 1998          Sheet 6 of 6          5,850,482

[FIG. 7: decoding flowchart]
DECODE ERROR PROTECTED DATA
ENTROPY DECODE RUN LENGTH VALUES
ENTROPY DECODE QUANTIZED COEFFICIENTS (USING SPLIT FIELD CODING)
TO RECONSTRUCTED COEFFICIENT VALUES
EXPAND RUN LENGTHS TO RUNS OF ZEROED COEFFICIENTS AND INSERT
RECONSTRUCTED COEFFICIENT VALUES BETWEEN THESE RUNS
PERFORM INVERSE WAVELET TRANSFORM TO RECOVER RECONSTRUCTED IMAGE
`
`
`
`
`1
`ERROR RESILIENT METHOD AND
`APPARATUS FOR ENTROPY CODING
`
`FIELD OF THE INVENTION
`
`The present invention relates generally to methods and
`apparatus for compressing and decompressing data by
`entropy encoding and decoding and, more particularly, to
`error resilient methods and apparatus for entropy encoding
`and decoding. The present invention further relates to the
`application of said error resilient entropy coding methods
`and apparatus to image compression.
`
`BACKGROUND OF THE INVENTION
`
In order to transmit data over channels with limited
throughput rates or to store data in limited memory space, it
is frequently necessary to compress the data so as to represent
the data with fewer bits. Upon receiving or retrieving
the compressed data, the data may be decompressed to
recover the original data or an approximation thereof.
Compression and decompression techniques are commonly
applied to imagery data in order to reduce otherwise
massive transmission and storage requirements. By way of
example, a single monochrome image is typically formed by
an array of pixels, such as a 512x512 array of pixels. In
addition, the intensity level of each pixel is generally
assigned a numeric value between 0 and 255 which is
digitally represented by an 8 bit pattern. Therefore, the
digital representation of such a monochrome image requires
approximately 2 million bits of data. As a further example,
a typical digital color video format is the Common Intermediate
Format (CIF) having a resolution of 360x288. This
color video format includes three color components which
are each represented as an array of pixels which are displayed
at a rate of 30 frames per second. The three color
components are an intensity component at full resolution
(360x288) and two chrominance components at half resolution
(180x144) each. Thus, the total throughput requirement
for CIF is about 37 million bits per second. Consequently, the
transmission of uncompressed digital imagery requires relatively
high throughput rates or, alternatively, relatively long
transmission times. Likewise, the storage of digital imagery
requires relatively large memory or storage devices.
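The storage and throughput figures quoted above follow from simple arithmetic, sketched here as a quick check:

```python
# Quick check of the storage/throughput figures quoted above.

# Monochrome image: 512 x 512 pixels, 8 bits each.
mono_bits = 512 * 512 * 8
print(mono_bits)  # 2097152, i.e. roughly 2 million bits

# CIF video: one full-resolution intensity plane (360 x 288) plus two
# half-resolution chrominance planes (180 x 144), 8 bits per sample,
# displayed at 30 frames per second.
bits_per_frame = (360 * 288 + 2 * 180 * 144) * 8
cif_bps = bits_per_frame * 30
print(cif_bps)  # 37324800, i.e. about 37 million bits per second
```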
In order to reduce the storage and transmission requirements
for image processing applications, a variety of image
compression techniques have been developed. Substantial
compression of imagery is possible due to the statistical
redundancy typically found in image data. Such redundancy
takes several forms, namely, spatial redundancy due to
correlation between spatially proximate pixels, temporal
redundancy due to correlation between successive frames in
an image sequence, and spectral redundancy between the
color planes or bands of multispectral images.
Image compression techniques attempt to reduce the
volume of data by removing these redundancies. Image
compression techniques fall into two broad categories:
“lossless” and “lossy”. With lossless image compression, a
reconstructed image is guaranteed to be identical to the
original image. Unfortunately, lossless approaches can provide
only a very limited amount of compression (typically
less than 3:1). In contrast, lossy techniques achieve higher
compression ratios by allowing some of the visual information
to be removed, thereby resulting in a difference or
distortion between the original image and the reconstructed
image. To minimize the perceived or subjective distortion of
the reconstructed image, the removal of information should
ideally take into account the characteristics of the human
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`55
`
`60
`
`65
`
`5,850,482
`
`2
visual system (HVS). Since subjective distortion is difficult
to characterize quantitatively, however, numerical measures
of distortion are commonly used, with the most popular
measure being RMS error, i.e., the square root of the mean
squared pixel intensity differences between the original and
reconstructed images. In general, higher compression ratios
can be achieved by tolerating higher distortion. Finally, if the
distortion from a lossy compression process is not visually
apparent under normal viewing conditions, the compression
is termed “visually lossless” image compression.
A common approach to image compression, called
transform-based compression or transform coding, involves
three primary steps, namely, a transform step, a quantization
step, and an encoding step. See, for example, an article
entitled “High-Resolution Still Picture Compression” by M.
V. Wickerhauser dated Apr. 19, 1992, which is available on
the Internet as “http://wuarchive.wustl.edu/doc/techreports/
wustl.edu/math/papers/dsp.ps.Z”. As described in U.S. Pat.
No. 5,014,134 to Wayne M. Lawton, et al. and U.S. Pat. No.
4,817,182 to Adelson, et al., an invertible transform decomposes
the original image data into a weighted sum of simple
building blocks, called basis functions, such as sinusoids or
wavelet functions. Accordingly, a number of image transforms
have been developed, including the Fourier transform,
the discrete cosine transform and the wavelet transform. If
the basis functions have sufficient correspondence to the
correlation structure of the imagery to be compressed, most
of the energy (or information) in the image will be concentrated
into relatively few of the transform coefficients with
correspondingly large coefficient values. Consequently,
most of the remaining transform coefficients will have small
or zero coefficient values.
The wavelet transform decorrelates the image data at
multiple resolutions by use of basis functions which are
dilations and translations of a single prototype function. The
prototype basis function is a bandpass filter called a
“wavelet”, so named because the filter is both oscillatory and
spatially localized. The translations and dilations of the
prototype wavelet yield a set of basis functions which
produce a signal or image decomposition localized in position
and resolution, respectively.
As known to those skilled in the art, the wavelet transform
can be efficiently computed using a fast discrete algorithm,
called the Fast Wavelet Transform (FWT), which recursively
applies the wavelet filter and a companion lowpass filter
called a “scaling” filter. For a single iteration of the FWT
applied to a one-dimensional signal, the wavelet and scaling
filters are convolved against the signal, followed by a
decimation by two. This process splits the signal into a low
resolution approximation signal (extracted by the scaling
filter) and a high resolution detail signal (extracted by the
wavelet filter). By recursively applying the wavelet filter and
the scaling filter to the low resolution approximation signal
generated by the prior iteration of the FWT, a multiresolution
decomposition of the original signal is produced which
consists of the detail signals at various resolutions and a final
low resolution approximation signal.
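The FWT iteration described above can be sketched as follows. The Haar filter pair used here is an illustrative choice only; the patent does not prescribe particular filter coefficients.

```python
# A minimal sketch of the fast wavelet transform (FWT) on a 1-D signal:
# convolve with a scaling (lowpass) filter and a wavelet (highpass)
# filter, then decimate by two; recurse on the approximation signal.
# The Haar filters below are an illustrative choice, not the patent's.

def fwt_step(signal, scaling=(0.5, 0.5), wavelet=(0.5, -0.5)):
    """Split `signal` into a low-resolution approximation and a detail signal."""
    approx, detail = [], []
    for i in range(0, len(signal) - 1, 2):  # filter + decimate by two
        approx.append(scaling[0] * signal[i] + scaling[1] * signal[i + 1])
        detail.append(wavelet[0] * signal[i] + wavelet[1] * signal[i + 1])
    return approx, detail

def fwt(signal, levels):
    """Recursively re-apply the filters to the approximation signal,
    yielding detail signals at each resolution plus a final approximation."""
    details = []
    for _ in range(levels):
        signal, d = fwt_step(signal)
        details.append(d)
    return signal, details

approx, details = fwt([4, 4, 8, 8, 2, 2, 6, 6], levels=2)
print(approx, details)  # [6.0, 4.0] [[0.0, 0.0, 0.0, 0.0], [-2.0, -2.0]]
```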
The wavelet transform can be easily extended to two-dimensional
imagery by separately filtering the rows and
columns and by iteratively processing the lowpass approximation
image. This wavelet transform is equivalent to
decomposing the image in terms of basis functions which
are 2-D tensor products of the 1-D wavelet and scaling
filters. See, for example, U.S. Pat. Nos. 5,014,134 and
4,817,182, the contents of which are expressly incorporated
by reference herein. See also Olivier Rioul, et al., “Wavelets
and Signal Processing”, IEEE Signal Processing Magazine,
`
`3
`pp. 14—38 (October 1991); Bjorn JaWerth, et al., “An Over
`view of Wavelet-Based Multi-Resolution Analyses”, SIAM
`Review, Vol. 36, No. 3, pp. 377—412 (1994); and Michael L.
`Hilton, et al., “Compressing Still and Moving Images With
`Wavelets”, Multimedia Systems, Vol. 2, No. 3 (1994) for
`further descriptions of the Wavelet transform.
Once the image data has been transformed, the compression
algorithm then proceeds to quantize and encode the
transform coefficients which are generated by the wavelet
transform. The quantization step discards some of the image
content by approximating the coefficient values. As known
to those skilled in the art, a quantization is a mapping from
many (or a continuum) of input values to a smaller, finite
number of output levels. The quantization step divides the
range of input values by a set of thresholds {t_i, i = 0, ..., N-1}
and maps an input value falling within the interval (t_i, t_{i+1}]
to the output value represented by the discrete symbol
or variable i. Correspondingly, dequantization (used to
recover approximate coefficient values during decompression)
maps the discrete variable i to a reconstructed
value r_i which lies in the same interval, i.e., (t_i, t_{i+1}].
For minimum mean squared error, the reconstructed value
should correspond to the mean of those coefficient values
falling within the interval, but, in practice, a reconstruction
value at the center of the interval is often used. Further,
scalar quantization maps a single scalar value to a single
discrete variable, whereas vector quantization jointly maps
a plurality (or vector) of M values to each discrete variable.
While the quantized coefficient values have reduced
precision, they also can be represented with fewer bits, thus
allowing higher compression at the expense of distortion in
the reconstructed image. This image distortion is referred to
as quantization error and accounts for all of the distortion
inherent in lossy compression schemes. Thus, the quantization
step is omitted for lossless compression approaches.
As known to those skilled in the art, a variety of factors
contribute to the choice of the actual quantization intervals,
such as the desired compression ratio, the statistical distribution
of the coefficient values, the manner in which the
quantized coefficient values will be encoded, and the distortion
metric used to measure image degradation. When the
quantized coefficients will be entropy-coded, mean squared
error can be (approximately) minimized by using uniform
quantization intervals. See R. C. Wood, “On Optimum
Quantization”, IEEE Transactions on Information Theory,
Vol. 15, pp. 248-52 (1969). In the absence of entropy
coding, the mean squared error is minimized by choosing
nonuniform quantization intervals in accordance with the
Lloyd-Max algorithm as described in S. P. Lloyd, “Least
Squares Quantization in PCM”, Bell Lab. Memo. (July
1957), reprinted in IEEE Transactions on Information
Theory, Vol. 28, pp. 129-37 (1982), and also in J. Max,
“Quantizing for Minimum Distortion”, IRE Transactions on
Information Theory, Vol. 6, pp. 7-12 (1960).
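The quantization and dequantization mappings described above can be sketched as follows, using midpoint reconstruction as the text notes is common practice; the threshold values are illustrative:

```python
# A sketch of scalar quantization as described above: an input value
# falling in the interval (t_i, t_{i+1}] maps to the discrete symbol i,
# and dequantization maps i back to a reconstruction value inside the
# same interval (here the interval midpoint).

def quantize(value, thresholds):
    """Return the index i of the interval (t_i, t_{i+1}] containing `value`."""
    for i in range(len(thresholds) - 1):
        if thresholds[i] < value <= thresholds[i + 1]:
            return i
    raise ValueError("value outside quantizer range")

def dequantize(i, thresholds):
    """Reconstruct at the center of interval i."""
    return (thresholds[i] + thresholds[i + 1]) / 2

thresholds = [0, 2, 4, 6, 8]     # uniform intervals of width 2
sym = quantize(5.3, thresholds)  # 5.3 falls in (4, 6] -> symbol 2
print(sym, dequantize(sym, thresholds))  # 2 5.0
```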
Due to the decorrelating properties of the wavelet
transform, the distribution of transform coefficient values is
typically sharply peaked at zero. This type of coefficient
distribution results in a preponderance of coefficients falling
into the quantization interval at the origin, i.e., the quantization
interval centered on the value of zero. Due to the
preponderance of coefficients near zero, more efficient compression
performance can be achieved by treating the quantization
interval at the origin separately. In particular, the
overall coding efficiency may be increased by using a larger
quantization interval around the origin, often called a “dead
zone”. In one preferred embodiment, the dead zone interval
is twice as large as the adjacent intervals. The dead zone is
`
`10
`
`15
`
`20
`
`25
`
`30
`
`35
`
`40
`
`45
`
`55
`
`60
`
`65
`
`5,850,482
`
`4
centered about the origin with a reconstruction value exactly
equal to zero to prevent artifacts resulting from the use of
nonzero reconstruction values for the many coefficients
close to zero. The magnitude of the positive and negative
bounds of the dead zone is often termed the “clipping
threshold” because all coefficients whose magnitudes fall
below this threshold are “clipped” to zero. In addition, those
coefficients whose magnitudes exceed the clipping threshold
are termed “significant” coefficients, while those coefficients
whose values lie below the threshold are termed “insignificant”
coefficients.
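The dead zone behavior described above can be sketched as follows. The interval width and threshold values are illustrative; the dead zone spans twice the width of the adjacent intervals, as in the preferred embodiment described in the text:

```python
# A sketch of dead-zone quantization: coefficients whose magnitudes fall
# below the clipping threshold are clipped to zero (symbol 0), and the
# dead zone always reconstructs to exactly zero. With clip_threshold
# equal to step, the dead zone (-1, 1) is twice as wide as the adjacent
# unit-width intervals.

def deadzone_quantize(coeff, clip_threshold, step):
    """Map a coefficient to a signed symbol; 0 denotes the dead zone."""
    if abs(coeff) < clip_threshold:          # insignificant coefficient
        return 0
    magnitude = 1 + int((abs(coeff) - clip_threshold) // step)
    return magnitude if coeff > 0 else -magnitude

def deadzone_dequantize(symbol, clip_threshold, step):
    if symbol == 0:
        return 0.0                           # zero reconstruction value
    value = clip_threshold + (abs(symbol) - 0.5) * step  # interval center
    return value if symbol > 0 else -value

coeffs = [0.4, -0.9, 3.7, -6.2]
symbols = [deadzone_quantize(c, clip_threshold=1.0, step=1.0) for c in coeffs]
print(symbols)  # [0, 0, 3, -6]
```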
Because most of the coefficients produced by the transform
have small magnitudes or are equal to zero, the
quantization process typically results in the majority of the
coefficients being deemed insignificant, while only relatively
few of the quantized coefficients have magnitudes
exceeding the clipping threshold and are deemed significant.
Thus, as indicated above, it is advantageous to treat the
significant coefficients separately from the insignificant
coefficients.
In one preferred embodiment, this separate treatment is
accomplished by separately indicating the positions and the
quantized values of the significant coefficients. To achieve
further compression, the quantized values of the significant
coefficients may then be entropy coded using a technique
known to those skilled in the art, such as Huffman coding or
arithmetic coding. In addition, the positions of the significant
coefficients can be represented using one of a variety of
conventional approaches, such as tree structures, coefficient
maps, or run length coding. In one preferred embodiment,
the positions of the significant coefficients are represented
by means of run lengths of consecutively occurring insignificant
coefficients. The resulting position representation
may then be entropy coded to obtain additional compression.
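The run length position representation described above can be sketched as follows, with illustrative coefficient values:

```python
# A sketch of the position representation described above: the positions
# of the significant (nonzero) quantized coefficients are recorded as run
# lengths of consecutively occurring insignificant (zero) coefficients.

def encode_runs(quantized):
    """Return (run_lengths, values): each run length counts the zeros
    preceding the next significant coefficient."""
    runs, values, run = [], [], 0
    for q in quantized:
        if q == 0:
            run += 1
        else:
            runs.append(run)
            values.append(q)
            run = 0
    runs.append(run)  # trailing zeros after the last significant value
    return runs, values

def decode_runs(runs, values):
    out = []
    for run, v in zip(runs, values):
        out.extend([0] * run)
        out.append(v)
    out.extend([0] * runs[-1])
    return out

coeffs = [0, 0, 5, 0, 0, 0, -2, 7, 0]
runs, values = encode_runs(coeffs)
print(runs, values)  # [2, 3, 0, 1] [5, -2, 7]
assert decode_runs(runs, values) == coeffs
```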
As known to those skilled in the art, entropy coding
reduces the number of bits required to represent a data set by
using variable length coding in a manner which exploits the
statistical probabilities of various symbols in the data set.
For example, entropy coding assigns shorter code words to
those symbols which occur frequently, while longer code
words are assigned to those symbols which occur less
frequently. A number of different entropy coding approaches
have been developed, including Huffman coding, which represents
the data symbols using code words that each have a
length consisting of an integer number of bits, and arithmetic
coding, which is capable of producing code words whose
length is a fractional number of bits. Entropy coding is
completely reversible so that no additional distortion is
introduced beyond that due to the quantization process.
The assignment of code words for entropy coding is
typically governed by means of a codebook which must be
known to both the encoder and decoder. If the statistics of
the data sets to be encoded are unstable or unknown, then the
codebook itself, or the statistics from which the codebook
may be generated, must be transmitted along with the
entropy encoded data. In this event, any such codebook or
statistics which must be transmitted to the decoder is
referred to as “side information”. A significant issue in the
design of entropy coding schemes is the reduction of side
information. In many cases, it is beneficial to estimate and
transmit as side information only an approximation of the
symbol statistics because the resulting reduction in side
information may outweigh any loss of coding efficiency
resulting from the approximation of the symbol statistics.
Once an image is compressed, the compressed image is
typically transmitted over a communications link or stored
`
in a medium, such as a magnetic disk. In this context, the
communications or storage medium is referred to as the
“channel”. A digital data set, such as a compressed image,
conveyed through such a channel is subject to corruption of
some of the bits, thereby creating erroneous bit values. The
rate at which erroneous bit values occur is termed the bit
error rate (BER) of the channel. If the BER is sufficiently
low, the incidence of errors is infrequent and may safely be
ignored. For higher BERs, however, additional measures
must be employed to maintain the integrity of the data. One
approach, known as Automatic Repeat reQuest (ARQ), is to
use communications protocols which provide mechanisms
for detecting errors and repeatedly transmitting corrupted
data packets.
However, for continuous streams of data or for channels
with high BERs, the delays due to retransmission and
protocol handshaking required by ARQ may be unacceptable.
As known to those skilled in the art, another approach
uses channel coding (also called forward error coding
(FEC)) which further encodes the data so that channel errors
may be detected and, in some instances, corrected with high
probability. Channel coding can detect and correct some bit
errors by adding redundancy to the data in a controlled
fashion. The effectiveness of channel coding is related to the
amount of redundancy employed. By adding more
redundancy, channel coding can effectively detect and correct
a higher level of errors. The downside of channel coding
is that the redundancy introduced can consume a significant
percentage of the channel bandwidth (typically in the range
of 10% to 90% of the channel bandwidth, depending on the
level of error detection and correction required).
As known to those skilled in the art, one variant of
channel coding uses an approach called Unequal Error
Protection (UEP) which separates a data set into several
subsets and provides different levels of error protection for
each subset by varying the amount of redundancy for each
subset. The rationale for UEP is that different subsets of a
data set may vary in importance. The most important data
may require correction of virtually all bit errors, whereas
some higher level of bit errors may be acceptable in less
important data. By providing lower levels of protection to
the less important subsets of the data, the amount of redundancy
added by the channel coding can be reduced, and
channel bandwidth may correspondingly be conserved.
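The varying-redundancy idea behind UEP can be made concrete with a simple repetition code; real channel codes are far more efficient, so this is an illustration only, not the coding contemplated by the patent:

```python
# A sketch of unequal error protection (UEP): the important subset is
# repeated more times (more redundancy, majority-vote correction) than
# the less important subset, which here receives no redundancy at all.

def protect(bits, repeats):
    return [b for b in bits for _ in range(repeats)]

def recover(coded, repeats):
    """Majority vote over each group of `repeats` copies."""
    out = []
    for i in range(0, len(coded), repeats):
        group = coded[i:i + repeats]
        out.append(1 if sum(group) * 2 > len(group) else 0)
    return out

important, unimportant = [1, 0, 1], [0, 1, 1, 0]
strong = protect(important, repeats=5)    # high redundancy
weak = protect(unimportant, repeats=1)    # no redundancy
strong[0] ^= 1                            # a single flipped copy...
print(recover(strong, 5))  # ...is voted out: [1, 0, 1]
```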
An unfortunate consequence of data compression is the
increased susceptibility of the compressed data to channel
errors. For uncompressed image data, the effects of bit errors
are localized to the affected pixel(s) and are manifested as
“salt and pepper” noise. On the other hand, image compression
algorithms concentrate the information or energy of an
image into fewer bits, so that the effects of even a single bit
error can be far-reaching. The susceptibility of the compressed
data to channel errors and the resultant effects on the
reconstructed data depend on the manner in which the data
was compressed and the particular bit(s) which is/are corrupted.
For example, a bit error which causes a transform coefficient
value to be misdecoded introduces a noise artifact in
the reconstructed image consisting of the corresponding
basis function scaled according to the magnitude of the error
on the misdecoded coefficient. Such artifacts may be tolerable
because the image noise associated with a few perturbed
basis functions is often not sufficient to severely
hamper the recognition of image content. If the coefficient
magnitudes are encoded differentially, however, the effects
`
`10
`
`15
`
`25
`
`35
`
`45
`
`55
`
`65
`
`6
of such an error may persist for all coefficients which are
dependent on the misdecoded coefficient. In this case, the
resulting artifact will be an undesirable streak across the
reconstructed image.
As previously mentioned, the positions of the significant
coefficients are commonly represented using tree structures,
coefficient maps or run length coding. Such representations
often represent the position of a given coefficient relative to
a previously designated coefficient position. Thus, a bit error
which causes the position of a significant coefficient value to
be misdecoded can also introduce errors in the positions of
succeeding coefficients. In other words, the image content
associated with the corrupted coefficient positions is misplaced
within the reconstructed image, thereby producing an
effect called “tearing” characterized by a displacement of a
portion of the image which is sometimes catastrophic.
A bit error on entropy-coded data can, due to the variable
length of the code words, cause a loss of code word
synchronization which may persist for all succeeding data,
thereby resulting in catastrophic effects on a reconstructed
image. Because bit errors on compressed data can result in
a variety of catastrophic effects as described above, it is
imperative that some form of error protection be provided
for compressed data which will be transmitted through a
noisy channel. However, such error protection increases the
amount of information which must be transmitted over the
noisy channel, thereby decreasing the efficiency or speed at
which the compressed data is transmitted, as described
above.
`
`SUMMARY OF THE INVENTION
`
It is therefore an object of the present invention to provide
an improved error resilient method and apparatus for
entropy coding of data which can utilize unequal error
protection techniques of channel coding.
It is another object of the present invention to provide an
error resilient method and apparatus which isolates the
effects of a bit error to a single code word and which
constrains the resulting error on the decoded value such that
a misdecoded value falls within a constrained or limited
interval about the correct value.
It is a further object of the present invention to provide an
improved method and apparatus for image compression
which utilizes error resilient entropy coding which, in turn,
can utilize unequal error protection techniques of channel
coding.
These and other objects are provided, according to the
present invention, by an error resilient method and apparatus
for entropy coding of data which includes code word generating
means for generating a plurality of code words
representative of respective items in the data set. Each code
word has two portions which we shall hereafter refer to as
“fields”, namely, a first or prefix field which is susceptible to
bit errors, and an associated second or suffix field which is
resilient to bit errors. As explained hereinafter, the code
words can be generated such that a bit error in the prefix field
of a code word could result in a potential loss of code word
synchronization, while a bit error in the suffix field of a code
word shall only affect that particular code word. In
particular, the code words can be generated such that a bit
error in the suffix field of a code word will not result in a loss
of code word synchronization, but the resulting misdecoded
value shall, instead, fall within a predetermined interval
about the correct value. Thus, according to the present
invention, the error resilient method and apparatus for
entropy coding of data shall be suitable for use with unequal
`
error protection means such that the prefix fields are channel
encoded with a relatively higher level of error protection and
the suffix fields are channel encoded with a relatively lower
level of error protection, if any at all.
According to one embodiment, the code word generating
means includes prefix generating means and suffix generating
means for generating the prefix and suffix fields of each
code word, respectively. In particular, the prefix field
includes information representative of a predetermined characteristic
of the associated suffix field. Preferably, each
prefix field includes information representative of the predetermined
number of characters, such as bits, which form
the associated suffix field of the code word. In addition, each
suffix field includes information representative of respective
portions of the original data. Each suffix field is a