FILE HISTORY
US 5,850,482

PATENT:          5,850,482
INVENTORS:       Meany, James J.
                 Martens, Christopher J.
TITLE:           Error resilient method and apparatus for entropy coding
APPLICATION NO:  US1996633896A
FILED:           17 APR 1996
ISSUED:          15 DEC 1998
COMPILED:        17 JUN 2015

MICROSOFT CORP. ET AL.
EXHIBIT 1003

Page 1 of 437

[Scanned file-jacket page. Legible entries: PATENT DATE: DEC. 15, 1998; PATENT NUMBER: 5,850,482; VERIFIED; and the standard warning that the information disclosed herein may be restricted under Title 35 of the United States Code and that possession outside the U.S. Patent & Trademark Office is restricted. The remainder is illegible in this copy.]

Page 2 of 437

5,850,482
ERROR RESILIENT METHOD AND APPARATUS FOR ENTROPY CODING

Transaction History

Date        Transaction Description
04-29-1996  Initial Exam Team nn
06-06-1996  Notice Mailed--Application Incomplete--Filing Date Assigned
07-08-1996  Information Disclosure Statement (IDS) Filed
07-08-1996  Information Disclosure Statement (IDS) Filed
08-07-1996  Application Is Now Complete
08-15-1996  Application Captured on Microfilm
08-26-1996  Case Docketed to Examiner in GAU
07-05-1997  Case Docketed to Examiner in GAU
07-21-1997  Non-Final Rejection
07-24-1997  Mail Non-Final Rejection
01-26-1998  Response after Non-Final Action
01-26-1998  Request for Extension of Time - Granted
02-09-1998  Date Forwarded to Examiner
03-30-1998  Mail Notice of Allowance
03-30-1998  Notice of Allowance Data Verification Completed
03-30-1998  Mail Examiner's Amendment
03-30-1998  Examiner's Amendment Communication
04-30-1998  Issue Fee Payment Verified
11-12-1998  Issue Notification Mailed
12-15-1998  Recordation of Patent Grant Mailed

Page 3 of 437
`

[Scanned PTO file-jacket/bar-code page for the application (serial number partially legible, apparently 08/633,896). The remainder is illegible in this copy.]

Page 4 of 437

[Scanned issue classification slip (U.S. Government Printing Office form); legible fragments include "ISSUE CLASSIFICATION SLIP" and a Group Art Unit designation. The remainder is illegible in this copy.]

Page 5 of 437

[Scanned "INDEX OF CLAIMS" grid; the claim-by-claim entries are illegible in this copy.]

Page 6 of 437

Page 7 of 437

`
US005850482A

United States Patent  [19]
Meany et al.

[11] Patent Number: 5,850,482
[45] Date of Patent: Dec. 15, 1998

[54] ERROR RESILIENT METHOD AND APPARATUS FOR ENTROPY CODING

[75] Inventors: James J. Meany, Des Peres; Christopher J. Martens, Creve Coeur, both of Mo.

[73] Assignee: McDonnell Douglas Corporation, St. Louis, Mo.

[21] Appl. No.: 633,896

[22] Filed: Apr. 17, 1996

[51] Int. Cl.6 ............................ G06K 9/00
[52] U.S. Cl. ..................... 382/232; 382/275
[58] Field of Search ............. 382/232, 233, 234, 235, 236, 237, 238, 239, 240, 244, 245, 246, 248, 249, 250, 251, 252, 253, 270, 274, 275, 254, 305, 309, 310, 311, 224, 190, 166, 172, 306; 375/286, 298, 254, 264; 371/37.7, 43.1, 37.08, 37.5; 380/20, 10; 370/314; 348/476, 5.5; 340/825.44

[56] References Cited

U.S. PATENT DOCUMENTS

4,817,182   4/1989  Adelson et al.
5,014,134   5/1991  Lawton et al.
5,101,446   3/1992  Resnikoff et al.
5,168,493  12/1992  Nelson et al.
5,289,501   2/1994  Seshadri et al.
5,315,670   5/1994  Shapiro
5,321,750   6/1994  Nadan
5,321,776   6/1994  Shapiro
5,471,389  11/1995  Baggett et al.

[The classification column of this list is illegible in this copy.]

OTHER PUBLICATIONS

Edward R. Fiala and Daniel H. Greene, Data Compression with Finite Windows, Communications of the ACM, vol. 32, No. 4, Apr. 1989, pp. 490–505.
Timothy C. Bell, John G. Cleary and Ian H. Witten, Text Compression, Prentice Hall, Copyright 1990, pp. 290–295.
Imran Ali Shah, Olu Akiwumi-Assani and Brian Johnson, A Chip Set for Lossless Image Compression, IEEE Journal of Solid-State Circuits, vol. 26, No. 3, Mar. 1991, pp. 237–244.
Gregory K. Wallace, The JPEG Still Picture Compression Standard, Communications of the ACM, vol. 34, No. 4, Apr. 1991, pp. 30–45.
Olivier Rioul and Martin Vetterli, Wavelets and Signal Processing, IEEE SP Magazine, Oct. 1991, pp. 14–38.
Albert Cohen, Biorthogonal Wavelets, Wavelets – A Tutorial in Theory and Applications, Copyright 1992, pp. 123–152.
A. S. Lewis and G. Knowles, Image Compression Using the 2-D Wavelet, IEEE Transactions on Image Processing, vol. 1, No. 2, Apr. 1992, pp. 244–250.

(List continued on next page.)

Primary Examiner – Leo H. Boudreau
Assistant Examiner – Bijan Tadayon
Attorney, Agent, or Firm – Bell Seltzer Intellectual Property Group of Alston & Bird, LLP

[57] ABSTRACT

The error resilient method and apparatus for encoding data includes an encoder including a code word generator for generating a plurality of code words representative of respective portions of the data. The code word generator encodes data pursuant to split field coding in which each code word includes a prefix field and an associated suffix field. The prefix field includes information representative of a predetermined characteristic of the associated suffix field, such as the predetermined number of characters which form the associated suffix field. In addition, the suffix fields include information representative of at least some of the original data. Consequently, if the prefix field of a code word is decoded correctly, i.e., without the occurrence of bit error, the error resilient method and apparatus can correctly determine the length of the associated suffix field and the range of coefficient values to be represented by the associated suffix field such that the associated suffix field is resilient to errors. In order to increase the probability that the prefix field will be correctly decoded, the method and apparatus protects the prefix and suffix fields of the encoded data to greater and lesser degrees, respectively, such that the data can be more efficiently compressed.

30 Claims, 6 Drawing Sheets

[Front-page figure: block diagram in which the original data is passed to a data transformer (12) and a data quantizer (14).]

Page 8 of 437
`

`
5,850,482
Page 2

OTHER PUBLICATIONS

Mladen Victor Wickerhauser, High-Resolution Still Picture Compression, Apr. 19, 1992, pp. 1–33. (No Title of Publication).
Marc Antonini, Michel Barlaud, Pierre Mathieu and Ingrid Daubechies, Image Coding Using Wavelet Transform, IEEE Transactions on Image Processing, vol. 1, No. 2, Apr. 1992, pp. 205–220.
Naoto Tanabe and Nariman Farvardin, Subband Image Coding Using Entropy-Coded Quantization over Noisy Channels, IEEE Journal on Selected Areas in Communications, vol. 10, No. 5, Jun. 1992, pp. 926–943.
Amir Said and William A. Pearlman, Reversible image compression via multiresolution representation and predictive coding, SPIE, vol. 2094, 1993, pp. 664–674.
Jerome M. Shapiro, Embedded Image Coding Using Zerotrees of Wavelet Coefficients, IEEE Transactions on Signal Processing, vol. 41, No. 12, Dec. 1993, pp. 3445–3462.
Michael L. Hilton, Björn D. Jawerth and Ayan Sengupta, Compressing Still and Moving Images with Wavelets, Multimedia Systems, vol. 2, No. 3, Apr. 18, 1994, pp. 1–20.
Keith A. Birney and Thomas R. Fischer, On the Modeling of DCT and Subband Image Data for Compression, IEEE Transactions on Image Processing, vol. 4, No. 2, Feb. 1995, pp. 186–193.
Parthasarathy Sriram and Michael W. Marcellin, Image Coding Using Wavelet Transforms and Entropy-Constrained Trellis-Coded Quantization, IEEE Transactions on Image Processing, vol. 4, No. 6, Jun. 1995, pp. 725–733.
Björn Jawerth and Wim Sweldens, An Overview of Wavelet Based Multiresolution Analyses, pp. 1–39. (No Title of Publication), (No Date of Publication).
Wim Sweldens, The Lifting Scheme: A New Philosophy in Biorthogonal Wavelet Constructions. (No Place or Date of Publication & No Page #).
Ahmad Zandi, James D. Allen, Edward L. Schwartz and Martin Boliek, CREW: Compression with Reversible Embedded Wavelets. (No Place or Date) & (No Page #).

Page 9 of 437
`

`
U.S. Patent    Dec. 15, 1998    Sheet 1 of 6    5,850,482

[FIG. 1: block diagram of the encoding apparatus 10: the original data is passed through a data transformer 12, a data quantizer 14 and a code word generator 18.]

[FIG. 2: flowchart of the encoding method (step numbering partially illegible in this copy):
30: PERFORM A WAVELET TRANSFORM ON THE ORIGINAL IMAGE
32: SEPARATE SIGNIFICANT AND INSIGNIFICANT COEFFICIENTS ACCORDING TO CLIPPING THRESHOLD
34: QUANTIZE THE SIGNIFICANT COEFFICIENTS
36: RUN LENGTH ENCODE INSIGNIFICANT COEFFICIENTS TO REPRESENT POSITIONS OF SIGNIFICANT COEFFICIENTS
ENTROPY ENCODE COEFFICIENTS (USING SPLIT FIELD CODING); ENTROPY ENCODE RUN LENGTH VALUES
38: APPLY UNEQUAL ERROR PROTECTION TO ENCODED DATA, USING HIGHER ERROR PROTECTION FOR ENCODED RUN LENGTHS AND FOR PREFIX FIELDS OF ENCODED COEFFICIENTS, AND LOWER ERROR PROTECTION OR NO ERROR PROTECTION FOR SUFFIX FIELDS OF ENCODED COEFFICIENTS
(STOP)]

Page 10 of 437
`

`
U.S. Patent    Dec. 15, 1998    Sheet 2 of 6    5,850,482

[FIG. 3: scanned block diagram (reference numerals 72, 75, 82 and 87 are legible); the remaining detail is illegible in this copy.]

Page 11 of 437
`

`
U.S. Patent    Dec. 15, 1998    Sheet 3 of 6    5,850,482

[FIG. 4: flowchart beginning with START; the remaining detail is illegible in this copy.]

Page 12 of 437
`

`
U.S. Patent    Dec. 15, 1998    Sheet 4 of 6    5,850,482

[FIG. 5: histogram of the number of occurrences of quantized coefficient values; the axis labels ("NUMBER OF OCCURRENCES", "QUANTIZED COEFFICIENT VALUES") are rotated in the scan and otherwise illegible.]

Page 13 of 437
`
`

`
U.S. Patent    Dec. 15, 1998    Sheet 5 of 6    5,850,482

[FIG. 6: layout of encoded data in the storage medium 18, showing prefix fields (PREFIX 1, PREFIX 2, PREFIX 3) stored separately from the associated data blocks (DATA BLOCK 1, DATA BLOCK 2) (68).]

Page 14 of 437
`

`
U.S. Patent    Dec. 15, 1998    Sheet 6 of 6    5,850,482

[FIG. 7: flowchart of the decoding method:
DECODE ERROR PROTECTED DATA
ENTROPY DECODE RUN LENGTH VALUES
ENTROPY DECODE QUANTIZED COEFFICIENTS (USING SPLIT FIELD CODING) TO RECONSTRUCTED COEFFICIENT VALUES
EXPAND RUN LENGTHS TO RUNS OF ZEROED COEFFICIENTS AND INSERT RECONSTRUCTED COEFFICIENT VALUES BETWEEN THESE RUNS
PERFORM INVERSE WAVELET TRANSFORM TO RECOVER RECONSTRUCTED IMAGE]

Page 15 of 437
`

`
5,850,482

1

ERROR RESILIENT METHOD AND APPARATUS FOR ENTROPY CODING

FIELD OF THE INVENTION

The present invention relates generally to methods and apparatus for compressing and decompressing data by entropy encoding and decoding and, more particularly, to error resilient methods and apparatus for entropy encoding and decoding. The present invention further relates to the application of said error resilient entropy coding methods and apparatus to image compression.
BACKGROUND OF THE INVENTION

In order to transmit data over channels with limited throughput rates or to store data in limited memory space, it is frequently necessary to compress the data so as to represent the data with fewer bits. Upon receiving or retrieving the compressed data, the data may be decompressed to recover the original data or an approximation thereof.

Compression and decompression techniques are commonly applied to imagery data in order to reduce otherwise massive transmission and storage requirements. By way of example, a single monochrome image is typically formed by an array of pixels, such as a 512x512 array of pixels. In addition, the intensity level of each pixel is generally assigned a numeric value between 0 and 255 which is digitally represented by an 8 bit pattern. Therefore, the digital representation of such a monochrome image requires approximately 2 million bits of data. As a further example, a typical digital color video format is the Common Intermediate Format (CIF) having a resolution of 360x288. This color video format includes three color components which are each represented as an array of pixels which are displayed at a rate of 30 frames per second. The three color components are an intensity component at full resolution (360x288) and two chrominance components at half resolution (180x144) each. Thus, the total throughput requirement for CIF is about 37 million bits per second. Thus, the transmission of uncompressed digital imagery requires relatively high throughput rates or, alternatively, relatively long transmission times. Likewise, the storage of digital imagery requires relatively large memory or storage devices.
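The storage and throughput figures quoted above follow directly from the stated dimensions; a quick check in Python:

```python
# A single 512x512 monochrome image at 8 bits per pixel:
mono_bits = 512 * 512 * 8          # 2,097,152 bits, i.e. roughly 2 million

# CIF video: one full-resolution intensity plane (360x288) plus two
# half-resolution chrominance planes (180x144), 8 bits per sample,
# displayed at 30 frames per second:
bits_per_frame = 360 * 288 * 8 + 2 * (180 * 144 * 8)
cif_bits_per_second = bits_per_frame * 30   # 37,324,800 bits/s, about 37 million
```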
In order to reduce the storage and transmission requirements for image processing applications, a variety of image compression techniques have been developed. Substantial compression of imagery is possible due to the statistical redundancy typically found in image data. Such redundancy takes several forms, namely, spatial redundancy due to correlation between spatially proximate pixels, temporal redundancy due to correlation between successive frames in an image sequence, and spectral redundancy between the color planes or bands of multispectral images.

Image compression techniques attempt to reduce the volume of data by removing these redundancies. Image compression techniques fall into two broad categories: "lossless" and "lossy". With lossless image compression, a reconstructed image is guaranteed to be identical to the original image. Unfortunately, lossless approaches can provide only a very limited amount of compression (typically less than 3:1). In contrast, lossy techniques achieve higher compression ratios by allowing some of the visual information to be removed, thereby resulting in a difference or distortion between the original image and the reconstructed image. To minimize the perceived or subjective distortion of the reconstructed image, the removal of information should ideally take into account the characteristics of the human
`
2

visual system (HVS). Since subjective distortion is difficult to characterize quantitatively, however, numerical measures of distortion are commonly used with the most popular measure being RMS error, i.e., the square root of the mean squared pixel intensity differences between the original and reconstructed images. In general, higher compression ratios can be achieved by tolerating higher distortion. Finally, if the distortion from a lossy compression process is not visually apparent under normal viewing conditions, the compression is termed a "visually lossless" image compression.

A common approach to image compression, called transform-based compression or transform coding, involves three primary steps, namely, a transform step, a quantization step, and an encoding step. See, for example, an article entitled "High-Resolution Still Picture Compression" by M. V. Wickerhauser dated Apr. 19, 1992 which is available on the Internet as "http://wuarchive.wustl.edu/doc/techreports/wustl.edu/math/papers/dsp.ps.Z". As described in U.S. Pat. No. 5,014,134 to Wayne M. Lawton, et al. and U.S. Pat. No. 4,817,182 to Adelson, et al., an invertible transform decomposes the original image data into a weighted sum of simple building blocks, called basis functions, such as sinusoids or wavelet functions. Accordingly, a number of image transforms have been developed, including the Fourier transform, the discrete cosine transform and the wavelet transform. If the basis functions have sufficient correspondence to the correlation structure of the imagery to be compressed, most of the energy (or information) in the image will be concentrated into relatively few of the transform coefficients with correspondingly large coefficient values. Consequently, most of the remaining transform coefficients will have small or zero coefficient values.

The wavelet transform decorrelates the image data at multiple resolutions by use of basis functions which are dilations and translations of a single prototype function. The prototype basis function is a bandpass filter called a "wavelet", so named because the filter is both oscillatory and spatially localized. The translations and dilations of the prototype wavelet yield a set of basis functions which produce a signal or image decomposition localized in position and resolution, respectively.

As known to those skilled in the art, the wavelet transform can be efficiently computed using a fast discrete algorithm, called the Fast Wavelet Transform (FWT), which recursively applies the wavelet filter and a companion lowpass filter called a "scaling" filter. For a single iteration of the FWT applied to a one-dimensional signal, the wavelet and scaling filters are convolved against the signal, followed by a decimation by two. This process splits the signal into a low resolution approximation signal (extracted by the scaling filter) and a high resolution detail signal (extracted by the wavelet filter). By recursively applying the wavelet filter and the scaling filter to the low resolution approximation signal generated by the prior iteration of the FWT, a multiresolution decomposition of the original signal is produced which consists of the detail signals at various resolutions and a final low resolution approximation signal.
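The FWT iteration just described can be sketched in a few lines. The Haar filter pair is used here purely for illustration (the text does not commit to a particular wavelet): one step convolves with the scaling (lowpass) and wavelet (highpass) filters and decimates by two, and the full transform recurses on the approximation signal.

```python
import math

def haar_fwt_step(signal):
    """One convolve-and-decimate split: approximation and detail halves."""
    s = 1 / math.sqrt(2)
    approx = [s * (signal[i] + signal[i + 1]) for i in range(0, len(signal), 2)]
    detail = [s * (signal[i] - signal[i + 1]) for i in range(0, len(signal), 2)]
    return approx, detail

def haar_fwt(signal):
    """Recursively split the approximation, as the text describes, yielding
    detail signals at each resolution plus a final coarse approximation."""
    details = []
    while len(signal) > 1:
        signal, d = haar_fwt_step(signal)
        details.append(d)
    return signal, details

# A smooth signal concentrates its energy into few coefficients: here the
# first-level details are all zero and only a handful of values are nonzero.
approx, details = haar_fwt([4.0, 4.0, 4.0, 4.0, 2.0, 2.0, 0.0, 0.0])
```

Because the Haar pair is orthonormal, the energy of the input (72.0 for the sample signal) is preserved across the transform, which is one way to sanity-check an implementation.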
The wavelet transform can be easily extended to two-dimensional imagery by separately filtering the rows and columns and by iteratively processing the lowpass approximation image. This wavelet transform is equivalent to decomposing the image in terms of basis functions which are 2-D tensor products of the 1-D wavelet and scaling filters. See, for example, U.S. Pat. Nos. 5,014,134 and 4,817,182, the contents of which are expressly incorporated by reference herein. See also Olivier Rioul, et al., "Wavelets and Signal Processing", IEEE Signal Processing Magazine,
`
Page 16 of 437
`
`

`
5,850,482

3

pp. 14–38 (October 1991); Bjorn Jawerth, et al., "An Overview of Wavelet-Based Multi-Resolution Analyses", SIAM Review, Vol. 36, No. 3, pp. 377–412 (1994); and Michael L. Hilton, et al., "Compressing Still and Moving Images with Wavelets", Multimedia Systems, Vol. 2, No. 3 (1994) for further descriptions of the wavelet transform.

Once the image data has been transformed, the compression algorithm then proceeds to quantize and encode the transform coefficients which are generated by the wavelet transform. The quantization step discards some of the image content by approximating the coefficient values. As known to those skilled in the art, a quantization is a mapping from many (or a continuum) of input values to a smaller, finite number of output levels. The quantization step divides the range of input values by a set of thresholds {t_i, i=0, . . . , N-1} and maps an input value falling within the interval (t_i, t_{i+1}] to the output value represented by the discrete symbol or variable i. Correspondingly, dequantization (used to recover approximate coefficient values during decompression) maps the discrete variable i to a reconstructed value r_i which lies in the same interval, i.e., (t_i, t_{i+1}]. For minimum mean squared error, the reconstructed value should correspond to the mean of those coefficient values falling within the interval, but, in practice, a reconstruction value at the center of the interval is often used. Further, scalar quantization maps a single scalar value to a single discrete variable, whereas vector quantization jointly maps a plurality (or vector) of M values to each discrete variable.
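A minimal sketch of the scalar quantization mapping just described, with hypothetical threshold values and reconstruction at the interval center (the common practical choice noted above, rather than the conditional mean):

```python
import bisect

def quantize(value, thresholds):
    """Return the symbol i of the interval (t_i, t_{i+1}] containing value.
    Values outside [t_0, t_{N-1}] clamp to the first or last interval."""
    return bisect.bisect_left(thresholds, value, 1, len(thresholds) - 1) - 1

def dequantize(index, thresholds):
    """Reconstruct at the center of interval i."""
    return 0.5 * (thresholds[index] + thresholds[index + 1])

thresholds = [0.0, 1.0, 2.0, 3.0, 4.0]   # uniform intervals, illustrative only
i = quantize(2.5, thresholds)             # 2.5 falls in (2, 3] -> symbol 2
r = dequantize(i, thresholds)             # center of (2, 3] -> 2.5
```

Using `bisect_left` gives the half-open interval convention of the text exactly: an input equal to a threshold t_i maps to interval i-1, i.e. (t_{i-1}, t_i].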
While the quantized coefficient values have reduced precision, they also can be represented with fewer bits, thus allowing higher compression at the expense of distortion in the reconstructed image. This image distortion is referred to as quantization error and accounts for all of the distortion inherent in lossy compression schemes. Thus, the quantization step is omitted for lossless compression approaches.

As known to those skilled in the art, a variety of factors contribute to the choice of the actual quantization intervals, such as the desired compression ratio, the statistical distribution of the coefficient values, the manner in which the quantized coefficient values will be encoded, and the distortion metric used to measure image degradation. When the quantized coefficients will be entropy-coded, mean squared error can be (approximately) minimized by using uniform quantization intervals. See R. C. Wood, "On Optimum Quantization", IEEE Transactions on Information Theory, Vol. 15, pp. 248–52 (1969). In the absence of entropy coding, the mean squared error is minimized by choosing nonuniform quantization intervals in accordance with the Lloyd–Max algorithm as described in S. P. Lloyd, "Least Squares Quantization in PCM", Bell Lab. Memo. (July 1957), reprinted in IEEE Transactions on Information Theory, Vol. 3, pp. 129–37 (1982), and also in J. Max, "Quantizing for Minimum Distortion", IRE Transactions on Information Theory, Vol. 6, pp. 7–12 (1960).

Due to the decorrelating properties of the wavelet transform, the distribution of transform coefficient values is typically sharply peaked at zero. This type of coefficient distribution results in a preponderance of coefficients falling into the quantization interval at the origin, i.e., the quantization interval centered on the value of zero. Due to the preponderance of coefficients near zero, more efficient compression performance can be achieved by treating the quantization interval at the origin separately. In particular, the overall coding efficiency may be increased by using a larger quantization interval around the origin, often called a "dead zone". In one preferred embodiment, the dead zone interval is twice as large as the adjacent intervals. The dead zone is
`
4

centered about the origin with a reconstruction value exactly equal to zero to prevent artifacts resulting from the use of nonzero reconstruction values for the many coefficients close to zero. The magnitude of the positive and negative bounds of the dead zone is often termed the "clipping threshold" because all coefficients whose magnitudes fall below this threshold are "clipped" to zero. In addition, those coefficients whose magnitudes exceed the clipping threshold are termed "significant" coefficients, while those coefficients whose values lie below the threshold are termed "insignificant" coefficients.
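The clipping behaviour of the dead zone can be sketched as follows; the coefficient values and the clipping threshold here are illustrative only:

```python
def separate_coefficients(coeffs, clip_threshold):
    """Coefficients whose magnitudes exceed the clipping threshold are
    significant (kept, with their positions); the rest are insignificant
    and are clipped to the dead zone's reconstruction value of zero."""
    significant = [(i, c) for i, c in enumerate(coeffs) if abs(c) > clip_threshold]
    clipped = [c if abs(c) > clip_threshold else 0.0 for c in coeffs]
    return significant, clipped

coeffs = [0.1, -3.2, 0.0, 0.4, 7.5, -0.2]
significant, clipped = separate_coefficients(coeffs, clip_threshold=1.0)
# significant -> [(1, -3.2), (4, 7.5)]; every other coefficient reconstructs as zero
```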
Because most of the coefficients produced by the transform have small magnitudes or are equal to zero, the quantization process typically results in the majority of the coefficients being deemed insignificant, while only relatively few of the quantized coefficients have magnitudes exceeding the clipping threshold which are deemed significant. Thus, as indicated above, it is advantageous to treat the significant coefficients separately from the insignificant coefficients.
In one preferred embodiment, this separate treatment is accomplished by separately indicating the positions and the quantized values of the significant coefficients. To achieve further compression, the quantized values of the significant coefficients may then be entropy coded using a technique known to those skilled in the art, such as Huffman coding or arithmetic coding. In addition, the positions of the significant coefficients can be represented using one of a variety of conventional approaches, such as tree structures, coefficient maps, or run length coding. In one preferred embodiment, the positions of the significant coefficients are represented by means of run lengths of consecutively occurring insignificant coefficients. The resulting position representation may then be entropy coded to obtain additional compression.
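The run-length representation of significant-coefficient positions described above, together with its inverse expansion, can be sketched as:

```python
def positions_to_runs(clipped):
    """Emit (run_of_zeros, value) pairs; also return the trailing zero run."""
    runs, zeros = [], 0
    for c in clipped:
        if c == 0.0:
            zeros += 1
        else:
            runs.append((zeros, c))
            zeros = 0
    return runs, zeros

def runs_to_positions(runs, trailing):
    """Expand run lengths back to runs of zeroed coefficients, inserting the
    reconstructed values between the runs (as in the decoding flowchart)."""
    coeffs = []
    for zeros, value in runs:
        coeffs.extend([0.0] * zeros)
        coeffs.append(value)
    coeffs.extend([0.0] * trailing)
    return coeffs

runs, tail = positions_to_runs([0.0, -3.2, 0.0, 0.0, 7.5, 0.0])
# runs -> [(1, -3.2), (2, 7.5)], tail -> 1; expansion inverts this exactly
```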
As known to those skilled in the art, entropy coding reduces the number of bits required to represent a data set by using variable length coding in a manner which exploits the statistical probabilities of various symbols in the data set. For example, entropy coding assigns shorter code words to those symbols which occur frequently, while longer code words are assigned to those symbols which occur less frequently. A number of different entropy coding approaches have been developed including Huffman coding which represents the data symbols using code words that each have a length consisting of an integer number of bits, and arithmetic coding which is capable of producing code words whose length is a fractional number of bits. Entropy coding is completely reversible so that no additional distortion is introduced beyond that due to the quantization process.
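As a concrete instance of the integer-bit-length approach named above, a small Huffman coder built with the standard heap construction; the symbol frequencies here are illustrative:

```python
import heapq
from collections import Counter

def huffman_code(freqs):
    """Map each symbol to a bit string; frequent symbols get shorter codes."""
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    tiebreak = len(heap)  # unique counter so tuples never compare the dicts
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (f1 + f2, tiebreak, merged))
        tiebreak += 1
    return heap[0][2]

code = huffman_code(Counter("aaaabbc"))
# 'a' (frequency 4) gets a 1-bit code; 'b' and 'c' get 2-bit codes
```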
The assignment of code words for entropy coding is typically governed by means of a codebook which must be known to both the encoder and decoder. If the statistics of the data sets to be encoded are unstable or unknown, then the codebook itself or the statistics from which the codebook may be generated must be transmitted along with the entropy encoded data. In this event, any such codebook or statistics which must be transmitted to the decoder is referred to as "side information". A significant issue in the design of entropy coding schemes is the reduction of side information. In many cases, it is beneficial to estimate and transmit as side information only an approximation of the symbol statistics because the resulting reduction in side information may outweigh any loss of coding efficiency resulting from the approximation of the symbol statistics.

Once an image is compressed, the compressed image is typically transmitted over a communications link or stored
`
Page 17 of 437
`
`

`
5,850,482

5

in a medium, such as a magnetic disk. In this context, the communications or storage medium is referred to as the "channel". A digital data set, such as a compressed image, conveyed through such a channel is subject to corruption of some of the bits, thereby creating erroneous bit values. The rate at which erroneous bit values occur is termed the bit error rate (BER) of the channel. If the BER is sufficiently low, the incidence of errors is infrequent and may safely be ignored. For higher BERs, however, additional measures must be employed to maintain the integrity of the data. One approach known as Automatic Repeat reQuest (ARQ) is to use communications protocols which provide mechanisms for detecting errors and repeatedly transmitting corrupted data packets.

However, for continuous streams of data or for channels with high BERs, the delays due to retransmission and protocol handshaking required by ARQ may be unacceptable.
As known to those skilled in the art, another approach uses channel coding (also called forward error correction (FEC)) which further encodes the data so that channel errors may be detected and, in some instances, corrected with high probability. Channel coding can detect and correct some bit errors by adding redundancy to the data in a controlled fashion. The effectiveness of channel coding is related to the amount of redundancy employed. By adding more redundancy, channel coding can effectively detect and correct a higher level of errors. The downside of channel coding is that the redundancy introduced can consume a significant percentage of the channel bandwidth (typically in the range of 10% to 90% of the channel bandwidth depending on the level of error detection and correction required).
As known to those skilled in the art, one variant of channel coding uses an approach called Unequal Error Protection (UEP) which separates a data set into several subsets and provides different levels of error protection for each subset by varying the amount of redundancy for each subset. The rationale for UEP is that different subsets of a data set may vary in importance. The most important data may require correction of virtually all bit errors, whereas some higher level of bit errors may be acceptable in less important data. By providing lower levels of protection to the less important subsets of the data, the amount of redundancy added by the channel coding can be reduced, and channel bandwidth may correspondingly be conserved.
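The UEP trade-off can be made concrete with simple repetition codes; real systems would use stronger channel codes, so this is only a sketch of how different redundancy levels buy different protection levels:

```python
def rep_encode(bits, n):
    """Repeat each bit n times; larger n means more redundancy."""
    return [b for b in bits for _ in range(n)]

def rep_decode(coded, n):
    """Majority vote per group; corrects up to (n - 1) // 2 flips per bit."""
    return [int(sum(coded[i:i + n]) > n // 2) for i in range(0, len(coded), n)]

# Important subset: 5x repetition survives two flips per code word position.
important = [1, 0, 1]
coded = rep_encode(important, 5)
coded[0] ^= 1                     # corrupt one channel bit
coded[6] ^= 1                     # and another, in a different group
assert rep_decode(coded, 5) == important

# Less important subset: 3x repetition uses less bandwidth but corrects
# only a single flip per group.
```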
An unfortunate consequence of data compression is the increased susceptibility of the compressed data to channel errors. For uncompressed image data, the effects of bit errors are localized to the affected pixel(s) and are manifested as "salt and pepper" noise. On the other hand, image compression algorithms concentrate the information or energy of an image into fewer bits so that the effects of even a single bit error can be far-reaching. The susceptibility of the compressed data to channel errors and the resultant effects on the reconstructed data depend on the manner in which the data was compressed and the particular bit(s) which is/are corrupted.
For example, a bit error which causes a transform coefficient value to be misdecoded introduces a noise artifact in the reconstructed image consisting of the corresponding basis function scaled according to the magnitude of the error on the misdecoded coefficient. Such artifacts may be tolerable because the image noise associated with a few perturbed basis functions is often not sufficient to severely hamper the recognition of image content. If the coefficient magnitudes are encoded differentially, however, the effects
`
`
`of such an error may persist for all coeflicients which are
`dependent on the misdecoded eoeffieicnt. In this case, the
`resulting artifact will be an undesirable streak across the
`reconstructed image.
`As previously mentioned, the positions of the significant
`coefficients are commonly represented using tree structures,
`coefiicient maps or run length coding. Such representations
`often represent the position of a given coefficient relative to
`a previously designated coefficieiit position. Thus, a bit error
`which causes the position of a significant coellicient value to
`be misdecoded can also introduce errors in the positions of
`succeeding eocfilcients. In other words, the image content
`associated with the corrupted coefficient positions is mis-
`placed within the reconstructed image, thereby producing an
`elleet called “tearing" characterized by a displacement of a
`portion of the image which is sometimes catastrophic.
A bit error on entropy-coded data can, due to the variable
length of the code words, cause a loss of code word
synchronization which may persist for all succeeding data,
thereby resulting in catastrophic effects on a reconstructed
image. Because bit errors on compressed data can result in
a variety of catastrophic effects as described above, it is
imperative that some form of error protection be provided
for compressed data which will be transmitted through a
noisy channel. However, such error protection increases the
amount of information which must be transmitted over the
noisy channel, thereby decreasing the efficiency or speed at
which the compressed data is transmitted as described
above.
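The loss of code word synchronization can be demonstrated with a toy prefix code (a generic variable-length code chosen for illustration, not taken from the patent): flipping a single bit shifts the code word boundaries, so every symbol after the error may be misdecoded:

```python
# Toy prefix-free variable-length code and a greedy decoder.
CODE = {'a': '0', 'b': '10', 'c': '11'}
DECODE = {v: k for k, v in CODE.items()}

def vlc_decode(bits):
    """Greedily match code words in a bit string."""
    out, word = [], ''
    for b in bits:
        word += b
        if word in DECODE:
            out.append(DECODE[word])
            word = ''
    return ''.join(out)

msg = 'abcab'
bits = ''.join(CODE[s] for s in msg)     # '01011010'

# Flip the second bit of the stream ...
flipped = bits[0] + ('0' if bits[1] == '1' else '1') + bits[2:]

ok = vlc_decode(bits)       # decodes correctly
bad = vlc_decode(flipped)   # code word boundaries shift; the decoder
                            # never resynchronizes with the true stream
```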

SUMMARY OF THE INVENTION

It is therefore an object of the present invention to provide
an improved error resilient method and apparatus for
entropy coding of data which can utilize unequal error
protection techniques of channel coding.

It is another object of the present invention to provide an
error resilient method and apparatus which isolates the
effects of a bit error to a single code word and which
constrains the resulting error on the decoded value such that
a misdecoded value falls within a constrained or limited
interval about the correct value.

It is a further object of the present invention to provide an
improved method and apparatus for image compression
which utilizes error resilient entropy coding which, in turn,
can utilize unequal error protection techniques of channel
coding.

These and other objects are provided, according to the
present invention, by an error resilient method and apparatus
for entropy coding of data which includes code word gen-
erating means for generating a plurality of code words
representative of respective items in the data set. Each code
word has two portions which we shall hereafter refer to as
"fields", namely, a first or prefix field which is susceptible to
bit errors, and an associated second or suffix field which is
resilient to bit errors
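The prefix/suffix idea can be sketched with a Rice/Golomb-style split (an assumed construction loosely analogous to, not identical with, the patent's code words; the parameter K is hypothetical): a variable-length prefix carries the magnitude class, and a fixed-length K-bit suffix carries the low-order bits. A bit error confined to the suffix perturbs the decoded value by at most 2**K - 1, so only the prefix bits need strong channel protection:

```python
# Hypothetical two-field code word: unary prefix (error-susceptible,
# since its length governs synchronization) plus fixed-length binary
# suffix (error-resilient, since a flipped bit changes the value by a
# bounded amount).

K = 3  # suffix width in bits (an assumed parameter)

def encode(value, k=K):
    prefix = '1' * (value >> k) + '0'                  # unary class
    suffix = format(value & ((1 << k) - 1), f'0{k}b')  # low-order bits
    return prefix, suffix

def decode(prefix, suffix, k=K):
    return (prefix.count('1') << k) | int(suffix, 2)

p, s = encode(21)            # p = '110', s = '101'

# Flip each suffix bit in turn and measure the decoding error: it
# never exceeds 2**K - 1, so the misdecoded value stays within a
# limited interval about the correct value.
worst = max(
    abs(decode(p, s[:i] + ('1' if s[i] == '0' else '0') + s[i + 1:]) - 21)
    for i in range(K)
)
```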
