US007003171B1

(12) United States Patent                    (10) Patent No.:     US 7,003,171 B1
     Takeo                                   (45) Date of Patent: Feb. 21, 2006

(54) METHOD, APPARATUS AND RECORDING MEDIUM FOR DATA COMPRESSION

(75) Inventor:  Hideya Takeo, Kanagawa-ken (JP)

(73) Assignee:  Fuji Photo Film Co., Ltd., Kanagawa-ken (JP)

(*)  Notice:    Subject to any disclaimer, the term of this patent is extended or
                adjusted under 35 U.S.C. 154(b) by 816 days.

(21) Appl. No.: 09/356,505

(22) Filed:     Jul. 19, 1999

(30)            Foreign Application Priority Data
     Jul. 17, 1998  (JP) ................................. 10-202501
     Dec. 9, 1998   (JP) ................................. 10-350159

(51) Int. Cl.
     G06K 9/36       (2006.01)

(52) U.S. Cl. ................................ 382/251; 382/232

(58) Field of Classification Search ......... 382/232-253
     See application file for complete search history.
(56)                References Cited

                    PUBLICATIONS

P. Sriram and M. W. Marcellin, "Image Coding Using Wavelet Transforms and
Entropy-Constrained Trellis-Coded Quantization", IEEE Transactions on Image
Processing, vol. 4, pp. 725-733, Jun. 1995.
A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on
Set Partitioning in Hierarchical Trees", IEEE Transactions on Circuits and
Systems for Video Technology, vol. 6, pp. 243-250, Jun. 1996.
Fukinuke, T., "Digital Image Signal Processing", Nikkan Kogyo Shimbun Co.,
May 25, 1981.
Kasahara et al., "Digital Communication Systems", Shokodo Co., Feb. 25, 1992.

Primary Examiner: Samir Ahmed
Assistant Examiner: Anand Bhatnagar
(74) Attorney, Agent, or Firm: Sughrue Mion, PLLC
(57)                ABSTRACT

Fast and efficient data compression can be carried out. Original image data
are subjected to wavelet transform. The obtained wavelet-transformed data are
classified and bit allocation is determined. Based on the determined bit
allocation, the wavelet-transformed data are quantized and quantized data are
obtained. The quantized data are classified into 0 data and non-zero data.
Binary classification information data indicating this classification are also
obtained. The classification information data are coded according to a coding
method having a comparatively simple operation, such as Huffman coding and run
length coding. The multi-valued non-zero data are coded according to a coding
method having a high compression efficiency although having a comparatively
complex operation, such as universal coding and Golomb-Rice coding.

30 Claims, 10 Drawing Sheets
[Representative drawing (FIG. 1): wavelet transform means 1, classification and bit allocation means 2, quantization means (TCQ) 3, classification means 4, first coding means 5, second coding means 6, recording means 7.]
[Drawing sheets 1-10 (U.S. Patent, Feb. 21, 2006, US 7,003,171 B1): FIG. 1, FIGS. 2a-2c, FIGS. 3-5, FIG. 6 (flow chart, steps S1-S7), FIGS. 7-8, FIG. 9 (flow chart, steps S11-S19), and FIGS. 10-12; see the Brief Description of the Drawings below.]
METHOD, APPARATUS AND RECORDING MEDIUM FOR DATA COMPRESSION

BACKGROUND OF THE INVENTION
1. Field of the Invention

The present invention relates to a method and an apparatus for compressing
original data by coding thereof, and to a computer-readable recording medium
storing a program for causing a computer to execute the data compression
method.

2. Description of the Related Art

In the field of data compression, such as image data compression by an image
server in a medical network and compression of general data such as for
communication and filing, various compression algorithms have been proposed.
For example, as substantially efficient compression algorithms, the WTCQ
method (P. Sriram and M. W. Marcellin, "Image coding using wavelet transforms
and entropy-constrained trellis-coded quantization", IEEE Transactions on
Image Processing, vol. 4, pp. 725-733, June 1995) and the SPIHT method
(A. Said and W. A. Pearlman, "A New Fast and Efficient Image Codec Based on
Set Partitioning in Hierarchical Trees", IEEE Transactions on Circuits and
Systems for Video Technology, vol. 6, pp. 243-250, June 1996) have been
proposed. FIG. 11 is a diagram explaining a compression algorithm according to
the WTCQ method and the SPIHT method. Original image data S representing an
original image are wavelet-transformed first. Data in each subband after the
transform are classified while bit allocation is determined. Based on the
determined bit allocation, quantization according to a TCQ method is carried
out to obtain quantized data RS. Coded data are obtained by carrying out
entropy coding on the quantized data RS. As the method for the entropy coding,
in the WTCQ method, bit-plane binary arithmetic coding is used. In the
bit-plane binary arithmetic coding, quantized data are decomposed into a
plurality of bit planes and converted into binary data. Binary arithmetic
coding is carried out on the data in each bit plane and each output is coded.
Meanwhile, in the SPIHT method, multi-valued arithmetic coding is used as the
entropy coding. By compressing the original image data S in this manner,
efficient coding with a substantially low bit rate can be carried out.

In the field of general JPEG compression, as shown in FIG. 12, an arithmetic
coding method or a baseline method can be used. In the case of JPEG
compression, discrete cosine transform (DCT) is carried out on the original
image data S and quantization is carried out by determining bit allocation. In
the case of arithmetic coding, after the quantized data RS have been converted
from multi-valued data to binary data, binary arithmetic coding is carried out
to obtain coded data. Meanwhile, in the case of the baseline method, the
quantized data RS are coded using Huffman coding.

In the above-described SPIHT method, since multi-valued arithmetic coding is
carried out, compression efficiency thereof is higher than that of the
conventional Huffman coding or the like. However, computation therefor is
time-consuming, since the multi-valued arithmetic coding is a substantially
complex operation. Meanwhile, in the WTCQ method, since binary arithmetic
coding is carried out, computation is carried out faster than in the SPIHT
method. However, upon entropy coding, quantized data are decomposed into a
plurality of bit planes (practically, approximately 14) and binary-coded, and
it is necessary to carry out binary arithmetic coding thereafter on each bit
plane. Therefore, as for the WTCQ method, the computational load becomes heavy
as a whole, and the execution time is long. In the arithmetic coding method in
the above-described JPEG compression, data are binary-coded as in the WTCQ
method. Therefore, the computational load is heavy and the execution time is
long. The baseline method uses Huffman coding, and the compression efficiency
is thus lower than that of the WTCQ method or the like.

SUMMARY OF THE INVENTION
The present invention has been created based on consideration of the above
problems. An object of the present invention is to provide a method and an
apparatus for efficient and fast data compression, and a computer-readable
recording medium storing a program to cause a computer to execute the data
compression method.

A data compression method of the present invention is a data compression
method of obtaining compressed coded data by quantization of original data to
obtain quantized data followed by coding and compression of the quantized
data. The data compression method of the present invention comprises the
steps of:

classifying the quantized data into data having a value representing the
quantized data and at least one set of classified data representing a data
value other than the representative value, while obtaining classification
information data regarding this classification;

coding the classification information data according to a first coding
method; and

obtaining the coded data by coding at least the classified data, out of the
classified data and the data having the representative value, according to a
second coding method.

In the data compression method of the present invention, it is preferable for
the second coding method to be different between the data having the
representative value and each set of the classified data.

In the data compression method of the present invention, it is preferable for
the quantized data to be obtained by carrying out wavelet transform on the
original data followed by quantization thereof. It is also preferable for the
quantized data to be obtained by carrying out DCT on the original data
followed by quantization thereof.

In the case where the quantized data are obtained after wavelet transform,
quantization and coding are carried out on the wavelet-transformed data in
each subband.

Furthermore, it is preferable for the data having the representative value to
be 0 data representing the value 0 of the quantized data, and it is preferable
for the classified data to be non-zero data representing non-zero values of
the quantized data.

The "value representing the quantized data" herein referred to means various
kinds of values, such as an average of the data values, a mode in the
quantized data, and the value 0. As a method of classifying the quantized data
into the data having the representative value and the data other than those,
various kinds of methods can be used. For example, the quantized data may be
classified simply according to the representative value and non-representative
values. Alternatively, the data may be classified into the data having the
representative value and data having values smaller and larger than the
representative value, or into the data having the representative value and
data having values whose absolute differences from the representative value
are smaller and larger than a threshold value.
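In the simplest arrangement described above, the representative value is 0, the classification information data are binary, and the classified data are the non-zero values. A minimal sketch of that classification step, assuming NumPy; the function and variable names are illustrative rather than the patent's:

```python
import numpy as np

def classify(quantized, representative=0):
    """Split quantized data into binary classification information data and
    the classified (non-representative) values, as described above."""
    quantized = np.asarray(quantized)
    info = (quantized != representative).astype(np.uint8)   # 1 where value differs
    rest = quantized[info == 1]                              # multi-valued classified data
    return info, rest

# With 0 as the representative value, this is the 0 / non-zero split.
q = np.array([0, 0, 3, 0, -1, 0, 0, 2])
info, nonzero = classify(q, representative=0)
print(info.tolist())     # [0, 0, 1, 0, 1, 0, 0, 1]
print(nonzero.tolist())  # [3, -1, 2]
```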
The "classification information data" have a value corresponding to the number
of classification classes. For example, when the quantized data are classified
into 2 classes, namely the data having the representative value and the data
having values other than that, the classification information data are binary.
When the quantized data are classified into the data having the representative
value and data having the values below and above the representative value, the
classification information data are 3-valued.

Furthermore, "at least the classified data are coded according to the second
coding method", because the data having the representative value are not coded
in some cases, as in the case where the representative value is 0.

The phrase stating that the coding method "is different between the data
having the representative value and each set of the classified data" means the
case where the data having the representative value and the classified data
have different coding methods, as well as the case where each set of
classified data has a different coding method when a plurality of classified
data sets exist.

As the first coding method, any one of Huffman coding, run length coding, B1
coding, B2 coding, Wyle coding, Golomb coding, Golomb-Rice coding, and binary
arithmetic coding can be adopted.

As the second coding method, any one of Huffman coding, universal coding, and
multi-valued arithmetic coding can be used.
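To illustrate how the two coded streams differ in character, the sketch below run-length codes the binary classification information data (run length coding being one of the simple first coding methods listed above) and applies Golomb-Rice coding to the multi-valued non-zero data, as the abstract mentions for that stream; the fixed parameter k and the zigzag sign mapping are implementation choices of this sketch:

```python
def run_length_code(bits):
    """First coding method (illustrative): emit (bit value, run length) pairs
    for the binary classification information data."""
    runs, i, bits = [], 0, list(bits)
    while i < len(bits):
        j = i
        while j < len(bits) and bits[j] == bits[i]:
            j += 1
        runs.append((int(bits[i]), j - i))
        i = j
    return runs

def golomb_rice(value, k=2):
    """Golomb-Rice code one signed integer as a bit string; the sign is
    folded in with a zigzag mapping and k is fixed for simplicity."""
    n = 2 * value if value >= 0 else -2 * value - 1
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, "0{}b".format(k))

info = [0, 0, 1, 0, 1, 0, 0, 1]             # binary classification data
nonzero = [3, -1, 2]                        # multi-valued non-zero data
print(run_length_code(info))                # [(0, 2), (1, 1), (0, 1), (1, 1), (0, 2), (1, 1)]
print([golomb_rice(v) for v in nonzero])    # ['1010', '001', '1000']
```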
In the data compression method of the present invention, if the amount of the
coded data is larger than a predetermined information amount determined based
on the original data, it is preferable for the coded data to be obtained by
coding the classified data, out of the classification information data and/or
the data having the representative value and the classified data, according to
a third coding method.

It is preferable for the third coding method to be any one of Huffman coding,
arithmetic coding, and PCM (Pulse Code Modulation) without any coding.

As the "predetermined information amount determined based on the original
data", the amount of the original data may be used. It is preferable for the
predetermined information amount to be an information amount smaller than the
amount of the original data and enabling reduction in the information amount
when at least the classified data, out of the classification information data
and/or the data having the representative value and the classified data, are
coded according to the third coding method.
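A sketch of that fallback decision, taking the amount of the original data as the predetermined information amount and fixed-length PCM without any coding as the illustrative third coding method; sizes are handled as bit strings and all names are illustrative:

```python
def finalize_coded_data(coded_bits, original_bits, quantized_values, sample_bits=10):
    """If the first/second-method output exceeds the predetermined
    information amount (taken here to be the amount of the original data),
    recode the quantized data with the third coding method: plain
    fixed-length PCM without any entropy coding."""
    predetermined = len(original_bits)
    if len(coded_bits) <= predetermined:
        return coded_bits                    # keep the first/second-method output
    mask = (1 << sample_bits) - 1
    return "".join(format(v & mask, "0{}b".format(sample_bits)) for v in quantized_values)

# A pathological case where the "compressed" output is longer than the original:
out = finalize_coded_data("1" * 120, "0" * 100, quantized_values=[0, 3, -1, 2])
print(len(out))   # 4 samples * 10 bits = 40 bits of plain PCM
```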
A data compression apparatus of the present invention is a data compression
apparatus for obtaining compressed coded data by quantization of original data
to obtain quantized data followed by coding and compression of the quantized
data. The data compression apparatus comprises:

classification means for classifying the quantized data into data having a
representative value representing the quantized data and at least one set of
classified data having a data value other than the representative value, and
for obtaining classification information data representing this
classification;

first coding means for coding the classification information data by using a
first coding method; and

second coding means for coding at least the classified data, out of the data
having the representative value and the classified data, according to a second
coding method.

It is preferable for the second coding method carried out by the second coding
means to be different for the data having the representative value and each
set of the classified data.
It is preferable for the data compression apparatus of the present invention
to further comprise wavelet transform means for obtaining the quantized data
by carrying out wavelet transform on the original data followed by
quantization thereof, or DCT means for obtaining the quantized data by
carrying out DCT on the original data followed by quantization thereof.

Furthermore, it is preferable for the classification means to classify the
quantized data by letting the data having the representative value be 0 data
representing the value 0 of the quantized data and letting the classified data
be non-zero data representing non-zero values of the quantized data.

Moreover, it is preferable for the data compression apparatus of the present
invention to further comprise:

judging means for judging whether or not the amount of the coded data is
larger than a predetermined information amount determined based on the
original data; and

third coding means for obtaining the coded data by coding at least the
classified data, out of the classification information data and/or the data
having the representative value and the classified data, according to a third
compression method, in the case where the judging means has judged the amount
of the coded data to be larger than the predetermined information amount.

The processing carried out in the above-described data compression method may
be provided in the form of a computer-readable recording medium storing a
program to cause a computer to execute the method.

According to the present invention, the quantized data obtained by
quantization of the original data are classified, according to the data
values, into the data having the representative value, the classified data,
and the classification information data showing the classification result. The
classification information data are data having a comparatively small
information amount, such as binary data or 3-valued data, although the amount
depends on the classification. Therefore, coding by a simple coding method can
be carried out at a high compression rate and with a light computational load.
The data having the representative value are data containing only one value.
Therefore, coding can be carried out with a small amount of computation and at
a comparatively high compression rate. The classified data other than the data
having the representative value are multi-valued. However, since the
representative values have been eliminated, the ratio of these data to the
total quantized data is comparatively low. For example, in the case of 0.5
bit/pixel (1/20 compression of 10-bit data), if 0 is the representative value,
the ratio of non-zero values is only about 13%. Therefore, the amount of data
to be processed is small, although the data are multi-valued.
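The exact figure depends on the image and the bit rate, but the qualitative point that coarse quantization leaves only a small fraction of non-zero values is easy to reproduce. A purely illustrative sketch, assuming NumPy and substituting synthetic Laplacian-distributed values for real high-frequency wavelet-transformed data (the distribution, scale and step size are assumptions of this sketch, not values from the patent):

```python
import numpy as np

rng = np.random.default_rng(0)
# High-frequency wavelet coefficients are strongly concentrated around zero;
# a Laplacian is used here only as a rough synthetic stand-in.
coeffs = rng.laplace(scale=1.0, size=100_000)
step = 4.0                                   # coarse quantization step (assumed)
quantized = np.round(coeffs / step).astype(int)
ratio = np.count_nonzero(quantized) / quantized.size
print(f"non-zero ratio: {ratio:.1%}")        # a small minority of the samples
```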
Consequently, in the present invention, although coding of multi-valued data
is carried out, the computational load therefor is lighter than in the
conventional compression algorithms described above, and the data having the
representative value and the classification information data can be compressed
at a high compression rate with a light computational load. In this manner,
the original data can be compressed fast and efficiently, at a high
compression rate.

Furthermore, by classifying the quantized data into 0 data and non-zero data,
classification becomes easy and compression of 0 data in the classified data
is not necessary. Therefore, the original data can be compressed faster with a
smaller computational load.

Moreover, since the classification information data have a comparatively small
information amount, such as binary data and 3-valued data, the classification
information data can be compressed fast and efficiently by any one of Huffman
coding, run length coding, B1/B2 coding, Wyle coding, Golomb coding,
Golomb-Rice coding, and binary arithmetic coding, which are all comparatively
simple operations.

The multi-valued data in the classified data occupy a small portion of the
entire quantized data. Therefore, efficient coding can be carried out by using
any one of Huffman coding, universal coding, and multi-valued arithmetic
coding, which enable high compression rate coding although computation
therefor is complex.

According to the present invention, the original data can be compressed at a
high compression rate. For example, when the original data represent a
background image with plain signal values, only either the data having the
representative value or the classified data have information if the original
data are quantized and classified into the data having the representative
value and the classified data. Therefore, the amount of classification
information data is large and the amount of the coded data may become larger
than the amount of the original data. In a case like this, the information
amount of the coded data is compared with the predetermined information amount
determined based on the original data, and when the former is larger than the
latter, at least the classified data, out of the classification information
data and/or the data having the representative value and the classified data,
are coded according to the third coding method.
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic block diagram showing a configuration of a data
compression apparatus according to a first embodiment of the present
invention;
FIGS. 2a, 2b and 2c are diagrams for explaining wavelet transform;
FIG. 3 is a diagram showing scalar quantizers;
FIG. 4 is a diagram showing sum sets of 2 scalar quantizers;
FIG. 5 is a 4-state trellis diagram;
FIG. 6 is a flow chart showing processing carried out by the first embodiment;
FIG. 7 is a schematic block diagram showing a configuration of a data
compression apparatus according to a second embodiment of the present
invention;
FIG. 8 is a schematic block diagram showing a configuration of a data
compression apparatus according to a third embodiment of the present
invention;
FIG. 9 is a flow chart showing processing carried out by the data compression
apparatus according to the third embodiment;
FIG. 10 is a schematic block diagram showing a configuration of a data
compression apparatus according to a fourth embodiment of the present
invention;
FIG. 11 is a schematic block diagram showing a configuration of a conventional
data compression apparatus; and
FIG. 12 is a schematic block diagram showing a configuration of another
conventional data compression apparatus.

DESCRIPTION OF THE PREFERRED EMBODIMENTS
Hereinafter, embodiments of the present invention will be explained with
reference to the accompanying drawings.

FIG. 1 is a schematic block diagram showing a configuration of a data
compression apparatus according to a first embodiment of the present
invention. As shown in FIG. 1, the data compression apparatus according to the
first embodiment comprises wavelet transform means 1 for obtaining
wavelet-transformed data WS in each subband at each resolution by carrying out
wavelet transform on original image data S, classification and bit allocation
means 2 for classifying the wavelet-transformed data WS and for determining
bit allocation for each class, quantization means 3 for obtaining quantized
data RS by quantizing the wavelet-transformed data WS according to the bit
allocation determined by the classification and bit allocation means 2,
classification means 4 for obtaining 0 data S0, non-zero data NS and
classification information data B representing a classification result through
classification of the quantized data RS, first coding means 5 for coding the
classification information data B according to a first coding method, second
coding means 6 for coding the non-zero data NS according to a second coding
method, and recording means 7 for recording coded data F resulting from the
coding in a recording medium.
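Read as a data flow, the seven means form a straight pipeline from the original image data S to the coded data F. The sketch below mirrors only that flow; the stage functions are assumed to be supplied by the caller, and the names are illustrative:

```python
def compress(original_image, wavelet_transform, allocate_bits, quantize_tcq,
             classify, code_first, code_second, record):
    """Data flow of FIG. 1; every argument stands in for one of the means
    1-7 and is assumed to be supplied by the caller."""
    ws = wavelet_transform(original_image)            # means 1: subband data WS
    allocation = allocate_bits(ws)                    # means 2: classes and bit allocation
    rs = quantize_tcq(ws, allocation)                 # means 3: quantized data RS
    info, nonzero = classify(rs)                      # means 4: classification data B, non-zero data NS
    coded = code_first(info) + code_second(nonzero)   # means 5 and 6: first/second coding
    record(coded)                                     # means 7: record coded data F
    return coded
```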
The wavelet transform means 1 carries out wavelet transform on the original
image data S as follows. As shown in FIG. 2(a), the original image data S are
subjected to wavelet transform and decomposed into 4 sets of data at a
plurality of resolutions, namely LL1, HL0, LH0 and HH0. The data LL1
represents an image whose width and height are 1/2 of those of the original
image, and the data HL0, LH0 and HH0 respectively represent images of a
vertical edge component, a horizontal edge component, and a diagonal edge
component. As shown in FIG. 2(b), 4 sets of data LL2, HL1, LH1 and HH1 are
obtained through wavelet transform on the data LL1. The data LL2 represents an
image whose width and height are 1/2 of those of the data LL1, and the data
HL1, LH1 and HH1 represent images of vertical, horizontal and diagonal edge
components of the data LL1, respectively. The wavelet transform is repeated a
desired number of times on the data LL obtained at each wavelet transform, and
data at multiple resolutions are obtained thereby. When the wavelet transform
is carried out 3 times, as shown in FIG. 2(c), data at 3 resolutions can be
obtained. In this embodiment, data at each resolution, that is, data in each
subband, are all called wavelet-transformed data WS.
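The excerpt does not state which wavelet filter is used, so the following sketch substitutes an orthonormal Haar filter purely to illustrate the repeated decomposition of LL into LL, HL, LH and HH subbands; image dimensions are assumed to be divisible by 2 at every level:

```python
import numpy as np

def haar_level(x):
    """One level of a 2-D Haar transform; returns LL, HL, LH, HH, each half
    the width and height of the input (dimensions assumed even)."""
    x = np.asarray(x, dtype=float)
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)   # row-wise low-pass
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)   # row-wise high-pass
    ll = (lo[0::2, :] + lo[1::2, :]) / np.sqrt(2)
    lh = (lo[0::2, :] - lo[1::2, :]) / np.sqrt(2)
    hl = (hi[0::2, :] + hi[1::2, :]) / np.sqrt(2)
    hh = (hi[0::2, :] - hi[1::2, :]) / np.sqrt(2)
    return ll, hl, lh, hh

def wavelet_transform(image, levels=3):
    """Repeat the decomposition on LL the desired number of times (3 here,
    as in FIG. 2(c)) and collect all subbands as the data WS."""
    subbands, ll = [], np.asarray(image, dtype=float)
    for n in range(levels):
        ll, hl, lh, hh = haar_level(ll)
        subbands.append({f"HL{n}": hl, f"LH{n}": lh, f"HH{n}": hh})
    subbands.append({f"LL{levels - 1}": ll})   # lowest-resolution approximation
    return subbands

ws = wavelet_transform(np.random.default_rng(1).integers(0, 1024, (64, 64)))
print([list(band) for band in ws])   # subband names per decomposition level
```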
The classification and bit allocation means 2 determines the classification
and bit allocation of the wavelet-transformed data WS in the following manner.
For example, the wavelet-transformed data WS in each subband obtained by
carrying out wavelet transform as shown in FIG. 2(c) are classified into 4
classes, namely data LL2, data HHn (n=0-2), data HLn (n=0-2), and data LHn
(n=0-2). This is because the data in each class have statistically similar
signal values. For the data in each class, the square error of the data values
is calculated, for example, and the bit allocation is determined based on the
magnitude of the square error. For example, when the square error is large, a
large number of bits are allocated for data saving. When the square error is
small, a small number of bits are allocated, since missing data are allowed to
some degree.
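The passage gives the grouping into 4 classes and the principle (more bits where the square error is large) but not a concrete allocation rule, so the rule below is an assumption chosen only to illustrate the idea; the input is the list of subband dictionaries produced by the wavelet sketch above:

```python
import numpy as np

def allocate_bits(subbands, base_bits=4):
    """Group the subband data WS into the 4 classes (LL, HLn, LHn, HHn) and
    give more bits to classes with larger total squared signal values.
    The spread around base_bits is an illustrative choice, not the patent's."""
    classes = {}
    for band in subbands:                       # 'subbands' as returned above
        for name, data in band.items():
            classes.setdefault(name[:2], []).append(np.asarray(data, float).ravel())
    energy = {c: float(np.sum(np.concatenate(parts) ** 2)) for c, parts in classes.items()}
    log_energy = {c: np.log2(e + 1.0) for c, e in energy.items()}
    mean_log = float(np.mean(list(log_energy.values())))
    # spread bits around base_bits according to relative (log) energy
    return {c: max(1, round(base_bits + 0.5 * (l - mean_log))) for c, l in log_energy.items()}

# Example with the subband structure produced by the wavelet sketch above:
# print(allocate_bits(wavelet_transform(image)))
```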
The quantization means 3 carries out quantization of the wavelet-transformed
data WS according to a TCQ (Trellis Coded Quantization) method, based on the
bit allocation determined by the classification and bit allocation means 2.
The TCQ method is based on the TCM (Trellis Coded Modulation) method developed
in the field of signal communication and audio coding, and has been generated
by extending this TCM method for image coding. The TCQ method can be
considered theoretically to have the same meaning as infinite-length vector
quantization. From the viewpoint of the rate distortion theory, an S/N ratio
can be improved by several dB compared with conventional scalar quantization.
The TCQ method is one of the vector quantization methods. In the TCQ method,
for a plurality of signal inputs (b1, b2, ..., bn), the same number of
quantized values (q1, q2, ..., qn) are output. The quantized values are
determined by using the Viterbi algorithm for searching for a path along which
the total of quantization errors becomes minimal for the input vector. In this
embodiment, the Viterbi algorithm is applied by letting the mean square of
each quantization error be the cost of the path. In the TCQ method, sets of a
plurality of quantization representative values are defined, and the
quantization representative value sets are used properly depending on a state.
For example, when the classification and bit allocation means 2 has allocated
4 bits and an index (which will be explained later in detail) after
quantization is expressed by 4 bits or fewer, quantization can be carried out
easily by normally using quantization representative values of 16 points. In
the TCQ method, 2 sets of quantization representative values (for example, Q1
and Q2) for 16 points are used, and which set of the quantization
representative values is used is defined depending on the state, such as Q1
for a state S0 and Q2 for a state S1. State transition rules which are common
for both quantization and inverse quantization are defined in advance.
Quantization progresses while the state transits every time quantization is
carried out on one pixel. In this manner, by carrying out the quantization
processing, 32 quantization representative values (31 for the case where 0 is
used twice), which is double the number of values in the normal case, can be
used apparently, and the selection of the path for reducing the quantization
error is widened. Hereinafter, the TCQ quantization method will be explained
more specifically.
FIG. 3 shows 4 scalar quantizers D0 to D3, and FIG. 4 shows sum sets (sum-set
quantizers) of 2 scalar quantizers. FIG. 5 shows a 4-state trellis diagram. In
FIG. 3, Δ is a quantization step size determined by the quantization bit
allocation, and is assumed to be 0.5 in this embodiment. In FIG. 4, a sum-set
quantizer D0&D2 (=A0) of the scalar quantizers D0 and D2, and a sum-set
quantizer D1&D3 (=A1) of the scalar quantizers D1 and D3 are shown. In this
embodiment, a signal comprising 5 elements is used as the input
wavelet-transformed data WS, and the 5 elements are expressed as
WS=(0.5, -3, -3, 0.25, -2.5). FIG. 5 shows, as state transitions, how an
optimal quantizer is selected when the signal having the 5 elements is
quantized at one time. The transition states are shown by the selected
quantizers. In other words, the state wherein the sum-set quantizer D0&D2 has
been selected is shown as "state D0&D2". The method of quantization will be
explained below.

The quantization method according to the TCQ method examines all paths of the
trellis diagram defined as shown in FIG. 5, and finds the path along which the
mean square errors (hereinafter called MSE) of all elements of the input
signal after quantization become minimal. More specifically, when 0.5, which
is the first element of the wavelet-transformed data WS, is quantized by the
quantizer D2 out of the quantizers D0 to D3, the MSE becomes (0.5-Δ)² = 0.0,
which is a minimum. For the next element, -3, if quantization is carried out
by the quantizer D2, the MSE becomes (-3-(-6Δ))² = 0.0, which is also a
minimum.
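Both figures follow directly from Δ = 0.5 and the reconstruction levels Δ and -6Δ used in the example (the full codebooks of D0 to D3 are defined in FIG. 3, which is not reproduced legibly here); a two-line check:

```python
delta = 0.5                          # quantization step size Δ of the example
print((0.5 - 1 * delta) ** 2)        # element 0.5 against level  1*Δ: 0.0
print((-3 - (-6 * delta)) ** 2)      # element -3  against level -6*Δ: 0.0
```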
Meanwhile, as restraints of the trellis transition, the paths are limited in a
manner such that for the sum-set quantizer D0&D2, a subsequent quantizer is D0
(the state shifts to D0&D2) or D2 (the state shifts to D1&D3). For the sum-set
quantizer D2&D0, a subsequent quantizer is D2 (the state shifts to D0&D2) or
D0 (the state shifts to D1&D3). For the sum-set quantizer D1&D3, a subsequent
quantizer is D1 (the state shifts to D2&D0) or D3 (the state shifts to D3&D1).
For the sum-set quantizer D3&D1, a subsequent quantizer is D3 (the state
shifts to D2&D0) or D1 (the state shifts to D3&D1). These restraints have been
pre-set, and for the sum-set quantizer D0&D2, the subsequent quantizer D0
corresponds to the state D0&D2, while D2 corresponds to the state D1&D3. For
the sum-set quantizer D2&D0, the subsequent quantizer D2 corresponds to the
state D0&D2, while D0 corresponds to D1&D3. For the sum-set quantizer D1&D3,
the subsequent quantizer D1 corresponds to D2&D0, while D3 corresponds to
D3&D1. For the sum-set quantizer D3&D1, the subsequent quantizer D3
corresponds to the state D2&D0, while D1 corresponds to the state D3&D1.
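A minimal Viterbi sketch of this trellis search follows. The state names and the allowed transitions are taken from the rules above; the codebooks of D0 to D3 are an assumption (a standard union-of-cosets layout), since FIG. 3 is not legible in this copy, and the free choice of starting state is a further simplification:

```python
DELTA = 0.5
# Assumed codebooks: quantizer Dj reconstructs to the levels (4k + j) * DELTA.
def nearest_level(j, x):
    k = round((x / DELTA - j) / 4.0)
    level = (4 * k + j) * DELTA
    return level, (x - level) ** 2

# Allowed transitions quoted above: state -> [(quantizer used, next state), ...]
TRANSITIONS = {
    "D0&D2": [(0, "D0&D2"), (2, "D1&D3")],
    "D2&D0": [(2, "D0&D2"), (0, "D1&D3")],
    "D1&D3": [(1, "D2&D0"), (3, "D3&D1")],
    "D3&D1": [(3, "D2&D0"), (1, "D3&D1")],
}

def tcq_quantize(signal):
    """Viterbi search over the 4-state trellis: for every element choose a
    quantizer allowed by the current state so that the accumulated squared
    error over the whole input vector is minimal."""
    survivors = {s: (0.0, []) for s in TRANSITIONS}     # state -> (cost, levels so far)
    for x in signal:
        nxt_survivors = {}
        for state, (cost, levels) in survivors.items():
            for j, nxt in TRANSITIONS[state]:
                level, err = nearest_level(j, x)
                candidate = (cost + err, levels + [level])
                if nxt not in nxt_survivors or candidate[0] < nxt_survivors[nxt][0]:
                    nxt_survivors[nxt] = candidate
        survivors = nxt_survivors
    return min(survivors.values(), key=lambda c: c[0])

total_error, levels = tcq_quantize([0.5, -3, -3, 0.25, -2.5])   # the example WS
print(total_error, levels)
```

Running it on the 5-element example WS returns the minimal accumulated squared error and the chosen reconstruction levels; with other codebooks the selected path would differ.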
A restraint length of the trellis transition branches in the trellis diagram
corresponds to the number of elements of the signal (in this embodiment, the
restraint length is 5, since the number of the elements is 5), and the MSE is
calculated as the cost in all combinations of the trellis transitions. The
path along which the sum of the cost becomes minimal is selected, and the
quantizers are selected by the path to carry out quantization. The cost is
basically calculated for all paths. However, when the cumulative cost of a
path being calculated exceeds the minimum cost of another path having been
calculated, the cost calculation is interrupted and the cost of a subsequent
path is then calculated, for example. In this manner, the operation speed is
increased. In FIG. 5, the cost at all pa
