DECLARATION OF TRANSLATOR

I, Corey Colling, hereby declare as follows:

1. My name is Corey Colling. I provide this declaration on behalf of Park IP at the request of Unified Patents, LLC.

2. I am a professional translator, fluent in Korean and English. I work as a contractor for Park IP.

3. I reviewed the original Korean version of Korean Patent KR 0135364 B1 and prepared the English translation attached hereto. The English translation is a true, complete, and accurate translation of KR 0135364 B1.

4. In signing this declaration, I recognize that the declaration will be filed as evidence in a case before the Patent Trial and Appeal Board of the United States Patent and Trademark Office. I also recognize that I may be subject to cross-examination in the case and that cross-examination will take place within the United States. If cross-examination is required of me, I will appear for cross-examination within the United States during the time allotted for cross-examination.

5. All statements made herein of my own knowledge are true and all statements made on information and belief are believed to be true. These statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both (18 U.S.C. § 1001).

Dated: December 28, 2020

Corey Colling
15 W. 37th Street, 8th Floor
New York, NY 10018
212.581.8870
ParkIP.com
(19) Korean Intellectual Property Office (KR)
(12) Publication of Registration (B1)

(51) Int. Cl.6: H04N 7/133
(21) Application No.: Patent 1994-025774 (1019940025774)
(22) Application date: October 8, 1994
(11) Registration No.: Patent 0135364
(24) Registration date: January 13, 1998
(65) Publication No.: Patent 1996-016550
(43) Publication date: May 22, 1996

(73) Assignee(s): KOREAN BROADCASTING SYSTEM
HONG, Doo-Pyo
18, Yeouido-dong, Yeongdeungpo-gu, Seoul

(72) Inventor(s):
LEE, Jong-Hwa
1460-2, Sillim 5-dong, Gwanak-gu, Seoul
MOON, Jong-Hwan
#2-710, Jangmi Apt., 7, Sincheon-dong, Songpa-gu, Seoul
WON, Hui-Seon
11/1, 115, Seongbuk-dong 1-ga, Seongbuk-gu, Seoul
DO, Myeong-Gyu
366-12, Sindaebang-dong, Dongjak-gu, Seoul
YANG, Gyeong-Ho
#106-201, Woosung Apt., 365, Singil 3-dong, Yeongdeungpo-gu, Seoul
KANG, Soo-Won
11-8, Sillim 9-dong, Gwanak-gu, Seoul

(74) Agent(s): KIM Byung-Jin, BAIK Myung-Ja

Examiner: KWON, Jang-Woo (Booklet Gazette No. 5341)

(54) METHOD AND APPARATUS FOR ENCODING DCT BLOCKS USING BLOCK-ADAPTING SCAN
Abstract

The present invention relates to a method and apparatus for encoding DCT blocks using block-adapting scan. In particular, the encoder (100) is composed of a scan pattern storage unit (11), a DCT conversion unit (12), a feature extraction unit (13), a scan pattern determination unit (14), a processor unit (15), and an entropy encoding unit (16), and the decoder (200) is composed of a scan pattern storage unit (21), a DCT conversion unit (22), a feature extraction unit (23), a scan pattern determination unit (24), an entropy decoding unit (25), and a processor unit (26). The efficiency can thereby be greatly improved by selecting an arbitrary set of scan patterns, analyzing the features of each DCT block of an image, and applying a suitable scan pattern, instead of selecting a scan pattern reflecting the general features of an image and applying it uniformly as in the prior art.

Representative figure

FIG. 1

Specifications

[Title of the Invention]

Method and Apparatus for Encoding DCT Blocks Using Block-Adapting Scan

[Brief description of the drawings]

FIG. 1 is an original image taken by a camera.

FIG. 2 is an image obtained by motion-compensating the current image from the previous image.

FIG. 3 is a difference image when the current image is predicted with the motion compensated image.

FIG. 4 is an exemplary diagram of an embodiment of a set of scan patterns adopted in the present invention.

FIG. 5 is a diagram showing the DCT coefficients used for feature extraction.

FIG. 6 is a flowchart of an encoder for explaining the method of the present invention.

FIG. 7 is a flowchart of a decoder for explaining the method of the present invention.

FIG. 8 is a system configuration diagram of an encoder in the apparatus of the present invention.

FIG. 9 is a system configuration diagram of a decoder in the apparatus of the present invention.

* Description of reference numerals for major parts of the drawings
11, 21: Scan pattern storage unit
12, 22: DCT conversion unit
13, 23: Feature extraction unit
14, 24: Scan pattern determination unit
15, 26: Processor unit
16: Entropy encoding unit
25: Entropy decoding unit
100: Encoder
200: Decoder

[Detailed description of the invention]
The present invention relates to a coding method for compressing and restoring image data with a very large amount of information, such as high-definition TV (HDTV), that is, to a quantization scanning technique for DCT (Discrete Cosine Transform) coefficients in an image encoding and decoding process, and in particular, to a method and apparatus for encoding DCT blocks using a block-adaptive scan to increase the efficiency of variable-length coding by adaptively changing the scan pattern.

In general, image data compression draws a great deal of interest in various application fields, such as broadcasting, communication, and storage of video signals. Since the processing of digital signals has many advantages in data compression applications over the processing of analog signals, the main focus of the image transmission field is on encoding digital images.

Furthermore, this trend is gaining more attention with the introduction of digital HDTV systems and image transmission over networks, and the standardization of image encoding methods in various application fields is being studied. Examples include CCITT Recommendation H.261 and MPEG.

Most of these systems encode a video signal using motion compensation and the DCT (MC-DCT encoder), and when motion compensation and DCT are performed, the temporal and spatial redundancy of the video data is removed.
In a general MC-DCT encoder, a motion compensation prediction difference image is conceptually encoded in the following three steps (the first two steps are sketched in the example below):

1. 2D DCT of a motion compensation prediction difference image

2. Scalar quantization of the DCT coefficients

3. Entropy encoding of the quantized coefficients
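As a rough sketch of steps 1 and 2 above, the following Python fragment applies a 2D DCT to an 8x8 motion compensation prediction difference block and scalar-quantizes the coefficients. The 8x8 block size, the use of scipy.fft.dctn, and the single quantization step size q_step are illustrative assumptions and are not specified by the patent.

import numpy as np
from scipy.fft import dctn  # 2D DCT-II (an assumed implementation choice)

def dct_and_quantize(diff_block, q_step=16.0):
    # Step 1: 2D DCT of the motion compensation prediction difference block
    coeffs = dctn(diff_block, norm="ortho")
    # Step 2: uniform scalar quantization with a hypothetical step size
    return np.round(coeffs / q_step).astype(int)

# Illustrative smooth 8x8 difference block; most coefficients quantize to zero
block = np.add.outer(np.arange(8.0), np.arange(8.0))
print(dct_and_quantize(block))

For a smooth block like this one, the energy concentrates in a few low-frequency coefficients, which is the sparsity that the run-length encoding described next exploits.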
Since most of the DCT coefficients are quantized to zero and the non-zero coefficients are sparsely distributed, an effective coding technique called "run-length encoding" is often used to exploit this characteristic. In run-length encoding, entropy encoding is performed on runs of consecutive zeros together with the non-zero values that follow them.

For effective run-length encoding, a scan pattern should be chosen that makes the entropy and the average code length small, and from this point of view, a zigzag scan is generally an effective scan pattern.
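As a minimal sketch of the scan and run-length step, the code below scans an already-quantized 8x8 block in one common zigzag order and emits (zero-run, level) pairs followed by an end-of-block marker. The pair format and the EOB symbol are assumptions for illustration; the actual variable-length code tables are not reproduced here.

import numpy as np

def zigzag_order(n=8):
    # (row, col) positions of an n x n block in zigzag scan order
    return sorted(((r, c) for r in range(n) for c in range(n)),
                  key=lambda rc: (rc[0] + rc[1],
                                  rc[0] if (rc[0] + rc[1]) % 2 else rc[1]))

def run_length_encode(q_block):
    # Emit (number of preceding zeros, non-zero level) along the scan
    pairs, run = [], 0
    for r, c in zigzag_order(q_block.shape[0]):
        v = int(q_block[r, c])
        if v == 0:
            run += 1
        else:
            pairs.append((run, v))
            run = 0
    pairs.append("EOB")  # hypothetical end-of-block symbol
    return pairs

q = np.zeros((8, 8), dtype=int)
q[0, 0], q[0, 1], q[2, 0] = 12, -3, 5
print(run_length_encode(q))  # [(0, 12), (0, -3), (1, 5), 'EOB']

Grouping the non-zero coefficients early in the scan keeps the zero runs short, which is why the choice of scan pattern matters for the resulting code length.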
Since the MC-DCT encoder allocates approximately 90% of the total number of bits to encoding the quantized DCT coefficients, it is very important to represent them in a form that can be encoded efficiently.

For this purpose, several methods for analyzing the features of DCT coefficients have been proposed, but most of the image encoding scan methods developed so far adopt a scan pattern that reflects the general features of an image and apply it uniformly to each image, which lowers efficiency.
Unlike the conventional methods, the present invention provides a method and apparatus for encoding DCT blocks using block-adapting scan capable of improving efficiency by determining an arbitrary set of scan patterns, analyzing the features of each DCT block of an image, and applying a suitable scan pattern to each block.

In other words, the present invention is a method for encoding quantized DCT blocks that reduces the number of bits by adaptively changing the scan pattern of each block. The scan pattern is selected so as to make the run lengths as short as possible; in general, the scan pattern for a block of the motion compensation prediction difference image is determined using the corresponding block of the motion compensated image, which has similar contour features.

The most important characteristic of the present invention is that it uses the motion compensated image, which is present in both the encoder and the decoder, to analyze the block features of an image, so that no other additional information is required, and that it is compatible with the international standard MPEG-2.

When the method according to the present invention is applied, a reduction of approximately 5% to 10% in bit rate can be expected, depending on the image.

Hereinafter, the present invention will be described in detail with reference to the accompanying figures.
FIGS. 6 and 7 are flowcharts for explaining the method of the present invention. The process for encoding an image is composed of: a step for initializing a block address with the first block of an image to be encoded (S1); a step for DCT-converting the current block indicated by the block address in the motion compensated image (S2); a step for calculating a value (t) of the extraction function (F) to identify the contour features of the DCT block generated in step (S2) (S3); a step for determining (I) a scan pattern suitable for the current DCT block from the defined set of scan patterns, using the t value calculated in step (S3) and the selection function (X) (S4); a step for DCT-converting and quantizing the current block of the motion compensation prediction difference image (S5); a step for coding the DCT coefficients of the block generated in step (S5) in variable-length in the order of the scan pattern determined in step (S4) and outputting them as a bit string (S6); a step for increasing the address of the block (S7); and a step for detecting whether it is the end of the valid block, ending the encoding process if it is the end, and repeating the process from step (S2) if it is not the end of the valid block (S8).

Also, the process for decoding an image is composed of: a step for initializing a block address with the first block of an image to be decoded (S11); a step for DCT-converting the current block indicated by the block address in the motion compensated image (S12); a step for calculating the value (t) of the extraction function (F) to identify the contour features of the DCT block generated in step (S12) (S13); a step for determining (I) a suitable scan pattern from the defined set of scan patterns, using the t value calculated in step (S13) and the selection function (X) (S14); a step for decoding the compressed bit string coming from the encoder in variable-length and outputting DCT coefficients (S15); a step for reconstructing the block with the DCT coefficients decoded in step (S15) by referring to the scan pattern determined in step (S14) (S16); a step for increasing the address of the block (S17); and a step for detecting whether it is the end of the valid block, ending the decoding process if it is the end, and repeating the process from step (S12) if it is not the end of the valid block (S18).
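The encoding loop (steps S1 to S8) can be summarized in the following Python-style sketch. The helpers dct2, quantize, extract_feature, select_pattern, and scan_and_entropy_code are hypothetical placeholders for the units described below; only the control flow follows the flowchart of FIG. 6.

def encode_image(mc_blocks, diff_blocks, scan_pattern_set,
                 dct2, quantize, extract_feature, select_pattern,
                 scan_and_entropy_code):
    # S1: start from the first block; S7/S8: advance until the last valid block
    bitstream = []
    for addr in range(len(diff_blocks)):
        mc_dct = dct2(mc_blocks[addr])                   # S2: DCT of the MC block
        t = extract_feature(mc_dct)                      # S3: t = F(P)
        pattern = scan_pattern_set[select_pattern(t)]    # S4: I = X(t)
        q_block = quantize(dct2(diff_blocks[addr]))      # S5: DCT + quantize the difference block
        bitstream.append(scan_and_entropy_code(q_block, pattern))  # S6: scan and variable-length code
    return bitstream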
FIGS. 8 and 9 show an embodiment of an encoder and a decoder in the apparatus of the present invention. The encoder (100) is composed of a scan pattern storage unit (11) for storing data of an arbitrary set of scan patterns; a DCT conversion unit (12) for receiving the current block of a motion compensated image as input and DCT-converting it; a feature extraction unit (13) for receiving a DCT block from the DCT conversion unit (12) and calculating the value of an extraction function (F) to identify the contour features of the block; a scan pattern determination unit (14) for determining (I) a scan pattern based on the value of the extraction function (F) output from the feature extraction unit (13); a processor unit (15) for referring, in the scan pattern storage unit (11), to the scan pattern determined by the scan pattern determination unit (14) and applying it to the quantized DCT block of the motion compensation prediction difference image; and an entropy encoding unit (16) for variable-length encoding the DCT coefficients coming in from the processor unit (15). The decoder (200) is composed of a scan pattern storage unit (21) for storing data of an arbitrary set of scan patterns; a DCT conversion unit (22) for receiving the current block of a motion compensated image as input and DCT-converting it; a feature extraction unit (23) for calculating the value of an extraction function (F) to identify the contour features of the block; a scan pattern determination unit (24) for determining (I) the scan pattern based on the value of the extraction function (F) output from the feature extraction unit (23); an entropy decoding unit (25) for variable-length decoding the compressed bit string; and a processor unit (26) for referring, in the scan pattern storage unit (21), to the scan pattern determined by the scan pattern determination unit (24) and reconstructing the DCT coefficients coming in from the entropy decoding unit (25).
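Mirroring the decoder just described, the sketch below reconstructs one quantized DCT block from the decoded coefficients using the scan pattern derived from the motion compensated block. The helpers dct2, extract_feature, select_pattern, and entropy_decode are hypothetical placeholders, and a scan pattern is assumed to be a list of (row, column) positions.

import numpy as np

def decode_block(mc_block, bit_string, scan_pattern_set,
                 dct2, extract_feature, select_pattern, entropy_decode):
    t = extract_feature(dct2(mc_block))            # feature extraction unit (23)
    pattern = scan_pattern_set[select_pattern(t)]  # scan pattern determination unit (24)
    coeffs = entropy_decode(bit_string)            # entropy decoding unit (25)
    block = np.zeros((8, 8), dtype=int)            # processor unit (26): put the
    for value, (r, c) in zip(coeffs, pattern):     # coefficients back in scan order
        block[r, c] = value
    return block

Because the scan pattern is recomputed from the motion compensated image on the decoder side, no extra side information about the chosen pattern needs to be transmitted.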
The effects of the present invention are as follows.

First, the basic idea of the present invention is that the contour features of the original image captured by the camera, the motion compensated image, and the motion compensation prediction difference image are similar to one another. FIGS. 1, 2, and 3 are examples showing the original image, the motion compensated image, and the motion compensation prediction difference image described above.

In addition, in order to apply the block-adapting scan method, an arbitrary set of scan patterns reflecting the characteristics of the DCT coefficients must be defined according to the direction of the contour.

FIG. 4 illustrates a set of five scan patterns; various other scan pattern sets can also be configured.

The sets of scan patterns are stored in the scan pattern storage units (11)(21) installed in the encoder (100) and the decoder (200), respectively, and when a scan pattern is determined (I) in the scan pattern determination units (14)(24), the processor units (15)(26) refer to it.

On the other hand, the present invention uses the motion compensated image, which is commonly available in the encoder (100) and the decoder (200), to determine a suitable scan pattern from the set of scan patterns for each DCT-converted block of the motion compensation prediction difference image; the block of the motion compensated image is DCT-converted, and its coefficients are analyzed and used for efficient and simple extraction of the contour features.
In other words, the DCT conversion units (12)(22) receive the current block indicated by the block address in the motion compensated image and perform DCT conversion.

In addition, the feature extraction units (13)(23), which receive the output signals of the DCT conversion units (12)(22), calculate the value (t) of the extraction function (F) to identify the contour features of the block.

Also, the scan pattern determination units (14)(24) determine (I) the scan pattern based on the extraction function (F) value (t) output from the feature extraction units (13)(23) and the selection function (X).

FIG. 5 shows the locations of the DCT coefficients used for feature extraction in each of the feature extraction units (13)(23); here, Hi refers to the positions of the coefficients indicating the horizontal contour feature, and Vi refers to those indicating the vertical contour feature. When the scan pattern selected for P, a block of the DCT-converted motion compensated image, is denoted I(P), it is defined as follows.

I(P) = X(F(P))

Here, F is a function for extracting the required features from P, and X is a selection function that maps the value calculated by F(P) to a scan pattern.

The function F for extracting the contour features is defined as the ratio of horizontal to vertical energy, as follows.
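Based on the description of F as a ratio of horizontal to vertical energy over the coefficient positions Hi and Vi shown in FIG. 5, one plausible form of the extraction function is the following; the use of squared magnitudes and the orientation of the ratio are assumptions rather than the exact definition given in the original gazette.

t = F(P) = (Σi Hi²) / (Σi Vi²)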
In this case, it is preferable that the scan pattern selection function X be determined so as to select No. 1 or No. 2 for a vertical contour and No. 4 or No. 5 for a horizontal contour from the scan pattern set of FIG. 4. The selection function X for the set of scan patterns in FIG. 4 is defined as follows.
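A plausible threshold form of X, consistent with the statement above and with the sketch of F given earlier, is the following; the particular pattern numbers assigned to each range and the direction of the comparisons with the thresholds are assumptions.

X(t) = 1 (or 2),  if t < ε1   (vertical contour)
       3,         if ε1 ≤ t ≤ ε2
       5 (or 4),  if t > ε2   (horizontal contour)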
ε1 and ε2 in the above equation are threshold values determined in consideration of the motion compensation prediction difference image and the extraction function (F).
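Putting the two functions together, a compact Python sketch of the per-block pattern selection is shown below. The coefficient positions assumed for Hi and Vi (the first row and the first column of the DCT block, excluding the DC term) and the threshold values are illustrative assumptions; FIG. 5 defines the actual positions, and the thresholds are chosen as described above.

import numpy as np

def extraction_function(mc_dct_block):
    # F(P): ratio of horizontal to vertical energy of selected coefficients
    h = mc_dct_block[0, 1:]   # assumed Hi positions (horizontal contour feature)
    v = mc_dct_block[1:, 0]   # assumed Vi positions (vertical contour feature)
    return float(np.sum(h ** 2) / (np.sum(v ** 2) + 1e-9))

def selection_function(t, eps1=0.5, eps2=2.0):
    # X(t): map the feature value to a scan pattern number of FIG. 4
    if t < eps1:
        return 1   # vertical contour: pattern No. 1 (or No. 2)
    if t > eps2:
        return 5   # horizontal contour: pattern No. 5 (or No. 4)
    return 3       # otherwise a neutral pattern such as No. 3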
In addition, the processor unit (15) in the encoder (100) receives the scan pattern determined by the scan pattern determination unit (14) from the scan pattern storage unit (11) and applies it to the current quantized DCT block indicated by the block address in the motion compensation prediction difference image.

When the quantized DCT block is output from the processor unit (15) in the order of the scan pattern in this manner, the entropy encoding unit (16) receiving this input codes the DCT coefficients in variable-length, outputs them as a bit string, and then increases the address of the block, and this process is repeated until the valid blocks are finished.

On the other hand, the entropy decoding unit (25) in the decoder (200) decodes the bit string coming from the encoder in variable-length and outputs DCT coefficients.

The processor unit (26), receiving the DCT coefficients from the entropy decoding unit (25), reconstructs the DCT block by referring to the scan pattern selected from the scan pattern storage unit (21) and then increases the block address, and this process is repeated until the valid blocks are finished.

As described above, according to the present invention, the efficiency can be greatly improved by selecting an arbitrary set of scan patterns, analyzing the features of each DCT block of an image, and applying a suitable scan pattern, instead of uniformly applying a scan pattern reflecting the general features of an image as in the prior art.
(57) Scope of claims

Claim 1

A method for encoding DCT blocks using block-adapting scan in a system to which a method for encoding a predetermined image and decoding the encoded image back to the original image is applied, wherein the image encoding process is composed of: a step for initializing a block address with the first block of an image to be encoded (S1); a step for DCT-converting the current block indicated by the block address in the motion compensated image (S2); a step for calculating a value (t) of the extraction function (F) to identify the contour features of the DCT block generated in step (S2) (S3); a step for determining (I) a scan pattern suitable for the current DCT block from the defined set of scan patterns, using the t value calculated in step (S3) and the selection function (X) (S4); a step for DCT-converting and quantizing the current block of the motion compensation prediction difference image (S5); a step for coding the DCT coefficients of the block generated in step (S5) in variable-length according to the scan pattern determined in step (S4) (S6); a step for increasing the address of the block (S7); and a step for detecting whether it is the end of the valid block, ending the encoding process if it is the end, and repeating the process from step (S2) if it is not the end of the valid block (S8); and wherein the image decoding process is composed of: a step for initializing a block address with the first block of an image to be decoded (S11); a step for DCT-converting the current block indicated by the block address in the motion compensated image (S12); a step for calculating the value (t) of the extraction function (F) to identify the contour features of the DCT block generated in step (S12) (S13); a step for determining (I) a suitable scan pattern from the defined set of scan patterns, using the t value calculated in step (S13) and the selection function (X) (S14); a step for decoding the compressed bit string in variable-length and outputting DCT coefficients (S15); a step for reconstructing the block with the DCT coefficients decoded in step (S15) by referring to the scan pattern determined in step (S14) (S16); a step for increasing the address of the block (S17); and a step for detecting whether it is the end of the valid block, ending the decoding process if it is the end, and repeating the process from step (S12) if it is not the end of the valid block (S18).
Claim 2

The method for encoding DCT blocks using the block-adapting scan of claim 1, wherein the determination of the scan pattern (I(P)) for the block (P) of the DCT-converted motion compensated image is performed by calculating the extraction function (F) for the block (P) and is defined by the value of the scan pattern selection function (X) for that value (I(P) = X(F(P))).
Claim 3

The method for encoding DCT blocks using the block-adapting scan of claim 1, wherein the extraction function (F) is defined by

(provided that Hi is the horizontal contour feature and Vi is the vertical contour feature)

Claim 4

The method for encoding DCT blocks using the block-adapting scan of claim 1, wherein the selection function (X) is determined by threshold values (ε1, ε2) determined in consideration of the motion compensation prediction difference image and the extraction function (F).
Claim 5

An apparatus for encoding DCT blocks using a block-adapting scan, composed of an encoder (100) composed of: a scan pattern storage unit (11) for storing data of an arbitrary set of scan patterns; a DCT conversion unit (12) for receiving the block of a motion compensated image and DCT-converting it; a feature extraction unit (13) for receiving the output signal of the DCT conversion unit (12) as input and calculating the value of an extraction function (F) to identify the contour features of the block; a scan pattern determination unit (14) for determining (I) the scan pattern based on the value of the extraction function (F) output from the feature extraction unit (13); a processor unit (15) for bringing the scan pattern determined by the scan pattern determination unit (14) from the scan pattern storage unit (11) and applying it to the quantized DCT block of the motion compensation prediction difference image; and an entropy encoding unit (16) for variable-length encoding the DCT coefficients input from the processor unit (15); and a decoder (200) composed of: a scan pattern storage unit (21) for storing data of an arbitrary set of scan patterns; a DCT conversion unit (22) for receiving the current block of a motion compensated image as input and DCT-converting it; a feature extraction unit (23) for calculating the value of an extraction function (F) to identify the contour features of the block; a scan pattern determination unit (24) for determining (I) the scan pattern based on the value of the extraction function (F) output from the feature extraction unit (23); an entropy decoding unit (25) for variable-length decoding the compressed bit string; and a processor unit (26) for referring, in the scan pattern storage unit (21), to the scan pattern determined by the scan pattern determination unit (24) and reconstructing the DCT coefficients decoded by the entropy decoding unit (25) into blocks according to the selected scan pattern.
Figures

Figure 1

Figure 2

Figure 3
Figure 4

No. 1   No. 2   No. 3   No. 4   No. 5

Figure 5
Figure 6

Start
Initialize the block address of the image
DCT-convert the current block of the motion compensated image
Calculate value t of extraction function F for the block
Determine the suitable scan pattern from a set of scan patterns by the value of t and selection function X
DCT-convert and quantize the current block of the motion compensation prediction difference image
Variable-length code the DCT coefficients in the order of the determined scan pattern and output the bit string
Increase the block address
Is it the end of valid block?
No
Yes
End

Figure 7

Start
Initialize the block address of the image
DCT-convert the current block of the motion compensated image
Calculate value t of extraction function F for the block
Determine the suitable scan pattern from a set of scan patterns by the value of t and selection function X
Variable-length decode the encoded bit string
Reconstruct the block by the scan pattern with the decoded DCT coefficients
Increase the block address
Is it the end of valid block?
No
Yes
End
Figure 8

Figure 9