`
`
`
`
`
`
`
`
`
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`
`
`UNIFIED PATENTS INC.,
`Petitioner,
`
`v.
`
`GE VIDEO COMPRESSION, LLC,
`Patent Owner.
`
`
`Case No. IPR2019-00726
`U.S. Patent No. 6,943,710
`
`
`
`PATENT OWNER’S PRELIMINARY RESPONSE
`PURSUANT TO 37 C.F.R. § 42.107(a)
`
`
`
`
`
`
`
TABLE OF CONTENTS

                                                                      Page(s)

I.    The ʼ710 Patent Owner & Inventors ................................ 2

II.   Introduction To The Use of Arithmetic Coding To Compress &
      Decompress Digital Data ......................................... 4

      A.   Data Compression ........................................... 5
      B.   Entropy Coding ............................................. 8
      C.   Arithmetic Coding .......................................... 8
           1.   Binary Arithmetic Encoding ............................ 9
           2.   Binary Arithmetic Decoding ........................... 17
      D.   Multialphabet Arithmetic Coding ........................... 19
      E.   Adaptive Arithmetic Coding ................................ 20

III.  The ’710 Patent ................................................ 21

      A.   Prior Art Arithmetic Coding Techniques Suffered From
           Numerous Drawbacks ....................................... 23
           1.   Arithmetic Coding Techniques Requiring Extensive
                Mathematical Operations Were Too Resource and Time
                Intensive ........................................... 23
           2.   Arithmetic Coding Techniques That Avoided Extensive
                Mathematical Operations Resulted In Less Efficient
                Compression ......................................... 24
                a.   The JPEG QM Coder (Ground 2’s Kimura Reference) . 24
                b.   The Quasi-Arithmetic Coder (Ground 1’s Howard
                     Reference) ..................................... 27
      B.   The ’710 Patent’s Adaptive Binary Arithmetic Coding
           Techniques Solved The Foregoing Problems To Provide Both
           Fast And Efficient Compression ........................... 28
           1.   The Current Interval Width Is Mapped To A
                Quantization Index .................................. 30
           2.   The Probability Is Represented By A Probability
                Index ............................................... 31
           3.   The Mapped Quantization Index and The Probability
                Index Are Used As Pointers Into The Interval
                Division Table To Obtain The Partial Interval Width . 32
      C.   The Challenged ’710 Patent Claims ......................... 33

IV.   Unified’s Petition & Grounds ................................... 35

      A.   Ground 1: Howard & Printz ................................. 35
           1.   Howard .............................................. 35
                a.   Overview of Howard ............................. 37
                b.   Howard’s Table 3 Quasi-Arithmetic Coder ......... 39
           2.   Printz .............................................. 43
                a.   Printz Background .............................. 44
                b.   Printz’s Tables Do Not Input Probabilities or
                     Output Partial Interval Widths ................. 46
                c.   Printz Does Not Map A Current Interval Width To
                     A Quantization Index ........................... 47
           3.   Ground 1’s Proposed Howard & Printz Combination ..... 48
      B.   Ground 2: Kimura & Printz ................................. 51
           1.   Kimura .............................................. 51
                a.   Kimura Background & Overview ................... 51
                b.   Kimura’s Figure 42 JPEG Q Coder ................ 51
           2.   Ground 2’s Proposed Kimura & Printz Combination ..... 53

V.    The Petition Does Not Establish A Reasonable Likelihood That
      Unified Can Prevail On Ground 1’s Proposed Combination of
      Howard & Printz ................................................ 56

      A.   Missing Limitation: “mapping the current interval width to
           a quantization index from a plurality of representation
           quantization indices” .................................... 57
      B.   Missing Limitation: “a probability represented by a
           probability index” ....................................... 59
      C.   Missing Limitation: “accessing an interval division table
           using the quantization index and the probability index” .. 61
      D.   Missing Limitation: “accessing an interval division table
           … to obtain a partial interval width value” .............. 62

VI.   The Petition Does Not Establish A Reasonable Likelihood That
      Unified Can Prevail On Ground 2’s Proposed Combination of
      Kimura & Printz ................................................ 65

      A.   Missing Limitation: “mapping the current interval width to
           a quantization index from a plurality of representation
           quantization indices” .................................... 65
      B.   Missing Limitation: “accessing an interval division table
           using the quantization index and the probability index” .. 66

VII.  CONCLUSION ..................................................... 68
`
`
`
`
`
TABLE OF AUTHORITIES

                                                                      Page(s)

Cases

In re Stepan Co.,
   868 F.3d 1342 (Fed. Cir. 2017) ................................... 60

Vivid Techs., Inc. v. Am. Sci. & Eng’g, Inc.,
   200 F.3d 795 (Fed. Cir. 1999) .................................... 34

Wellman, Inc. v. Eastman Chem. Co.,
   642 F.3d 1355 (Fed. Cir. 2011) ................................... 34

Statutes

35 U.S.C. § 103 ...................................................... 35

Regulations

37 C.F.R. § 42.24(d) ................................................. 69
`
`
`
`
`
TABLE OF EXHIBITS

Exhibit   Description

2001      Wien, Mathias – High Efficiency Video Coding

2002      Sayood, Khalid – Introduction to Data Compression

2003      Taubman et al. – JPEG2000 Image Compression Fundamentals,
          Standards and Practice

2004      Thomas Wiegand – Wikipedia

2005      Detlev Marpe – Wikipedia
`
`
`
`
`The Board should deny Unified’s Petition because it fails to establish the
`
`required reasonable likelihood of success under either Ground 1 or Ground 2.
`
`The challenged U.S. Patent No. 6,943,710 discloses and claims improved
`
`techniques for using “binary arithmetic coding” to compress data. The goal of
`
`implementing arithmetic coding in a manner that successfully balanced the need
`
`for both speed and compression efficiency was the subject of extensive effort and
`
`research. But despite all of these efforts, it was the ʼ710 patent’s inventors and
`
`subject matter that solved this long-felt need, disclosing and claiming techniques
`
`that successfully avoided the need to perform lengthy mathematical calculations
`
`while still achieving efficient compression.
`
`This new technique was so successful that it was incorporated into the
`
`leading video compression standards, the AVC/H.264 standard and its improved
`
successor, the HEVC/H.265 standard. Accordingly, the ʼ710 Patent is recognized
`
`as a Standard-Essential Patent for each of those leading standards, which directly
`
`compete with the VP9 video compression standard developed and advocated by
`
`Google (which is a member of Petitioner Unified, but is not named as a real-party-
`
`in-interest in the Petition).
`
`Unified’s Petition relies on three compression coders, each of which was
`
`before the Patent Examiner during the ʼ710 patent’s prosecution. Unified concedes
`
`that neither of the primary references (Howard and Kimura) teaches limitations
`
`1
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`such as the “mapping the current interval width to a quantization index” or
`
`“accessing an interval division table using the quantization index and the
`
`probability index to obtain a partial interval width value,” each required by all of
`
`the challenged claims. Unified instead attempts to rely on a third reference—
`
`Printz—to fill these deficiencies, but Printz likewise does not teach each of these
`
`required but missing limitations.
`
`Unified’s arguments to the contrary are wholly unsupported. For example,
`
`the Petition asserts without explanation that Printz discloses mapping a current
`
`interval width to a quantization index, but cites no such disclosure in Printz. It
`
`cannot because Printz explicitly does not calculate or obtain a current interval
`
`width in the first instance, much less then map it to a quantization index.
`
`This and other deficiencies of Printz mean that it cannot remedy the
`
`admitted failings of the primary Howard and Kimura references. Each of these
`
`failures provides a separate and independent reason why Unified’s Petition should
`
`be denied in its entirety.
`
`I.
`
`The ʼ710 Patent Owner & Inventors
`
`The ʼ710 patent is owned by Patent Owner GE Video Compression LLC, a
`
`General Electric Company and a leader in developing video compression
`
`technology.
`
`2
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`The inventors of the ’710 patent, Dr. Detlef Marpe and Dr. Thomas
`
`Wiegand, are leaders in the video compression field, having authored numerous
`
textbooks and peer-reviewed articles as well as making substantial contributions to
`
standard-setting organizations. Dr. Marpe and Dr. Wiegand were each named a
`
`Fellow of the Institute of Electrical and Electronics Engineers for their
`
`contributions to video compression and coding, and they have jointly won
`
`numerous awards for their work, including the Fraunhofer Award, the ITG Award
`
`of German Society for Information Technology, and the Karl Heinz Beckurts
`
`Award. (Ex. 2004; Ex. 2005).
`
`Dr. Wiegand also has received, among other recognitions, an ITU150 Award
`
`(the other ITU150 awards went to Bill Gates, Robert E. Kahn, Mark I. Krivocheev,
`
`Martin Cooper, and Ken Sakamura), a Primetime Emmy Engineering Award, the
`
IEEE Masaru Ibuka Consumer Electronics Award, and the International

Multimedia Telecommunications Consortium Award. (Ex. 2004). In 2006, the
`
`video coding work he co-led for the International Telecommunication Union
`
Standardization Sector was voted the most influential in that organization’s 50-year
`
`history. (Ex. 2004).
`
`3
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`II.
`
`Introduction To The Use of Arithmetic Coding To Compress &
`Decompress Digital Data
`
`The challenged ʼ710 patent generally concerns methods for compressing
`
`(and then decompressing) digital data. In particular, the patent discloses and
`
`claims improved methods and programs for using “arithmetic coding” to “encode”
`
`binary data into a more compact form for transmission or, conversely, decoding
`
`such encoded data to transform it back to its original, larger form. In this manner,
`
`for example, video data can be compressed for transmission, and then, upon
`
`receipt, decompressed for display.
`
`Several concepts are important to understanding the inventions of the ʼ710
`
`patent (and the significantly different approaches in the Petition’s asserted
`
`references), including:
`
` data compression,
`
` arithmetic coding,
`
` binary arithmetic coding,
`
` multialphabet arithmetic coding, and
`
` adaptive arithmetic coding.
`
`The following primer explains each of these concepts.
`
`4
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`A. Data Compression
`
`In their raw form, video, audio, and other digital files contain an incredible
`
`amount of data. (Ex. 2001 at 5-6). Because of their enormous size, transmitting
`
`such files in their raw form requires a great deal of time and resources. (Ex. 2001
`
`at 5-6). It is therefore highly desirable to reduce the amount of data that needs to
`
`be transmitted to adequately reproduce the information included in an original file.
`
`(Ex. 2001 at 5-6). This reduction in the amount of data used to convey information
`
`is known as “compression.” (Ex. 2001 at 23-28).
`
`Compression (also referred to as “encoding”) is thus the modification of raw
`
`data into a more compact form that can adequately represent the original
`
`information (such as video images and audio). (Ex. 2001 at 23-28). Once
`
`compressed and then transmitted, the received compressed data can then be
`
`decompressed (or “decoded”) for display or other use. (Ex. 2001 at 23-28).
`
`There are many different ways to compress video and other data for
`
`transmission. (Ex. 2001 at 23-71). One general approach focuses on reducing the
`
`amount of redundant data that is transmitted. For example, one such method could
`
`include analyzing consecutive frames of a video to determine which areas of the
`
`image change from frame to frame, and then transmitting only the information
`
`needed to reproduce that change. (Ex. 2001 at 41-44). This reduces the amount of
`
`data that needs to be transmitted to accurately reproduce each video frame by
`
`5
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`eliminating the need to repeatedly transmit redundant information that does not
`
`change from frame to frame. (Ex. 2001 at 41-44).
`
`Another general compression approach focuses on modifying the transmitted
`
`data itself. (Ex. 2001 at 55-62). For example, if the data to be transmitted is a
`
`series of symbols (such as a series of 0’s and 1’s), this approach could include
`
`applying algorithms to that series of symbols to reduce the number of symbols that
`
`need to be transmitted to accurately reproduce the data on the receiving end. (Ex.
`
`2001 at 25-27, 55-62). This approach is illustrated in the figure below.1
`
`In this example, the uncompressed data that needs transmitting is a series of
`
`symbols, such as a long sequence of 0’s and 1’s. (Ex. 2001 at 25-27, 55-62). An
`
`encoder then compresses (encodes) this data into a shorter series of symbols that
`
`represent the original symbols, referred to as the encoded symbols or “codeword.”
`
`(Ex. 2001 at 25-27, 55-62). The series of encoded symbols/codeword is then
`
`transmitted and eventually received by a decoder. (Ex. 2001 at 25-27). This
`
`decoder decompresses (decodes) the encoded symbols/codeword back into the
`
`original sequence of symbols. (Ex. 2001 at 25-27, 55-62).
`
`
`1 Figure is adapted from Ex. 2001 at 24.
`
`6
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`
`
`A large number of different classes of compression techniques perform this
`
`type of encoding/decoding, including entropy encoding, Lempel-Ziv, delta coding,
`
`run length encoding, and many more. (E.g., Ex. 2002 at vii-xv).
`
`Moreover, each of these classes of compression techniques further
`
encompasses numerous different encoding/decoding approaches. (E.g., Ex. 2002 at
`
`vii-xv). For example, the class of entropy encoders is extensive and can include
`
`approaches such as arithmetic coding, Huffman coding, Rice coding, Golomb
`
`coding, and many more. (E.g., Ex. 2002 at 41-114).
`
`The challenged ʼ710 patent concerns improved methods and programs for
`
`using one of these types of entropy compression—arithmetic coding—to encode
`
`binary data.
`
`7
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`B.
`
`Entropy Coding
`
`Entropy encoding is a class of compression techniques that use probabilities
`
`to compress a longer sequence of symbols into an encoded, shorter series of
`
`symbols (the codeword). (Ex. 2001 at 55-56). More specifically (and as further
`
`explained in the example below), the probabilities of each possible symbol
`
occurring in the original longer sequence of symbols are used to create the shorter
`
`codeword for transmission. (Ex. 2001 at 55-56).
`
`An early example of entropy encoding is Morse code, developed in the mid-
`
`19th century. (Ex. 2002 at 2). Letters sent by telegraph are encoded with dots and
`
`dashes. (Ex. 2002 at 2). Morse noticed that certain letters occurred more often than
`
`others. (Ex. 2002 at 2). In order to reduce the average time required to send a
`
message, he assigned shorter sequences to letters that occur more frequently, such
`
`as “e” and “a”, and longer sequences to letters that occur less frequently, such as
`
“q” and “j”. (Ex. 2002 at 2). Although the codewords representing “q” and “j” are

longer than if each letter were coded with a same-length code, the more frequently

occurring “e” and “a” can be shorter. And since the shorter codewords occur more
`
`frequently than the longer codewords, overall compression is achieved.
`
`C. Arithmetic Coding
`
`As mentioned, one type of entropy encoding is arithmetic coding, which is
`
`the subject of the ʼ710 patent. Where other forms of entropy encoding (such as
`
`8
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`the Rice coding used in the Petition’s Ground 1 Howard reference) break each
`
`sequence of symbols into component symbols and then replace each such symbol
`
`with a separate codeword, arithmetic coding encodes a sequence of symbols into a
`
`single codeword. (Ex. 2001 at 60-62).
`
` Specifically, arithmetic coding encodes an entire sequence of symbols into
`
`a single codeword representing a specific range (or “interval”) between 0 and 1
`
`that is defined by a lower bound and an upper bound. (Ex. 2001 at 60-62). For
`
`example, such a range/interval between 0 and 1 could be from lower bound 0.4074
`
`to upper bound 0.5556. (Ex. 2001 at 60-62). The width of this interval is equal to
`
`the probability of the sequence of symbols. (Ex. 2001 at 60-62).
`
`This concept is illustrated in the following section.
`
`1.
`
`Binary Arithmetic Encoding
`
`Arithmetic coding may best be understood by walking step by step through a
`
`simple exemplary arithmetic coding process.2
`
`In the following example, arithmetic coding is used to compress/encode
`
`“binary data.” Binary data refers to a set of data in which there are only two
`
`possible values or symbols, such as “A” or “B”; “0” or “1”. In this example, the
`
`two possible symbols are “A” or “B”, and the original data sequence to be encoded
`
`is “BAB”.
`
`
`2 The following example is adapted from Ex. 2001 at 60-62.
`
`9
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`Arithmetic coding typically starts with a set interval of 0 to 1, denoted as
`
`[0,1). This starting interval has a lower bound of 0, an upper bound of 1, and,
`
`consequently, a width of 1:
`
`
`
`This interval is then divided into partial intervals, each having a width
`
`representing the probability of one of the possible symbols occurring next in the
`
`original data sequence. For example, for our illustrative original data sequence of
`
“BAB”, the probability of an “A” symbol occurring is 1/3, and
`
`the probability of a “B” symbol occurring is 2/3. Therefore, the [0,1) interval is
`
`divided into two partial intervals, one equal to the probability of A occurring (1/3
`
`of current interval width 1) and another equal to the probability of B occurring (2/3
`
`of current interval width 1):
`
`10
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`
`
`The encoder may determine where to divide the current [0,1) interval to
`
`arrive at the relevant partial intervals (the interval separation point) by multiplying
`
`the probability of “A” occurring (1/3) by the current interval width (1) and adding
`
`the result to the lower bound of the current interval (0), as set forth in the following
`
`formula:
`
`Interval Separation Point = (Probability of A)(Current Interval Width) + Current Interval’s Lower Bound
`
`= (1/3) (1) + 0
`
`= 1/3
`
`
`
`Once the current interval is divided into the appropriate partial intervals for
`
`each possible symbol, the partial interval that corresponds to the symbol to be
`
`encoded is selected. In our illustrative original data sequence of “BAB”, the first
`
`11
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`symbol to be encoded is “B.” Therefore, the partial interval for B is selected,
`
`which in this example is the interval [1/3, 1):
`
`
`
`This partial interval of [1/3, 1) is then used as the new current interval, and
`
`the process is repeated to encode the next symbol in the original sequence. This
`
`cycle of dividing the current interval into the partial intervals representing the
`
`probability of each possible symbol, selecting the partial interval that corresponds
`
`to the current symbol to be encoded, and then using that corresponding partial
`
`interval as the next current interval to encode the next symbol is repeated until all
`
`of the symbols in the original sequence are encoded to arrive at a final interval
`
`whose width represents the probability of the original sequence of symbols.
`
`Thus, continuing with our illustrative encoding of the “BAB” sequence of
`
`symbols, in the next cycle of binary arithmetic encoding, the new symbol to be
`
`12
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`encoded is the A in “BAB,” and the new current interval is [1/3, 1) (i.e., the
`
`previous partial interval that corresponded to first symbol B). The lower bound of
`
`this new current interval is no longer 0, but is now 1/3, and the width is no longer 1
`
`but is now 2/3 (the distance from 1/3 to 1).
`
`
`
`As when encoding the first “B”, the next step is to divide the current interval
`
`into partial intervals, each having a width representing the probability of one of the
`
`possible symbols occurring next in the original data sequence. Again, this may be
`
`done by calculating the interval separation point using the probability of A (1/3),
`
`the width of the current interval (2/3) and the lowest bound of the current interval
`
`(1/3) in the following formula:
`
`Interval Separation Point = (Probability of A)(Current Interval Width) + Current Interval’s Lower Bound
`
`= (1/3) (2/3) + 1/3
`
`= 5/9
`
`As a result, the new interval separation point is 5/9, and, consequently, the
`
`new partial interval corresponding to the probability of an A symbol is [1/3, 5/9)
`
`13
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`and the new partial interval corresponding to the probability of a B symbol is [5/9,
`
`1):
`
`Since the symbol-to-be-encoded is “A”, the partial interval corresponding to
`
`
`
`“A”, [1/3, 5/9), is selected.
`
`At this point, the encoder has encoded the input sequence “BA” to the
`
`interval [1/3, 5/9). Since the final B in the original sequence BAB still needs to be
`
`encoded, the process repeats, using the selected partial interval [1/3, 5/9) as the
`
`new current interval. This new current interval has a lower bound of 1/3, an upper
`
`bound of 5/9, and a width of 2/9:
`
`14
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`
`This information is again used to calculate the interval separation point:
`
`Interval Separation Point = (Probability of A)(Current Interval Width) + Current Interval’s Lower Bound
`
`
`
`= (1/3) (2/9) + 1/3
`
`= 11/27
`
`As a result, the new interval separation point is 11/27, and, consequently, the
`
`new partial interval corresponding to the probability of an A symbol is [1/3, 11/27)
`
`and the new partial interval corresponding to the probability of a B symbol is
`
`[11/27, 5/9):
`
`
`
`15
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`Since the next (and last) symbol-to-be-encoded is the final “B” in the
`
`original BAB sequence, the encoder selects the partial interval corresponding to
`
`“B”, which is [11/27, 5/9).
`
`At this point, the encoder has encoded each symbol in the original input
`
`sequence “BAB” and, therefore, the selected partial interval [11/27, 5/9) is the final
`
`interval. This interval’s width equals the probability of the BAB sequence and can
`
`be represented and transmitted as a single codeword. Because this codeword
`
`representation of [11/27, 5/9) is more compact than the amount of data needed to
`
`transmit “BAB”, compression is achieved.
`
`Thus, as illustrated in the foregoing simple example, arithmetic coding
`
`involves encoding each symbol in an original input sequence by repeatedly:
`
` dividing the current interval into partial intervals, each of which is a
`
`fraction of the current interval proportional to the probability of the
`
`occurrence of a particular symbol;
`
` selecting the resulting partial interval that corresponds to the current
`
`symbol from the input sequence to be encoded; and
`
` using that corresponding partial interval as the next current interval to
`
`encode the next symbol from the input sequence.
`
`The final resulting partial interval’s width equals the probability of the
`
`original input sequence and can be represented and transmitted as a single
`
`16
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
codeword. Because the resulting partial interval width reflects the probabilities of

the symbols, arithmetic coding achieves variable-length codes (similar to Morse

code) in which more frequently occurring symbols are represented with shorter

codewords than less frequently occurring symbols, thereby achieving compression.
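For illustration only, the foregoing encoding walkthrough can be sketched in a few lines of Python. This is a simplified sketch (not part of the record or the ʼ710 patent's claimed method); the function name and the use of exact fractions are illustrative assumptions:

```python
from fractions import Fraction

def encode(sequence, p_a):
    """Arithmetically encode a string of 'A'/'B' symbols into a final interval.

    p_a is the (static) probability of an 'A' symbol; 'B' has probability 1 - p_a.
    Returns the final interval (lower bound, upper bound), whose width equals
    the probability of the whole sequence.
    """
    low, high = Fraction(0), Fraction(1)
    for symbol in sequence:
        width = high - low
        # Interval separation point =
        #   (probability of A)(current interval width) + current lower bound
        sep = p_a * width + low
        if symbol == "A":
            high = sep   # select the partial interval [low, sep)
        else:
            low = sep    # select the partial interval [sep, high)
    return low, high

low, high = encode("BAB", Fraction(1, 3))
print(low, high)    # → 11/27 5/9, the final interval from the example above
print(high - low)   # → 4/27, the probability of "BAB" (2/3 × 1/3 × 2/3)
```

Running this reproduces each step of the example: "B" selects [1/3, 1), "A" selects [1/3, 5/9), and the final "B" selects [11/27, 5/9).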
`
`2.
`
`Binary Arithmetic Decoding
`
`Once the codeword is transmitted and received, it can be decoded to
`
`reconstruct the original input sequence of symbols. (Ex. 2001 at 23-28).
`
`
`
`Decoding works similarly to encoding except that the inputs and outputs are
`
`reversed. (Ex. 2001 at 60-62). In encoding, the input to the encoder is the original
`
`sequence of symbols, and the encoder performs arithmetic coding on that original
`
`sequence of symbols to output the codeword. (Ex. 2001 at 60-62). In decoding,
`
`the input to the decoder is the codeword, and the decoder performs arithmetic
`
`coding on that codeword to output the original sequence of symbols. (Ex. 2001 at
`
`60-62).
`
`17
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`Thus, just like arithmetic encoding, arithmetic decoding involves the basic
`
`steps of dividing a current interval into partial intervals corresponding to the
`
`probabilities of the possible symbols, selecting the applicable partial interval, and
`
`then repeating that process using that selected partial interval as the next current
`
`interval. (Ex. 2001 at 60-62). The primary difference is that, in encoding, the
`
`applicable partial interval is selected based on which symbol is currently being
`
`encoded (e.g., an A or a B). (Ex. 2001 at 60-62). In decoding, however, the
`
`codeword is used to select the applicable partial interval. (Ex. 2001 at 60-62). The
`
`selected partial interval then indicates which symbol is the current decoded
`
`symbol. (Ex. 2001 at 60-62).
`
`For example, assume that during the decoding process, the codeword is the
`
`number 13/17 and the current interval has been divided as follows:
`
`
`
`18
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`In this example, the codeword 13/17 falls within the partial interval
`
`corresponding to “B” (it is between 1/3 and 1) so the current decoded symbol is
`
`“B”. The “B” partial interval is then used as the next current interval, and the
`
`process is repeated until all of the symbols from the original sequence are decoded.
`
`D. Multialphabet Arithmetic Coding
`
`The above example illustrated binary arithmetic coding to encode and
`
`decode binary data, a set of data in which there are only two possible symbols,
`
`such as “A” or “B”; “0” or “1”.
`
`In contrast, multialphabet arithmetic coding is a different technique in
`
`which arithmetic coding is used to encode and decode multialphabet data, a set of
`
data that comprises more than two possible symbols. (Ex. 2001 at 62). For example,
`
`if the original input sequence of symbols was a sentence, paragraph, or chapter,
`
`then the possible input symbols could be the 26 letters of the alphabet (e.g., “A”,
`
`“B”, “C”… “Z”), plus additional symbols necessary for spaces, punctuation, etc.
`
`Because of the larger number of possible symbols, multialphabet arithmetic
`
`coding is distinctly different from binary arithmetic coding in significant ways,
`
`including, for example, the number of interval separation points and partial
`
`intervals that must be calculated, determined and evaluated. (Ex. 2001 at 62). For
`
`example, if the possible data symbols are limited to the 26 letters of the alphabet,
`
`19
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`then each cycle of multialphabet coding requires the determination and calculation
`
`of which of 26 potential partial intervals should be selected for the current cycle.
`
`To simplify for illustrative purposes, below is a visualization of the example
`
`interval division points and corresponding partial intervals for a multialphabet
`
`arithmetic coding process where the symbols can be “A”, “B”, or “C”.
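For illustration only, dividing a current interval among more than two symbols can be sketched as follows (a simplified sketch; the three-symbol probabilities are hypothetical values chosen for illustration):

```python
from fractions import Fraction

def partial_intervals(low, high, probs):
    """Divide the current interval [low, high) into one partial interval per
    symbol, each with width proportional to that symbol's probability."""
    width = high - low
    intervals, bound = {}, low
    for symbol, p in probs.items():
        intervals[symbol] = (bound, bound + p * width)
        bound += p * width   # each upper bound is the next symbol's lower bound
    return intervals

# Hypothetical probabilities for a three-symbol alphabet:
probs = {"A": Fraction(1, 2), "B": Fraction(1, 4), "C": Fraction(1, 4)}
for sym, (lo, hi) in partial_intervals(Fraction(0), Fraction(1), probs).items():
    print(sym, lo, hi)   # → A 0 1/2, B 1/2 3/4, C 3/4 1
```

With 26 or more symbols, the same loop must compute and compare that many separation points on every cycle, which is why multialphabet coding is distinctly more computation-intensive than binary coding.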
`
`
`
`E. Adaptive Arithmetic Coding
`
`As described above, each cycle of arithmetic coding uses the probability of
`
`each possible symbol to determine the width of each partial interval. In the
`
`foregoing exemplary arithmetic coding process, the same symbol probability was
`
`used in each cycle of the arithmetic coding process (in the example, 1/3 for the “A”
`
`symbol and 2/3 for the “B” symbol). However, not all arithmetic coding
`
`techniques use such static probabilities.
`
`20
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`“Adaptive” arithmetic coding refers to a technique in which the symbol
`
`probability is not static but is instead dynamic. (Ex. 2001 at 55-62; Ex. 2002 at
`
`112). In other words, in adaptive arithmetic coding, each symbol’s probability is
`
`updated or recalculated for each cycle of arithmetic coding. (Ex. 2001 at 62; Ex.
`
`2002 at 112).
`
`For example, assuming the sequence “BAB” is encoded as described above,
`
`the decoder may initialize the probabilities to 1/2 for “A” and 1/2 for “B”. After
`
`the decoder completes its first cycle in which it decodes the first symbol as “B”,
`
`the decoder could update the probabilities to 1/3 for “A” and 2/3 for “B”. Then,
`
`once the decoder has decoded the next symbol as “A”, the decoder could update
`
`the probabilities to 1/2 for “A” and 1/2 for “B”.
`
`Adaptive arithmetic coding can achieve greater levels of compression
`
`because continually estimating the probabilities of symbol occurrence can result in
`
more accurate probability models and allow the coder to adapt to changing statistics, which
`
`improves the compression efficiency (i.e., reduces the amount of data that needs to
`
`be transmitted to reproduce the original data). (Ex. 2001 at 62; Ex. 2002 at 112).
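For illustration only, the decoder-side probability updates in the example above can be reproduced with a simple count-based model (an illustrative assumption, with each symbol's count initialized to 1 so both probabilities start at 1/2; this is not the ʼ710 patent's claimed technique):

```python
from fractions import Fraction

def adaptive_probabilities(decoded_symbols):
    """Return p(A) before decoding and after each decoded symbol, updating
    the probability from running occurrence counts (each initialized to 1)."""
    counts = {"A": 1, "B": 1}
    history = [Fraction(counts["A"], counts["A"] + counts["B"])]  # initial 1/2
    for s in decoded_symbols:
        counts[s] += 1
        history.append(Fraction(counts["A"], counts["A"] + counts["B"]))
    return history

print(adaptive_probabilities("BA"))
# → [Fraction(1, 2), Fraction(1, 3), Fraction(1, 2)]
```

This matches the example: p(A) starts at 1/2, drops to 1/3 after the first "B" is decoded, and returns to 1/2 after the "A".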
`
`III. The ’710 Patent
`
`The ʼ710 Patent has a priority date of May 2, 2002, and issued on September
`
`13, 2005. (Ex. 1001). It is titled “Method And Arrangement For Arithmetic
`
`Encoding And Decoding Binary States And A Corresponding Computer Program
`
`21
`
`
`
`Case No. IPR2019-00726
`Patent No. 6,943,710
`
`And A Corresponding Computer-Readable Storage Medium,” and consistent with
`
`that title, the patent discloses and claims improved techniques for using binary
`
`arithmetic coding to compress/encode or decompress/decode binary data.
`
`As described in the ʼ710 Patent’s background section (and supported by the
`
`extensive literature cited therein), the prior art use of arithmetic coding to compress
`
`and decompress data suffered from numerous drawbacks. (Ex. 1001 at 1:23-3:11).
`
`On the one hand, arithmetic coding techniques that repeatedly performed all of the
`
`interval division point