`
`_________________
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`_________________
`
`SAP AMERICA INC., SYBASE, INC., DELL, INC., HEWLETT-PACKARD
`ENTERPRISE COMPANY, HP ENTERPRISE SERVICES, LLC and
`TERADATA OPERATIONS, INC.
`Petitioner
`
`v.
`
`REALTIME DATA LLC d/b/a IXO
`Patent Owner
`_________________
`
`U.S. Patent No. 6,597,812 B1
`Issued: July 22, 2003
`Application No.: 09/579,221
`Filed: May 26, 2000
`Title: System and method for lossless data compression and decompression
`_________________
`
`DECLARATION OF CHARLES D. CREUSERE
`IN SUPPORT OF PETITION FOR INTER PARTES REVIEW OF
`CLAIMS 1-4, 8, 14-17, 21 AND 28 OF U.S. PATENT NO. 6,597,812
1. I, Charles Creusere, declare that all statements made herein of my own knowledge are true and all statements made on information and belief are believed to be true; and further that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under Section 1001 of Title 18 of the United States Code.
`
2. I have been hired by Klarquist Sparkman, LLP, counsel for SAP America Inc. (“SAP”), Sybase, Inc. (“Sybase”), Dell, Inc. (“Dell”), Hewlett-Packard Enterprise Co. (“HPE”) and HP Enterprise Services (“HPES”), and Teradata Operations, Inc. (“Teradata”) (collectively, “Petitioners”) as an expert witness in the above-captioned proceeding. I have been asked to provide my opinion regarding U.S. Patent No. 6,597,812 B1 (“the 812 patent”).
`
I. BACKGROUND AND QUALIFICATIONS
`
`3. My Curriculum Vitae is attached to this Declaration as Exhibit A.
`
`A. Educational Background
`
4. I received a Bachelor of Science degree in Electrical and Computer Engineering from the University of California, Davis in 1985. I received a Master of Science degree in Electrical and Computer Engineering from the University of California, Santa Barbara (“UCSB”) in 1990. I received my Ph.D. in Electrical and Computer Engineering, also from UCSB, in 1993.
`
B. Professional History

5. I am currently a Full Professor in the Klipsch School of Electrical & Computer Engineering at New Mexico State University. I was an Assistant Professor at New Mexico State from January 2000 until 2004, when I became an Associate Professor. I have been a Full Professor since August 2010. My research and coursework at New Mexico State University have focused on digital signal processing, image and video processing, and, in particular, compression. My general area of expertise is digital signal processing, with a particular focus on applications related to compression: image, video, and audio.
`
`6. My first exposure to the field of signal compression came in the fall of
`
`1989, when I took a course entitled Vector Quantization and Signal Compression
`
`at UCSB from Prof. Allen Gersho—an internationally renowned researcher in the
`
`area of speech compression. As my Ph.D. research progressed, I began to focus on
`
`transform-based compression as my main application area. My first paper dealing
`
`with image compression was published in 1991. I have since written 24 other
`
`journal and conference papers and I am the named inventor on two issued United
`
`States patents related to compression.
`
7. Since joining the faculty of New Mexico State University in 2000, I have taught numerous classes at both the graduate and undergraduate levels. At the graduate level, I have taught the following: Image Processing (EE596), Digital Signal Processing (EE545), Signal Compression (EE573), Pattern Recognition (EE565), Advanced Linear Systems (EE555), Telemetering Systems (EE585), Information Theory (EE586), Adaptive Signal Processing (EE594), Multirate Signal Processing and Wavelets (EE595), and Neural Signal Processing (EE590). At the undergraduate level, I have taught the following courses: Engineering Analysis I (EE210), Signals and Systems I (EE312), Image Processing (EE446), Introduction to Digital Signal Processing (EE395), and Digital Communications (EE497).
`
8. From 1993 through 1999, I was a Researcher and Team Leader at the
`
`Naval Air Warfare Center, China Lake, California. At China Lake, my research
`
`efforts focused on high speed image and video compression technologies,
`
`including embedded compression. I also developed improved encoders that enable
`
`the most critical data in images to be transmitted more efficiently over TCP/IP
`
`networks while retaining the highest possible fidelity.
`
9. From 1990 through 1993, I worked as a Research Assistant in the
`
`Department of Electrical and Computer Engineering at UCSB. In this position, I
`
`worked on subband coding (compression) and multirate filter bank theory. I also
`
implemented real-time filter banks on a digital signal processor. In the summer of
`
`1992, I worked at AT&T Bell Laboratories, where I developed and simulated new
`
`methods of extremely low bit rate video coding for video telephone applications.
`
`10. From 1985 through 1989, I worked as a Design Engineer at the Naval
`
`Weapons Center, China Lake. In this role, I built and tested the guidance
`
`electronics for various laser-guided munitions. This project included mixed analog
`
`and digital circuit design as well as the programming of an embedded digital signal
`
`processor. I also developed software for an advanced video processor and studied
`
`ground target tracking.
`
C. Publications

11. I have published numerous peer-reviewed journal articles and conference papers, including 9 journal articles and 30 conference papers directly related to data compression; the following are representative:
`
• C.D. Creusere, "A New Method of Robust Image Compression Based on the Embedded Zerotree Wavelet Algorithm," IEEE Trans. on Image Processing, Vol. 6, No. 10, pp. 1436-1442, Oct. 1997.

• C.D. Creusere, "Fast Embedded Compression for Video," IEEE Trans. on Image Processing, Vol. 8, No. 12, pp. 1811-16, December 1999.

• S. Kandadai and C.D. Creusere, "Scalable Audio Compression at Low Bitrates," IEEE Transactions on Audio, Speech, and Language Processing, Vol. 16, No. 5, pp. 969-979, July 2008.
`
12. In all, I have published more than 75 peer-reviewed journal articles and conference papers over my career, with a focus on imaging and compression.
`
`D. Other Relevant Qualifications and Compensation
`
13. In addition to the experience and publications listed above, I have received the following awards and distinctions that are relevant to the subject matter of this Declaration. I am the listed inventor on U.S. Patent No. 6,148,111 entitled “Parallel digital image compression system which exploits zerotree redundancies in wavelet coefficients” and U.S. Patent No. 6,466,698 entitled “Efficient embedded image and video compression using lifted wavelets.” I am currently a Senior Area Editor for IEEE Transactions on Image Processing and served as an Associate Editor for IEEE Transactions on Image Processing from 2010 through 2014. I also served in this capacity from 2002 through 2005. From 2008-2013, I served as an Associate Editor for IEEE Transactions on Multimedia.
`
14. In 2004, I served as the co-general chair for the IEEE Digital Signal Processing Workshop in Taos, New Mexico. In 2012 and 2014, I served as the co-technical chair for the Southwest Symposium on Image Analysis and Interpretation held in Santa Fe, New Mexico and San Diego, CA, respectively. In addition, I served as the technical chair for the 2015 International Telemetering Conference. I am also a member of the technical program committees for the IEEE International Conference on Image Processing and the IEEE Data Compression Conference.
`
15. I charge $350 an hour for my expert witness services.
`
OPINIONS REGARDING THE SUBJECT MATTER OF THE 812 PATENT
`
`II. BACKGROUND AND STATE OF THE ART
`
16. The 812 patent (Ex. 1001)1 relates to two data compression
`
`techniques—run length encoding and dictionary encoding. The background of the
`
`812 patent correctly states that both of these encoding techniques were well known
`
`prior to the filing date of the 812 patent. Nonetheless, the 812 patent claims the
`
`combination of these two techniques into a single compression algorithm. This
`
`combination, as claimed in the 812 patent, is shown in the prior art discussed
`
`below.
`
A. Run Length Encoding Was Well-Known Before the Filing Date of the 812 Patent
`
17. In a simple example of run length encoding, a “run” of repeated
`
`characters is identified and replaced with an identifier for the repeated character
`
`and a count of the number of times it is repeated. As shown below, run length
`
`encoding allows the string AAABBBCCCBBBBCCCCCAAAA to be represented as
`
`3A3B3C4B5C4A.
`
`
1 All Exhibit numbers in this Declaration refer to the Exhibits attached to the accompanying Petition for Inter Partes Review.
`
Illustration of Run Length Encoding
`
`18. While the basic concept of run length encoding is straightforward,
`
`there are several possible variations in its implementation. For example, many
`
`systems use a control code (also called an escape sequence or code) to identify run
`
`length sequences of inputs (characters, bytes, or blocks) that have been run length
`
encoded. In some systems, multiple control codes are used, and the specific control

code that is selected provides information about the repetition count value.
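For purposes of illustration only, the simple run length encoding described above can be expressed as a short Python sketch. The code below is my own illustration and is not taken from the 812 patent or any exhibit; the function name and the use of a bare decimal count (rather than a control code) are my own choices.

    def run_length_encode(text):
        """Replace each run of a repeated character with a count and the character."""
        if not text:
            return ""
        out = []
        run_char = text[0]
        run_len = 1
        for ch in text[1:]:
            if ch == run_char:
                run_len += 1                         # still inside the current run
            else:
                out.append(f"{run_len}{run_char}")   # emit the finished run as count + character
                run_char = ch
                run_len = 1
        out.append(f"{run_len}{run_char}")           # emit the final run
        return "".join(out)

    # Reproduces the example above:
    # run_length_encode("AAABBBCCCBBBBCCCCCAAAA") returns "3A3B3C4B5C4A"

A practical encoder would normally mark runs with a control code, as described above, so that the decoder can distinguish repeat counts from literal input data.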
`
`19. The origins of run length encoding are murky. However, it was well-
`
`known by the 1960’s, when it was used for television signals (see Ex. 1006, A.H.
`
`Robinson and C. Cherry, “Results of a Prototype Television Bandwidth
`
`Compression Scheme,” Proceedings of the IEEE, 1967, 55(3):356-64) and
`
`facsimile transmissions (see Ex. 1007, U.S. Pat. No. 3,560,639, Cascade Run
`
`Length Encoding Technique, issued February 2, 1971). By the 1980’s, run length
`
`encoding was being paired with other encoding schemes. (See Ex. 1008, U.S. Pat.
`
`No. 4,626,829, Data Compression Using Run Length Encoding and Statistical
`
`Encoding, issued December 2, 1986.) This trend continued into the late 1980’s and
`
early 1990's. (See, e.g., Ex. 1009, U.S. Pat. No. 4,988,998, Data Compression

System for Successively Applying at Least Two Data Compression Methods to an

Input Data Stream, issued January 29, 1991; U.S. Pat. No. 5,389,922, Compression
`
`Using Small Dictionaries with Applications to Network Packets, issued February
`
`14, 1995; U.S. Pat. No. 5,973,630, Data Compression for Use with a
`
`Communications Channel, issued October 26, 1999.) Run length encoding is
`
`frequently used in combination with Huffman coding and is part of major industry
`
`specifications and standards adopted in the 1980’s and 1990’s, such as TIFF and
`
`JPEG.
`
B. Dictionary Compression Was Well-Known Before the Filing Date of the 812 Patent
`
`20. Like run length encoding, dictionary encoding was known well before
`
`the filing date of the 812 patent. In dictionary encoding, a “dictionary” maps data
`
`blocks (such as characters and character strings) to indices. A data compression
`
`dictionary can be visualized as a table in which indices are mapped to “words”
`
`which can be characters, character strings, or other data blocks. The index can
`
`typically be represented by fewer bits than its corresponding word, so replacing the
`
`word with the index results in data compression. During encoding, each time a
`
`word in the input stream is found in the dictionary, it is replaced with the index
`
`corresponding to that word. For example, given the dictionary above, the string
`
`“dictionary data compression is well known” could be encoded as “3 2 1 is well 4”.
`
`21. For decompression, a decoder uses the same dictionary to reverse the
`
`compression process. The decoder looks up indices in a compressed data stream
`
`and replaces them with the corresponding “words” from the dictionary.
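For purposes of illustration only, the following Python sketch shows static dictionary encoding and decoding of the kind described above. The code is mine and is not taken from any exhibit; the particular index-to-word assignments are inferred from the “3 2 1 is well 4” example, and words that are not in the dictionary are simply passed through unchanged.

    # Illustrative static dictionary inferred from the example above.
    DICTIONARY = {1: "compression", 2: "data", 3: "dictionary", 4: "known"}
    REVERSE = {word: index for index, word in DICTIONARY.items()}

    def dictionary_encode(text):
        """Replace each word found in the dictionary with its (shorter) index."""
        return " ".join(str(REVERSE[w]) if w in REVERSE else w for w in text.split())

    def dictionary_decode(encoded):
        """Reverse the process: replace each index with the corresponding word."""
        return " ".join(DICTIONARY[int(t)] if t.isdigit() else t for t in encoded.split())

    # dictionary_encode("dictionary data compression is well known") returns "3 2 1 is well 4"
    # dictionary_decode("3 2 1 is well 4") returns the original string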
`
`22. Dictionaries are often visualized as tables, in which each entry
`
`associates an index with the corresponding character, string, word, etc. represented
`
`by that index, but dictionaries can be implemented in a variety of ways. The
`
`defining characteristic of a dictionary used for encoding is the mapping of index
`
`values to data blocks and data block strings.
`
`23. Dictionaries can be static, meaning that the words they contain are
`
`fixed before the encoding process starts and do not change over the course of the
`
`encoding process. Alternately, the dictionary can be adaptive, meaning that words
`
`from the input stream are added to the dictionary during the compression process.
`
`Many modern adaptive dictionary encoders trace their roots to Lempel and Ziv’s
`
`groundbreaking papers in 1977 (“LZ77”) and 1978 (“LZ78”). (Ex. 1012, J. Ziv and
`
`A. Lempel, “A Universal Algorithm for Sequential Data Compression,” IEEE
`
`Trans. Information Theory, Vol. 23, No. 3, May 1977, pp. 337-43; and Ex. 1013, J.
`
`Ziv and A. Lempel, “Compression of Individual Sequences via Variable-Rate
`
`Coding,” IEEE Trans. Information Theory, Vol. 24, No. 5, Sept. 1978, pp. 530-36.)
`
`LZ77 and LZ78 both describe adaptive dictionary encoders. In LZ77, dictionary
`
`indices point to character strings found in a “sliding window” dictionary composed
`
`of the most recent previously encoded data. LZ77 dictionary compression is not
`
`used in the prior art discussed below and is not discussed further. In LZ78, indices
`
`are mapped to data blocks from the input stream to build the dictionary during the
`
`encoding process. The prior art discussed below uses a variation of LZ78
`
`encoding. Accordingly, LZ78 is explained in more detail below.
`
`1. LZ78 Dictionary Compression
`
`24. The signature feature of LZ78 is the algorithm used to build the
`
`dictionary as it encodes the input stream. The LZ78 algorithm starts with a
`
`dictionary initialized to contain only the “null” value, which is typically mapped to
`
`the “0” index. In the LZ78 algorithm, the characters of the input stream are read
`
`one at a time. As it is read, each character is combined with a prefix string (which
`
`is empty in the case of the first character or a new character after a string has been
`
`encoded) by adding it to the end of the prefix string. The dictionary is then
`
`searched to see if the combined prefix string/last character is present in the
`
`dictionary. If so, then the combined prefix string/last character is set to be the new
`
`prefix string and the process repeats by reading the next character in the input
`
`stream. If the combined prefix string/last character is not in the dictionary, then the
`
`index for the prefix string is output followed by the last character, the combined
`
`prefix string/last character is then added to the dictionary, and the prefix string is
`
`set to empty. Following this, the next character in the input stream is read and the
`
`process repeats. This sequence of steps, shown in the flow chart below, is the
`
`signature of an LZ78 encoding algorithm.
`
`Exemplary Flow Chart For LZ78
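For purposes of illustration only, the same sequence of steps can be written as a short Python sketch. The code and the names in it are mine and are not taken from any exhibit; the sketch simply restates the LZ78 steps described above.

    def lz78_encode(data):
        """LZ78 sketch: emit (index of longest matched prefix string, next character)
        pairs, adding each new prefix-plus-character string to the dictionary."""
        dictionary = {"": 0}        # dictionary initialized with the "null" string at index 0
        next_index = 1
        prefix = ""
        output = []
        for ch in data:
            combined = prefix + ch                        # add the character to the end of the prefix string
            if combined in dictionary:
                prefix = combined                         # match found: keep extending, read the next character
            else:
                output.append((dictionary[prefix], ch))   # output index of prefix string, then the character
                dictionary[combined] = next_index         # add the combined string to the dictionary
                next_index += 1
                prefix = ""                               # start over with an empty prefix string
        if prefix:
            output.append((dictionary[prefix], ""))       # flush any prefix left at the end of the input
        return output, dictionary

    # For the example string discussed below, "ABABBBABBABBB", this returns the pairs
    # (0, A) (0, B) (1, B) (2, B) (3, B) (5, B), i.e., the output 0 A 0 B 1 B 2 B 3 B 5 B.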
`
`25. A simple example of this algorithm compressing the string “A B A B B
`
`B A B B A B B B” follows. Before the algorithm starts, the dictionary is emptied by
`
`initializing it so that the “null” value, i.e., no match, is mapped to the “0” index.
`
`The first character “A” is read and set to be the combined prefix string/last
`
`character. The character “A” is not in the dictionary, so a “0” index followed by an
`
`“A” is output. Then “A” is added to the dictionary and mapped to the “1” index,
`
`and the prefix string is set to empty. The next character “B” is read and becomes a
`
`new combined prefix string/last character. This new combined string is compared
`
`to the dictionary. Again, there is no match, so a “0” index followed by a “B” is
`
`output. Then, “B” is added to the dictionary and mapped to index “2,” and the
`
`prefix string is set to empty. The next character “A” is read and becomes part of the
`
`new prefix string/last character. This time “A” is found in the dictionary (mapped
`
`to index “1”), so the next character “B” is added to the combined string, and “AB”
`
`is compared to the dictionary. There is no match, so index “1” (the index for the
`
`prefix string, which was “A”), is output followed by a “B.” Then “AB” is added to
`
`the dictionary and mapped to the “3” index, and the prefix string is set to empty.
`
`This process is repeated until the end of the input stream is reached. For the
`
`example above, the outputs (0 A 0 B 1 B 2 B 3 B 5 B) and the resulting dictionary are
`
`summarized in the tables below.
`
`26. This example results in an output of 12 values—6 indices and 6
`
`characters—in place of the original 13 characters. This is not a significant
`
`reduction, assuming an index is represented with a code word having the same
`
`number of bits as a character. However, as the compression process continues and
`
`the dictionary grows, the LZ78 algorithm can produce substantial compression
`
`since more strings, and longer strings, are found in the dictionary.
`
`2. LZW Dictionary Compression
`
`In 1984, Terry Welch described a particular variation of LZ78 that has
`
`become known as LZW, for Lempel-Ziv-Welch. (Ex. 1014, T. Welch, “A
`
`Technique for High-Performance Data Compression,” IEEE Computer, Vol. 17,
`
`No. 6, June 1984, pp. 8-19 (“Welch paper”).) Welch provided the following
`
`pseudocode, below left, for his algorithm. (Ex. 1014, Welch paper, pp. 15-16.)
`
`Based on this pseudocode, a flow chart, below right, can be created.
`
LZW Pseudocode

LZW Flow Chart
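For purposes of illustration only, the LZW steps described in the text that follows can be expressed as the Python sketch below. This is my own rendering, not Welch’s published pseudocode; the initial alphabet is supplied as a parameter so that the two-character example that follows can be reproduced, whereas a practical implementation would initialize the dictionary with all 256 possible 8-bit characters.

    def lzw_encode(data, alphabet):
        """LZW sketch: initialize the dictionary with every possible single character,
        then output only dictionary indices (code words), never raw characters."""
        dictionary = {ch: i for i, ch in enumerate(alphabet)}   # e.g. {"A": 0, "B": 1}
        next_code = len(dictionary)
        output = []
        prefix = data[0]                        # the first character is always in the dictionary
        for ch in data[1:]:
            combined = prefix + ch              # add the next character to the end of the prefix string
            if combined in dictionary:
                prefix = combined               # combined string found: it becomes the new prefix string
            else:
                output.append(dictionary[prefix])    # output the code word for the prefix string
                dictionary[combined] = next_code     # add the combined string to the dictionary
                next_code += 1
                prefix = ch                     # the prefix string becomes the last character read
        output.append(dictionary[prefix])       # output the code word for the final prefix string
        return output, dictionary

    # For the example that follows, lzw_encode("ABABBBABBABBB", "AB")
    # returns the code words [0, 1, 2, 1, 3, 5, 4, 1].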
`
`27. An explanation of how the LZW algorithm would encode the string
`
`“A B A B B B A B B A B B B” (the same string used in the example for LZ78 above)
`
`follows. Before the algorithm starts, the dictionary is initialized to map each
`
`possible single character to a different dictionary index, or code word. In this
`
`example, there are two possible single characters, “A” and “B,” and they can be
`
`mapped to code words “0” and “1,” respectively. After the dictionary is initialized,
`
`the first character “A” is read and stored as the prefix string. Because the dictionary
`
`has been initialized to contain code words for all possible single characters, “A” is
`
`in the dictionary, mapped to code word “0.” The next character “B” is read and
`
`added to the end of the prefix string “A” to create a combined string (“AB”). This
`
`combined string is compared to entries in the dictionary. There is no match, since
`
`the dictionary contains only single characters at this time. So, the index for the
`
`prefix string (a “0” index mapped to “A”) is output, and the combined string “AB”
`
`is added to the dictionary, mapped to index “2.” The prefix string is set to be the
`
`character “B” (last character of the combined string “AB”), and a new next
`
`character “A” is read. Again, the next character is combined with the prefix string
`
`to create a combined string (“BA”), and the dictionary is searched for the combined
`
`string. It is not found, so the index for the prefix string (a “1” for “B”) is output.
`
`The combined string “BA” is added to the dictionary, mapped to index “3.” The
`
`prefix string is updated to be the current character “A” (last character of the
`
`combined string “BA”), and a new next character “B” is read and added to the
`
`prefix string to create a combined string “AB.” The dictionary is searched for “AB”
`
`and it is found, mapped to index “2.” As a result, the combined string “AB”
`
`becomes the new prefix string. The next character, another “B,” is read and added
`
`to the prefix string to create a combined string “ABB”. The dictionary is searched,
`
`but this combined string is not found. As a result, the index of the current prefix
`
`string (a “2” for “AB”) is output, and “ABB” is added to the dictionary, mapped to
`
`index “4.” The current character “B” (last character of the combined string)
`
`becomes the prefix string. The next character, another “B,” is read and added to the
`
`prefix string to create a combined string “BB.” The dictionary is searched, but “BB”
`
`is not found. As a result, the index for the current prefix string (a “1” for “B”) is
`
`output, and the combined string “BB” is added to the dictionary, mapped to index
`
`“5.” The prefix string is set to be the current character “B” (last character of the
`
`combined string), and the next character “A” is read and added to the prefix string
`
`to create a new combined string “BA.” The dictionary is searched and this
`
`combined string is found, mapped to index “3.” The combined string “BA”
`
`becomes the prefix string, and the next character “B” is read and added to the
`
`prefix string to create a new combined string “BAB.” This string is not found in the
`
dictionary. Accordingly, the index for the current prefix string (a “3” for “BA”) is
`
`output, and “BAB” is added to the dictionary, mapped to index “6.” The prefix
`
`string is set to be the current character “B” (last character of the combined string),
`
`and the next character “B” is read and added to the prefix string to create a
`
`combined string “BB.” The dictionary is searched and the combined string “BB” is
`
`found, mapped to index “5.” The combined string “BB” becomes the new prefix
`
`string. The next character “A” is read and added to the prefix string to create a new
`
`combined string “BBA.” This combined string is not present in the dictionary, so
`
`the index for the prefix string is output (a “5” for “BB”), and “BBA” is added to the
`
`dictionary, mapped to index “7.” The prefix string is set to be the current character
`
`“A” (last character of the combined string), and the next character “B” is read and
`
`added to the prefix string to create a combined string “AB.” The dictionary is
`
`searched and the combined string “AB” is found, mapped to index “2.” The
`
`combined string “AB” becomes the new prefix string. The next character “B” is
`
`read and added to the prefix string to create a new combined string “ABB.” The
`
`dictionary is searched and the combined string “ABB” is found, mapped to index
`
`“4.” The combined string “ABB” becomes the new prefix string. The next character
`
`“B” is read and added to the prefix string to create a new combined string “ABBB.”
`
`This combined string is not present in the dictionary, so the index for the prefix
`
`string is output (a “4” for “ABB”), and “ABBB” is added to the dictionary, mapped
`
`to index “8.” The prefix string is set to be the current character “B” (last character
`
`of the combined string). At this point, there is no next character, so the code for the
`
`current prefix string, a “1” for “B,” is output and the process ends. The outputs and
`
`the resulting dictionary are summarized in the tables below.
`
`28. As can be seen, the string “A B A B B B A B B A B B B” is encoded as “0
`
`1 2 1 3 5 4 1.” Thus, a string of 13 characters is encoded using 8 index values. As
`
`can be appreciated, when longer input streams are encoded, the dictionary will
`
`typically continue to grow, resulting in improved compression.
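Decompression uses the same general approach discussed above for dictionary decoders: the decoder rebuilds the dictionary from the code stream itself. For purposes of illustration only, a minimal Python sketch of LZW decoding follows; the code is mine and is not taken from any exhibit, and the branch marked as a special case handles the one situation in which the decoder receives a code it has not yet added to its dictionary (that situation does not arise in this example).

    def lzw_decode(codes, alphabet):
        """LZW decoding sketch: rebuild the dictionary while translating code words back to strings."""
        dictionary = {i: ch for i, ch in enumerate(alphabet)}
        next_code = len(dictionary)
        prev = dictionary[codes[0]]
        output = [prev]
        for code in codes[1:]:
            if code in dictionary:
                entry = dictionary[code]
            else:
                entry = prev + prev[0]                 # special case: code not yet in the decoder's dictionary
            output.append(entry)
            dictionary[next_code] = prev + entry[0]    # add previous string plus first character of current entry
            next_code += 1
            prev = entry
        return "".join(output)

    # lzw_decode([0, 1, 2, 1, 3, 5, 4, 1], "AB") returns "ABABBBABBABBB", the original string.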
`
29. The LZW algorithm incorporates the signature features of LZ78, but

precedes them by initializing the dictionary to contain every possible single
`
`character value mapped to a different dictionary index. Typically, there are 256
`
`single-character strings (that is, every possible single-character value for an 8-bit
`
`character) that are mapped to indices 0-255. Because of this, it is no longer
`
`necessary to output single characters, since an index corresponding to each single
`
`character is in the initial dictionary. Also, according to LZW, when the combined
`
prefix string/last character (ωK) is not found in the dictionary, the prefix string is

set to be the last character (ω = K).
`
`30. LZW compression is used in many well-known formats, such as .gif,
`
`that were in widespread use before the filing date of the 812 patent.
`
`III. THE PRIOR ART
`
`31. The accompanying Petition relies on three pieces of prior art to
`
`challenge claims of the 812 patent. They are:
`
• U.S. Pat. No. 4,929,946, Adaptive Data Compression Apparatus Including Run Length Encoding for a Tape Drive System (“O’Brien,” Ex. 1002), which was issued May 29, 1990, and is prior art under 35 U.S.C. §§ 102(a) and (b).

• Mark Nelson, The Data Compression Book (“Nelson,” Ex. 1003), M&T Books, 1992, which is prior art under 35 U.S.C. §§ 102(a) and (b).

• U.S. Pat. No. 4,558,302, High Speed Data Compression and Decompression Apparatus and Method (“Welch patent,” Ex. 1004), which issued December 10, 1985, and is prior art under 35 U.S.C. §§ 102(a) and (b).
`
`I discuss each of these pieces of prior art below.
`
A. O’Brien Combines Run Length Encoding And Dictionary Encoding
`
`32. As illustrated in O’Brien’s Fig. 3, below, O’Brien shows a
`
`compression system that combines a run length encoder 303 with a dictionary
`
encoder 304. (Ex. 1002, Fig. 3.) Although O’Brien uses the term “reference value
`
`encoder” or “string compression” instead of “dictionary encoder,” O’Brien’s
`
`description of this encoder bears the signature of an LZ78 dictionary encoder and
`
`additional LZW features, as explained below.
`
`
`
`1. O’Brien Shows Run Length Encoding
`
33. In O’Brien, run length encoding takes priority over string encoding.
`
`(Id., 4:4-5.) As shown in Figure 3 of O’Brien (above), run length encoder 303
`
`monitors three input bytes: the last byte currently being processed by the reference
`
`value encoder 304 and the next two bytes to be input. (Id., 9:19-27.) If the three
`
`bytes are the same, a run is detected. (Id., 9:29-34.) O’Brien does not use run
`
`length encoding for runs of only two bytes. If the three bytes are the same, the run
`
`length encoder 303 transmits a “run length detect signal” to the reference value
`
`encoder 304. (Id.) The reference value encoder 304 encodes the last byte it
`
`received, which it is currently processing and which starts the repeated characters
`
`of the run length sequence, inserts into the compressed data stream the “reference
`
`value” for the last byte (or reference value for the matched string that ends with the
`
`last byte), and transfers control to the run length encoder 303. (Id.) The run length
`
`encoder 303 continues to monitor input bytes, counting them to determine the
`
`length of the run length sequence. (Id., 9:34-41.) When the end of the run is
`
`reached, the run length encoder 303 encodes the length into a repeat code. (Id.,
`
`9:41-45.) This repeat code is output to the reference value encoder 304, which
`
`represents the repeat code using an appropriate reference value and bits for the
`
`repeat count. (Id.) The reference value encoder 304 inserts the reference value and
`
`bits indicating the repeat count into the compressed data stream. (Id., 9:45-47.)
`
`34. O’Brien defines and uses several pre-defined run length reference
`
`values to encode run length sequences. (Id., 12:63-65.) “Each run length reference
`
`value defines a range of repeat counts.” (Id., 12:65-66.) This use of multiple,
`
`predefined ranges for repeat counts allows for repeat codes to be encoded using
`
`fewer bits on average, since short, common repeat counts are coded with fewer bits
`
`than long, uncommon repeat counts. (Id., 12:66-69.) An exemplary table of run
`
`length reference values is set forth by O’Brien in Table A, below:
`
`(Id., 13:3-17.) In this way, the compression of O’Brien encodes a run length
`
`sequence using: 1) the last character encoded by the reference value encoder 304
`
`before the run is encoded (which is the repeated character); 2) a reference value
`
`that tells the decoder that a run length has been encoded as well as the number of
`
`bits in a repeat count; and 3) the repeat count.
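For purposes of illustration only, the following Python sketch captures the general structure of this run length handling as I read O’Brien. The particular repeat-count ranges and bit widths shown here are hypothetical placeholders (O’Brien’s actual assignments appear in its Table A), and the names and the offset-within-range encoding are my own; only the overall structure, in which each of several run length reference values covers a range of repeat counts and shorter counts are coded with fewer bits, comes from the description above.

    # Hypothetical run length reference values: (name, smallest count, largest count, count bits).
    # The actual values and bit widths in O'Brien's Table A differ; these are placeholders.
    RUN_LENGTH_REFS = [
        ("RL_SHORT", 3, 6, 2),
        ("RL_MEDIUM", 7, 22, 4),
        ("RL_LONG", 23, 278, 8),
    ]

    def encode_run(repeat_count):
        """Pick the run length reference value whose range covers the repeat count and
        pair it with the count expressed in that reference value's number of bits."""
        for name, low, high, bits in RUN_LENGTH_REFS:
            if low <= repeat_count <= high:
                offset = repeat_count - low            # store the count relative to the start of the range
                return name, format(offset, f"0{bits}b")
        raise ValueError("repeat count outside the illustrative ranges")

    # A run of 10 repeated bytes would then be represented by the reference value for the
    # repeated byte itself (already output by the reference value encoder), followed by
    # encode_run(10) -> ("RL_MEDIUM", "0011"), i.e., a run length reference value plus count bits.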
`
`2. O’Brien Shows Dictionary Encoding
`
35. In O’Brien, “[d]ata bytes that are not part of a run-length are encoded
`
`by reference value encoder 304 into a code value based on the lookup table stored
`
`in compression string table 306.” (Id., 9:48-51.) As will be apparent from the
`
`explanation below, O’Brien’s reference value encoder 304 is an example of an
`
`LZW variation of the LZ78 dictionary encoder.
`
`36. First, O’Brien initializes the dictionary with code words for all
`
`possible single characters. In particular, at the beginning of each data segment, the
`
`reference value encoder 304 is provided with two variables, END and BEGIN,
`
`which correspond to the largest and smallest individual character codes in the data
`
`segment. (Id., 10:19-25.) These variables define the range of reference values, or
`
`code words, which represent single characters and are known as “character
`
`reference values.” (Id., 10:22-28.) Setting this range of character reference values
`
`and the associated single characters is analogous to the initialization of the LZW
`
`dictionary with all possible single characters.
`
37. In addition to initializing the dictionary with code words for all single
`
`characters, the dictionary in O’Brien is initialized to contain indices corresponding
`
`to the run length indicators. (Id., 12:55-13:17.) As discussed above, these indices,
`
`or control codes, are listed in Table A. These control codes instruct the decoder
`
`that a run length sequence has been encoded. (Id., 13:37-43.)
`
`38. After the dictionary is initialized, the reference value encoder 304 of
`
`O’Brien implements the signature features of an LZ78 dictionary encoder. In
`
`particular, reference values above the character reference value range are used to
`
`encode strings of multiple bytes and are known as “string reference values.” (Id.,
`
`10:28-31.) These string reference values are added to the dictionary by mapping
`
`them to new strings built from the input data.
`
39. In operation, the reference value encoder 304 combines the character
`
`or string represented by the previous reference value with the character indicated
`
`by next byte from the input stream. (Id., 12:33-36.) The reference value encoder
`
`304 then determines whether the “resultant string” is found in the string table 306.
`
`(Id., 12:36-38.) If so, the previous reference value is updated to be the string
`
`reference value for the matched, resultant string found in the string table 306.
`
`Then, the next input byte is combined with the resultant string to produce a new
`
`string, and the string table 306 is searched again to see if the new string is present.
`
`(Id., 12:38-43.) This process is repeated, adding to the string one character at a
`
`time, until a string is created that is not in the string table. (Id., 12:43-44). When
`
`this occurs, the previous reference value (for the last string found in the string table
`
`306 or an individual character) is output, and the current string is added to the
`
`string table 306 and assigned the next available reference value. (Id., 12:44-48.)
`
`The previous reference value is updated to be the reference value for the final
`
`character of the current string, which caused the current string not to be found in
`
`the string table 306. The entire process is then repeated using the next user input
`
`byte. (Id., 12:48-50).
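For purposes of illustration only, the sequence of steps just described can be expressed as the following Python sketch. This is my own rendering of O’Brien’s written description, not code from the exhibit: the function and variable names are mine, the numeric placement of the character, run length, and string reference values is simplified, and the run length handling that interrupts this loop when a run is detected is omitted.

    def obrien_style_string_encode(data, begin, end, num_run_length_refs):
        """Sketch of the reference value (string) encoder as described above: character
        reference values cover the single characters from begin to end (O'Brien's BEGIN
        and END), a block of run length reference values is reserved, and string
        reference values are assigned above them as new strings are built from the input."""
        table = {chr(c): c - begin for c in range(begin, end + 1)}    # character reference values
        next_ref = (end - begin + 1) + num_run_length_refs            # string reference values start here
        output = []
        prev_string = data[0]                  # string represented by the previous reference value
        for ch in data[1:]:
            resultant = prev_string + ch       # combine the previous string with the next input byte
            if resultant in table:
                prev_string = resultant        # resultant string found: keep extending it
            else:
                output.append(table[prev_string])   # output the previous reference value
                table[resultant] = next_ref         # add the resultant string, assign the next reference value
                next_ref += 1
                prev_string = ch                    # the previous reference value becomes that of the final character
        output.append(table[prev_string])           # flush the reference value for the last string
        return output, table

In substance, these are the same steps as the LZ78 and LZW sketches above, which is the point of the comparison in the text.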
`
`40. Although O’Brien does not include a flow chart, the flow chart below
`
`can be created from the written description of O’Brien’s reference value encoder
`
`304. As can be seen, O’Brien’s reference value encoder uses the signature LZ78
`
`features of 1) reading the next input character; 2) adding the next input character to
`
`the current prefix string to build a new, combined string; 3) searching the
`
`dictionary to see if the new string is found; 4) if so, continuing by updating the
`
`prefix string, building the new string by adding one character at a time, and
`
`continuing to search the dictionary until the new st