IN THE UNITED STATES PATENT AND TRADEMARK OFFICE

Inventor(s): Patrick PIRIM
Patent Owner: Image Processing Technologies LLC
Reexam. Control No.: 90/014,056
Filed: December 15, 2017
Confirmation No.: 1361
Patent No.: 6,959,293
Issue Date: October 25, 2005
Application No.: 09/792,436
App. Filing Date: February 23, 2001
Title: METHOD AND DEVICE FOR AUTOMATIC VISUAL PERCEPTION
Examiner: Majid Banankhah
Art Unit: 3992

Mail Stop Ex Parte Reexam
Commissioner for Patents
P.O. Box 1450
Alexandria, Virginia 22313-1450
`
DECLARATION UNDER 37 C.F.R. § 1.132

Dear Examiner:

This Declaration Under 37 C.F.R. § 1.132 is in support of a Reply to Non-Final Office Action filed on June 26, 2018 in the above-referenced reexamination proceeding.

I, Alan Conrad Bovik, hereby declare that:

My Background
`
1. I hold a Ph.D. in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign (awarded in 1984). I also hold a Master's degree in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign (awarded in 1982).
`
`
2. I am a tenured full Professor and I hold the Cockrell Family Regents Endowed Chair at the University of Texas at Austin. My appointments are in the Department of Electrical and Computer Engineering, the Department of Computer Sciences, and the Department of Biomedical Engineering. I am also the Director of the Laboratory for Image and Video Engineering ("LIVE").
`
3. My research is in the general area of digital television, digital cameras, image and video processing, computational neuroscience, and modeling of biological visual perception. I have published over 800 technical articles in these areas and hold seven U.S. patents. I am also the author of The Handbook of Image and Video Processing, Second Edition (Elsevier Academic Press, 2005); Modern Image Quality Assessment (Morgan & Claypool, 2006); The Essential Guide to Image Processing (Elsevier Academic Press, 2009); The Essential Guide to Video Processing (Elsevier Academic Press, 2009); and numerous other publications.
`
4. I received the 2017 Edwin H. Land Medal from the Optical Society of America in September 2017 with the citation: "For substantially shaping the direction and advancement of modern perceptual picture quality computation, and for energetically engaging industry to transform his ideas into global practice."
`
5. I received a Primetime Emmy Award for Outstanding Achievement in Engineering Development from the Academy of Television Arts and Sciences, in October 2015, for my video quality prediction and monitoring models and algorithms, which are widely used throughout the global broadcast, cable, satellite, and internet television industries.
`
6. Among other awards and honors, I have received the 2013 IEEE Signal Processing Society's "Society Award," which is the highest honor accorded by that technical society ("for fundamental contributions to digital image processing theory, technology, leadership and education"). In 2005, I received the Technical Achievement Award of the IEEE Signal Processing Society, which is the highest technical honor given by the Society, for "broad and lasting contributions to the field of digital image processing"; and in 2008 I received the Education Award of the IEEE Signal Processing Society, which is the highest education honor given by the Society, for "broad and lasting contributions to image processing, including popular and important image processing books, innovative on-line courseware, and for the creation of the leading research and educational journal and conference in the image processing field."
`
7. My technical articles have been widely recognized as well. I received the 2009 IEEE Signal Processing Society Best Journal Paper Award for the paper "Image quality assessment: From error visibility to structural similarity," published in IEEE Transactions on Image Processing, volume 13, number 4, April 2004; this same paper received the 2017 IEEE Signal Processing Society Sustained Impact Paper Award as the most impactful paper published over a period of at least ten years. I also received the 2013 Best Magazine Paper Award for the paper "Mean squared error: Love it or leave it? A new look at signal fidelity measures," published in IEEE Signal Processing Magazine, volume 26, number 1, January 2009; the IEEE Circuits and Systems Society Best Journal Paper Prize for the paper "Video quality assessment by reduced reference spatio-temporal entropic differencing," published in the IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 4, pp. 684-694, April 2013; and the 2017 IEEE Signal Processing Letters Best Paper Award for the paper A. Mittal, R. Soundararajan and A. C. Bovik, "Making a 'completely blind' image quality analyzer," published in the IEEE Signal Processing Letters, vol. 20, no. 3, pp. 209-212, March 2013.
`
8. I received the Google Scholar Classic Paper citation twice in 2017: for the paper "Image information and visual quality," published in the IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430-444, February 2006 (the main algorithm developed in the paper, called the Visual Information Fidelity (VIF) Index, is a core picture quality prediction engine used to quality-assess all encodes streamed globally by Netflix), and for "An evaluation of recent full reference image quality assessment algorithms," published in the IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440-3451, November 2006 (the picture quality database and human study described in the paper, the LIVE Image Quality Database, has been the standard development tool for picture quality research since its first introduction in 2003). Google Scholar Classic Papers are very highly cited papers that have stood the test of time, and are among the ten most-cited articles in their area of research over the ten years since their publication.
`
9. I have also been honored by other technical organizations, including the Society of Photo-Optical Instrumentation Engineers (SPIE), from which I received the Technology Achievement Award (2013) "For Broad and Lasting Contributions to the Field of Perception-Based Image Processing," and the Society for Imaging Science and Technology, which accorded me Honorary Membership, the highest recognition given by that Society to a single individual, "for his impact in shaping the direction and advancement of the field of perceptual image processing." I was also elected as a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) "for contributions to nonlinear image processing" in 1995, a Fellow of the Optical Society of America (OSA) for "fundamental research contributions to and technical leadership in digital image and video processing" in 2006, and as a Fellow of SPIE for "pioneering technical, leadership, and educational contributions to the field of image processing" in 2007.
`
10. Among other relevant research, I have worked with the National Aeronautics and Space Administration ("NASA") to develop high-compression image sequence coding and animate vision technology, and on various military projects for the Air Force Office of Scientific Research, Phillips Air Force Base, the Army Research Office, and the Department of Defense. These projects have focused on developing local spatio-temporal analysis in vision systems, scalable processing of multi-sensor and multi-spectral imagery, image processing and data compression tools for satellite imaging, AM-FM analysis of images and video, the scientific foundations of image representation and analysis, computer vision systems for automatic target recognition and automatic recognition of human activities, vehicle structure recovery from a moving air platform, passive optical modeling, and detection of spiculated masses and architectural distortions in digitized mammograms. My research has also recently been funded by Netflix, Qualcomm, Facebook, Texas Instruments, Intel, Cisco, and the National Institute of Standards and Technology (NIST) for research on image and video quality assessment. I have also received numerous grants from the National Science Foundation for research on image and video processing and on computational vision.
`
11. Additional details about my employment history, fields of expertise, and publications are further described in my curriculum vitae, which is attached as Exhibit A to this report.
`
12. In the present reexamination proceeding, I have been retained by Image Processing Technologies, LLC to provide my expert opinion. I am also acting as an expert in Image Processing Technologies, LLC v. Samsung Electronics Co., Ltd. et al., Case No. 2:16-cv-505 ("the Samsung litigation") (E.D. Tex.) (pending), retained by Image Processing Technologies, LLC. I am being compensated at my customary rate for expert consulting work. My pay in each of these matters is not dependent upon the outcome of the matter.
`
13. I have carefully reviewed the following and am considering these items from the perspective of a person of ordinary skill in the art at the time of the invention:

• Pirim United States Patent No. 6,959,293 (hereinafter "the '293 patent");

• International Patent Publication WO 99/36893, published July 22, 1999 (hereinafter "Pirim PCT");

• Siegel, Howard J., et al., "PASM: A Partitionable SIMD/MIMD System for Image Processing and Pattern Recognition," IEEE Transactions on Computers, Vol. C-30, No. 12 (December 1981) (hereinafter "Siegel"); and

• Hirota et al. United States Patent No. 6,118,895 (hereinafter "Hirota").
`
The '293 Patent
`
14. As the '293 patent teaches, using the same two classification signals for each of two histogram calculation units (HCUs) is important because this is what allows the classification results from the two units to be taken into account at the time that data associated with each pixel is being evaluated for addition to the two histograms. As a simple example, given two HCUs that each look at 24-bit color values (e.g., the 8 most significant bits being for the color's red component, the 8 least significant bits being for the color's blue component, and the 8 middle bits being for the color's green component), the invention as recited in Claim 1 could allow for one histogram to be calculated on pixels having a strong red component (e.g., the red bits corresponding to a number higher than 128) and a weak green component (e.g., the green bits corresponding to a number lower than 128), and another histogram to be calculated for pixels having a strong red component and a strong green component (e.g., the green bits corresponding to a number higher than 128). This would allow a simultaneous evaluation of the data from two different perspectives, which is an innovative approach that, in the context of image processing, would enable completion of an analysis in fewer computational steps.
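For purposes of illustration only, the following minimal sketch (my own, in Python; it is not code from the '293 patent, and the bit layout and the threshold of 128 simply follow the example above) shows how two histograms can be accumulated in a single pass over the pixel data, with each histogram's validation depending on the same two binary classification signals:

    # Illustrative sketch only (not from the '293 patent): two histogram
    # calculation units (HCUs) validated by the SAME two binary
    # classification signals, C1 and C2, per the example above.

    def red_bits(pixel24):
        return (pixel24 >> 16) & 0xFF    # 8 most significant bits: red

    def green_bits(pixel24):
        return (pixel24 >> 8) & 0xFF     # 8 middle bits: green

    def two_histograms(pixels):
        hist1 = [0] * 256  # strong red AND weak green
        hist2 = [0] * 256  # strong red AND strong green
        for p in pixels:
            c1 = red_bits(p) > 128       # first binary classification signal
            c2 = green_bits(p) > 128     # second binary classification signal
            if c1 and not c2:            # first validation signal (V1)
                hist1[red_bits(p)] += 1
            if c1 and c2:                # second validation signal (V2)
                hist2[red_bits(p)] += 1
        return hist1, hist2

Because both validation decisions are made while each pixel's data is present, the two histograms are evaluated simultaneously, from two different perspectives, in one pass over the data.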
`
Ground #1

15. The data values, data(v), input to Pirim PCT's histogram formation units are received serially pixel-by-pixel. This is clear from Pirim PCT at page 28, where it states:

    Validation units 30-35 receive the classification information in parallel from all classification units in histogram formation blocks 24-29. Each validation unit generates a validation signal which is communicated to its associated histogram formation block 24-29. The validation signal determines, for each incoming pixel, whether the histogram formation block will utilize that pixel in forming its histogram.

(Emphasis added).
`
16. The histogram formation units of Pirim PCT complete their calculation of the corresponding histogram immediately as each pixel's data value is received. This can clearly be seen by referring to FIG. 14 of Pirim PCT, which is copied below.
`
memory location for a bin of the histogram is updated as each color value is presented, and the entire histogram is completed, in this example, while the color value for the last pixel in the sensor is present on data(v).
`
19. Because the histogram for data(v) is created, updated, and completed immediately as the data is received by the histogram formation units of Pirim PCT, it is simply not possible to increase the speed of Pirim PCT by any proposed combination of it and Siegel. The histogram cannot be completed before the data arrives.
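The serial, per-pixel character of this histogram formation can be expressed in a short sketch (again my own illustration, assuming 8-bit data(v) values and one bin update per incoming pixel):

    # Illustrative sketch only: a Pirim PCT-style histogram formation
    # unit performs one memory (bin) update per incoming pixel, so the
    # histogram is complete the instant the last data(v) value arrives.

    def stream_histogram(data_v_stream, n_bins=256):
        bins = [0] * n_bins
        for value in data_v_stream:   # pixels arrive serially
            bins[value] += 1          # single read-modify-write per pixel
        return bins                   # already complete when the stream ends

There is no further computation to perform after the last pixel; the histogram already exists in memory.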
`
20. FIG. 12 of Pirim PCT (copied below) shows six histogram units to process various parameters. Adding additional histogram units would not increase the processing speed of the circuit of FIG. 12 of Pirim PCT. This is the case because, with or without additional histogram formation units, the histograms would be completed almost instantaneously as the last piece of data is received at the memory device in the histogram units of FIG. 12 of Pirim PCT. Even with multiple histogram units, each unit would operate on the same data at the same speed as what is shown in and described above in connection with FIG. 14 of Pirim PCT.
`
formation unit would have completed its histogram. When the last pixel for the second row is received, the corresponding histogram formation unit would have completed its histogram. However, these two histograms would still need to be combined together to provide the same histogram that would be generated by a histogram formation unit of Pirim PCT without modification. To do that, the values of the bins in each histogram would need to be read out of the memories, added together, and written back to a memory. All of this would have to happen after the last pixel is received. This would make the modified Pirim PCT significantly slower, add significant complexity, and provide no benefit over what is described in Pirim PCT.
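The extra post-pass work required by such a modification can be seen in the following sketch (my own illustration; the split of the frame into two halves is hypothetical):

    # Illustrative sketch only: two histogram formation units each
    # handle part of the frame, but a merge pass is still required
    # AFTER the last pixel arrives, unlike the unmodified unit.

    def count_bins(values, n_bins=256):
        bins = [0] * n_bins
        for v in values:
            bins[v] += 1
        return bins

    def merged_histogram(first_half, second_half, n_bins=256):
        hist_a = count_bins(first_half, n_bins)   # done with its last pixel
        hist_b = count_bins(second_half, n_bins)  # done with its last pixel
        # Combining can only begin after all data has been received: each
        # bin must be read from both memories, added, and written back.
        return [a + b for a, b in zip(hist_a, hist_b)]

The merge loop runs only after the final pixel has been received, which is precisely the moment at which the unmodified Pirim PCT unit is already finished.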
`
22. Using two or more copies of the system in FIG. 12 of Hirota to form two or more histograms of each parameter also would not increase processing speed. If two or more histograms were calculated for different portions of a sensor, those histograms could not be completed until all of the data is received. Then, after the two histograms are completed, the histograms would need to be combined together. This combining would happen after an unmodified version of Pirim PCT would have already completed calculating the desired histogram.
`
23. Adding an additional element 28a, adjacent and similar to element 28 in FIG. 12 of Pirim PCT, to process x-position data for the pixels and process the same parameter (x-position) for different segments of pixels would not work. As shown in the illustration below, in Claim 1 of the '293 patent, a first validation signal (V1) is produced so that a calculation of a first histogram (H1) depends on a first binary classification signal (C1) and a second binary classification signal (C2), and a second validation signal (V2) is produced so that a calculation of a second histogram (H2) depends on the first binary classification signal (C1) and the second binary classification signal (C2). As shown in the example below, the first classification signal
`
24. For the same reasons that multiple histogram formation units treating parameters of different pixels would not work, a combination of Pirim PCT with multiple processors of Siegel, which each simultaneously process a group of pixels, would not work, because the validation units in Pirim PCT are configured to generate validation signals based on different classifications of a single pixel, not based on different classifications of different pixels. If multiple processors of Siegel were operating simultaneously on different pixels, each would need to provide a classification output to Pirim PCT's data bus. These classification signals would then be used to generate validation signals. However, generating validation signals based on classifications of different pixels does not make sense, as the purpose of the validation signal is to determine whether a single pixel should be included in a histogram. Such validation signals could not be used, for each incoming pixel, to determine whether the histogram formation block will utilize that pixel in forming its histogram.
`
Ground #2

25. Pirim PCT is directed to a mechanism for detecting drowsiness of a user by monitoring the eyes of the user.
`
26. Hirota, on the other hand, is directed to a copying machine that can detect whether a color document or a black and white document is being copied.
`
27. Determining whether a document being copied is color or black and white has absolutely nothing to do with determining whether a user is drowsy.
`
28. Pirim PCT does not have a stated objective of disclosing a generic image processing system that can be used for a variety of applications. Rather, Pirim PCT is quite clear that it is taking a generic image processing system and modifying that system for the specific application of detecting drowsiness.
`
29. For example, Pirim PCT's "Field of the Invention" states: "The present invention relates generally to an image processing system, and more particularly to the use of a generic image processing system to detect drowsiness." (Emphasis added). This statement is analogous to saying that a general-purpose computer can be programmed and modified to form a special-purpose computer. Clearly, in such a case, the goal is not to provide a more flexible general-purpose computer, but rather to provide a less flexible special-purpose computer.
`
30. As another example, the last two lines on page 2 of Pirim PCT state that "[i]t would be desirable to apply such a generic image processing system [as described in PCT/FR97/01354 and PCT/EP98/05383] to detect the drowsiness of a person." (Emphasis added). Again, it is clear here that the goal is not to provide a generic image processing system, but rather to take a generic image processing system and make it suitable for a specific application (i.e., detecting the drowsiness of a person).
`
31. As still another example, the first paragraph of the Detailed Description of Pirim PCT (on page 10) states: "The present invention discloses an application of the generic image processing system disclosed in commonly-owned PCT Application Serial Nos. PCT/FR97/01354 and PCT/EP98/05383, the contents of which are incorporated herein by reference[,] for detection of various criteria associated with the human eye, and especially to detection that a driver is falling asleep while driving a vehicle." (Emphasis added).
`
32. As yet another example, the last paragraph of the Detailed Description of Pirim PCT (on p. 57), which is clearly directed to defining the scope of Pirim PCT's disclosure, states:

    It will be appreciated that while the invention has been described with respect to detection of the eyes of a driver using certain criteria, the invention is capable of detecting any criteria of the eyes using any possible measurable characteristics of the pixels, and that the characteristics of a driver falling asleep may be discerned from any other information in the histograms formed by the invention. Also, while the invention has been described with respect to detecting driver drowsiness, it is applicable to any application in which drowsiness is to be detected.

(Emphasis added). As can be seen from the underlined portions, Pirim PCT makes clear that, while its invention has been described with respect to driver drowsiness, in the broadest application, Pirim PCT's invention can be used for "any application in which drowsiness is to be detected." If Pirim PCT had the "objective of disclosing a 'generic' image processing system," this final paragraph of Pirim PCT would not specifically state that its invention is "applicable to any application in which drowsiness is to be detected." Rather, it would broadly state that it could be used as a generic image processing system.
`
33. Pirim PCT at pp. 10-11 states: "It will be appreciated that when used in non-vehicular applications, the camera may be mounted in any desired fashion to detect the specific criteria of interest." When read in the context of the remainder of the application, which is very clearly directed to detecting drowsiness, it is quite clear that this statement is referring to using Pirim PCT's invention to detect drowsiness in non-vehicular applications (which could be useful for detecting whether a night watchman is falling asleep, for example).
`
34. Adding features to make Pirim PCT more generic would add no benefit to the drowsiness detection system of Pirim PCT, and making such modifications would only make Pirim PCT more expensive, because such modifications would require additional hardware in the way of devices to couple multiple of its histogram formation units to the same parameter and/or additional histogram formation units.
`
35. Even if one were to conclude that Pirim PCT was directed to a generic image processing system, which I believe not to be the case, it is simply nonsensical to use Pirim PCT's system to detect whether a piece of paper is color or black and white. Pirim PCT is configured to detect complex video of a moving user who could possibly be drowsy. Hirota, on the other hand, processes simple images of still documents to determine whether the images reflect color or black and white content. Applying the complex system of Pirim PCT to perform a simple task like color or black and white document detection would be wasteful, as the system of Pirim PCT is much more expensive to implement than the system of Hirota.
`
36. Two portions of Hirota describe color document and black and white document detection in connection with FIGS. 4 and 13 of Hirota. First, FIG. 4 is copied below. Hirota, at column 8, lines 7-22, describes the operation of the circuit of its FIG. 4 as follows:

    Two histogram memories 202 and 204 are used for automatic color selection and for document type determination. It is noted that data on all the dots can be written to memory 202 because the WE input of the first histogram memory 202 is always kept at L level. Thus, the first histogram memory 202 is used to generate a value histogram for a document simply. On the other hand, the second one 204 generates a histogram of achromatic dots in the document. In order to detect an achromatic dot, a minimum circuit 212 and a maximum circuit 214 detect a minimum (MIN) and a maximum (MAX) of input R, G and B data, and a subtraction circuit 216 calculates a difference between them. Then, a comparator 218 compares the difference (MAX-MIN) with a reference level SREF, and if the difference is smaller than the reference level, data is allowed to be written to the third histogram memory 204.
`
would need to be configured to treat a value VH output from value generator 200 of Hirota. As described in Hirota at column 7, lines 27-34:

    value generator 200 receives ... 8-bit R, G and B data and converts them to a value signal VH according to a following equation to be sent as an address signal to the first and second histogram memories 202 and 204:

    VH = 0.31640625*R + 0.65625*G + 0.02734375*B    (1)

    The value signal VH obtained resembles human sensitivity for observing an object. As explained above, the value data VH is used instead of the R, G and B data because the value data are used in the automatic exposure processing, as will be explained later.
`
40. Next, as done by Hirota's elements 212, 214, 216, and 218, for each pixel, Pirim PCT would need to be configured to calculate a maximum of the three 8-bit R, G, and B values for that pixel and a minimum of the three 8-bit R, G, and B values for that pixel, calculate a difference between the maximum and the minimum, and finally compare the difference to a reference level SREF.
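The processing that Pirim PCT would have to replicate can be sketched as follows (my own illustration of Hirota's elements 200 and 212-218; Hirota does not fix a numeric value for SREF, so the value used here is hypothetical):

    # Illustrative sketch only, following Hirota at cols. 7-8.

    SREF = 30  # reference level; hypothetical value, not specified numerically

    def value_vh(r, g, b):
        # Element 200: value signal VH per Hirota's equation (1)
        return 0.31640625 * r + 0.65625 * g + 0.02734375 * b

    def achromatic_dot(r, g, b, sref=SREF):
        # Elements 212 and 214: MIN and MAX of the R, G, B data;
        # element 216: their difference; element 218: compare with SREF.
        return (max(r, g, b) - min(r, g, b)) < sref

Nothing in Pirim PCT's histogram formation path performs the MIN, MAX, subtraction, or comparison steps of the second function.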
`
41. However, no mechanism is available in Pirim PCT to calculate the maximum of the R, G, and B values used to calculate values VH, to calculate the minimum of the R, G, and B values used to calculate values VH, or the difference between the maximum and the minimum. As such, it would not be possible to compare such a difference to a reference level SREF as done in Hirota.
`
42. Moreover, because the classifier in Pirim PCT would be connected to the same value VH upon which Pirim PCT's memories would be calculating histograms as described in Pirim PCT, the classifier would be unable to determine difference values equivalent to the output of element 218 of Hirota based on values VH, so that those difference values could be compared to a reference level. For example, if a value VH of 148.6 were provided to the classifier of Pirim PCT, there would be no way to determine whether this value VH was the result of a set of R, G, B values {137, 150, 250} (which would result in a value VH of 148.6 at the input of the classifier, but also result in a difference value of 113 at the output of Hirota's element 218) or {250, 100, 143} (which would result in a value VH of 148.6 at the input of the classifier of Pirim PCT, but also result in a difference value of 150 at the output of Hirota's element 218).
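The arithmetic underlying this example can be verified directly (my own check of the two RGB triples above):

    # Both triples map to (approximately) the same VH, yet yield very
    # different MAX-MIN differences, so VH alone cannot recover the
    # difference value that Hirota compares against SREF.
    for r, g, b in [(137, 150, 250), (250, 100, 143)]:
        vh = 0.31640625 * r + 0.65625 * g + 0.02734375 * b
        diff = max(r, g, b) - min(r, g, b)
        print(f"RGB=({r},{g},{b})  VH={vh:.1f}  MAX-MIN={diff}")
    # prints: RGB=(137,150,250)  VH=148.6  MAX-MIN=113
    #         RGB=(250,100,143)  VH=148.6  MAX-MIN=150

The mapping from (R, G, B) to VH is many-to-one, so the difference (MAX-MIN) cannot be reconstructed from VH.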
`
43. As discussed above, FIG. 13 of Hirota shows an edge detector that is used to detect edges of content so that the edges are ignored for determining whether pixels of the content are black and white or color. While Pirim PCT includes an edge detector for detecting the edge of the head of a user, that edge detector could not be used to detect the edges of content on a static document. The reason for this is that, as explained at Pirim PCT's page 46, its edge detector detects the edge of a user's head in video by observing sudden changes in a pixel from one frame of the video to the next as the head moves. For example, if a pixel is at the edge of a user's face in a video frame, and the user suddenly moves to the right, the pixel will suddenly change in luminance and/or hue. Pirim PCT can detect this and identify the corresponding pixel as being an edge. In contrast, the content in Hirota is not moving with respect to its background. That is, the content is static on the paper. Thus, Hirota uses an edge detector that looks at successive (adjacent) pixels and presumably looks for changes in the value VH output from its element 220. See Hirota, FIG. 12 and column 16, line 67 through column 17, line 4. Because of the differences in their operation, the edge detector in Pirim PCT cannot operate to detect the same edges as those detected by Hirota. Accordingly, Pirim PCT modified to detect color and black and white documents as is done in Hirota would not work.
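The difference between the two detectors can be made concrete with a brief sketch (my own illustration; the frames are assumed to be two-dimensional arrays of luminance values, and the thresholds are hypothetical):

    # Illustrative sketch only: temporal vs. spatial edge detection.

    def temporal_edge(prev_frame, curr_frame, x, y, threshold=25):
        # Pirim PCT style: a sudden change at the SAME pixel between
        # successive frames (e.g., as the head moves) marks an edge.
        return abs(curr_frame[y][x] - prev_frame[y][x]) > threshold

    def spatial_edge(frame, x, y, threshold=25):
        # Hirota style: a change between ADJACENT pixels within one
        # still image marks an edge of the static content.
        return abs(frame[y][x] - frame[y][x - 1]) > threshold

A static document produces no frame-to-frame changes, so the temporal test never fires on it; this is why the two detectors are not interchangeable.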
`
`
Ground #3

44. As discussed above, the address (ADR) inputs to memories 202 and 204 of Hirota are connected to the output of value generator 200. As described in Hirota at column 7, lines 27-34:

    value generator 200 receives ... 8-bit R, G and B data and converts them to a value signal VH according to a following equation to be sent as an address signal to the first and second histogram memories 202 and 204:

    VH = 0.31640625*R + 0.65625*G + 0.02734375*B    (1)

    The value signal VH obtained resembles human sensitivity for observing an object. As explained above, the value data VH is used instead of the R, G and B data because the value data are used in the automatic exposure processing, as will be explained later.

Hirota forms histograms in memories 202 and 204 based on value signal VH, which resembles human sensitivity for observing an object. The alleged parameter is not color.
`
45. The signal output by the combination of elements 212, 214, 216, and 218 does not result from a comparison of a pixel's value VH; rather, it is based on a comparison of the difference between the maximum of a pixel's B, G, and R values and the minimum of the pixel's B, G, and R values (MAX(B,G,R) - MIN(B,G,R)) and a reference level SREF. This comparison happens in Hirota's element 218.
`
46. The following illustrates how a value VH and the difference produced at the output of element 216 are not the same thing. Assume that the color of a pixel is represented by three 8-bit R, G, B values. For example, a pixel may have R, G, B values {10, 150, 250}. The value VH for this pixel would be:

VH = 0.31640625*(R=10) + 0.65625*(G=150) + 0.02734375*(B=250) = 108.4375

On the other hand, the difference value at the output of element 216 would be:

(MAX(B,G,R) - MIN(B,G,R)) = (MAX(250,150,10) - MIN(250,150,10)) = (250 - 10) = 240
`
`
47. Clearly, these numbers (108.4375 and 240) show that the difference output by element 216 (e.g., 240) is not the same as the parameter upon which memories 202 and 204 calculate histograms (e.g., 108.4375).
`
48. Hirota provides a mechanism to "evaluate only part of an image or document, rather than the entire document." More particularly, sampling interval circuit 206 of Hirota already provides a mechanism to evaluate only part of an image or document in Hirota. The circuit is described in Hirota at column 7, lines 55-57, where it states that "the sampling interval circuit 206 allows generation of a histogram only in the document area determined by the signals HD and VD."
`
49. An AND gate is clearly not "at least a register and a comparator." For example, a register can store data, whereas an AND gate cannot. Moreover, an AND gate is clearly not an equivalent of a register and a comparator, since it does not store anything nor does it compare.
`
50. None of elements 212 (which determines a minimum of three values at its inputs), 214 (which determines a maximum of three values at its inputs), 216 (which determines a difference of two values at its inputs), or 220 (which detects edges) of Hirota is a register. Moreover, none of these elements is an equivalent of a register, as they store nothing.
`
`
51. I hereby further declare that all statements made herein of my own knowledge are true, and that all statements made on information and belief are believed to be true; and, further, that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under Section 1001 of Title 18 of the United States Code and that such willful false statements may jeopardize the validity of the '293 patent.