
UNITED STATES PATENT AND TRADEMARK OFFICE
____________________


BEFORE THE PATENT TRIAL AND APPEAL BOARD
____________________


SAMSUNG ELECTRONICS CO., LTD. AND SAMSUNG ELECTRONICS AMERICA, INC.,
Petitioner,

v.

IMAGE PROCESSING TECHNOLOGIES, LLC,
Patent Owner.
____________________

CASE IPR2017-00353
Patent No. 8,983,134
____________________

DECLARATION OF DR. ALAN BOVIK IN SUPPORT OF PATENT OWNER
RESPONSE PURSUANT TO 37 C.F.R. § 42.120


Exhibit 2007
IPR2017-00353
Petitioner – Samsung Electronics Co., Ltd., et al.
Patent Owner – Image Processing Technologies LLC

TABLE OF CONTENTS

I.   Introduction ......................................................................................................... 1
     A. Background and Qualifications ................................................................... 2
     B. Materials Considered ................................................................................... 7
     C. Claim Construction ...................................................................................... 8
        1. “forming at least one histogram . . . said at least one histogram referring to
           classes defining said target” ................................................................... 9
        2. “wherein forming the at least one histogram further comprises determining
           X minima and maxima and Y minima and maxima of boundaries of the target.” ... 14
II.  Summary of Opinions ......................................................................................... 15
III. My Analysis of Claims 1 and 2 .......................................................................... 15
     A. Summary ..................................................................................................... 15
     B. Discussion of References ............................................................................ 16
        1. Gilbert ................................................................................................... 16
        2. Hashima ................................................................................................ 18
        3. Ueno ...................................................................................................... 22
     C. The Asserted References Do Not Teach or Suggest All Elements of
        the ’134 Patent ............................................................................................ 24
        1. Gilbert Does Not Teach or Suggest Claim Elements [1a], [1b], and [1c] ... 25
        2. Hashima Does Not Teach or Suggest Claim Elements [1a] and [1c] ........ 32
        3. Ueno Does Not Teach or Suggest Claim Element [1c] ............................. 37
     D. A POSA Would Not Have Selected and Combined the Asserted
        References .................................................................................................. 40
        1. A POSA Would Not Have Selected and Combined Gilbert and Hashima ... 40
        2. A POSA Would Not Have Selected and Combined Gilbert and Ueno ....... 45
IV.  Concluding Statement ........................................................................................ 48

LIST OF APPENDICES

APPENDIX A     Dr. Alan Bovik Curriculum Vitae
I hereby declare that all the statements made in this Declaration are of my own knowledge and true; that all statements made on information and belief are believed to be true; and further that these statements were made with the knowledge that willful false statements and the like so made are punishable by fine or imprisonment, or both, under 18 U.S.C. § 1001 and that such willful false statements may jeopardize the validity of the application or any patent issuing thereon.

I declare under penalty of perjury under the laws of the United States of America that the following is true and correct.

Dated: August 25, 2017

Respectfully submitted,


___________________
Alan Bovik
I. INTRODUCTION

1. I have been retained by counsel for Image Processing Technologies LLC (“Image Processing” or “Patent Owner”) as an expert consultant in regard to inter partes review proceeding IPR2017-00353 for U.S. Patent No. 8,983,134.

2. In IPR2017-00353, I understand that Petitioners, Samsung Electronics Co., Ltd. and Samsung Electronics America, Inc. (“Samsung” or “Petitioners”), challenged the validity of Claims 1 and 2 of the ’134 Patent.

3. I understand that the Board instituted an inter partes review on the following Grounds: Claims 1 and 2 as obvious under 35 U.S.C. § 103(a) over Gilbert in view of Hashima; and Claims 1 and 2 as obvious under 35 U.S.C. § 103(a) over Ueno in view of Gilbert. Paper No. 12 (Institution Decision) at 29.

4. I was asked to consider whether the instituted claims of U.S. Patent No. 8,983,134 (“the ’134 Patent”) (Ex. 1001), which are claims 1 and 2, would have been obvious to a person of ordinary skill in the art (“POSA”) as of the date of the invention.

5. Based on my analysis of the ’134 Patent and my understanding of the state of the relevant prior art, as well as the specific references relied upon by the Petitioner for the grounds that were instituted by the Board, it is my opinion that the challenged claims would not have been obvious to a POSA as of the date of the invention.
A. Background and Qualifications

6. This is a summary of my background and qualifications. I set forth my background in more detail in my Curriculum Vitae, which is attached as Appendix A.

7. I hold a Ph.D. in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign (awarded in 1984). I also hold a Master's degree in Electrical and Computer Engineering from the University of Illinois, Urbana-Champaign (awarded in 1982).

8. I am a tenured full Professor and I hold the Cockrell Family Regents Endowed Chair at the University of Texas at Austin. My appointments are in the Department of Electrical and Computer Engineering, the Department of Computer Sciences, and the Department of Biomedical Engineering. I am also the Director of the Laboratory for Image and Video Engineering (“LIVE”).

9. My research is in the general areas of digital television, digital cameras, image and video processing, computational neuroscience, and modeling of biological visual perception. I have published over 800 technical articles in these areas and hold seven U.S. patents. I am also the author of The Handbook of Image and Video Processing, Second Edition (Elsevier Academic Press, 2005); Modern Image Quality Assessment (Morgan & Claypool, 2006); The Essential Guide to Image Processing (Elsevier Academic Press, 2009); The Essential Guide to Video Processing (Elsevier Academic Press, 2009); and numerous other publications.
10. I will receive the 2017 Edwin H. Land Medal from the Optical Society of America in September 2017, with the citation: “For substantially shaping the direction and advancement of modern perceptual picture quality computation, and for energetically engaging industry to transform his ideas into global practice.” I also received a Primetime Emmy Award for Outstanding Achievement in Engineering Development from the Academy of Television Arts and Sciences, in October 2015, for the widespread use of my video quality prediction and monitoring models and algorithms throughout the global broadcast, cable, satellite, and internet television industries.

11. Among other awards and honors, I have received the 2013 IEEE Signal Processing Society's “Society Award,” which is the highest honor accorded by that technical society (“for fundamental contributions to digital image processing theory, technology, leadership and education”). In 2005, I received the Technical Achievement Award of the IEEE Signal Processing Society, which is the highest technical honor given by the Society, for “broad and lasting contributions to the field of digital image processing”; and in 2008 I received the Education Award of the IEEE Signal Processing Society, which is the highest education honor given by the Society, for “broad and lasting contributions to image processing, including popular and important image processing books, innovative on-line courseware, and for the creation of the leading research and educational journal and conference in the image processing field.”
12. My technical articles have been widely recognized as well. They include the 2009 IEEE Signal Processing Society Best Journal Paper Award for the paper “Image quality assessment: From error visibility to structural similarity,” published in IEEE Transactions on Image Processing, volume 13, number 4, April 2004 (this same paper received the 2017 IEEE Signal Processing Society Sustained Impact Paper Award as the most impactful paper published over a period of at least ten years); the 2013 Best Magazine Paper Award for the paper “Mean squared error: Love it or leave it? A new look at signal fidelity measures,” published in IEEE Signal Processing Magazine, volume 26, number 1, January 2009; and the IEEE Circuits and Systems Society Best Journal Paper Prize for the paper “Video quality assessment by reduced reference spatio-temporal entropic differencing,” published in the IEEE Transactions on Circuits and Systems for Video Technology, vol. 23, no. 4, pp. 684-694, April 2013.

13. I received the Google Scholar Classic Paper Award twice in 2017: for the paper “Image information and visual quality,” published in the IEEE Transactions on Image Processing, vol. 15, no. 2, pp. 430-444, February 2006 (the main algorithm developed in the paper, called the Visual Information Fidelity (VIF) Index, is a core picture quality prediction engine used to quality-assess all encodes streamed globally by Netflix), and for “An evaluation of recent full reference image quality assessment algorithms,” published in the IEEE Transactions on Image Processing, vol. 15, no. 11, pp. 3440–3451, November 2006 (the picture quality database and human study described in the paper, the LIVE Image Quality Database, has been the standard development tool for picture quality research since its first introduction in 2003). Google Scholar Classic Papers are very highly cited papers that have stood the test of time, and are among the ten most-cited articles in their area of research over the ten years since their publication.
14. I have also been honored by other technical organizations, including the Society for Photo-optical and Instrumentation Engineers (SPIE), from which I received the Technology Achievement Award (2013) “For Broad and Lasting Contributions to the Field of Perception-Based Image Processing,” and the Society for Imaging Science and Technology, which accorded me Honorary Membership, the highest recognition that Society gives to a single individual, “for his impact in shaping the direction and advancement of the field of perceptual image processing.” I was also elected a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) “for contributions to nonlinear image processing” in 1995, a Fellow of the Optical Society of America (OSA) for “fundamental research contributions to and technical leadership in digital image and video processing” in 2006, and a Fellow of SPIE for “pioneering technical, leadership, and educational contributions to the field of image processing” in 2007.
15. Among other relevant research, I have worked with the National Aeronautics and Space Administration (“NASA”) to develop high-compression image sequence coding and animated vision technology, and on various military projects for the Air Force Office of Scientific Research, Phillips Air Force Base, the Army Research Office, and the Department of Defense. These projects have focused on developing local spatio-temporal analysis in vision systems, scalable processing of multi-sensor and multi-spectral imagery, image processing and data compression tools for satellite imaging, AM-FM analysis of images and video, the scientific foundations of image representation and analysis, computer vision systems for automatic target recognition and automatic recognition of human activities, vehicle structure recovery from a moving air platform, passive optical modeling, and detection of spiculated masses and architectural distortions in digitized mammograms. My research has also recently been funded by Netflix, Qualcomm, Texas Instruments, Intel, Cisco, and the National Institute of Standards and Technology (NIST) for research on image and video quality assessment. I have also received numerous grants from the National Science Foundation for research on image and video processing and on computational vision.

16. Additional details about my employment history, fields of expertise, and publications are further described in my curriculum vitae, which is attached as Appendix A to this declaration.
B. Materials Considered

17. For time spent in connection with this case, I am being compensated at my customary rate of $500/hour. My compensation is not dependent upon the outcome of this petition or any issues involved in or related to the ’134 Patent, and I have no other financial stake in this matter. I have no financial interest in, or affiliation with, any of the real parties in interest or the Patent Owner.

18. The materials I considered include the ’134 Patent and the prosecution history for the ’134 Patent, the Petition from Samsung for inter partes review (Paper No. 2), the Patent Trial and Appeal Board (“PTAB”) decision to institute inter partes review in IPR2017-00353 (Paper No. 12), and IPT's Preliminary Response (Paper No. 6). I also considered the materials that I refer to and cite in this declaration, and, to the extent I considered them relevant, the materials provided by the Petitioner.

19. In addition, I have drawn on my experience and knowledge, as discussed above and described more fully in my CV.

20. The opinions I express herein are given from the point of view of a person of ordinary skill in the art, as described above, at the time of the invention of the ’134 Patent. Even if I do not repeat this explicitly, this is the perspective that I applied in my analysis and in this declaration, unless I indicate otherwise.
C. Claim Construction

21. I understand that the claims and specification of a patent must be read and construed through the eyes of a person of ordinary skill in the art at the time of the priority date of the claims.

22. I further understand that the claim construction standard that applies for the purposes of this proceeding is the Phillips standard, under which claim terms are given the meaning that the term would have to a person of ordinary skill in the art in question at the time of the invention. I understand further that a claim term's meaning can take into account both intrinsic evidence (the claims, specification, and prosecution history) and extrinsic evidence, such as dictionary definitions. I have applied this standard in the claim constructions I have set forth below.

23. The Board has construed the term “forming at least one histogram of the pixels in the one or more of a plurality of classes in the one or more of a plurality of domains” not to be limited to “forming at least one histogram of the pixels in two or more classes that are in two or more domains.” Paper No. 12 at 10. I have applied the Board's construction in my analysis.

24. In conducting my analysis of the challenged claims of the ’134 patent, I have applied the claim constructions below consistent with the Phillips standard. Elsewhere in my analysis, except when I state otherwise, I have applied the ordinary meaning of claim terms as they are used in the specification, under the Phillips standard.
1. “forming at least one histogram . . . said at least one histogram referring to classes defining said target”

25. I understand that Patent Owner has proposed a claim construction for the term “forming at least one histogram . . . said at least one histogram referring to classes defining said target” in claims 1 and 2 of the ’134 patent of “forming at least one histogram . . . said at least one histogram being formed of pixels in the one or more classes that define said target.” I understand that Patent Owner has construed this term to mean that the “at least one histogram” is composed only of pixels in at least one class defining said target. I have reviewed and agree with this proposed construction.[1]

[1] As discussed above in paragraph 23, I understand the Board has held that “one or more classes” is sufficient for claim 1. I have applied this construction in my analysis.

26. Patent Owner's proposed construction is consistent with my opinion of how a POSA would have understood the claim term at the time of the patent, as well as with the embodiments I reviewed in the specification of the ’134 patent.

27. A POSA would interpret a histogram “referring to classes defining said target” to apply a limitation on the histogram being formed in claim 1. Specifically, it is my opinion that, in light of the patent specification and claim 1 as a whole, a POSA would construe this term to mean that the histogram must be made up only of pixels belonging to classes which “define” the target, i.e., classes that correspond to target characteristics.
28. For example, Patent Owner's proposed construction is consistent with the surrounding claim language. For reference, the first part of claim 1 of the ’134 patent is copied below, with relevant portions bolded.

    1. A process of tracking a target in an input signal implemented using a system comprising an image processing system, the input signal comprising a succession of frames, each frame comprising a succession of pixels, the target comprising pixels in one or more of a plurality of classes in one or more of a plurality of domains, the process performed by said system comprising, on a frame-by-frame basis:

    forming at least one histogram of the pixels in the one or more of a plurality of classes in the one or more of a plurality of domains, said at least one histogram referring to classes defining said target; and

Ex. 1001 at 26:36–46 (emphasis added).

29. The target comprises pixels in one or more of a plurality of classes in one or more of a plurality of domains. A histogram is then formed in “the one or more of a plurality of classes in the one or more of a plurality of domains” (emphasis added). The word “the” in the claim refers back to the first reference to the pixels in a plurality of classes which make up the target. A POSA would thus interpret the claim language to mean that the histogram is formed only in the classes that comprise the target.
30. The plain meaning of the claim language thus supports Patent Owner's construction of “said at least one histogram referring to classes defining said target” as requiring the pixels making up the histogram to be in classes defining the target.
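For illustration only, the following minimal Python sketch shows the kind of selective histogram formation this construction describes: only pixels falling in a class that defines the target contribute to the histogram, and all other pixels are excluded. The function name, the use of a single hypothetical movement class (loosely analogous to the DP=1 movement criterion discussed below in connection with Figure 17), and the feature values are assumptions of this sketch, not a description of the ’134 patent's implementation.

```python
import numpy as np

def histogram_of_target_classes(feature_values, class_map, target_classes, nbins=256):
    """Illustration only: form a histogram using only pixels whose class is
    one of the classes defining the target; pixels outside those classes
    contribute nothing to the histogram."""
    in_target_classes = np.isin(class_map, list(target_classes))
    selected = feature_values[in_target_classes]        # pixels in target-defining classes only
    hist, _ = np.histogram(selected, bins=nbins, range=(0, nbins))
    return hist

# Hypothetical example: a movement class DP == 1 defines the target,
# so only pixels with DP == 1 are counted into the histogram.
dp = np.random.randint(0, 2, size=(120, 160))            # per-pixel movement class
luminance = np.random.randint(0, 256, size=(120, 160))   # per-pixel feature values
hist = histogram_of_target_classes(luminance, dp, target_classes={1})
```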
31. Patent Owner's proposed construction also appears consistent with the specification of the patent. For example, this limitation on forming a histogram can be seen in Figure 17, copied below.

Ex. 1001 at Fig. 17.

32. As seen above, Figure 17 is a “diagram illustrating histograms formed on the shape of the head of a participant in a video conference.” Ex. 1001 at 8:66–67. I understand from the specification that the target of these histograms is a user's head. See Ex. 1001 at 22:32–43. The histograms used to identify the user's face V rely on movement, where DP=1. Ex. 1001 at 22:44–54.
33. I understand from Figure 17 and the accompanying specification text that a class defining the target is based on movement. As seen in Figure 17, the user's face V comprises edge pixels which are in the histogram and non-edge pixels which are not. As explained in the specification, this is because the greatest movement occurs at the peripheral edges of the head. Ex. 1001 at 22:44–45.

34. The histograms used to identify the face end up being formed based on pixels in classes defining the target face, but not all pixels of the face are in these classes. Pixels that are not in classes defining the target are not included.
35. Another example of this limitation in the embodiments of the ’134 patent can be seen in Figure 12, copied below, a “two-dimensional histogram of a moving area.” Ex. 1001 at 8:54–55.

Ex. 1001 at Fig. 12.

36. As explained by the specification, the target in Figure 12 is defined by, for example, “significant speeds.” See Ex. 1001 at 21:37–40. A POSA would recognize from the diagram of Figure 12 that the histograms are formed from pixels having significant speeds, illustrated in Figure 12 by black dots representing pixels. The black blob labeled 40 has white portions in it, indicating pixels not falling into classes defining the target.

37. It is my opinion that a POSA would recognize that this handful of pixels completely surrounded by pixels of significant speeds is nevertheless likely to comprise pixels of the target. Consistent with Patent Owner's proposed construction, therefore, the histogram only includes values in the target classes, but the target may comprise pixels not in these classes.
2. “wherein forming the at least one histogram further comprises determining X minima and maxima and Y minima and maxima of boundaries of the target.”

38. I understand that Patent Owner has proposed a claim construction for the term “wherein forming the at least one histogram further comprises determining X minima and maxima and Y minima and maxima of boundaries of the target” in claims 1 and 2 of the ’134 patent, under which this term requires that the determination of the X and Y boundaries of the target is done as a part of forming or creating the histogram. I have reviewed and agree with this proposed construction.

39. Patent Owner's proposed construction is consistent with my understanding of how a POSA would understand the plain meaning of the above language, as well as consistent with dictionary definitions of the word “forming.”
40. Based on my understanding of the ’134 patent and its specification, it is my opinion that a POSA would understand “forming the histogram” to mean “adding data to the histogram.” The specification describes, for example, “forming a histogram for pixels of the output signal within the classes selected by the classifier within each domain selected by the validation signal.” Ex. 1001 at 6:11–14. A POSA would understand this to mean that forming the histogram involves adding data based on the pixel data of the output signal.

41. The patent specification goes on to say that the process “further includes the steps of forming histograms along coordinate axes for the pixels within the classes selected by the classifier within each domain selected by the validation signal.” Ex. 1001 at 6:15–18. A POSA would understand that forming a histogram along coordinate axes for pixels within classes involves adding data to the histogram, for example the pixel data for the pixels falling within those classes.

42. It is my opinion that a POSA at the time of the invention would thus understand that forming a histogram is the same as creating a histogram.
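As an illustration of this construction, and not a description of the ’134 patent's actual implementation, the Python sketch below forms histograms along the x and y coordinate axes from the pixels in the target-defining classes, and determines the X minima and maxima and Y minima and maxima of the target's boundaries in the same pass in which data are added to those histograms. The function name, loop structure, and class-map representation are assumptions of this sketch.

```python
import numpy as np

def form_axis_histograms(class_map, target_classes):
    """Illustration only: form x- and y-axis histograms from pixels whose
    class is one of the target-defining classes, determining the X and Y
    minima and maxima of the target's boundaries as part of forming the
    histograms (i.e., in the same pass that adds data to them)."""
    height, width = class_map.shape
    hist_x = np.zeros(width, dtype=int)    # histogram along the x (column) axis
    hist_y = np.zeros(height, dtype=int)   # histogram along the y (row) axis
    x_min = x_max = y_min = y_max = None

    for y in range(height):
        for x in range(width):
            if class_map[y, x] in target_classes:   # pixel is in a class defining the target
                hist_x[x] += 1                       # adding data to the histograms
                hist_y[y] += 1
                x_min = x if x_min is None else min(x_min, x)
                x_max = x if x_max is None else max(x_max, x)
                y_min = y if y_min is None else min(y_min, y)
                y_max = y if y_max is None else max(y_max, y)

    return hist_x, hist_y, (x_min, x_max, y_min, y_max)
```

Under this reading, the boundary determination is not a separate post-processing step; it occurs while the histograms are being formed.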
II. SUMMARY OF OPINIONS

43. In this Section I present a summary of my opinions. The full statement of my opinions and the bases for my opinions are contained in the appropriate sections of my declaration. I give this summary, however, for the convenience of the reader.

44. Based on my analysis of the ’134 Patent, my knowledge and experience, any references cited in this declaration, as well as the specific references relied upon by the Petitioner for the grounds that were instituted by the Board, it is my opinion that the challenged claims (1 and 2) would not have been obvious to a POSA as of the date of the invention. I have considered the date of the invention of the ’134 patent to be 1996.
III. MY ANALYSIS OF CLAIMS 1 AND 2

A. Summary

45. In my opinion, the combinations of Gilbert and Hashima, and of Ueno and Gilbert, do not disclose several elements of the ’134 patent claims. Gilbert does not disclose elements [1a], [1b], or [1c]; Hashima does not disclose element [1a] or [1c]; and Ueno does not disclose element [1c].

46. It is also my opinion that a POSA would not have selected and combined Gilbert and Hashima, or Ueno and Gilbert, to arrive at the invention of claims 1 and 2 of the ’134 patent.
B. Discussion of References

1. Gilbert

47. Gilbert is entitled “A Real-Time Video Tracking System.” Gilbert states in the first paragraph of its Introduction:

    IMAGE PROCESSING methods constrained to operate on sequential images at a high repetition rate are few. Pattern recognition techniques are generally quite complex, requiring a great deal of computation to yield an acceptable classification. Many problems exist, however, where such a time-consuming technique is unacceptable. Reasonably complex operations can be performed on wide-band data in real time, yielding solutions to difficult problems in object identification and tracking.

Ex. 1005 (Gilbert) at 47.

48. Gilbert states that the image consists of m x n pixels, and that “pixel intensity is digitized and quantized into eight bits (256 gray levels), counted into one of six 256-level histogram memories, and then converted by a decision memory to a 2-bit code indicating its classification (target, plume, or background).” Ex. 1005 at 47–48.
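To make the quoted data path easier to follow, the Python sketch below shows the general kind of operation Gilbert describes: 256-level intensity histograms accumulated per region, and each 8-bit intensity mapped by a lookup table to a 2-bit class code. This is only an illustrative software analogy under stated assumptions (for simplicity it keeps one histogram per region rather than Gilbert's six histogram memories, and the region labels are invented for the sketch); it is not Gilbert's hardware implementation.

```python
import numpy as np

# Hypothetical integer labels for the tracking-window regions of Gilbert's Fig. 2.
TARGET_REGION, PLUME_REGION, BACKGROUND_REGION = 0, 1, 2

def accumulate_region_histograms(gray_image, region_map):
    """Accumulate a 256-level intensity histogram for each region of the
    tracking window (one histogram per region, for simplicity)."""
    hists = {}
    for region in (TARGET_REGION, PLUME_REGION, BACKGROUND_REGION):
        pixels = gray_image[region_map == region]            # 8-bit pixels in this region
        hists[region] = np.bincount(pixels, minlength=256)   # 256 gray-level bins
    return hists

def classify_pixels(gray_image, decision_table):
    """Map each 8-bit intensity to a 2-bit code (target, plume, or background)
    through a 256-entry lookup table, a software stand-in for the 'decision
    memory' described in the quoted passage."""
    return decision_table[gray_image]                         # values in {0, 1, 2}
```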
49. Gilbert discloses that a tracking window is placed around the target image, and:

    The tracking window frame is partitioned into a background region (BR) and a plume region (PR). The region inside the frame is called the target region (TR) as shown in Fig. 2. During each field, the feature histograms are accumulated for the three regions of each tracking window.

Ex. 1005 (Gilbert) at 47.

50. Figure 2 of Gilbert shows the tracking window:

51. Gilbert describes that, based on intensity of the pixels, the image is binarized into “target” and “non-target” pixels:

    The video processor described above separates the target image from the background and generates a binary picture, where target presence is represented by a “1” and target absence by a “0.”

Ex. 1005 (Gilbert) at 48.
52. The binary image is analyzed:

    The target location, orientation, and structure are characterized by the pattern of 1 entries [that is, the pattern of “1” versus “0” values] in the binary picture matrix, and the target activity is characterized by a sequence of picture matrices.

Ex. 1005 (Gilbert) at 60 (explanation added in brackets for clarity).

53. One analysis of the binary image is via projection histograms, for example as shown in Figure 4 of Gilbert:

Ex. 1005 (Gilbert) at 51.

54. A “projection histogram” as used in Gilbert is, generally speaking, a histogram of a parameter such as x-axis location or y-axis location.
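For illustration only, the short sketch below computes such projection histograms from a binary picture matrix of the kind described above: the x-projection counts the “1” (target-presence) entries in each column, and the y-projection counts them in each row. The array sizes and example values are assumptions of the sketch, not data taken from Gilbert.

```python
import numpy as np

def projection_histograms(binary_picture):
    """Illustrative only: projection histograms of a binary picture matrix
    (1 = target presence, 0 = target absence)."""
    x_projection = binary_picture.sum(axis=0)  # one count per x (column) location
    y_projection = binary_picture.sum(axis=1)  # one count per y (row) location
    return x_projection, y_projection

# Example: a small 4x6 binary picture with a 2x3 block of target pixels.
picture = np.zeros((4, 6), dtype=int)
picture[1:3, 2:5] = 1
px, py = projection_histograms(picture)
print(px)  # [0 0 2 2 2 0]
print(py)  # [0 3 3 0]
```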
2. Hashima

55. Hashima is entitled “System for and Method of Recognizing and Tracking Target Mark.”

56. Hashima states in the first paragraph of the section entitled “Technical Field”:

    The present invention relates to a system for and a method of recognizing and tracking a target mark using a video camera, and more particularly to a system for and a method of recognizing and tracking a target mark for detecting the position and attitude of the target mark by processing an image of the target mark produced by a video camera, detecting a shift of the position of the target mark from a predetermined position, and controlling the position and attitude of a processing mechanism based on the detected shift.

Ex. 1006 (Hashima) at 1:5–16.

57. The system of Hashima is designed to operate in real time based on the benefit of knowing the shape and color of the target in advance, and on the ability of the robot hand or grip to move in six degrees of freedom, such that Hashima can control the position and attitude of the object with respect to the video camera. Ex. 1006 (Hashima) at 1:3–12, 2:64–3:13.
58. Hashima is directed to tracking a predefined mark mounted on an object with known, contrasting colors, in order for a robot to grip the object. Ex. 1006 (Hashima) at 1:7–16; 7:45–65. Hashima relies on knowing, for example, that the target is a triangle that rests on a post and thus is offset from a circular background of a contrasting color, Ex. 1006 (Hashima) at Figures 2 & 4 and 7:66–8:17, and uses that knowledge to track the target in three-dimensional space in order to guide the robot towards the target.

Ex. 1006 (Hashima) at Fig. 2.

Ex. 1006 (Hashima) at Fig. 4.

59. For example, Hashima computes the target's three-dimensional shift by moving the camera along the x- and y-axes, in addition to rotating the camera about the x-, y-, and z-axes of the target coordinate system. Ex. 1006 (Hashima) at 16:50–17:5. Hashima describes that changes in position between the target mark and the camera are “relatively small” because Hashima controls the camera position with respect to the target with six degrees of freedom. Ex. 1006 (Hashima) at 17:6–11.
60. Figure 5 of Hashima is a flowchart illustrating a sequence for detecting the target mark. Ex. 1006 (Hashima) at 5:18–19.

Ex. 1006 (Hashima) at Fig. 5.

61. Figure 6 of Hashima is a diagram showing the X- and Y-histograms of the target mark. Ex. 1006 (Hashima) at 5:20–22.

Ex. 1006 (Hashima) at Fig. 6.
3. Ueno

62. Ueno is entitled “System for and Method of Recognizing and Tracking Target Mark”. Ex. 1007 (Ueno) at Title.

63. Ueno states in the first paragraph of the section entitled “Field of the Invention”:

    The present invention relates to an apparatus for encoding video signals used for a teleconference or videophone.

Ex. 1007 (Ueno) at 1:8–11.

64. The system of Ueno operates in a fixed, assumed background of a teleconference or videophone, and requires the assumption that (i) there is one person in the video signal; (ii) the person is moving and so can be detected by movement; (iii) the person has a particular head-on-top-of-shou
