`
`____________
`
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`____________
`
`
`APPLE INC.
`Petitioner
`
`v.
`
`COREPHOTONICS, LTD.,
`Patent Owner
`____________
`
`Case IPR2018-01133
`U.S. Patent No. 9,538,152
`____________
`
`
`DECLARATION OF JAMES KOSMACH, PH.D.
`
`
`
`
`
`
`
`
`Table of Contents
`
I. Introduction ........................................................................................................ 1
`
`II. Background and Qualifications .......................................................................... 2
`
`III. The ’152 Patent................................................................................................... 3
`
`IV. Understanding of the Law .................................................................................. 8
`
`A. Claim Construction ......................................................................................... 8
`
`B. Anticipation ..................................................................................................... 8
`
`C. Obviousness .................................................................................................... 9
`
`V. Claim Construction ........................................................................................... 10
`
`VI. Patentability Over Border and Parulski ............................................................ 12
`
`A. Border Fails to Disclose [1.10] where “FOV2<FOVZF<FOV1 then the point
`of view of the output image is that of the first camera” ....................................... 13
`
`B. Border Fails to Disclose [1.11]: a processor “configured to register the
`overlap area of the second image as non-primary image to the first image as
`primary image to obtain the output image” ......................................................... 20
`
`C. The Petition Fails to Explain Why or How a POSITA would Combine
`Border with Parulski’s Teaching of Modifying a Primary Image with a Non-
`Primary Image ...................................................................................................... 24
`
`D. Border Does Not Disclose An “Output Image from the Point of View of the
`Second Camera” where “FOV2≧FOVZF” ............................................................ 26
`
`VII. Conclusion ........................................................................................................ 27
`
`
`
`
`
`
`I, James Kosmach, do hereby declare as follows:
`
I. Introduction
`
`1.
`
`I have been retained as an independent expert witness on behalf of
`
`Corephotonics Ltd. (“Patent Owner” or “Corephotonics”) for the above-captioned
`
`Inter Partes Review of U.S. Patent No. 9,538,152. I am being compensated at my
`
`usual and customary rate for the time I spent in connection with this IPR. My
`
`compensation is not affected by the outcome of this IPR.
`
`2.
`
`I have been asked to provide my opinions regarding whether claims 1–
`
`4 (“Challenged Claims”) are invalid as they would have been obvious to a person
`
having ordinary skill in the art (“POSITA”) as of the earliest claimed priority date,
`
`specifically with reference to the arguments made by Apple Inc. in its Petition for
`
`Inter Partes Review (“Petition”) regarding U.S. Patent Application Pub. No.
`
`2008/0030592 A1 (Ex. 1006, “Border”) and U.S. Patent No. 7,859,588 B2 (Ex.
`
`1007, “Parulski”).
`
`3.
`
`In preparing this Declaration, I have reviewed:
`
`a. Apple’s Petition for Inter Partes Review Under 35 U.S.C. § 312
`
`and 37 C.F.R. § 42.104, in IPR2018-01133;
`
`b. Ex. 1001, the ’152 Patent;
`
`c. Ex. 1002, the prosecution file history of the ’152 Patent;
`
`
`
`
`d. Ex. 1003, the prosecution file history of U.S. Provisional App.
`
No. 61/730,570;
`
`e. Ex. 1004, the Declaration of Dr. Oliver Cossairt;
`
f. Ex. 1006, U.S. Patent Application Pub. No. 2008/0030592 A1
`
`(“Border”);
`
`g. Ex. 1007, U.S. Patent No. 7,859,588 B2 (“Parulski”);
`
`h. Ex. 1008, Excerpts of Jacobson, The Manual of Photography
`
`(“Jacobson”);
`
`i. Ex. 1010, Excerpts of Szeliski, Computer Vision (“Szeliski”);
`
`j. Ex. 2002, transcript of the Deposition of Oliver Cossairt;
`
`k. Ex. 2007, U.S. Patent No. 9,185,291, “Dual Aperture Zoom
`
`Digital Camera” (“the ’291 Patent”).
`
`II. Background and Qualifications
`
`4. My qualifications are set forth in my curriculum vitae, a copy of which
`
`is attached as Exhibit 2004. As set forth in my curriculum vitae:
`
`5.
`
`I am clinical associate professor in the department of electrical and
`
`computer engineering at the University of Illinois Chicago (UIC). I teach
`
`undergraduate and graduate courses such as computer vision, signals and systems,
`
`and probability and random processes for engineers.
`
`
`
`
`6.
`
`I am also employed by Personify, Inc. and hold the position of Vice
`
`President of Engineering, a company focused on developing artificial intelligence
`
`and machine learning technology for use in imaging and video applications. Prior
`
`to my faculty appointment at UIC, I was employed fulltime at Personify, Inc as the
`
`Vice President of Engineering, where I led a team of more than 20 engineers in the
`
`development of computer vision algorithms specifically for use on video and image
`
`data. Our technology has been licensed to, among others, Intel, Inc.
`
`7.
`
`Prior to my work at Personify, I was Senior Director of Engineering for
`
`PacketVideo, a multimedia technology company in San Diego, California. There, I
`
`led a team of 15 multimedia codec engineers to develop technology that was adopted
`
`by, among others, Google, Verizon, and NTT Docomo.
`
`8.
`
`Prior to my work at Personify, I started my career in corporate R&D
`
`group of Motorola, working on video signal processing where I was involved in the
`
`research and development of video compression algorithms.
`
`III. The ’152 Patent
`
`9.
`
`The ’152 Patent is directed to a “multi-aperture imaging system
`
`comprising a first camera with a first sensor that captures a first image and a second
`
`camera with a second sensor that captures a second image.” Ex. 1001, at Abstract.
`
`It was issued on January 3, 2017, and claims priority to a provisional patent
`
application filed on November 28, 2012. The face of the ’152 Patent lists Gal
`
`
`
`
`Shabtay, Noy Cohen, Oded Gigushinski, and Ephraim Goldenberg as the inventors.
`
`The face of the ’152 Patent identifies Corephotonics as the initial assignee of the
`
`’152 Patent.
`
`10. At the time of its filing, the ’152 Patent addressed the difficulty in the
`
`prior art of including optical zoom functionality in mobile phones. The ’152 Patent
`
recognized that optical zoom solutions (such as mechanical zoom camera modules)

were “common in digital still cameras” but “typically too thick for most camera
`
`phones.” Ex. 1001, at 1:35-36. Such mechanical zoom modules resulted in “poor
`
light sensitivity and higher noise (especially in low-level scenarios)” and, when used

in mobile cameras, also compromised resolution as a result of the technological

limitations of such modules.
`
`Id. at 1:25-43.
`
`11. One prior art alternative to mechanical zoom solutions was a software-
`
`based approach: over-sampling a captured image and then cropping and
`
interpolating it in accordance with a desired zoom level. Id. at 1:44-46. But this
`
`approach resulted in “thick optics” and “an expensive image sensor due to the larger
`
`number of pixels” required for generating the raw image data necessary for over-
`
sampling. Id. at 1:46-49.
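For illustration only, the crop-and-interpolate approach described above can be sketched in a few lines of Python. This is my own schematic (the function name, the use of OpenCV, and the choice of bicubic interpolation are assumptions for illustration, not anything disclosed in the ’152 Patent):

    import cv2
    import numpy as np

    def digital_zoom(image: np.ndarray, zoom_factor: float) -> np.ndarray:
        """Crop the central 1/zoom_factor of the frame, then interpolate back up."""
        h, w = image.shape[:2]
        ch, cw = int(h / zoom_factor), int(w / zoom_factor)
        y0, x0 = (h - ch) // 2, (w - cw) // 2
        crop = image[y0:y0 + ch, x0:x0 + cw]
        # Interpolation restores the pixel count but not true optical detail,
        # which is why over-sampling demands a larger, more expensive sensor.
        return cv2.resize(crop, (w, h), interpolation=cv2.INTER_CUBIC)

The sketch makes the tradeoff concrete: acceptable output resolution at a given zoom level requires capturing far more raw pixels than are ultimately displayed.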
`
`12. The ’152 Patent teaches a dual-aperture imaging (“DAI”) system that
`
`combines image data from two separate camera sensors, a Wide angle sensor and a
`
`
`
`
`Tele (zoom) sensor, to output a single, high-quality zoomed image. The system
`
`developed by Corephotonics resulted in improved image and color resolution over
`
conventional multi-aperture imaging (“MAI”) techniques. See id. at 1:60-2:15.
`
`13. The ’152 patent discloses capturing synchronous images from both
`
`Wide and Tele cameras, and fusing the Wide and Tele images to “reach optical zoom
`
`capabilities.” Id. at 3:11-24. A “different magnification image of the same scene is
`
`grabbed by each subset, resulting in field of view (FOV) overlap between the two
`
`subsets.” Id. at 3:11-14.
`
`14. Fig. 1B, above, illustrates the FOV overlap as the area bounded by the
`
dotted line and labeled object 110. The synchronous images from the Wide and
`
`Tele cameras are then processed “by the MAI system to fuse and output one fused
`
`
`
`
`
`
`(combined) output zoom image processed according to a user ZF [zoom factor] input
`
`request” (id. at 3:17-20), resulting in an output image that has higher image quality,
`
`resolution, and color in comparison with an image captured by either the Wide or
`
`Tele cameras.
`
`15. Fig. 1B illustrates one of the issues that arise due to the different fields
`
`of view and apertures of a DAI system with a Wide and Tele lens arrangement. The
`
`Tele sensor of a DAI system provides optical zoom capability with improved
`
resolution but, compared to a Wide sensor, has a narrower field of view. As Fig. 1B
`
`shows, because the Tele camera has a narrower field of view than the Wide camera,
`
the image generated by the Tele camera overlaps with part of the wider field of
`
`view of the image generated by the Wide camera (the “overlap area”). But because
`
`the cameras are at different spatial positions, the images taken from each of the Wide
`
`and Tele cameras are seen from different points of view (POV), which is the “camera
`
`angle” from which an image is captured. Id. at 9:26-28. The image generated by
`
the Tele sensor cannot simply be “stitched” onto the overlap area of the image
`
`from the Wide sensor; each image has a different POV (where standard image
`
stitching produces parallax errors) and different color resolution, both of which
`
`must be solved to minimize visual and color artifacts in an output image.
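The depth dependence of this parallax can be made concrete with the standard stereo relation (a textbook formula I supply for illustration; it does not appear in the ’152 Patent): for two cameras with focal length f separated by a baseline B, a point at depth Z is shifted between the two images by a disparity of approximately

    d = f · B / Z.

Because the disparity d varies with the depth Z, objects at different depths shift by different amounts, and no single global shift of the Tele image can align the entire overlap area with the Wide image.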
`
`16. The ’152 Patent explains that combining information from the Tele and
`
`Wide cameras was not straightforward. Though fusion of synchronously captured
`
`
`
`
`images was known in the prior art, the use of “known signal processing algorithms
`
`used together with existing MAI systems” resulted in degraded output image quality
`
`by “introducing artifacts when combining information from different apertures.” Id.
`
`at 2:4-7. Those signal processing algorithms, or “image registration” algorithms,
`
`were a “primary source of these artifacts” in the prior art. Id. at 2:6-11.
`
`17. To address the parallax and color resolution artifacts inherent in prior
`
`art MAI systems, the ’152 Patent applies image processing algorithms to
`
`synchronously captured Tele and Wide images that included registering the
`
`luminance data of the two images as well as “demosaicing” image data to ensure
`
`color correctness in an output image. See id. at 7:49-8:26. As the ’152 Patent
`
`recognizes, “[p]erforming the registration on luminance images has the advantage
`
`of enabling registration between images captured by sensors with different CFAs or
`
`between images captured by a standard CFA or non-standard CFA sensor and a
`
`standard CFA or Clear sensor and avoiding color artifacts that may arise from
`
`erroneous registration.” Id. at 8:21-26.
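A minimal sketch of luminance-based registration, under my own illustrative assumptions (Rec. 601 luma weights and OpenCV phase correlation for a translation-only estimate; the ’152 Patent’s actual registration is more general):

    import cv2
    import numpy as np

    def to_luminance(rgb: np.ndarray) -> np.ndarray:
        # Rec. 601 luma weights (an illustrative choice of weighting).
        return (0.299 * rgb[..., 0] + 0.587 * rgb[..., 1]
                + 0.114 * rgb[..., 2]).astype(np.float32)

    def register_on_luminance(wide_rgb: np.ndarray, tele_rgb: np.ndarray):
        """Estimate alignment from luminance only, so sensors with different
        CFAs cannot inject color artifacts into the registration itself.
        Assumes both images have been scaled to a common size."""
        shift, _ = cv2.phaseCorrelate(to_luminance(wide_rgb),
                                      to_luminance(tele_rgb))
        return shift  # (dx, dy) translation estimate

The point of working in luminance, as the passage above explains, is that the registration step never compares raw color samples drawn from mismatched CFAs.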
`
`18. The claims of the ’152 Patent require a processor configured to “register
`
`the overlap area” of a “second image as non-primary image” to a “first image as
`
`primary image to obtain the output image,” where the output image must be from
`
either the “point of view of the first camera” or the “point of view of the second
`
`camera.” Id. at 13:5-17. Importantly, the image registration enables the “output
`
`
`
`
`image point of view” to be “determined according to the primary image point of
`
`view (camera angle).” Id. at 9:26-29.
`
`IV. Understanding of the Law
`
`19.
`
`I understand that the disclosure of a patent is to be viewed from the
`
`perspective of a person having ordinary skill in the art (“POSITA”) as of the filing
`
`date of the application that became the patent.
`
`A. Claim Construction
`
`20.
`
`I understand that “claim construction” is the process of determining a
`
`patent claim’s meaning. I also have been informed and understand that the proper
`
`construction of a claim term is the plain and ordinary meaning that a person of
`
`ordinary skill in the art would have given to that term in light of the specification.
`
`In performing my analyses set forth in this declaration, I have interpreted the claims
`
`of the ’152 Patent to have their plain and ordinary meaning.
`
`B. Anticipation
`
`21.
`
`I understand that a patent claim is unpatentable as anticipated if each
`
`element of that claim is present either explicitly or inherently in a single prior art
`
`reference. I have also been informed that, to be an inherent disclosure, the prior art
`
`reference must necessarily disclose the limitation, and the fact that the reference
`
`might possibly practice or contain the claimed limitation is insufficient to establish
`
`that the reference inherently teaches the limitation.
`
`
`
`
`C. Obviousness
`
`22.
`
`I understand that a patent claim is unpatentable as obvious if the subject
`
`matter of the claim as a whole would have been obvious to a person of ordinary skill
`
`in the art as of the time of the invention at issue. I understand the following factors
`
`must be evaluated to determine whether the claimed subject matter is obvious: (1)
`
`the scope and content of the prior art; (2) the difference or differences, if any,
`
`between the scope of the claim of the patent under consideration and the scope of
`
`the prior art; and (3) the level of ordinary skill in the art at the time the patent was
`
`filed.
`
`23.
`
`I understand that prior art references can be combined to find a claim
`
`unpatentable as obvious when there was an apparent reason for one of ordinary skill
`
`in the art, at the time of the invention, to combine the references, which includes, but
`
`is not limited to: (A) identifying a teaching, suggestion, or motivation to combine
`
`prior art references; (B) combining prior art methods according to known methods
`
`to yield predictable results; (C) substituting one known element for another to obtain
`
`predictable results; (D) using a known technique to improve a similar device in the
`
`same way; (E) applying a known technique to a known device ready for
`
improvement to yield predictable results; (F) trying a finite number of identified,
`
`predictable potential solutions, with a reasonable expectation of success; or (G)
`
`identifying that known work in one field of endeavor may prompt variations of it for
`
`
`
`
`use in either the same field or a different one based on design incentives or other
`
`market forces if the variations are predictable to one of ordinary skill in the art.
`
`24. Moreover, I have been informed and I understand that, when available,
`
`so-called objective indicia of non-obviousness (also known as “secondary
`
considerations” or real-world factors) like the following are also to be
`
`considered when assessing obviousness: (1) widespread acclaim; (2) commercial
`
`success; (3) long-felt but unresolved needs; (4) copying of the invention by others
`
`in the field; (5) initial expressions of disbelief by experts in the field; (6) failure of
`
`others to solve the problem that the inventor solved; and (7) unexpected results,
`
`among others. I also understand that evidence of objective indicia of non-
`
`obviousness must be commensurate in scope with the claimed subject matter. I
`
`understand this is commonly referred to as a “nexus.” I have not been asked to
`
`consider, and I have not considered, any such objective indicia of non-obviousness
`
`in connection with my opinions expressed herein.
`
`V. Claim Construction
`
`25.
`
`I have read claims 1-4 of the ’152 Patent. Each of those claims concern
`
`creating an “output image” with a specific “point of view.” In claims 1 and 3,
`
`reference is made to an “output image” with a “point of view” of a “first camera.”
`
`In claims 2 and 4, reference is made to an “output image” with a “point of view” of
`
`a “second camera.”
`
`
`
`
`26. The specification of the ’152 Patent provides that the term “point of
`
`view” of an image refers to the camera angle with which the image is captured. This
`
`is the plain and ordinary meaning of the term “point of view” in the context of the
`
`patent. For instance, in column 9 of the ’152 Patent, lines 26-28 provide: “The
`
`output image point of view is determined according to the primary image point of
`
`view (camera angle).”
`
`27. The “point of view” of a given image is the visual perspective provided
`
`by the “camera angle” of the image. Because each camera in a MAI system occupies
`
`different physical space, it is impossible for more than one camera to capture the
`
`same scene from the same point of view at the same time. Each camera will produce
`
`an image with a different point of view, even if only slightly.
`
`28.
`
`In U.S. Patent No. 9,185,291, “Dual Aperture Zoom Digital Camera”
`
`(“the ’291 Patent”), which I understand to be a Corephotonics patent and to share
`
`the same four inventors as the ’152 Patent, the specification states:
`
`In a dual-aperture camera image plane, as seen by each sub-camera (and
`
`respective image sensor), a given object will be shifted and have
`
`different perspective (shape). This is referred to as point-of-view
`
`(POV). The system output image can have the shape and position
`
`of either sub-camera image or the shape or position of a
`
`combination thereof. If the output image retains the Wide image shape
`
`then it has the Wide perspective POV. If it retains the Wide camera
`
`
`
`
`position then it has the Wide position POV. The same applies for Tele
`
`images position and perspective.
`
A POSITA would have understood that a given output image’s point of view
`
`“could have the shape and position of either sub-camera [i.e., either the Wide or Tele
`
`camera]” or be a “combination” of the points of view captured by those cameras.
`
`Each point of view of the subset images from which an output image is generated in
`
`the ’152 Patent differs from another, since photographed objects are “shifted and
`
`have different perspective” across different points of view.
`
`29.
`
`In light of the above, I conclude that a POSITA would have understood
`
`“point of view” in the context of the ’152 Patent to mean “camera angle.”
`
`VI. Patentability Over Border and Parulski
`
`30.
`
`I understand that the patentability of claims 1-4 of the ’152 Patent are
`
`being challenged in view of U.S. Patent Application Pub. No. 2008/0030592 A1 (Ex.
`
`1006, “Border”) and U.S. Patent No. 7,859,588 B2 (Ex. 1007, “Parulski”). The
`
`Board has instituted a trial to consider whether these claims would have been
`
`obvious to a person of ordinary skill in the art at the time of the invention of the ’152
`
`Patent in view of combinations of these references. In my opinion, these claims are
`
not obvious in view of Border and Parulski, as described below.
`
`
`
`
`A. Border Fails to Disclose [1.10] where “FOV2<FOVZF<FOV1 then
`
`the point of view of the output image is that of the first camera”
`
31. Claims 1 and 3, as well as claims 2 and 4 depending therefrom,

recite a dual-camera system which generates “a first image” with “a first camera”
`
`(Wide) and a “second image” with a “second camera” (Tele), and with the second
`
`camera having telephoto functionality for providing a “zoom factor.” Claim 1
`
`further requires the first camera to have a “first field of view (FOV1)” and the second
`
`camera to have a “second field of view (FOV2) such that FOV2<FOV1.” Ex. 1001,
`
`at 12:60-67. The claimed system is “configured to provide an output image from a
`
`point of view of the first camera based on a zoom factor (ZF) input.” Id. at 13:5-8.
`
`And, specifically, when “FOV2<FOVZF<FOV1 then the point of view of the
`
`output image is that of the first camera.” Id. at 13:9-10.
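Restated as selection logic, and assuming for illustration the usual thin-lens relation in which the zoom factor narrows the Wide field of view (the claim itself does not prescribe this formula), the limitation reads:

    import math

    def output_pov(fov1_deg: float, fov2_deg: float, zf: float) -> str:
        """Select the claimed output-image point of view from the zoom factor.
        FOV_ZF is modeled here as the Wide FOV narrowed by ZF (illustrative)."""
        half = math.radians(fov1_deg) / 2
        fov_zf = 2 * math.degrees(math.atan(math.tan(half) / zf))
        if fov_zf >= fov1_deg:
            return "Wide image used directly (no zoom)"
        if fov2_deg < fov_zf:                    # FOV2 < FOV_ZF < FOV1
            return "point of view of the first (Wide) camera"   # [1.10]
        return "point of view of the second (Tele) camera"      # FOV2 >= FOV_ZF

What matters for the analysis below is the middle branch: whenever the requested field of view lies strictly between the Tele and Wide fields of view, the claim requires the output image to retain the Wide camera’s point of view.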
`
`32.
`
`I understand that the Petition cites to Border as purportedly disclosing
`
`an “output image” whose “point of view” is that of the first camera. See Pet. 39-47
`
`(discussing [1.8]), 55-56 (discussing [1.10]). But Border fails to discuss any concept
`
`of creating an output image from the images of multiple cameras that is from the
`
`point of view of any specific camera. The “point of view” of a given image is the
`
`visual perspective provided by the “camera angle” of the image. Ex. 1001, at 9:26-
`
`28. Because each camera in a MAI system occupies different physical space, the
`
information captured by each camera sensor is not identical, even if the data is
`
`
`
`
synchronously captured and the cameras are capturing the same scene. Thus,
`
`where claim 1 of the ’152 Patent requires the “point of view of the output image is
`
`that of the first camera,” it is not enough that the output image has the POV of a
`
`combination of cameras. Rather, the required output image must have the
`
`perspective of the camera angle of the first camera.
`
`33. Border does not disclose anything about fusing two images from two
`
`cameras with different points of view and different focal lengths to create a single
`
`composite image with a point of view of only one of the cameras, let alone a first
`
`camera. Rather, Border discusses prior art “image stitching” techniques wherein
`
`“two images are matched to ‘locate the high resolution image accurately into the
`
`low-resolution image and then stitched into place so the edge between the two
`
`images in the composite image is not discernible.’” Pet., at 44.
`
`34. A composite image generated via Border’s image-stitching technique
`
`“uses pixel data from the wide image 204 otherwise (i.e. the region outside the
`
`dashed line 220).” Pet., at 39 (citing Ex. 1004 and Ex. 1002, at ¶99). The Petition
`
`reproduces Figure 6 from Border with annotations:
`
[Annotated reproduction of Border Figure 6, as it appears in the Petition.]
`
`Pet., at 38. In Border’s output image (object 208), “[t]he dashed line 220 shows
`
`where the transition is. Thus, the composite image 208 has higher resolution in the
`
`interior and lower resolution on the edges.” Ex. 1006, at ¶ 47. The higher-resolution
`
`“overlap area” in Border’s output image has the point of view of the second image
`
`(object 206). The remaining portion of the composite image that is outside of the
`
`overlap image, in contrast, is of a lower resolution and has the point of view of the
`
`first image (object 204).
`
`
`
`
`35.
`
`I understand that the Petitioner’s expert witness, Dr. Cossairt, testified that
`
`Border’s image stitching process would result in a composite image comprised of
`
`two portions with different points of view. Ex. 2002, at 96:19-97:12. This is of
`
`course correct, because in a MAI system with multiple cameras, “you produce
`
`multiple images with different, from different viewpoints or perspectives.” See id.
`
`at 48:2-15. The differences between images captured by different sensors in a MAI
`
`system include, for example, the shape and perspective of objects captured in the
`
`images, where one portion of an object would be occluded in the image captured by
`
one sensor but not occluded in the image captured by another sensor in the system.
`
`36. An example of the differences that can occur with a change in camera point
`
`of view is shown in the following images from the Jacobson textbook cited by Dr.
`
`Cossairt. Ex. 1008, p. 59. In the bottom image, taken from one point of view, the
`
`left side wall of the building is visible. In the top image, taken from another point of
`
`view, that left side wall is not visible, as it is behind the front wall of the building.
`
`Ex. 2002, at 51:1-13, 52:9-20. Likewise, the ball on a pedestal to the left of the front
`
`door has a different position relative to the windows of the building in the two
`
`images. In the top image, it is to the left of the window immediately to the left of
`
`the door, and it does not block any portion of that window. In the bottom image, the
`
`ball is to the right of that same window and partially blocks (i.e., occludes) that
`
`window. While these specific images do not appear to have been captured using two
`
`
`
`
`apertures in a mobile device of the kind contemplated in the ’152 patent, the same
`
`types of changes in the visibility, relative positions, and occlusions among objects
`
`will exist whenever a scene made up of three-dimensional objects is captured from
`
`two different points of view. As Dr. Cossairt testified, homography cannot change
`
these characteristics of an image taken from one point of view to match

those of the scene from a different point of view. Ex. 2002, at 110:12-112:18.
`
[Photographs reproduced from Jacobson, Ex. 1008, p. 59, showing the same building from two different points of view; the legible portion of the caption reads “(b) Closer oblique viewpoint.”]
`
`
`
`37.
`
` In reference to Figure 6 of Border, Dr. Cossairt confirmed that a stitched
`
`composite image would have the points of view from both the Wide and Tele
`
`cameras:
`
`[Q.] So in the system described in Border with the separate
`wide and telephoto cameras, even if they are coplanar, the
`objects that are occluded in the wide image may or may
`not be occluded to the same degree in the telephoto image,
`correct? The existence or degree of occlusion may differ
`between the two images, correct?
`
`A. I believe it is possible, yes.
`
`Q. So suppose that a portion of Object A is occluded by
`Object B in the wide image, but that it is not occluded in
`the tele image.
`
`A. Okay.
`
`Q. If that object, if those objects were outside of the dashed
`line 220 in the output image in Figure 6, then in the output
`image the Object A would be occluded by Object B,
`correct?
`
`A. I believe that’s right.
`
`Ex. 2002, at 93:6-94:1 (objections omitted); see also id. at 97:2-25.
`
38. The Petition does state that, in Border’s DAI system, under the precise

condition where an image from the Tele sensor is not needed (that is, when no zoom

capability is used, or when FOV1=FOVZF), “the composite image is the wide
`
`image.” Pet. 44 (discussing [1.8]) (“Specifically, for Z=1, Border states that ‘the
`
`composite image is the wide image 204.’”). Even though no image stitching or
`
`
`
`
`fusion occurs in Border when FOV1=FOVZF, the output image, the Petition states, is
`
`“necessarily from the point of view of the first camera.” Id. The cited disclosure in
`
`Border, however, does not pertain to a situation when the field of view of the Tele
`
`camera is less than the field of view of the zoom factor, which is in turn less than the
`
`field of view of the Wide camera – or “FOV2<FOVZF<FOV1”, as required by claim
`
`1.
`
B. Border Fails to Disclose [1.11]: a processor “configured to register
`
`the overlap area of the second image as non-primary image to the
`
`first image as primary image to obtain the output image”
`
39. Claims 1 and 3, and thus their dependent claims 2 and 4, require a
`
`processor to be “configured to register the overlap area . . . to obtain the output
`
`image.” Ex. 1001. Furthermore, the claimed “output image” must be either from
`
`the “point of view ... of the first camera” (claim 1) or the “point of view of the
`
`second camera” (claim 2). Although the Petition includes Parulski as a combination
`
`reference for other elements of [1.11], it cites and discusses only Border for the
`
`registration-specific aspects of the claim. See Pet., at 60 (“Border teaches that its
`
`processor 50 uses the registration from the telephoto image 206 to the wide image
`
`204 to obtain the composite images 208.”); see generally id. at 59-61.
`
`40. The ’152 Patent explains that its “registration process considers the
`
`primary image as the baseline image and registers the overlap area in the auxiliary
`
`
`
`
`image to it,” by finding corresponding pixels in the overlap area between the primary
`
`image and the auxiliary image. Ex. 1001, at 9:22-27. This species of image
`
`registration requires two predicates: (1) different portions of the overlap region
`
between the Wide and Tele images are treated differently based on differences in
`
`the relative positions and shapes of objects in the two images, meaning that pixels
`
`of one image cannot be simply translated to those of another image; and (2)
`
`identification of the primary image is necessary since the registration process must
`
`identify which objects (and pixels) must be included in the output image and which
`
objects (and pixels) must not. Both of these predicates are required for a system to
`
`generate an output image that has the point of view of only one of the input images.
`
`And whereas the homography registration technique taught by Border simply maps
`
every pixel of one image to a corresponding pixel of the other, the required registration of the
`
`’152 Patent is computationally more complex and maps only certain pixels of the
`
`Tele image which match the pixels of the Wide image.
`
`41.
`
`In the ’152 Patent, the registration process attempts to find for each
`
`pixel in the primary image (Wide) a corresponding pixel in the non-primary image
`
`(Tele); but because the images’ respective points of view are different, there are
`
`potentially pixels in the non-primary image (for example, due to object occlusion)
`
`which are never mapped to a corresponding pixel in the primary image. See, e.g.,
`
`Ex. 1001, at 9:22-26 (“The registration process considers the primary image as the
`
`
`
`
`baseline image and registers the overlap area in the auxiliary image to it, by finding
`
`for each pixel in the overlap area of the primary image its corresponding pixel in the
`
`auxiliary image.”), 9:43-45 (“The output of the registration stage is a map relating
`
`Wide image pixels indices to matching Tele image pixels indices.”).
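To make the distinction concrete, the kind of per-pixel index map the patent describes might be sketched as follows. This is schematic only: the single-pixel matching cost and search radius are my hypothetical stand-ins, not the patent’s disclosed algorithm:

    import numpy as np

    def register_overlap(primary: np.ndarray, auxiliary: np.ndarray,
                         search: int = 4) -> np.ndarray:
        """For each pixel of the primary (Wide) overlap area, record the
        indices of its best-matching pixel in the auxiliary (Tele) image.
        Auxiliary pixels with no match (e.g., occluded regions) simply
        never appear in the map."""
        h, w = primary.shape
        index_map = np.zeros((h, w, 2), dtype=np.int32)
        for y in range(h):
            for x in range(w):
                best, best_cost = (y, x), np.inf
                for dy in range(-search, search + 1):
                    for dx in range(-search, search + 1):
                        ty, tx = y + dy, x + dx
                        if 0 <= ty < h and 0 <= tx < w:
                            # A real matcher would compare patches, not
                            # single pixels; this is purely schematic.
                            cost = abs(float(primary[y, x])
                                       - float(auxiliary[ty, tx]))
                            if cost < best_cost:
                                best, best_cost = (ty, tx), cost
                index_map[y, x] = best
        return index_map  # Wide pixel indices -> matching Tele pixel indices

The output mirrors the patent’s description of “a map relating Wide image pixels indices to matching Tele image pixels indices,” and the mapping is inherently asymmetric: it is built from the primary image’s pixels, so some auxiliary pixels may be left unreferenced.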
`
`42. Border’s disclosure of its homography registration is qualitatively
`
`different from the registration of the ’152 Patent. Specifically, Border teaches
`
`performing a “simple homography … that provides translation and scale may be
`
used to map pixels of the telephoto image 206 to the wide image 204.” Pet., at 59. As
`
`explained above, the homographic registration techniques taught by Border are used
`
`to produce stitched output images which are from neither the point of view of the
`
Wide camera nor the Tele camera, because the output images depict two different
`
`points of view that are stitched together such that “the edge between the two images
`
`in the composite image is not discernible.” Pet., at 44. The claims of the ’152 Patent,
`
`however, require more than simple image stitching.
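By contrast, Border’s quoted “translation and scale” homography is a single global transform applied identically to every Tele pixel, regardless of scene depth or occlusion. A sketch of such a map (the parameter names are mine):

    import numpy as np

    def map_tele_to_wide(tele_xy: np.ndarray, scale: float,
                         tx: float, ty: float) -> np.ndarray:
        """Apply one global translation-and-scale homography to all
        telephoto pixel coordinates (an N x 2 array). Every pixel gets
        the same transform, so depth-dependent parallax is never corrected."""
        H = np.array([[scale, 0.0, tx],
                      [0.0, scale, ty],
                      [0.0, 0.0, 1.0]])
        homog = np.hstack([tele_xy, np.ones((tele_xy.shape[0], 1))])
        mapped = homog @ H.T
        return mapped[:, :2] / mapped[:, 2:3]  # back to Cartesian coordinates

Because the transform has only three parameters, it can translate and scale the Tele image as a whole but cannot treat different portions of the overlap area differently, which is the qualitative gap identified above.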
`
`43. Border teaches simple planar homography which assumes that the
`
`scene being viewed by both cam