PATENT OWNER’S RESPONSE
IPR2013-00219 (Patent 7,477,284)

Paper No.

UNITED STATES PATENT AND TRADEMARK OFFICE
__________________

BEFORE THE PATENT TRIAL AND APPEAL BOARD
___________________

SONY CORPORATION
Petitioner

v.

Patent of YISSUM RESEARCH DEVELOPMENT COMPANY OF THE
HEBREW UNIVERSITY OF JERUSALEM
Patent Owner
___________________

Case IPR2013-00219 (SCM)1
Patent 7,477,284
Title: SYSTEM AND METHOD FOR CAPTURING AND VIEWING
STEREOSCOPIC PANORAMIC IMAGES
_____________________

PATENT OWNER’S RESPONSE
UNDER 37 C.F.R. § 42.120

1 The IPR2013-00327 proceeding has been joined with this proceeding.
TABLE OF CONTENTS

I. INTRODUCTION
II. PETITIONER’S BURDEN OF PROOF
III. UNDISPUTED MATERIAL FACTS
IV. OVERVIEW OF U.S. PATENT NO. 7,477,284
V. THE BOARD DECISION ENTERED SEPTEMBER 23, 2013
VI. THE BOARD SHOULD FIND CLAIMS 1-4, 7, 10, 20, 27-29, AND 36-38 PATENTABLE
   A. Kawakita fails to anticipate Claims 1, 10, 27, 36, and 38 under 35 U.S.C. § 102 or render obvious Claims 1-4, 7, 10, 27-29, 36, and 38 under 35 U.S.C. § 103
      1. General Description of the Kawakita reference
      2. Kawakita fails to teach “a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene”
      3. Inconsistent statements made by Dr. Darrell during his deposition
   B. Asahi fails to anticipate Claims 1, 3, 20, 27, 29, and 37 under 35 U.S.C. § 102
      1. General Description of the Asahi reference
      2. Asahi fails to teach “a display that receives a plurality of the mosaics and displays them so as to provide a sense of depth of the scene”
      3. Asahi fails to teach “a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene”
VII. CONCLUSION
CERTIFICATE OF SERVICE
TABLE OF AUTHORITIES

Statutes
35 U.S.C. § 316(e)

Rules
37 C.F.R. § 42.1(d)
37 C.F.R. § 42.23
I. INTRODUCTION

Claims 1-4, 7, 10, 20, 27-29, and 36-38 of the ’284 Patent are patentable over the challenges that were submitted by the Petitioner Sony Corporation (“Sony”) and that were authorized by the Board. Specifically, the applied references fail to disclose or suggest at least the requirements relating to a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene and a display that receives a plurality of the mosaics and displays them so as to provide a sense of depth of the scene.2 This is clear from the face of the Petition, which either glosses over such elements or makes bare assertions that the cited references do not support.
II. PETITIONER’S BURDEN OF PROOF

35 U.S.C. § 316(e) states that “[i]n an inter partes review instituted under this chapter, the petitioner shall have the burden of proving a proposition of unpatentability by a preponderance of the evidence.” (See also 37 C.F.R. § 42.1(d).)
III. UNDISPUTED MATERIAL FACTS

Under Board Rule 37 C.F.R. § 42.23, Patent Owner presents below its statement of material facts. The following facts have been confirmed by experts from both parties: Sony’s expert Dr. Trevor Darrell during his November 6, 2013 deposition (“Darrell Dep.,” YRD-2008) and Patent Owner’s expert Dr. Irfan Essa in his attached declaration (“Essa Decl.,” YRD-2010).

2 For the sake of reference, this paper presents claim language in bold and italics.
1. To a person of ordinary skill in the art, the term “stereoscopic image” is not by itself limited to an image that provides a perception of depth to a person. Instead, the term “stereoscopic image” is a broad term that includes images that are used by computers or machines to measure distance to an object. (Darrell Dep., YRD-2008 at 26:11-16, 28:25-29:2, 31:2-5, 31:14-17; see also Essa Decl. YRD-2010 at ¶ 48.) Further, for some academic fields, it is reasonable to use the term “stereoscopic image” solely for robotic vision. (Darrell Dep., YRD-2008 at 31:2-5; see also Essa Decl. YRD-2010 at ¶ 48.) Importantly, the Patent Owner agrees with Sony and the Board that, as used in the ’284 Patent claims and specification, the term “stereoscopic image” is limited to an image that provides a perception of depth to a person. (See Paper 16 at 7-8.)
2. In applications where a “stereoscopic image” is being used to provide a perception of depth, it is important that the items or elements in the image be at different depths. (Darrell Dep., YRD-2008 at 32:16-23; see also Essa Decl. YRD-2010 at ¶ 27.) If objects in the scene are roughly at the same distance from the camera, there would be no perception of depth because there would be no “depth differences.” (Darrell Dep., YRD-2008 at 47:9-21; see also Essa Decl. YRD-2010 at ¶ 27.) That is, if all objects are roughly at the same distance in a scene, one would see the objects as being at infinity or close, in “whatever depth it was.” (Darrell Dep., YRD-2008 at 47:19-21; see also Essa Decl. YRD-2010 at ¶ 27.)
IV. OVERVIEW OF U.S. PATENT NO. 7,477,284

The ’284 Patent is a continuation-in-part and incorporates by reference, amongst other disclosures, the disclosure of U.S. Patent Application No. 09/396,248, filed September 16, 1999, issued as the ’003 Patent.

The invention disclosed in the ’284 Patent addresses the need for generating and displaying a panoramic mosaic image pair that provides a perception of depth to a person. Perception of depth (i.e., stereopsis) is the visual perception of differential distances among objects in a person’s line of sight. (See e.g., YRD-2003; see also Darrell Dep., YRD-2008 at 32:16-23.) That is, one object in an image will be perceived as being closer to the person viewing the image, as compared to another object in the image. A common everyday example would be a 3D movie a person would view at a movie theater. (See e.g., Sony-1001, title.)

For the sake of brevity, the ’284 Patent notes that the image recording arrangement for recording images is similar to the arrangements described in the ’003 Patent. (Sony-1001 at 3:26-60; see also 9:16-19.) In that regard, the ’003 Patent, in connection with Figs. 1A-1B (reproduced below), notes that a series of image strips are generated and mosaiced to create a panoramic mosaic image pair that conforms to the perspective of human eyes:
It will be apparent from FIG. 1A that each the succession of images as seen by the observer’s two eyes as he or she rotates, can be separated into separate sets of images, with one set of images being associated with each eye…to facilitate the viewing of a stereoscopic panoramic image of the scene by a viewer, the images as would be received by each of the observer’s eyes can be separately recorded and viewed by, or otherwise displayed to, the respective eyes of the viewer.

(SONY-1002 at 3:8-31; see also 2:55-59, emphasis added.)

It will be appreciated that the left and right panoramic images 31L and 31R conform to what an observer would see through his or her left and right eyes, respectively, as they revolve through the left and right viewing circles 5L and 5R described above in connection with FIG. 1B.

(SONY-1002 at 6:42-47, emphasis added.)
The ’284 Patent, in one illustrative embodiment of Figs. 2-5, describes a video camera (21) as a stereoscopic data source that includes an image capture unit (30), a local memory unit (31), a processing unit (32), and one or more displays (33A and 33B). (Sony-1001 at 6:55-60.) As the video camera is rotated, it records a series of images from which image segments or strips for left and right eyes are generated. (Sony-1001 at 3:42-53.) The image segments or strips are then mosaiced in accordance with the separation and from the perspective of human eyes into a set of panoramic images comprising a stereoscopic image set. Id. (See also SONY-1002 at 6:42-47.)

The panoramic mosaic images, which were specifically created to conform to what an observer would see through his or her eyes, can then be displayed to or viewed simultaneously by the left and right eyes of a person to provide a perception of depth.
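As an illustrative aside (and not a reproduction of the ’284 Patent’s or the ’003 Patent’s implementation), the following minimal Python sketch shows the general strip-mosaicing idea described above: from each frame of a rotating camera, one vertical strip is taken on each side of the image center, and the two sets of strips are concatenated into two panoramas corresponding to two slightly different effective viewpoints. The function name, strip width, and offsets are assumptions chosen only for the example.

    import numpy as np

    def stereo_panoramas(frames, strip_width=8, offset=40):
        """Illustrative sketch: build a pair of panoramic mosaics from the
        frames of a camera rotating about a vertical axis.

        frames:      iterable of H x W x 3 arrays (small rotation per frame assumed)
        strip_width: width in pixels of the vertical strip taken from each frame
        offset:      horizontal distance of each strip from the image center
        """
        strips_a, strips_b = [], []
        for frame in frames:
            center = frame.shape[1] // 2
            # Strips taken on opposite sides of the image center see the scene
            # from slightly different effective viewpoints as the camera rotates;
            # which panorama corresponds to which eye depends on the rotation
            # direction and geometry, which this toy example does not model.
            strips_a.append(frame[:, center + offset:center + offset + strip_width])
            strips_b.append(frame[:, center - offset - strip_width:center - offset])
        return np.concatenate(strips_a, axis=1), np.concatenate(strips_b, axis=1)

    # Example with synthetic frames (random noise stands in for real video frames).
    frames = [np.random.randint(0, 255, (480, 640, 3), dtype=np.uint8) for _ in range(90)]
    pan_a, pan_b = stereo_panoramas(frames)
    print(pan_a.shape, pan_b.shape)  # (480, 720, 3) each for these assumed parameters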
As shown below, Sony fails to provide any disclosure that amounts to a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene and a display that receives a plurality of the mosaics and displays them so as to provide a sense of depth of the scene, as claimed in the ’284 Patent.
V. THE BOARD DECISION ENTERED SEPTEMBER 23, 2013

On September 23, 2013, the Board entered a Decision joining IPR2013-00327 with IPR2013-00219. Further, in IPR2013-00219 and in IPR2013-00327, the Board granted Sony’s Petition in part and denied Sony’s Petition in part.3 Specifically, the Board has instituted trial based on the following:

A. Claims 1-4, 7, 10, 27-29, 36, and 38 with reference to Kawakita,

B. Claims 1, 3, 20, 27, 29, and 37 with reference to Asahi.

All other grounds of rejection proposed by Sony were denied. Patent Owner therefore is not responding to the substance of other challenges.

3 Because in connection with the independent claims Sony’s Petition and the Board’s Decision in IPR2013-00219 and IPR2013-00327 are largely the same, Patent Owner provides the following discussion only with specific reference to Sony’s Petition and the Board’s decision in IPR2013-00219.
VI. THE BOARD SHOULD FIND CLAIMS 1-4, 7, 10, 20, 27-29, AND 36-38 PATENTABLE

A. Kawakita fails to anticipate Claims 1, 10, 27, 36, and 38 under 35 U.S.C. § 102 or render obvious Claims 1-4, 7, 10, 27-29, 36, and 38 under 35 U.S.C. § 103.

As discussed below, Kawakita4 fails to disclose all the elements of Claims 1-4, 7, 10, 27-29, 36, and 38. For the sake of discussion, Claim 1 is representative.

4 It is noted that although Patent Owner agreed to not challenge the priority of Kawakita in this proceeding (see notice 34), Patent Owner reserved the right to challenge the priority of Kawakita in any other proceeding.

1. General Description of the Kawakita reference.

Kawakita is directed to a technique for generating images for stereoscopic viewing. (SONY-1004 at 13-14.) Kawakita discloses a tripod that is rotated manually about an axis to capture vertical strips of images. (SONY-1004 at 14.) The setup is illustrated below in Kawakita’s Fig. 1.

Kawakita discusses computing the optical flow between images to determine the size of vertical strips within the same panorama, and then mosaicing these strips together to form a panorama. (SONY-1004 at 14-15; see also Essa Decl. YRD-2010 at ¶¶ 29-33.) Optical flow is different for objects closer to the camera than for objects farther from the camera. (SONY-1004 at 14-15; see also Essa Decl. YRD-2010 at ¶¶ 29-33.) Specifically, as the camera rotates around the vertical axis, objects closer to the camera appear to move faster (in this case horizontally) than objects further away. (SONY-1004 at 14-15; see also Essa Decl. YRD-2010 at ¶¶ 29-33.)
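The inverse relationship between object distance and apparent horizontal motion described above can be illustrated with a short, purely geometric Python sketch. This is a sketch of the general principle only, not of Kawakita’s disclosed procedure, and it assumes a camera whose center of projection is offset from the rotation axis; the focal length, offset, and rotation step are illustrative assumptions.

    import math

    def horizontal_flow_px(depth_m, dtheta=math.radians(1.0), offset_m=0.05, f_px=800.0):
        """Approximate horizontal image motion (pixels) of a point near the image
        center at depth depth_m meters, when a camera whose center of projection
        sits offset_m meters from a vertical rotation axis turns by dtheta radians.

        The rotational part (f_px * dtheta) is the same for every depth; the
        translational part (f_px * offset_m * dtheta / depth_m) grows as the
        object gets closer, which is why nearer objects appear to move faster.
        (Small-angle approximation; all parameter values are assumed.)
        """
        rotational = f_px * dtheta
        translational = f_px * offset_m * dtheta / depth_m
        return rotational + translational

    for depth in (1.0, 5.0, 50.0):
        print(f"depth {depth:5.1f} m -> ~{horizontal_flow_px(depth):6.2f} px per degree of rotation")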
Kawakita describes two different situations, or scenarios, that may occur, based on the relative distances of the objects in the scene:

When the left and right panoramic images obtained using the foregoing procedure are viewed binocular stereoscopically, a stereoscopic view is possible that faithfully reproduces the positional relationships. However, if the camera was placed at a comparatively close distance, or if the distance from the camera to the objects varies greatly, the positions representing the left and the right panoramic images must be adjusted.

(SONY-1004 at 16-17.) Said differently, Kawakita refers to the following two scenarios:

Scenario #1: when the objects in the image are roughly at the same distance.

Kawakita briefly notes that a stereoscopic view is possible if the image was captured from a sufficient distance and if the distances from the camera to the objects do not vary greatly. (SONY-1004 at 16-17.)
Scenario #2: when the objects in the image are at different distances.

Kawakita discusses this scenario in more detail. Kawakita notes that in this scenario, the resulting mosaiced image has a problem in that the objects from the left and right images appear to overlap. Kawakita addresses this problem by performing a dynamic parallax adjustment of the left and right images to remove some of the overlap. (SONY-1004 at 17.)

More specifically, Kawakita discusses a “Field Test” to perform the dynamic parallax adjustment of the images displayed. (SONY-1004 at 18.) The Field Test applied the parallax adjustment technique to images of an elevator hallway in which the distance to objects varied. (SONY-1004 at 18.) An illustration of how the alignment was performed for one viewpoint is provided in Fig. 7, below.

Notably, the parallax adjustment was performed while observers were actually looking at the images in a specific direction, and if the observer would look in a different direction, another adjustment would take place. (SONY-1004 at 18; see also Essa Decl. YRD-2010 at ¶¶ 38-39.)
Sony’s expert, Dr. Darrell, made the following statements regarding this second scenario in Kawakita:

1. Kawakita discusses performing the Field Test using ten (10) researchers. The ten researchers perceived overlapping (double) images, which is indicative of the failure of stereoscopic fusion. (Darrell Dep., YRD-2008 at 68:22-23.)

2. The term “stereo fusion” refers to a process whereby corresponding points in two scenes are brought together by the visual system so as to create a sense of depth. (Darrell Dep., YRD-2008 at 33:8-13.)

3. Kawakita performs multiple alignment adjustments to the images, one for each sight line direction that an observer looks. (Darrell Dep., YRD-2008 at 76:2-9; 80:13-17.)

4. In Kawakita, if no subsequent alignment adjustment is made when a viewer shifts to a different sight line direction, faithful stereoscopic viewing in that portion of the scene is not possible. (Darrell Dep., YRD-2008 at 84:15-24, 86:14-22.)
2. Kawakita fails to teach “a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene.”

Claim 1 of the ’284 Patent refers to a display that receives a plurality of the mosaics and displays them so as to provide a sense of depth of the scene. The ’284 Patent makes clear, and the Board has recognized, that “the sense of depth must be perceived by a person.” (Paper 16 at 16-17, emphasis added.) Sony’s Petition asserts that displaying the images of Kawakita’s Fig. 5 discloses the claimed processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene. (Petition at 16-20.) As discussed above in the section titled “General Description of the Kawakita Reference,” Kawakita teaches two scenarios, and Sony’s arguments fail in each scenario.
Kawakita’s first scenario is when the distance from the camera to the objects does not vary greatly. Regarding this embodiment, Kawakita states:

When the left and right panoramic images obtained using the foregoing procedure are viewed binocular stereoscopically, a stereoscopic view is possible that faithfully reproduces the positional relationships. However, if the camera was placed at a comparatively close distance, or if the distance from the camera to the objects varies greatly, the positions representing the left and the right panoramic images must be adjusted.

(SONY-1004 at 16-17.) Said differently, in this scenario, stereoscopic viewing is possible if (1) the images are captured from a sufficient distance and (2) the distance from the camera to the objects is roughly the same. (Essa Decl. YRD-2010 at ¶¶ 27, 33-35.) As discussed above in Material Fact #2, in scenes where the distance from the camera to the objects is roughly the same there will not be a perception of depth.
This understanding was confirmed by Sony’s expert, Dr. Darrell, when he noted that there must be differential distances of objects in the scene to provide a perception of depth:

Q. What does it mean in your field to provide a perception of depth to a human?
A. I would say that it would mean that if a human viewed the stimulus, that they would sense differential distances of objects or surfaces or other elements of a scene, and that they could distinguish that from the case where there were no such differences in depth of such elements.

(Darrell Dep., YRD-2008 at 32:16-23; see also Essa Decl., YRD-2010 at ¶ 27.) Sony’s expert Dr. Darrell further confirmed that if objects in the scene are roughly at the same distance from the camera there would be no sense of depth when a person viewed the images:

Q. All the objects in the scene to be recorded are roughly the same distance from the cameras. Will the resultant image provide a perception of depth as to those?
…THE WITNESS: It would -- it would -- it would provide the degenerate sense of depth, that everything was at the same distance. So generally one would answer the question no, because you -- you want to see depth differences.

(Darrell Dep., YRD-2008 at 47:9-19; see also Essa Decl., YRD-2010 at ¶ 27.) Accordingly, there will not be a perception of depth in the first scenario of Kawakita because a perception of depth requires having objects at different distances from the camera, and this scenario has objects at roughly the same distance. Thus, this scenario does not teach the claimed processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene. (See Essa Decl., YRD-2010 at ¶ 35.)
Kawakita’s second scenario is when the objects in the image are at different distances. As mentioned in the section above, there is a problem in this scenario with overlap (also referred to as double images by Dr. Darrell, see e.g., Darrell Dep., YRD-2008 at 68:22-24; see also Essa Decl., YRD-2010 at ¶ 37). Kawakita addresses this problem by performing a parallax adjustment process to align the images. However, the alignment is only performed on part of the image:

A field test was conducted applying these techniques to panoramic images of an elevator hallway in which the distance to objects varies greatly. First, while actually looking at the panoramic images, alignment was performed in several sight line directions so faithful stereoscopic viewing would be possible, and the depth parallax angle in each sight line direction was recorded.

(SONY-1004 at 18, emphasis added.) As such, only part of the image (the sight line looked at by the viewer) is adjusted. (See Essa Decl., YRD-2010 at ¶ 39.) As for the remaining parts of the image, overlap remains and accordingly there is no perception of depth as to those parts.
This understanding was confirmed by Sony’s expert, Dr. Darrell, during his deposition on November 6, 2013, where he noted that Kawakita performs multiple alignment adjustments to the images, one for each sight line direction that an observer looks:

…Kawakita discloses performing the alignment in several sight line directions while viewing the panoramic images.
Q. So for each sight line direction in a pair of images, Kawakita describes that a new adjustment must be made for each of those sight lines when an observer is viewing?
A. I think so. Yes.

(Darrell Dep., YRD-2008 at 76:2-9; see also Essa Decl., YRD-2010 at ¶ 39.) Sony’s expert Dr. Darrell went on to confirm that in Kawakita, if no subsequent adjustment is made when a viewer shifts to a different sight line direction, faithful stereoscopic viewing in that portion of the scene is not possible:

Q. If no subsequent adjustment of the panoramic images is made when a viewer shifts their viewing to a different sight line direction in the panoramic images and the depth parallax angle that's been calculated for that new line of sight is different than the old, will faithful stereoscopic viewing of that portion of the scene be possible --
A. No.
Q. -- in Kawakita?
A. I don't think so.

(Darrell Dep., YRD-2008 at 84:15-24; see also Essa Decl., YRD-2010 at ¶ 39.) As such, in Kawakita’s embodiment where parallax adjustment is performed, only part of the image is actually displayed and aligned, and other parts of the image overlap. (Essa Decl., YRD-2010 at ¶ 39.) Because the partially parallax-adjusted images of Kawakita are only partially viewable, they provide, at most, a perception of depth of a partial scene. (See Essa Decl., YRD-2010 at ¶¶ 41-42.) Thus, Kawakita does not disclose the claimed processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene.
3. Inconsistent statements made by Dr. Darrell during his deposition.

During his deposition, Dr. Darrell stated that even prior to any adjustment, the images of Kawakita that appear to overlap are stereoscopic images that provide a perception of depth:

Q. So even in the circumstances described by Kawakita and his colleagues in which objects appear to overlap, such objects are still part of a stereoscopic panoramic image pair under your definition?
A. In that hypothetical case, if there were other objects in the scene that didn't overlap, that did have proper depth, there would be some depth perception in that scene. It may not be a very high-quality perception, or faithful.

(Darrell Dep., YRD-2008 at 59:17-25, emphasis added.)

Q. I want to confirm something we talked about before lunch. It's your opinion that the image pairs generated by the Kawakita process are a stereoscopic panoramic image pair even prior to any of these adjustment techniques that Kawakita discloses, correct?
A. Quite possibly, yes.

(Darrell Dep., YRD-2008 at 93:17-22.) In support of the above statements, Dr. Darrell suggested that in using the term “faithful,” Kawakita was making a distinction between high- and low-fidelity perception of depth:
A. So when you have faithful stereoscopic viewing, such that "a stereoscopic view is possible that faithfully reproduces the positional relationships," I take that to mean, by the authors, they're expressing the goal of a very high-fidelity, accurate reconstruction and perception, I should say, of the depth relationships in the scene, and that is a stricter definition of depth perception than the one that I would use when defining the term "stereoscopic panorama."

(Darrell Dep., YRD-2008 at 59:8-16, emphasis added.) These positions, however, are wholly inconsistent with the express language of Kawakita, which discloses that in embodiments where the distance from the camera to the objects varies greatly, “objects appear to overlap or some other fault, making faithful stereoscopic viewing impossible.” (SONY-1004 at 17.) Moreover, this position is inconsistent with Dr. Darrell’s earlier statement made during the deposition that seeing double images is indicative of failure of stereoscopic fusion:
Q. Did you discern in this Kawakita article any mechanism to perform the adjustments they describe so as to permit faithful stereoscopic viewing of these images?
A. Yes. They say that the method -- they disclose the method in section 6 which performs the adjustments of depth parallax angle and applied that in a field test, using an apparatus that they constructed, and had human viewers -- they mentioned ten research personnel -- view the panoramas stereoscopically, through some apparatus that isn't specifically disclosed, but a stereoscopic viewing apparatus that could have had double images, which is indicative of the failure of stereoscopic fusion, and they say those ten personnel experienced a faithful reproduction of their -- of a sense of depth.

(Darrell Dep., YRD-2008 at 68:12-69:1, emphasis added.) “Stereo fusion” or “stereoscopic fusion” refers to a process whereby corresponding points in two scenes are brought together (i.e., fused) by a person’s visual system so as to create a sense of depth. (Darrell Dep., YRD-2008 at 33:8-13; see also Essa Decl., YRD-2010 at ¶ 26.)
As also noted by Dr. Essa in his declaration, if two images viewed simultaneously with appropriate apparatus result in overlapping objects (or double images), stereoscopic fusion is impossible because the points are not brought together:

In my opinion, when Kawakita states that “objects appear to overlap … making faithful stereoscopic viewing impossible,” Kawakita is not attempting to distinguish between good and bad stereoscopic viewing. Rather, Kawakita is stating that it is impossible for the human mind to stereoscopically fuse the images together because of the overlap, which may also be referred to as double images (similar to seeing objects when looking cross-eyed). As I noted above, if a human perceives overlap (or there are double images) when looking at a stereo image pair as the sole stimulus, the human mind cannot stereoscopically fuse corresponding points of the images together and consequently there is no perception of depth. Here, Kawakita’s unadjusted images have “objects [that] appear to overlap” and consequently, the images cannot be stereoscopically fused without additional parallax adjustment.

(Essa Decl., YRD-2010 at ¶ 37, emphasis added.)
Therefore, and contrary to Dr. Darrell’s inconsistent statements, Kawakita’s unadjusted images, in embodiments where the distance from the camera to the objects varies greatly, do not provide a perception of depth before they are adjusted, because there is a failure of stereoscopic fusion (i.e., the images appear to overlap as double images).

Thus, because Kawakita does not teach a processor [to] generate a plurality of mosaics … [that] provide a sense of depth of the scene, as recited in the claim, Petitioner has failed to establish by a preponderance of the evidence that Claims 1, 10, 27, 36, and 38 are unpatentable over Kawakita under 35 U.S.C. § 102, or that Claims 1-4, 7, 10, 27-29, 36, and 38 are unpatentable over Kawakita in view of Chen and Kodak under 35 U.S.C. § 103.
B. Asahi fails to anticipate Claims 1, 3, 20, 27, 29, and 37 under 35 U.S.C. § 102.

As discussed below, Asahi fails to disclose all the elements of Claims 1, 3, 20, 27, 29, and 37. For the sake of discussion, Claim 1 is representative.

1. General Description of the Asahi reference.

Asahi is directed to calculating the heights of objects to make contour maps of a terrain. (SONY-1010 at ¶¶ 0003-0004.) Asahi discloses moving a video camera through the air via a helicopter, with GPS, gyro, and other metadata known for the camera location/orientation. (SONY-1010 at ¶¶ 0014-0015.)
Three single-pixel-width image lines are extracted at the forward, rearward, and nadir points and combined to form three continuous mosaics: the forward, the rearward, and the nadir mosaic. (SONY-1010 at ¶ 0058.) A single mosaic is illustrated below in Figs. 11, 14, and 15.
The mosaic images are subsequently adjusted to remove vertical parallax relative to each other (which is based on orientation information gathered during the flight). (SONY-1010 at ¶¶ 0050-0052.) Notably, although the continuous mosaic images are adjusted (i.e., rotated to remove the vertical parallax as shown in Fig. 14), the vertical parallax adjustment of the mosaic images does not generate horizontally aligned images suitable for human viewing (see Essa Decl., YRD-2010 at ¶¶ 50-51), and the images still contain image defects (illustrated by Asahi as a letter F with wavy lines in Fig. 11 above).

Subsequently, the adjusted mosaic images are used to compute the height (h) of objects in the scene and to ultimately create a Digital Elevation Model (DEM) of the scene over which the camera is flown. (SONY-1010 at ¶ 0064.) The formula for calculating height is h = d×H/B, where h is the height (in meters) to be determined, d is the parallax difference (in meters), B is the base length (in meters), and H is the imaging altitude (in meters). (SONY-1010 at ¶ 0064.) Consequently, because the calculated height is a function of the ratio of the base and altitude (i.e., H/B), the base B must necessarily be on the order of meters (if not tens or hundreds of meters). As explained by Dr. Essa:
A terrain is likely to have various hills and valleys, which can vary in height in the range of tens or hundreds of meters, and a helicopter needs to fly over these varying terrains at a safe altitude. Therefore, the base distance B between the forward and rearward images (which is related to the flying altitude H, as defined by the equation h = d×H/B) should be very large, likely in the tens or hundreds of meters.

(Essa Decl., YRD-2010 at ¶ 46.)
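To make the ratio concrete, a short worked example of the h = d×H/B relationship is set out below as a minimal Python sketch. The altitude, parallax difference, and base lengths used here are assumed, illustrative values and are not figures taken from Asahi or from Dr. Essa’s declaration.

    def object_height(d, H, B):
        """Height h (meters) from the relationship h = d * H / B, where
        d is the parallax difference (m), H the imaging altitude (m),
        and B the base length (m)."""
        return d * H / B

    H = 300.0   # assumed flying altitude, meters
    d = 2.0     # assumed parallax difference, meters
    for B in (1.0, 10.0, 100.0):
        print(f"B = {B:5.1f} m -> h = {object_height(d, H, B):7.1f} m")
    # For the same d and H, a very small base B yields an implausibly large h,
    # which is consistent with the observation above that B should be large
    # (on the order of tens or hundreds of meters) for terrain-scale heights.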
It is noted that Asahi is only concerned with calculating height and never discusses displaying a plurality of mosaics to a person so as to provide a sense of depth of the scene. (See Essa Decl., YRD-2010 at ¶ 47.)

2. Asahi fails to teach “a display that receives a plurality of the mosaics and displays them so as to provide a sense of depth of the scene.”

Claim 1 of the ’284 Patent refers to a system for generating and displaying mosaic images which provide a perception of depth to a person. (Paper 16 at 17-18.)
Sony asserts that because Asahi is directed to a “stereo image formation device,” id. claim 24, and discloses that “stereoscopic viewing is possible using [the] forward view image, [the] nadir view image, and [the] rearward view image,” Asahi discloses displaying mosaic images to a person to provide a perception of depth of the scene. (Petition at 49, emphasis added.) Notably, Sony elected to not provide any expert testimony regarding Asahi or the meaning of the terms used therein, but rather relied completely on attorney argument.

Notwithstanding the lack of expert testimony, and in view of Sony’s arguments in the Petition, the Board concluded that “[i]n view of the quoted disclosure from Asahi (EX. 1010 ¶ 35; claim 11) and the known definition of ‘stereoscopic,’ we are persuaded that Asahi’s ‘stereoscopic viewing’ discloses displaying mosaic images to a person so as to provide a sense of depth of
