`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`LG ELECTRONICS, INC. and LG ELECTRONICS U.S.A., INC.
`Petitioner
`
v.
`
`GESTURE TECHNOLOGY PARTNERS LLC
`Patent Owner
`
`Case No. IPR2022-00092
`U.S. Patent No. 8,878,949
`
`PETITION FOR INTER PARTES REVIEW
`OF U.S. PATENT NO. 8,878,949
`
`EAST\185898973.2
`
`
`
TABLE OF CONTENTS

                                                                        Page
I.    INTRODUCTION ......................................................... 1
II.   SUMMARY OF THE ’949 PATENT ........................................... 1
      A.   The ’949 Patent’s Alleged Invention .............................. 1
      B.   The ’949 Patent’s Prosecution .................................... 2
      C.   Overview of the Proposed Grounds ................................. 4
      D.   A Person Having Ordinary Skill in the Art ........................ 5
III.  REQUIREMENTS FOR IPR UNDER 37 C.F.R. § 42.104 ........................ 6
      A.   Standing Under 37 C.F.R. § 42.104(a) ............................. 6
      B.   Challenge Under 37 C.F.R. § 42.104(b) and Relief Requested ....... 6
      C.   Claim Construction Under 37 C.F.R. § 42.104(b)(3) ................ 7
IV.   THE CHALLENGED CLAIMS ARE UNPATENTABLE .............................. 10
      A.   Ground 1: Claims 1-18 are obvious under pre-AIA 35 U.S.C.
           § 103 over Numazaki in view of Nonaka ........................... 10
           1.   Overview of Numazaki ....................................... 10
           2.   Overview of Nonaka ......................................... 17
           3.   Motivation to Combine Numazaki and Nonaka .................. 19
                a.   Claim 1 ............................................... 24
                b.   Claim 2 ............................................... 32
                c.   Claim 3 ............................................... 35
                d.   Claim 4 ............................................... 36
                e.   Claim 5 ............................................... 36
                f.   Claim 6 ............................................... 39
                g.   Claim 7 ............................................... 41
                h.   Claim 8 ............................................... 41
                i.   Claim 10 .............................................. 42
                j.   Claim 11 .............................................. 42
                k.   Claim 12 .............................................. 45
                l.   Claim 13 .............................................. 45
                m.   Claim 14 .............................................. 47
                n.   Claim 15 .............................................. 47
                o.   Claim 16 .............................................. 47
                p.   Claim 17 .............................................. 47
                q.   Claim 18 .............................................. 47
      B.   Ground 2: Claims 6, 12, and 17 are obvious under pre-AIA 35
           U.S.C. § 103 over Numazaki in view of Nonaka and in further
           view of Aviv .................................................... 48
           1.   Overview of Aviv ........................................... 48
           2.   Motivation to Modify Numazaki in view of Nonaka and in
                further view of Aviv ....................................... 50
                a.   Claim 6 ............................................... 53
                b.   Claim 12 .............................................. 53
                c.   Claim 17 .............................................. 53
V.    DISCRETIONARY CONSIDERATIONS ........................................ 54
      A.   The Fintiv Factors Favor Institution ............................ 54
           1.   The Fintiv factors strongly favor institution .............. 54
                a.   Whether the court granted a stay or evidence exists
                     that one may be granted if a proceeding is
                     instituted ............................................ 54
                b.   Proximity of the court’s trial date to the Board’s
                     projected statutory deadline for a final written
                     decision .............................................. 55
                c.   Investment in the parallel proceeding by the court
                     and the parties ....................................... 56
                d.   Overlap between issues raised in the petition and
                     in the parallel proceeding ............................ 56
                e.   Whether the petitioner and the defendant in the
                     parallel proceeding are the same party ................ 57
                f.   Other circumstances that impact the Board’s
                     exercise of discretion, including the merits .......... 57
           2.   The Fintiv Framework Should Be Overturned .................. 58
                a.   The Fintiv framework exceeds the Director’s
                     authority ............................................. 58
                b.   The Fintiv framework is arbitrary and capricious ...... 60
           3.   The Fintiv framework was impermissibly adopted without
                notice-and-comment rulemaking .............................. 61
VI.   CONCLUSION ........................................................... 61
VII.  MANDATORY NOTICES UNDER 37 C.F.R. § 42.8(a)(1) ...................... 62
      A.   Real Party-In-Interest .......................................... 62
      B.   Related Matters ................................................. 62
      C.   Lead and Back-Up Counsel ........................................ 62
`
`
`
I. INTRODUCTION
Petitioner LG Electronics, Inc. and LG Electronics U.S.A., Inc. (“Petitioner”)
`
`requests an Inter Partes Review (“IPR”) of claims 1–18 (the “Challenged Claims”)
`
`of U.S. Patent No. 8,878,949 (“the ’949 Patent”). This petition is substantively the
`
`same as IPR2021-00921 (which is currently pending institution), and is being filed
`
`concurrently with a motion for joinder with respect to that proceeding.
`
II. SUMMARY OF THE ’949 PATENT

A. The ’949 Patent’s Alleged Invention
`Generally directed to digital imaging, the ’949 Patent seeks to automate the
`
`process of taking a picture by analyzing the scene and capturing an image when
`
`“certain poses of objects, sequences of poses, motions of objects, or any other
`
`states or relationships of objects are represented.” ’949 Patent (Ex. 1001), 1:50-
`
`2:8. The patent describes a number of different scenarios that, when detected,
`
`cause the camera to capture an image. Some examples include detecting (1) a
`
`“[s]ubject in a certain pose,” (2) a “[s]ubject in a sequence of poses,” (3) a
`
`“[p]ortion of [s]ubject in a sequence of poses (e.g., gestures),” (4) a “[s]ubject or
`
`portion(s) in a specific location or orientation,” (5) a “[s]ubject in position relative
`
`to another object or person” such as a “bride and groom kissing in a wedding,” and
`
`(6) “a subject undertak[ing] a particular signal comprising a position or gesture”
`
`such as “raising one’s right hand.” Id. at 5:30-49. Only gestures are claimed,
`
`however. Each of the Challenged Claims requires detecting or determining a
`
`“gesture has been performed.” Id. at Independent Claims 1, 8, 13.
`
`The ’949 Patent contemplates multiple image sensors to accomplish its goal.
`
`For example, a “central camera . . . is for picture taking and has high resolution and
`
`color accuracy,” while “lower resolution” cameras “with little or no accurate color
`
`capability . . . are used to simply see object positions.” Id. at 5:1-6. Although the
`
`term is not used outside the claims, all Challenged Claims refer to the gesture-
`
`capturing sensor as an “electro-optical sensor.” Id. at Independent Claims 1, 8, 13.
`
B. The ’949 Patent’s Prosecution
`The Application that resulted in the ’949 Patent was filed on August 7, 2013.
`
`The Application claims priority to provisional patent application No. 60/133,671,
`
`filed May 11, 1999. Id. at (22), (60). For purposes of this petition and without
`
`waiving its right to challenge priority in this or any other proceeding, Petitioner
`
`adopts May 11, 1999 as the invention date for the Challenged Claims.
`
`A first office action rejected all initially presented claims as anticipated or
`
`obvious over U.S. Patent No. 6,359,647 to Sengupta et al. (“Sengupta”). ’949 File
`
`History (Ex. 1002), 136-144. The examiner noted that Sengupta teaches an electro-
`
`optical sensor separate from a digital camera, which triggers an image capture
`
`when it detects movement within the sensor’s field of view. Id. at 140-141.
`
`In response, the Applicant characterized Sengupta as a system comprising
`
`multiple security cameras that transitions to an appropriate camera when an object
`
`moves from one camera’s field of view to another’s. Id. at 167-168. Focusing on
`
`structural distinctions, the Applicant argued that Sengupta did not teach “a device
`
`housing including a forward facing portion having an electro-optical sensor and a
`
`digital camera” as required by Claim 1 and its dependents. Id. at 168. The
`
`Applicant drew a functional distinction with respect to the claims that ultimately
`
`issued as independent Claims 8 and 13 (and their dependents), arguing Sengupta
`
`does not “identify a particular gesture apart from a plurality of gestures, where the
`
`particular gesture corresponds to an image capture command.” Id. at 169-170.
`
`A second office action rejected the Applicant’s alleged distinctions, finding
`
`the structural point was “not clearly defined in claim 1” and “the term ‘gesture’ []
`
`not clearly defined in the claim[s]” to support the purported distinction regarding
`
`independent Claims 8 and 13. Id. at 186. Following an examiner interview on
`
`August 7, 2014 (Id. at 199), the Applicant further amended the claims to distinguish
`
`the claimed invention from Sengupta. Id. at 210-217. The Applicant noted, “[w]ith
`
`respect to [the] amended independent claims . . . , Sengupta does not disclose, teach
`
`or suggest: a) a device housing including a forward facing portion that encompasses
`
`an electro-optical sensor and a digital camera; or b) a processor to determine a
`
`gesture corresponds to an image capture command, which causes the camera to
`
`store or capture an image.” Id. at 215. Regarding the claimed “gesture,” the
`
`Applicant distinguished Sengupta’s generic movement tracking from the claimed
`
`invention, which must “identif[y] a particular gesture” that triggers the device to
`
`capture an image. Id. at 216.
`
In response, the Examiner found the pending claims allowable, and the application was allowed on September 18, 2014 after multiple double patenting issues were resolved. Id. at 254-260.
`
C. Overview of the Proposed Grounds
Neither the camera structure—multiple forward-facing image sensors—nor the concept of detecting a specific gesture that causes a camera to capture an image was new. As set forth in detail below, U.S. Patent 6,144,366 to Numazaki et al. (“Numazaki”) teaches portable devices equipped with multiple forward-facing image sensors that detect gestures performed by users, connect those gestures with specific commands, and capture images with a sensor separate from those used to
`
`detect gestures. Although Numazaki does not expressly contemplate detecting a
`
`gesture that causes an image capture command to be processed, this was also
`
`known in the art. Namely, JPH4-73631 to Osamu Nonaka (“Nonaka”) (a Japanese
`
`Unexamined Patent Application Publication filed by Olympus Optical Co., Ltd.)
`
`teaches a process by which “the photographer gives a release instruction” for the
`
`camera to capture an image by performing a “predetermined motion towards the
`
`camera.” Nonaka (Ex. 1005), 3:34-36. The camera “detects this motion by the
`
`subject [] and exposure is carried out.” Id. at 37-38. Nonaka contemplates multiple
`
`predetermined gestures such as holding one’s hand out toward the camera (as
`
`depicted in Fig. 3 below left) or moving one’s hand toward the camera (as depicted
`
`in Fig. 7 below right).
`
`Id. at Figs. 3, 7. For the reasons discussed in the proposed grounds below, a
`
`PHOSITA would have been motivated to implement such image capture gesture
`
`functionality in Numazaki’s gesture-recognizing, multi-camera devices.
`
D. A Person Having Ordinary Skill in the Art
`A person having ordinary skill in the art (“PHOSITA”) at the time of the
`
`’949 Patent would have had at least a bachelor’s degree in electrical engineering or
`
`equivalent with at least one year of experience in the field of human computer
`
`interaction. Additional education or experience might substitute for the above
`
`requirements. Bederson Dec. (Ex. 1003), ¶¶ 29-31.
`
`III. REQUIREMENTS FOR IPR UNDER 37 C.F.R. § 42.104
A. Standing Under 37 C.F.R. § 42.104(a)
`Petitioner certifies that the ’949 Patent is available for IPR and that
`
`Petitioner is not barred or estopped from requesting an IPR challenging the claims
`
`of the ’949 Patent. Specifically, (1) Petitioner is not the owner of the ’949 Patent,
`
`(2) Petitioner has not filed a civil action challenging the validity of any claim of the
`
`’949 Patent, and (3) this Petition is filed less than one year after the Petitioner was
`
`served with a complaint alleging infringement of the ’949 Patent.
`
B. Challenge Under 37 C.F.R. § 42.104(b) and Relief Requested
`In view of the prior art and evidence presented, claims 1–18 of the ’949
`
`Patent are unpatentable and should be cancelled. 37 C.F.R. § 42.104(b)(1). Further,
`
`based on the prior art references identified below, IPR of the Challenged Claims
`
`should be granted. 37 C.F.R. § 42.104(b)(2).
`
Proposed Ground of Unpatentability                              Exhibits

Ground 1: Claims 1-18 are obvious under pre-AIA 35 U.S.C.       Ex. 1004,
§ 103 over U.S. Patent 6,144,366 to Numazaki, et al.            Ex. 1005
(“Numazaki”) in view of JPH4-73631 to Osamu Nonaka
(“Nonaka”)

Ground 2: Claims 6, 12, and 17 are obvious under pre-AIA 35     Ex. 1004,
U.S.C. § 103 over Numazaki in view of Nonaka and in further     Ex. 1005,
view of U.S. Patent No. 5,666,157 to David G. Aviv (“Aviv”)     Ex. 1006
`
`Section IV identifies where each element of the Challenged Claims is found
`
`in the prior art. 37 C.F.R. § 42.104(b)(4). The exhibit numbers of the evidence
`
`relied upon to support the challenges are provided above and the relevance of the
`
`evidence to the challenges raised is provided in Section IV. 37 C.F.R. §
`
`42.104(b)(5). Exhibits 1001-1017 are also attached.
`
C. Claim Construction Under 37 C.F.R. § 42.104(b)(3)
In this proceeding, claims are interpreted under the same standard applied by Article III courts (i.e., the Phillips standard). See 37 C.F.R. § 42.100(b); see also 83 Fed. Reg. 197 (Oct. 11, 2018); Phillips v. AWH Corp., 415 F.3d 1303, 1312 (Fed. Cir. 2005) (en banc). Under this standard, words in a claim are given their plain meaning, which is the meaning understood by a person of ordinary skill in the art in view of the patent and file history. Phillips, 415 F.3d at 1312–13.
`
`single exception discussed below in this section, Petitioner proposes that no terms
`
`require express construction to resolve the proposed grounds presented herein.
`
`Capture and store an image
`
`All three independent claims require capturing and/or storing “an image” in
`
`response to detecting an image capture gesture. See ’949 Patent (Ex. 1001), Claim
`
`1 (“the image capture command causes the digital camera to store an image to
`
`memory”), Claim 8 (“capturing an image to the digital camera in response to . . .
`
`the image capture command”), Claim 13 (“correlate the gesture detected . . . with
`
`an image capture function and subsequently capture an image using the digital
`
`camera”). For a number of reasons, these limitations should be construed broadly
`
`enough to encompass capturing/storing video or still images. First, the ’949 Patent
`
`repeatedly teaches that its invention applies both to still image and video capture.
`
`See, e.g., id. at 2:19-30 (describing efficiencies that can be realized in “movie
`
`making” by implementing the invention, noting it “allows more cost effective film
`
`production by giving the director the ability to expose the camera to the presence
`
of masses of data, but only saving or taking that data which is useful”), 8:10-30
`
`(describing “digital or other camera 320” that “record[s] the picture” when the
`
`subjects are posed in a predetermined manner, noting “camera 320 may be a video
`
`camera and recorder”).
`
`Second, the ’949 Patent teaches that captured video comprises a sequence of
`
`images. Id. at 8:10-30 (describing an embodiment in which “camera 320 may be a
`
`video camera and recorder which streams in hundreds or even thousands of frames
`
`of image data[,]” noting the subjects may then select individual prints from the
`
`captured image data). The Federal Circuit “has repeatedly emphasized that an
`
`indefinite article ‘a’ or ‘an’ in patent parlance carries the meaning of ‘one or more’
`
`in open-ended claims containing the transitional phrase ‘comprising.’” Baldwin
`
`Graphic Sys., Inc. v. Siebert, Inc., 512 F.3d 1338, 1342 (Fed. Cir. 2008) (quoting
`
`KCJ Corp. v. Kinetic Concepts, Inc., 223 F.3d 1351, 1356 (Fed. Cir. 2000)).
`
`Indeed, this principle “is best described as a rule, rather than merely as a
`
`presumption or even a convention” and “[t]he exceptions to this rule are extremely
`
`limited.” Id. (noting the patentee must evince a clear intent to limit “a” or “an” to
`
“one”). Here, all independent claims are open-ended, containing the transitional phrase “comprising,” and each requires capturing/storing “an image.” Applying the Federal Circuit’s well-established “rule,” the claims should be construed to cover capturing/storing one or more images to memory in response to detecting the
`
`image capture gesture. Because the ’949 Patent teaches that captured video is a
`
`sequence of still image frames, the claims should be construed to encompass
`
`capturing/storing still images or a video sequence of images.
`
`Finally, as discussed above, the prior art-based rejections during prosecution
`
`all focused on U.S. Patent No. 6,359,647 to Sengupta, et al. (“Sengupta”) (Ex.
`
`1014). Sengupta discloses “a system for controlling multiple video cameras” that
`
`supports “an automated camera handoff for selecting and directing cameras” by
`
`“tracking a figure.” The applicant conceded that “the security system in Sengupta
`
`tracks movement of an object ‘from one camera’s field of view to another camera’s
`
`field of view,’” but disputed that Sengupta taught detecting a gesture that
`
`corresponds to an image capture command. ’949 File History (Ex. 1002), 216
`
(arguing that Sengupta simply “‘reports the location’ of the moving objects”).
`
`Critically, however, the applicant did not attempt to distinguish Sengupta on the
`
`basis that it captures video, rather than still images. Accordingly, the prosecution
`
`history supports a conclusion that the applicant intended the Challenged Claims to
`
`encompass both capturing/storing still images and video.
`
`IV. THE CHALLENGED CLAIMS ARE UNPATENTABLE
A. Ground 1: Claims 1-18 are obvious under pre-AIA 35 U.S.C. § 103 over Numazaki in view of Nonaka

1. Overview of Numazaki
`U.S. Patent No. 6,144,366 to Numazaki et al. (“Numazaki”) (Ex. 1004) was
`
`filed on October 17, 1997, issued on November 7, 2000, and is prior art to the ’949
`
`Patent under at least 35 U.S.C. § 102(e) (pre-AIA). Numazaki was not cited or
`
`considered during prosecution of the ’949 Patent. ’949 Patent (Ex. 1001).
`
`Numazaki is generally directed to a method for detecting a gesture or the
`
`movement of a user’s hand. Numazaki (Ex. 1004), Abstract, 4:9-40. Numazaki
`
`purports to have improved upon prior methods by using a controlled light source to
`
`illuminate the target object (e.g., the user’s hand), a first camera unit (referred to
`
`by Numazaki as a “photo-detection unit”),1 and a second camera unit. Id. at 11:9-
`
`23. This arrangement is illustrated in Fig. 2 below:
`
`1 A PHOSITA would have considered Numazaki’s photo-detection units to be
`
`camera units. Bederson Dec. (Ex. 1003), ¶ 35 (explaining that Numazaki describes
`
`using CMOS or CCD sensor units at 15:24-16:19, which were two of the more
`
`common optical sensors used in camera units at the time).
`
`Id. at Fig. 2. A timing control unit is used to turn lighting unit 101 on (i.e.,
`
`illuminating the target object) when the first camera unit is active and off when the
`
`second camera unit is active. Id. at 11:20-32. The result of this light control is the
`
`first camera unit captures an image of the target object illuminated by both natural
`
`light and the lighting unit 101 and the second camera unit captures an image of the
`
`target object illuminated by only natural light. Id. at 11:33-39. The difference
`
`between the two images—obtained by difference calculation unit 111—represents
`
`the “reflected light from the object resulting from the light emitted by the lighting
`
`unit 101.” Id. at 11:43-51. This information is then used by feature data generation
`
`unit 103 to determine gestures, pointing, etc. of the target object that may be
`
`converted into commands executed by a computer. Id. at 10:57-66.
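For illustration only, the difference calculation described above can be sketched in a few lines of Python. This is a sketch of the general idea, not Numazaki’s disclosed implementation; the frame values, dimensions, and function name below are hypothetical:

```python
# Illustrative sketch of the difference calculation Numazaki describes
# (Ex. 1004, 11:33-51): one frame is captured with lighting unit 101 on
# (natural light + controlled light) and one with it off (natural light
# only). Subtracting the two leaves only the reflected light contributed
# by the lighting unit, which highlights the nearby target object.

def reflected_light(frame_lit, frame_unlit):
    """Per-pixel difference between lit and unlit frames, clamped at zero."""
    return [
        [max(lit - unlit, 0) for lit, unlit in zip(row_lit, row_unlit)]
        for row_lit, row_unlit in zip(frame_lit, frame_unlit)
    ]

# Hypothetical 2x3 frames: the target object occupies the left two columns,
# so only those pixels brighten when the lighting unit is on.
frame_lit = [[200, 180, 60],
             [210, 190, 55]]
frame_unlit = [[120, 110, 60],
               [125, 115, 55]]

print(reflected_light(frame_lit, frame_unlit))
```

Background pixels, which receive the same natural light in both frames, cancel to zero, leaving an image dominated by the object illuminated by the lighting unit.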
`
`Throughout its lengthy disclosure, Numazaki describes various features and
`
`functionality based on this two-camera structure. Although referred to as
`
`“embodiments,” as discussed in detail below, a PHOSITA would have understood
`
`that many are complementary and would have been motivated to combine certain
`
`of these features and functionalities in a single device.
`
`Numazaki’s third embodiment is “another exemplary case of the feature data
`
`generation unit of the first embodiment, which realizes a gesture camera for
`
`recognizing the hand action easily and its application as a pointing device in the
`
`three-dimensional space.” Id. at 29:4-8. In this embodiment, data reflecting pre-
`
`registered gestures or hand positions is stored in “shape memory unit 332.” Id. at
`
`29:9-38. This stored data is compared with the output from the two-sensor
`
`“reflected light extraction unit” (i.e., component 102 from first embodiment) to
`
`identify when the user has performed a pre-registered gesture and instructs the
`
`device to implement a “command corresponding to that [gesture].” Id. at 29:22-
`
`30:5. In addition to “hand [g]esture recognition as the means for inputting the
`
`command into the three-dimensional space and the like into the computer,” the third
`
`embodiment contemplates additional gesture-based instructions such as “instructing
`
`the power ON/OFF of the TV, the lighting equipment, etc.” Id. at 31:3-10.
`
`Numazaki cautions that gestures can be erroneously detected when the user did not
`
`intend to command the system and proposes a sequence of gestures to prevent such
`
`“erroneous commanding.” Id. at 31:11-44, Fig. 29 (describing a sequence of “two
`
`fingers, fist, five fingers” that causes the system to “power on”). Such a sequence
`
`(frames a-c) and the rectangular data extracted from the images (frames d-f) are
`
`illustrated in Fig. 28 below:
`
`Id. at Fig. 28.
`
`Numazaki’s fifth embodiment is directed to video capture and transmission
`
`for applications such as videoconferencing. Specifically, the fifth embodiment
`
`recognizes the difficulties involved with videoconference applications using a
`
`portable device, including communication costs and power consumption. Id. at
`
39:6-16. To combat these difficulties, Numazaki teaches a “TV telephone” that
`
`extracts and transmits only the faces of the participating users, removing
`
`extraneous background information that would otherwise consume bandwidth and
`
`battery power. Id. at 39:12-20. To accomplish this improvement, Numazaki uses
`
`the same two-camera “reflected light extraction unit 102” (i.e., the same
`
`component 102 from first embodiment) in conjunction with a separate “visible
`
`light photo-detection array 351 which is generally used as a CCD camera for
`
taking video images.” Id. at 39:21-49. This structure is depicted in Fig. 46 below:
`
`Id. at Fig. 46. Using this arrangement, the fifth embodiment processes the
`
`output of two-sensor structure 102 to identify an outline of the subject and
`
`subtracts everything outside this outline from the image captured by the visible
`
`light sensor 351. Id. at 39:24-60. In doing so, the fifth embodiment arrives at image
`
`information that contains only the subject without any background image
`
`information and uses much less data as a result of removing the extraneous image
`
`information. Id. at 40:32-35 (“By recording the extracted image, it is possible to
`
`reduce the recording capacity considerably (in an order of one tenth to one
`
hundredth)[.]”).

Numazaki’s eighth embodiment describes various portable computers, such as laptops and handheld devices, on which the earlier-described hardware and functionalities may be implemented. Id. at 50:19-24. For example,
`
`Fig. 74 depicts a laptop computer with a lighting unit 701 and camera unit 702:
`
`Id. at Fig. 74, 50:25-37 (describing the same). Similar implementations in
`
`which a user can control a cursor by moving a finger are described with reference
`
`to Fig. 78 (a PDA device) and Fig. 79 (a wristwatch device). Numazaki expressly
`
`teaches that these portable devices implement the information input generation
`
`apparatus described in the preceding embodiments. Id. at 50:19-24 (“This eighth
`
`embodiment is directed to a system configuration incorporating the information
`
`input generation apparatus of the present invention as described in the above
`
`embodiments.”). For example, Numazaki’s eighth embodiment devices include the
`
`controlled light and two-camera configuration described in its first embodiment.
`
`As described with respect to the first embodiment above, Numazaki’s information
`
`input generation apparatus uses controlled lighting and two camera sensors to
`
`isolate an image of an object (such as a user’s hand) by subtracting the image taken
`
`when the light is off from the image taken when the light is on, thereby obtaining
`
`an image that focuses on the illuminated object. Numazaki teaches that this precise
`
`image difference calculation is included in the eighth embodiment. Id. at 53:22-36.
`
`Accordingly, a PHOSITA would have understood that Numazaki’s eighth
`
`embodiment portable devices incorporate the controlled lighting and two-camera
`
`sensor structure described with respect to the first embodiment. Bederson Dec.
`
`(Ex. 1003), ¶ 44. As discussed below in the motivations to combine Numazaki and
`
`Nonaka section, a PHOSITA would also have been motivated to implement in the
`
`eighth embodiment laptop the third embodiment’s feature that maps specific
`
`gestures to specific commands and the fifth embodiment’s video capture feature.
`
`Because Numazaki, like the ’949 Patent, discloses a camera system that may
`
`be controlled by human gesture input, Numazaki is in the same field of endeavor as
`
`the ’949 Patent. Compare Numazaki (Ex. 1004), 31:3-10 (noting gesture
`
`recognition is used as “the means for inputting the command . . . into the
`
`computer” and noting gestures can be used to turn on/off equipment such as a TV
`
`or lighting equipment), 50:25-48 (describing a portable laptop device equipped
`
`with a light source and camera such that a user can control certain functionalities
`
`using hand gestures) with ’949 Patent (Ex 1001), 5:24-49 (describing the process
`
`of identifying a subject has performed a predetermined gesture and controlling the
`
`camera device to capture an image in response). Numazaki is therefore analogous
`
`art to the ’949 Patent. Bederson Dec. (Ex. 1003), ¶¶ 33-45.
`
2. Overview of Nonaka
`Japanese Unexamined Patent Application Publication JPH4-73631 to Osamu
`
`Nonaka (“Nonaka”) (Ex. 1005) published on March 9, 1992 and is prior art to the
`
`’949 Patent under at least 35 U.S.C. § 102(b) (pre-AIA). Nonaka was not cited or
`
`considered during prosecution of the ’949 Patent. ’949 Patent (Ex. 1001).
`
`Nonaka teaches a “remote release device-equipped camera” that allows a
`
`user to signal a desire for the camera to take a picture by “mak[ing] a
`
`predetermined motion.” Nonaka (Ex. 1005), 15:11-14. Nonaka contemplates
`
`multiple predetermined gestures such as holding one’s hand out toward the camera
`
`(as depicted in Fig. 3 below left) or moving one’s hand toward the camera (as
`
`depicted in Fig. 7 below right):
`
`Id. at Figs. 3, 7, 3:34-4:4 (describing “the photographer giv[ing] a release
`
`instruction by means of a predetermined motion towards the camera . . . such as
`
`moving his or her hand forward”), 6:11-22 (describing the photographer “giv[ing]
`
`a release instruction . . . [by] mov[ing] his or her hand, etc. at a certain speed”).
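For illustration only, Nonaka’s speed-based release instruction might be sketched as follows. The distances, frame interval, and speed threshold are hypothetical values chosen to illustrate the idea; they are not taken from Nonaka:

```python
# Illustrative sketch of Nonaka's motion-based remote release (Ex. 1005,
# 6:11-22): the camera monitors the subject and carries out exposure when
# it detects the predetermined motion -- e.g., a hand moving toward the
# camera at a certain speed. All numbers below are hypothetical.

FRAME_INTERVAL_S = 0.1       # assumed time between distance measurements
SPEED_THRESHOLD_M_S = 0.5    # assumed minimum approach speed for a release

def release_triggered(hand_distances_m):
    """True once any inter-frame approach speed meets the threshold."""
    for prev, cur in zip(hand_distances_m, hand_distances_m[1:]):
        approach_speed = (prev - cur) / FRAME_INTERVAL_S  # toward camera
        if approach_speed >= SPEED_THRESHOLD_M_S:
            return True
    return False

print(release_triggered([2.0, 1.98, 1.90]))  # slow drift, then fast approach
print(release_triggered([2.0, 2.0, 2.0]))    # no motion, no release
```

Slow, incidental movement stays below the threshold, while a deliberate motion toward the camera crosses it and signals the release, matching Nonaka’s description of communicating the photographer’s intent through a predetermined motion.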
`
`Because Nonaka, like the ’949 Patent, discloses a camera system that may
`
`be controlled by human gesture input, Nonaka is in the same field of endeavor as
`
`the ’949 Patent. Compare Nonaka (Ex. 1005), 14:32-36 (“[D]uring the remote
`
`release operation of the camera according to the present exemplary embodiment, if
`
`the photographer makes a motion such as moving his or her hand . . . when he or
`
`she intends to carry out the release operation, the photographer is thus able to
`
`communicate that intent to the camera.”) with ’949 Patent (Ex 1001), 5:24-49
`
`(describing the process of identifying a subject has performed a predetermined
`
`gesture and controlling the camera device to capture an image in response).
`
`Nonaka is therefore analogous art to the ’949 Patent. Bederson Dec. (Ex. 1003), ¶¶
`
`46-47.
`
`3. Motivation to Combine Numazaki and Nonaka
`As discussed above, Numazaki’s first embodiment teaches a two-camera
`
`“reflected light extraction unit” through which an object such as a user’s hand can
`
`be detected. Numazaki’s third embodiment takes the output from the two-sensor
`
`“reflected light extraction unit,” compares it with stored data reflecting pre-
`
`registered gestures or hand positions in “shape memory unit 332” to detect when
`
`the user has performed a pre-registered gesture, and instructs the device to
`
`implement a “command corresponding to that [gesture].” Numazaki (Ex. 1004),
`
`29:19-30:5. Such gesture commands, for example, can instruct the system to turn
`
`devices on or off. Id. at 31:3-10. In its fifth embodiment, Numazaki adds an
`
`additional “visible light photo-detection array 351 . . . for taking video images.” Id.
`
`at 39:21-49. Using this arrangement, the fifth embodiment processes the output of
`
`two-sensor “reflected light extraction unit” 102 to identify an outline of the subject
`
`and subtracts everything outside this outline from the image captured by the visible
`
`light sensor 351, thereby decreasing the size of the ultimate video file. Id. at 39:24-
`
60, 40:32-35. Finally, Numazaki’s eighth embodiment teaches portable devices that
`
`implement the information input generation apparatus described in the preceding
`
`embodiments. Id. at 50:19-24. Although Numazaki does not expressly describe
`
`combining all these features into a single portable device such that a user could
`
`perform a gesture command (pursuant to its third embodiment) that causes video
`
`capture to initiate (pursuant to its fifth embodiment), a PHOSITA would have been
`
`motivated to implement Numazaki’s portable device in this manner pursuant to
`
`Nonaka’s image capture command gesture teachings.
`
`First, implementing Numazaki’s features as proposed would