UNITED STATES DEPARTMENT OF COMMERCE
United States Patent and Trademark Office
Address: COMMISSIONER FOR PATENTS
P.O. Box 1450
Alexandria, Virginia 22313-1450

APPLICATION NO.: 16/177,408
FILING DATE: 10/31/2018
FIRST NAMED INVENTOR: Toshihiro Horie
ATTORNEY DOCKET NO.: 18962-1038001/P35244USX1
CONFIRMATION NO.: 7821

FISH & RICHARDSON P.C. (APPLE)
PO BOX 1022
MINNEAPOLIS, MN 55440-1022

EXAMINER: CHIU, WESLEY JASON
ART UNIT: 2698
PAPER NUMBER:
NOTIFICATION DATE: 12/12/2019
DELIVERY MODE: ELECTRONIC

Please find below and/or attached an Office communication concerning this application or proceeding.

The time period for reply, if any, is set in the attached communication.

Notice of the Office communication was sent electronically on above-indicated "Notification Date" to the following e-mail address(es):

PATDOCTC@fr.com

PTOL-90A (Rev. 04/07)

Office Action Summary

Application No.: 16/177,408
Applicant(s): Horie et al.
Examiner: WESLEY J CHIU
Art Unit: 2698
AIA (FITF) Status: Yes

- The MAILING DATE of this communication appears on the cover sheet with the correspondence address -

Period for Reply

A SHORTENED STATUTORY PERIOD FOR REPLY IS SET TO EXPIRE 3 MONTHS FROM THE MAILING DATE OF THIS COMMUNICATION.
- Extensions of time may be available under the provisions of 37 CFR 1.136(a). In no event, however, may a reply be timely filed after SIX (6) MONTHS from the mailing date of this communication.
- If NO period for reply is specified above, the maximum statutory period will apply and will expire SIX (6) MONTHS from the mailing date of this communication.
- Failure to reply within the set or extended period for reply will, by statute, cause the application to become ABANDONED (35 U.S.C. § 133).
- Any reply received by the Office later than three months after the mailing date of this communication, even if timely filed, may reduce any earned patent term adjustment. See 37 CFR 1.704(b).

Status

1) [X] Responsive to communication(s) filed on 07/09/2019.
   [ ] A declaration(s)/affidavit(s) under 37 CFR 1.130(b) was/were filed on _____.
2a) [ ] This action is FINAL.
2b) [X] This action is non-final.
3) [ ] An election was made by the applicant in response to a restriction requirement set forth during the interview on _____; the restriction requirement and election have been incorporated into this action.
4) [ ] Since this application is in condition for allowance except for formal matters, prosecution as to the merits is closed in accordance with the practice under Ex parte Quayle, 1935 C.D. 11, 453 O.G. 213.

Disposition of Claims*

5) [X] Claim(s) 1-20 is/are pending in the application.
   5a) Of the above claim(s) _____ is/are withdrawn from consideration.
6) [ ] Claim(s) _____ is/are allowed.
7) [X] Claim(s) 1, 9-11 and 19 is/are rejected.
8) [X] Claim(s) 2-8, 12-18 and 20 is/are objected to.
9) [ ] Claim(s) _____ are subject to restriction and/or election requirement.

* If any claims have been determined allowable, you may be eligible to benefit from the Patent Prosecution Highway program at a participating intellectual property office for the corresponding application. For more information, please see http://www.uspto.gov/patents/init_events/pph/index.jsp or send an inquiry to PPHfeedback@uspto.gov.

Application Papers

10) [ ] The specification is objected to by the Examiner.
11) [X] The drawing(s) filed on 10/31/2018 is/are: a) [X] accepted or b) [ ] objected to by the Examiner.
   Applicant may not request that any objection to the drawing(s) be held in abeyance. See 37 CFR 1.85(a).
   Replacement drawing sheet(s) including the correction is required if the drawing(s) is objected to. See 37 CFR 1.121(d).

Priority under 35 U.S.C. § 119

12) [ ] Acknowledgment is made of a claim for foreign priority under 35 U.S.C. § 119(a)-(d) or (f).
   Certified copies:
   a) [ ] All   b) [ ] Some**   c) [ ] None of the:
   1. [ ] Certified copies of the priority documents have been received.
   2. [ ] Certified copies of the priority documents have been received in Application No. _____.
   3. [ ] Copies of the certified copies of the priority documents have been received in this National Stage application from the International Bureau (PCT Rule 17.2(a)).
** See the attached detailed Office action for a list of the certified copies not received.

Attachment(s)

1) [X] Notice of References Cited (PTO-892)
2) [X] Information Disclosure Statement(s) (PTO/SB/08a and/or PTO/SB/08b), Paper No(s)/Mail Date _____
3) [ ] Interview Summary (PTO-413), Paper No(s)/Mail Date _____
4) [ ] Other: _____

U.S. Patent and Trademark Office
PTOL-326 (Rev. 11-13)
Office Action Summary
Part of Paper No./Mail Date 20191125

DETAILED ACTION

Notice of Pre-AIA or AIA Status

The present application, filed on or after March 16, 2013, is being examined under the first inventor to file provisions of the AIA.

Information Disclosure Statement

The information disclosure statements (IDS) submitted on 03/06/2019 and 11/16/2018 are in compliance with the provisions of 37 CFR 1.97. Accordingly, the information disclosure statements are being considered by the examiner.

Claim Objections

Claims 1, 4, 7, 10-11, 14, 17 and 19 are objected to because of the following informalities:

In claim 4, lines 1-2, change "wherein applying a temporal filter to the high-resolution matte to generate a final matte further comprises" to "wherein applying the temporal filter to the high-resolution matte to generate the final matte further comprises".

In claim 14, lines 1-2, change "wherein applying a temporal filter to the high-resolution matte to generate a final matte further comprises" to "wherein applying the temporal filter to the high-resolution matte to generate the final matte further comprises".

In claim 7, line 1, change "a shoulder/torso matte" to "the shoulder/torso matte".

In claim 17, line 1, change "a shoulder/torso matte" to "the shoulder/torso matte".

In claim 10, line 23, change "the matte" to "the high-resolution matte".

In claim 19, line 30, change "the matte" to "the high-resolution matte".
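
As a purely illustrative aside (not part of the claims or the record), the "temporal filter" recited in claims 4 and 14 could be as simple as an exponential moving average over successive high-resolution mattes. The sketch below assumes such a filter in Python/NumPy; the class name and the weight value are hypothetical:

    import numpy as np

    class TemporalMatteFilter:
        """Exponential moving average over per-frame mattes (illustrative)."""

        def __init__(self, alpha=0.6):
            self.alpha = alpha   # assumed weight on the newest matte
            self.state = None    # running filtered matte

        def apply(self, high_res_matte):
            # Blend the newest matte with the running state to suppress
            # frame-to-frame flicker, yielding a final matte.
            matte = high_res_matte.astype(np.float32)
            if self.state is None:
                self.state = matte
            else:
                self.state = self.alpha * matte + (1.0 - self.alpha) * self.state
            return self.state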

Claims 1 and 11 recite: "receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment;".

The limitation suggests the depth sensor or the motion sensors are receiving data.

Examiner suggests amending the limitation to recite:

"receiving, from a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, from one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment;".

Alternatively, Examiner suggests amending the limitation to recite:

"capturing, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the camera in the physical, real-world environment; capturing, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment;".

Appropriate correction is required.

Claim Rejections - 35 USC § 103

In the event the determination of the status of the application as subject to AIA 35 U.S.C. 102 and 103 (or as subject to pre-AIA 35 U.S.C. 102 and 103) is incorrect, any correction of the statutory basis for the rejection will not be considered a new ground of rejection if the prior art relied upon, and the rationale supporting the rejection, would be the same under either status.

The following is a quotation of 35 U.S.C. 103 which forms the basis for all obviousness rejections set forth in this Office action:

A patent for a claimed invention may not be obtained, notwithstanding that the claimed invention is not identically disclosed as set forth in section 102, if the differences between the claimed invention and the prior art are such that the claimed invention as a whole would have been obvious before the effective filing date of the claimed invention to a person having ordinary skill in the art to which the claimed invention pertains. Patentability shall not be negated by the manner in which the invention was made.

Claims 1, 9-11 and 19 are rejected under 35 U.S.C. 103 as being unpatentable over Flack et al. (US 2017/0244908 A1) in view of Ciudad et al. (US 2008/0307307 A1) in further view of Dunn et al. (US 2009/0315915 A1) in further view of Baruch et al. (US 2017/0124717 A1).

Regarding claim 19, Flack et al. (hereafter referred as Flack) teaches a system (Flack, Fig. 1) comprising:

a display (Flack, Fig. 1, Display 34);

a camera (Flack, Fig. 1, Camera 12);

one or more processors (Flack, Fig. 2, Processor 13);

memory coupled to the one or more processors and storing instructions that when executed by the one or more processors (Flack, Fig. 2, Memories 23 and 25, Paragraph 0070), cause the one or more processors to perform operations comprising:

capturing sequential frames of image data by the camera positioned in close range of a subject, the sequential frames of image data including close range image data of the subject (Flack, Paragraph 0066 and 0072) and image data of a background behind the subject in a physical, real-world environment (Flack, Fig. 6);

receiving a first user input to apply a virtual environment effect (Flack, Paragraphs 0042 and 0099);

generating, by the one or more processors, a matte from the sequential frames of image data (Flack, Paragraphs 0082-0086 and 0127-0128), wherein generating the matte includes:

generating, by a neural network, a low-resolution matte (Flack, Paragraphs 0127-0128); and

generating a high-resolution matte from the low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte (Flack, Paragraphs 0082-0086);

generating, by the one or more processors, composite sequential frames of image data, including the sequential frames of image data, the matte and a virtual background content (Flack, Paragraph 0108); and

displaying, on the display, the composite sequential frames of image data (Flack, Paragraph 0108).
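
For orientation only, the matte-generation and compositing flow mapped above can be pictured with the following Python/OpenCV sketch. It is not the applicant's or Flack's implementation; seg_model (a low-resolution segmentation network), the working resolution, and the function name are assumptions:

    import cv2
    import numpy as np

    def composite_frame(frame, virtual_background, seg_model):
        # Low-resolution matte produced by a neural network
        # (seg_model is a hypothetical stand-in returning values in [0, 1]).
        small = cv2.resize(frame, (160, 120))
        low_res_matte = seg_model(small)

        # High-resolution matte upsampled from the low-resolution matte.
        h, w = frame.shape[:2]
        high_res_matte = cv2.resize(low_res_matte, (w, h))

        # Composite: subject over virtual background content, each pixel
        # weighted by the matte.
        alpha = high_res_matte[..., None]   # broadcast over color channels
        out = alpha * frame + (1.0 - alpha) * virtual_background
        return out.astype(np.uint8)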

However, Flack does not teach a forward-facing camera; a depth sensor; one or more motion sensors; presenting a preview on the display, the preview including sequential frames of preview image data; capturing, by the depth sensor, depth data indicating a distance of the subject from the forward-facing camera in the physical, real-world environment; capturing, by the one or more sensors, orientation data indicating at least an orientation of the forward-facing camera in the physical, real-world environment; generating, by the one or more processors, a camera transform based on the orientation data, the camera transform describing an orientation of a virtual camera in a virtual environment; generating, by the one or more processors, a matte from the sequential frames of the depth data; processing the low-resolution matte to remove artifacts in the low-resolution matte; generating a high-resolution matte from the processed low-resolution matte; the virtual background content selected from the virtual environment using the camera transform.

In reference to Ciudad et al. (hereafter referred as Ciudad), Ciudad teaches presenting a preview on the display, the preview including sequential frames of preview image data (Ciudad, Fig. 9, Paragraphs 0021 and 0040), and receiving a first user input to apply a virtual environment effect (Ciudad, Fig. 9, Paragraphs 0021 and 0040).

These arts are analogous since they are both related to background substitution. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA) to modify the invention of Flack with the method for selecting a background as seen in Ciudad to provide a user interface showing an unmodified image and allow the user to view the backgrounds that may be selected.

However, the combination of Flack and Ciudad does not teach a forward-facing camera; a depth sensor; one or more motion sensors; capturing, by the depth sensor, depth data indicating a distance of the subject from the forward-facing camera in the physical, real-world environment; capturing, by the one or more sensors, orientation data indicating at least an orientation of the forward-facing camera in the physical, real-world environment; generating, by the one or more processors, a camera transform based on the orientation data, the camera transform describing an orientation of a virtual camera in a virtual environment; generating, by the one or more processors, a matte from the sequential frames of the depth data; processing the low-resolution matte to remove artifacts in the low-resolution matte; generating a high-resolution matte from the processed low-resolution matte; the virtual background content selected from the virtual environment using the camera transform.

In reference to Dunn et al. (hereafter referred as Dunn), Dunn teaches a forward-facing camera (Dunn, Fig. 1, Camera 110, Paragraph 0018);

one or more motion sensors (Dunn, Fig. 2, Sensor 128, Paragraph 0019);

capturing, by the one or more sensors, orientation data indicating at least an orientation of the forward-facing camera in the physical, real-world environment (Dunn, Paragraphs 0030 and 0032);

generating, by the one or more processors, a camera transform based on the orientation data, the camera transform describing an orientation of a virtual camera in a virtual environment; and the virtual background content selected from the virtual environment using the camera transform (Dunn, Paragraphs 0030-0032, Figs. 7-8, Paragraph 0035).
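
To make the camera-transform element concrete, the sketch below shows one conventional way an orientation reading could be turned into a virtual-camera rotation and used to select background content from a wider virtual environment (here, a horizontal crop of an equirectangular panorama). The Euler-angle inputs, panorama layout, and function names are assumptions for illustration, not teachings of Dunn:

    import numpy as np

    def camera_transform(yaw, pitch, roll):
        # Rotation matrices about the y (yaw), x (pitch), and z (roll) axes.
        cy, sy = np.cos(yaw), np.sin(yaw)
        cx, sx = np.cos(pitch), np.sin(pitch)
        cz, sz = np.cos(roll), np.sin(roll)
        ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
        rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
        rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
        return ry @ rx @ rz   # orientation of the virtual camera

    def select_background(panorama, transform, crop_width):
        # Map the virtual camera's forward axis to a horizontal offset in an
        # equirectangular panorama and crop the matching window.
        forward = transform @ np.array([0.0, 0.0, 1.0])
        h, w = panorama.shape[:2]
        u = int((np.arctan2(forward[0], forward[2]) / (2 * np.pi) + 0.5) * w)
        x0 = max(0, min(w - crop_width, u - crop_width // 2))
        return panorama[:, x0:x0 + crop_width]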

These arts are analogous since they are all related to background substitution. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA) to modify the combination of Flack and Ciudad with the front-facing camera, motion sensor and method of adjusting the background based on the motion sensor as seen in Dunn to allow the user to see the image while capturing the image data and to increase the realism of the background substitution (Dunn, Paragraph 0002).

However, the combination of Flack, Ciudad and Dunn does not teach a depth sensor; capturing, by the depth sensor, depth data indicating a distance of the subject from the forward-facing camera in the physical, real-world environment; generating, by the one or more processors, a matte from the sequential frames of the depth data; processing the low-resolution matte to remove artifacts in the low-resolution matte; nor generating a high-resolution matte from the processed low-resolution matte.

In reference to Baruch et al. (hereafter referred as Baruch), Baruch teaches a depth sensor; capturing, by the depth sensor, depth data indicating a distance of the subject from the camera in the physical, real-world environment (Baruch, Paragraphs 0021 and 0057);

generating, by the one or more processors, a matte from the sequential frames of the depth data (Baruch, Fig. 9, Foreground mask 902, Paragraphs 0047 and 0056-0057);

processing the low-resolution matte to remove artifacts in the low-resolution matte (Baruch, Figs. 10-11, Paragraphs 0056-0059, 0064);

generating a high-resolution matte from the processed low-resolution matte (Baruch, Paragraphs 0023 and 0062-0064).

These arts are analogous since they are all related to foreground and background segmentation. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA) to modify the combination of Flack, Ciudad and Dunn with the depth sensor and segmentation method using the depth sensor as seen in Baruch to produce a rough foreground mask and reduce the computational load and time of processing the matte (Baruch, Paragraph 0063). That is, by using a rough mask based on the depth data, the classification method of Flack is applied only near the border between background and foreground rather than to the entire image.
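
A minimal sketch of that rationale, for illustration only: a depth-thresholded rough mask stands in for Baruch's foreground mask, and classify_band is a hypothetical stand-in for Flack's per-pixel classification, run only in a band around the foreground/background border. The threshold and band width are assumed values:

    import cv2
    import numpy as np

    def refine_matte(depth_mm, image, classify_band, near_mm=1200, band_px=15):
        # Rough foreground mask: pixels closer than a depth threshold.
        rough = (depth_mm < near_mm).astype(np.uint8)

        # Border band: dilation minus erosion around the rough mask edge.
        kernel = np.ones((band_px, band_px), np.uint8)
        band = cv2.dilate(rough, kernel) - cv2.erode(rough, kernel)

        # Run the costly classification only inside the band; keep the rough
        # labels everywhere else, reducing the computational load.
        matte = rough.astype(np.float32)
        ys, xs = np.nonzero(band)
        matte[ys, xs] = classify_band(image, ys, xs)  # per-pixel alphas
        return matte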

Claims 1 and 10-11 are rejected for the same reasons as claim 19.

Regarding claim 9, the combination of Flack, Ciudad, Dunn and Baruch teaches the method of claim 1 (see claim 1 analysis), wherein the neural network is a convolutional neural network for image segmentation (Flack, Paragraph 0127).

Claims 1, 9 and 11 are rejected under 35 U.S.C. 103 as being unpatentable over Flack et al. (US 2017/0244908 A1) in view of Dunn et al. (US 2009/0315915 A1) in further view of Baruch et al. (US 2017/0124717 A1).

Regarding claim 11, Flack teaches a system (Flack, Fig. 1) comprising:

a display (Flack, Fig. 1, Display 34);

a camera (Flack, Fig. 1, Camera 12);

one or more processors (Flack, Fig. 2, Processor 13);

memory coupled to the one or more processors and storing instructions that when executed by the one or more processors (Flack, Fig. 2, Memories 23 and 25, Paragraph 0070), cause the one or more processors to perform operations comprising:

capturing, by the camera, image data, the image data including an image of a subject in a physical, real-world environment (Flack, Fig. 6, Paragraph 0072);

generating a matte from the image data (Flack, Paragraphs 0082-0086 and 0127-0128), wherein generating the matte includes:

generating, by a neural network, a low-resolution matte (Flack, Paragraphs 0127-0128); and

generating a high-resolution matte from the low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte (Flack, Paragraphs 0082-0086);

generating a composite image data, using the image data, the high-resolution matte and a virtual background content (Flack, Paragraph 0108); and

causing to display the composite image data on the display (Flack, Paragraph 0108).

However, Flack does not teach a depth sensor; one or more motion sensors; receiving, by the depth sensor, depth data indicating a distance of the subject from the camera in the physical, real-world environment; receiving, by the one or more motion sensors, motion data indicating at least an orientation of the camera in the physical, real-world environment; generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment; generating a matte from the depth data; processing the low-resolution matte to remove artifacts in the low-resolution matte; generating a high-resolution matte from the processed low-resolution matte; the virtual background content selected from the virtual environment using the camera transform.

In reference to Dunn, Dunn teaches a camera (Dunn, Fig. 1, Camera 110, Paragraph 0018);

one or more motion sensors (Dunn, Fig. 2, Sensor 128, Paragraph 0019);

receiving, by the one or more motion sensors, motion data indicating at least an orientation of the camera in the physical, real-world environment (Dunn, Paragraphs 0030 and 0032);

generating, by the one or more processors, a virtual camera transform based on the motion data, the camera transform describing an orientation of a virtual camera in a virtual environment; and the virtual background content selected from the virtual environment using the camera transform (Dunn, Paragraphs 0030-0032, Figs. 7-8, Paragraph 0035).

These arts are analogous since they are all related to background substitution. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA) to modify the invention of Flack with the motion sensor and method of adjusting the background based on the motion sensor as seen in Dunn to allow the user to see the image while capturing the image data and to increase the realism of the background substitution (Dunn, Paragraph 0002).

However, the combination of Flack and Dunn does not teach a depth sensor; receiving, by the depth sensor, depth data indicating a distance of the subject from the camera in the physical, real-world environment; generating a matte from the depth data; processing the low-resolution matte to remove artifacts in the low-resolution matte; nor generating a high-resolution matte from the processed low-resolution matte.

In reference to Baruch, Baruch teaches a depth sensor; receiving, by the depth sensor, depth data indicating a distance of the subject from the camera in the physical, real-world environment (Baruch, Paragraphs 0021 and 0057);

generating a matte from the depth data (Baruch, Fig. 9, Foreground mask 902, Paragraphs 0047 and 0056-0057);

processing the low-resolution matte to remove artifacts in the low-resolution matte (Baruch, Figs. 10-11, Paragraphs 0056-0059, 0064);

generating a high-resolution matte from the processed low-resolution matte (Baruch, Paragraphs 0023 and 0062-0064).

These arts are analogous since they are all related to foreground and background segmentation. Therefore, it would have been obvious to one of ordinary skill in the art at the time the invention was made (pre-AIA) or before the effective filing date of the claimed invention (AIA) to modify the combination of Flack and Dunn with the depth sensor and segmentation method using the depth sensor as seen in Baruch to produce a rough foreground mask and reduce the computational load and time of processing the matte (Baruch, Paragraph 0063). That is, by using a rough mask based on the depth data, the classification method of Flack is applied only near the border between background and foreground rather than to the entire image.

Claim 1 is rejected for the same reasons as claim 11.

Regarding claim 9, the combination of Flack, Dunn and Baruch teaches the method of claim 1 (see claim 1 analysis), wherein the neural network is a convolutional neural network for image segmentation (Flack, Paragraph 0127).

Allowable Subject Matter

Claims 2-8, 12-18 and 20 are objected to as being dependent upon a rejected base claim, but would be allowable if rewritten in independent form including all of the limitations of the base claim and any intervening claims.

The following is an examiner's statement of reasons for allowance:

With regard to claim 2, prior art of record neither anticipates nor renders obvious:

"The method of claim 1, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte, further comprises: generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data; generating a hole-filled matte from the inner matte; generating a shoulder/torso matte from the hole-filled inner matte; dilating the inner matte using a first kernel; dilating the outer matte using a second kernel smaller than the first kernel; generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte; combining the low-resolution matte with the garbage matte to create a face matte; combining the face matte and the shoulder/torso matte into a denoised matte; and generating the high-resolution matte from the denoised matte."
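
Purely as a reading aid for the quoted limitation (not the applicant's implementation), the recited sequence can be sketched with standard OpenCV morphology. The kernel sizes, the largest-component heuristic for the shoulder/torso matte, and all function names are assumptions, and the sketch assumes the input mattes share one low resolution:

    import cv2
    import numpy as np

    def denoise_matte(low_res_matte, inner, outer, high_res_size):
        # Hole-filled inner matte via morphological closing.
        hole_filled = cv2.morphologyEx(inner, cv2.MORPH_CLOSE,
                                       np.ones((7, 7), np.uint8))

        # Shoulder/torso matte from the hole-filled inner matte
        # (here approximated by its largest connected component).
        n, labels, stats, _ = cv2.connectedComponentsWithStats(hole_filled)
        largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA]) if n > 1 else 0
        shoulder_torso = (labels == largest).astype(np.float32)

        # Dilate the inner matte with a first kernel and the outer matte
        # with a second, smaller kernel.
        inner_d = cv2.dilate(inner, np.ones((21, 21), np.uint8))
        outer_d = cv2.dilate(outer, np.ones((9, 9), np.uint8))

        # Garbage matte: intersection of the two dilated mattes.
        garbage = cv2.bitwise_and(inner_d, outer_d).astype(np.float32)

        # Face matte: low-resolution matte combined with the garbage matte;
        # then combine face and shoulder/torso mattes into a denoised matte.
        face = low_res_matte * garbage
        denoised = np.clip(face + shoulder_torso, 0.0, 1.0)

        # High-resolution matte from the denoised matte.
        return cv2.resize(denoised, high_res_size)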

Claims 3-8 depend on and further limit claim 2 and are therefore allowable for the same reasons.

With regard to claim 12, prior art of record neither anticipates nor renders obvious:

"The system of claim 11, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte, further comprises: generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data; generating a hole-filled matte from the inner matte; generating a shoulder/torso matte from the hole-filled inner matte; dilating the inner matte using a first kernel; dilating the outer matte using a second kernel smaller than the first kernel; generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte; combining the low-resolution matte with the garbage matte to create a face matte; combining the face matte and the shoulder/torso matte into a denoised matte; and generating the high-resolution matte from the denoised matte."

Claims 13-18 depend on and further limit claim 12 and are therefore allowable for the same reasons.

With regard to claim 20, prior art of record neither anticipates nor renders obvious:

"The system of claim 19, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte, further comprises: generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data; generating a hole-filled matte from the inner matte; generating a shoulder/torso matte from the hole-filled inner matte; dilating the inner matte using a first kernel; dilating the outer matte using a second kernel smaller than the first kernel; generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte; combining the low-resolution matte with the garbage matte to create a face matte; combining the face matte and the shoulder/torso matte into a denoised matte; and generating the high-resolution matte from the denoised matte."

Conclusion

Any inquiry concerning this communication or earlier communications from the examiner should be directed to WESLEY JASON CHIU whose telephone number is (571) 270-1312. The examiner can normally be reached on Mon-Fri: 8am-4pm.

Examiner interviews are available via telephone, in-person, and video conferencing using a USPTO supplied web-based collaboration tool. To schedule an interview, applicant is encouraged to use the USPTO Automated Interview Request (AIR) at http://www.uspto.gov/interviewpractice.

If attempts to reach the examiner by telephone are unsuccessful, the examiner's supervisor, Twyler Haskins, can be reached on (571) 272-7406. The fax phone number for the organization where this application or proceeding is assigned is 571-273-8300.

Information regarding the status of an application may be obtained from the Patent Application Information Retrieval (PAIR) system. Status information for published applications may be obtained from either Private PAIR or Public PAIR. Status information for unpublished applications is available through Private PAIR only. For more information about the PAIR system, see http://pair-direct.uspto.gov. Should you have questions on access to the Private PAIR system, contact the Electronic Business Center (EBC) at 866-217-9197 (toll-free). If you would like assistance from a USPTO Customer Service Representative or access to the automated information system, call 800-786-9199 (IN USA OR CANADA) or 571-272-1000.

/WESLEY J CHIU/
Examiner, Art Unit 2698

/TWYLER L HASKINS/
Supervisory Patent Examiner, Art Unit 2698
