1. A method comprising:
capturing, by a first camera of a mobile device, image data, the image data including an image of a subject in a physical, real-world environment;
receiving, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the first camera in the physical, real-world environment;
receiving, by one or more motion sensors of the mobile device, motion data indicating at least an orientation of the first camera in the physical, real-world environment;
generating, by one or more processors of the mobile device, a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment;
generating, by the one or more processors, a matte from the image data and the depth data, wherein generating the matte includes:
generating, by a neural network, a low-resolution matte,
processing the low-resolution matte to remove artifacts in the low-resolution matte, and
generating a high-resolution matte from the processed low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte;
generating, by the one or more processors, composite image data using the image data, the high-resolution matte and virtual background content, the virtual background content selected from the virtual environment using the camera transform; and
causing display, by the one or more processors, of the composite image data on a display of the mobile device.
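
The compositing limitation of claim 1 amounts to standard alpha blending, with the matte acting as a per-pixel alpha between the captured image and the virtual background. A minimal NumPy sketch, for illustration only; the claim does not specify the blend function, and all names here are hypothetical:

```python
import numpy as np

def composite(image, matte, background):
    """Blend the foreground image over the background using the matte as alpha.

    image, background: float arrays of shape (H, W, 3) in [0, 1]
    matte: float array of shape (H, W) in [0, 1], 1.0 = subject
    """
    alpha = matte[..., None]  # add a channel axis so alpha broadcasts over RGB
    return alpha * image + (1.0 - alpha) * background

# Example: 2x2 frame where the left column is subject, right is background.
img = np.ones((2, 2, 3))                 # white subject pixels
bg = np.zeros((2, 2, 3))                 # black virtual background
m = np.array([[1.0, 0.0], [1.0, 0.0]])   # matte: left column = subject
out = composite(img, m, bg)
```

With a soft matte (values between 0 and 1 at hair and garment edges) the same expression produces the feathered transitions the high-resolution matte is meant to preserve.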

2. The method of claim 1, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte further comprises:
generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data;
generating a hole-filled matte from the inner matte;
generating a shoulder/torso matte from the hole-filled inner matte;
dilating the inner matte using a first kernel;
dilating the outer matte using a second kernel smaller than the first kernel;
Attorney Docket No. 18962-1038001/P35244USX1
generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte;
combining the low-resolution matte with the garbage matte to create a face matte;
combining the face matte and the shoulder/torso matte into a denoised matte; and
generating the high-resolution matte from the denoised matte.
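
The dilation/intersection steps of claim 2 can be sketched in pure NumPy. The claim does not say how the mattes are "combined"; the multiplication and per-pixel maximum below, like the kernel sizes and function names, are illustrative assumptions:

```python
import numpy as np

def dilate(mask, k):
    """Binary dilation with a k x k square kernel: OR of shifted copies."""
    r = k // 2
    p = np.pad(mask, r, mode='constant')
    H, W = mask.shape
    out = np.zeros((H, W), bool)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + H, dx:dx + W]
    return out

def remove_artifacts(low_res, inner, outer, torso, k_inner=7, k_outer=3):
    """Claim-2 artifact removal: garbage matte from dilated inner/outer mattes,
    then (assumed) combiners to form the face matte and denoised matte."""
    inner_d = dilate(inner, k_inner)          # first kernel
    outer_d = dilate(outer, k_outer)          # second, smaller kernel
    garbage = inner_d & outer_d               # intersection -> garbage matte
    face = low_res * garbage                  # zero out matte pixels outside it
    return np.maximum(face, torso.astype(float))  # union with shoulder/torso

# Example: a single confident matte pixel survives; everything else is cleared.
m = np.zeros((5, 5)); m[2, 2] = 1.0
inner = m.astype(bool)
outer = np.ones((5, 5), bool)
torso = np.zeros((5, 5), bool)
den = remove_artifacts(m, inner, outer, torso, k_inner=3, k_outer=3)
```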

3. The method of claim 2, further comprising:
applying a temporal filter to the high-resolution matte to generate a final matte; and
generating the composite image data using the image data, the final matte and the virtual background content.

4. The method of claim 3, wherein applying a temporal filter to the high-resolution matte to generate a final matte further comprises:
generating a per-pixel similarity map based on the image data and previous image data; and
applying the temporal filter to the high-resolution matte using the similarity map and a previous final matte.

5. The method of claim 4, wherein the temporal filter is a linear weighted average of two frames with weights calculated per-pixel dependent on pixel similarity represented by the per-pixel similarity map.
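
One way to read claims 4 and 5 together: where the image barely changed between frames, lean on the previous final matte to suppress flicker; where it changed, trust the current matte. The Gaussian similarity and the `strength` constant below are assumptions, not the patent's formula:

```python
import numpy as np

def similarity_map(frame, prev_frame, sigma=0.1):
    """Per-pixel similarity in [0, 1]; 1.0 means the pixel did not change."""
    diff = np.abs(frame - prev_frame).mean(axis=-1)  # mean over color channels
    return np.exp(-(diff / sigma) ** 2)

def temporal_filter(matte, prev_final_matte, similarity, strength=0.5):
    """Linear weighted average of two mattes (claim 5) with per-pixel weights
    driven by the similarity map (claim 4)."""
    w_prev = strength * similarity                   # weight on previous matte
    return (1.0 - w_prev) * matte + w_prev * prev_final_matte

# Example: a fully static scene blends the two mattes equally at strength 0.5.
m = np.ones((2, 2))
prev = np.zeros((2, 2))
sim = np.ones((2, 2))
out = temporal_filter(m, prev, sim, strength=0.5)
```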

6. The method of claim 2, wherein generating the high-resolution matte from the denoised matte further comprises:
generating a luma image from the image data; and
upsampling, using a guided filter and the luma image, the denoised matte to the high-resolution matte.
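
A guided filter uses a grayscale guide so the upsampled matte's edges snap to image edges. The sketch below is a textbook guided filter with a naive box mean, not the patent's implementation; Rec. 709 luma weights, the nearest-neighbor upsample, and the `r`/`eps` values are all assumptions:

```python
import numpy as np

def luma_from_rgb(rgb):
    """Rec. 709 luma from an (H, W, 3) RGB image (assumed weighting)."""
    return rgb @ np.array([0.2126, 0.7152, 0.0722])

def box(a, r):
    """Mean over a (2r+1)x(2r+1) edge-padded window (clarity over speed)."""
    H, W = a.shape
    p = np.pad(a, r, mode='edge')
    return np.array([[p[i:i + 2 * r + 1, j:j + 2 * r + 1].mean()
                      for j in range(W)] for i in range(H)])

def guided_upsample(matte_lr, luma, r=2, eps=1e-3):
    """Upsample the denoised matte to the luma image's resolution, then apply
    a guided filter so matte edges follow luma edges."""
    H, W = luma.shape
    ys = np.arange(H) * matte_lr.shape[0] // H    # nearest-neighbor upsample
    xs = np.arange(W) * matte_lr.shape[1] // W
    p = matte_lr[np.ix_(ys, xs)].astype(float)
    g = luma.astype(float)
    mean_g, mean_p = box(g, r), box(p, r)
    var_g = box(g * g, r) - mean_g ** 2
    cov_gp = box(g * p, r) - mean_g * mean_p
    a = cov_gp / (var_g + eps)                    # local linear model p ~ a*g + b
    b = mean_p - a * mean_g
    return box(a, r) * g + box(b, r)

# Example: a flat matte over a flat guide passes through unchanged.
lr = np.full((4, 4), 0.5)
guide = np.full((8, 8), 0.3)
hi = guided_upsample(lr, guide)
```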

7. The method of claim 2, wherein generating a shoulder/torso matte from the hole-filled inner matte further comprises:
dilating the inner matte to generate the hole-filled matte; and
eroding the hole-filled matte to generate the shoulder/torso matte.
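
Dilation followed by erosion is morphological closing: the dilation bridges small holes in the inner matte, and the erosion restores the silhouette with the holes now filled. A pure-NumPy sketch (square kernel and True-padded erosion border are assumptions):

```python
import numpy as np

def dilate(mask, k):
    """Binary dilation with a k x k square kernel: OR of shifted copies."""
    r = k // 2
    p = np.pad(mask, r, mode='constant')
    H, W = mask.shape
    out = np.zeros((H, W), bool)
    for dy in range(k):
        for dx in range(k):
            out |= p[dy:dy + H, dx:dx + W]
    return out

def erode(mask, k):
    """Binary erosion: AND of shifted copies; pad True so borders survive."""
    r = k // 2
    p = np.pad(mask, r, mode='constant', constant_values=True)
    H, W = mask.shape
    out = np.ones((H, W), bool)
    for dy in range(k):
        for dx in range(k):
            out &= p[dy:dy + H, dx:dx + W]
    return out

def shoulder_torso(inner, k=3):
    """Claim 7: dilate to hole-fill, then erode to recover the outline."""
    hole_filled = dilate(inner, k)
    return erode(hole_filled, k)

# Example: a one-pixel hole in a solid silhouette is closed.
m = np.ones((7, 7), bool)
m[3, 3] = False
torso = shoulder_torso(m, k=3)
```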

8. The method of claim 1, wherein the inner matte includes depth data that is less than a depth threshold, the outer matte includes depth data that is less than the depth threshold or is unknown, and the depth threshold is determined by an average depth of a center region of the subject’s face detected in the image data and an offset to include the back of the subject’s head.
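
Claim 8's thresholding can be sketched directly: average the depth over the center of the face box, add a head-depth offset, and split pixels into inner (known, in front of the threshold) and outer (inner or unknown). The 0.15 m offset, the quarter-box "center region", and NaN-as-unknown are illustrative assumptions:

```python
import numpy as np

def depth_mattes(depth, face_box, head_offset=0.15):
    """Inner/outer mattes per claim 8.

    depth: (H, W) depths in metres; np.nan marks pixels with unknown depth
    face_box: (top, left, bottom, right) of the detected face
    """
    top, left, bottom, right = face_box
    cy, cx = (top + bottom) // 2, (left + right) // 2
    ch = max((bottom - top) // 4, 1)              # assumed "center region" size
    cw = max((right - left) // 4, 1)
    center = depth[cy - ch:cy + ch, cx - cw:cx + cw]
    threshold = np.nanmean(center) + head_offset  # reach the back of the head
    known = np.nan_to_num(depth, nan=np.inf)      # unknown never passes "<"
    inner = known < threshold                     # known depth within threshold
    outer = inner | np.isnan(depth)               # ...or unknown depth
    return inner, outer, threshold

# Example: subject at ~1 m against a 3 m background, one unknown pixel.
depth = np.full((6, 6), 3.0)
depth[1:5, 1:5] = 1.0
depth[0, 0] = np.nan
inner, outer, thr = depth_mattes(depth, (1, 1, 5, 5))
```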

9. The method of claim 1, wherein the neural network is a convolutional neural network for image segmentation.

10. A method comprising:
presenting a preview on a display of a mobile device, the preview including sequential frames of preview image data captured by a forward-facing camera of the mobile device positioned in close range of a subject, the sequential frames of preview image data including close range image data of the subject and image data of a background behind the subject in a physical, real-world environment;
receiving a first user input to apply a virtual environment effect;
capturing, by a depth sensor of the mobile device, depth data indicating a distance of the subject from the forward-facing camera in the physical, real-world environment;
capturing, by one or more sensors of the mobile device, orientation data indicating at least an orientation of the forward-facing camera in the physical, real-world environment;
generating, by one or more processors of the mobile device, a camera transform based on the orientation data, the camera transform describing an orientation of a virtual camera in a virtual environment;
generating, by the one or more processors, a matte from the sequential frames of image data and the depth data, wherein generating the matte includes:
generating, by a neural network, a low-resolution matte, and
processing the low-resolution matte to remove artifacts in the low-resolution matte;
generating a high-resolution matte from the processed low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte;
generating, by the one or more processors, composite sequential frames of image data, including the sequential frames of image data, the matte and virtual background content, the virtual background content selected from the virtual environment using the camera transform; and
causing display, by the one or more processors, of the composite sequential frames of image data.

11. A system comprising:
a display;
a camera;
a depth sensor;
one or more motion sensors;
one or more processors;
memory coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
capturing, by the camera, image data, the image data including an image of a subject in a physical, real-world environment;
receiving, by the depth sensor, depth data indicating a distance of the subject from the camera in the physical, real-world environment;
receiving, by the one or more motion sensors, motion data indicating at least an orientation of the camera in the physical, real-world environment;
generating a virtual camera transform based on the motion data, the camera transform for determining an orientation of a virtual camera in a virtual environment;
generating a matte from the image data and the depth data, wherein generating the matte includes:
generating, by a neural network, a low-resolution matte, and
processing the low-resolution matte to remove artifacts in the low-resolution matte;
generating a high-resolution matte from the processed low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte;
generating composite image data using the image data, the high-resolution matte and virtual background content, the virtual background content selected from the virtual environment using the camera transform; and
causing display of the composite image data on the display.

12. The system of claim 11, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte further comprises:
generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data;
generating a hole-filled matte from the inner matte;
generating a shoulder/torso matte from the hole-filled inner matte;
dilating the inner matte using a first kernel;
dilating the outer matte using a second kernel smaller than the first kernel;
generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte;
combining the low-resolution matte with the garbage matte to create a face matte;
combining the face matte and the shoulder/torso matte into a denoised matte; and
generating the high-resolution matte from the denoised matte.

13. The system of claim 12, the operations further comprising:
applying a temporal filter to the high-resolution matte to generate a final matte; and
generating the composite image data using the image data, the final matte and the virtual background content.

14. The system of claim 13, wherein applying a temporal filter to the high-resolution matte to generate a final matte further comprises:
generating a per-pixel similarity map based on the image data and previous image data; and
applying the temporal filter to the high-resolution matte using the similarity map and a previous final matte.

15. The system of claim 14, wherein the temporal filter is a linear weighted average of two frames with weights calculated per-pixel dependent on pixel similarity represented by the per-pixel similarity map.

16. The system of claim 12, wherein generating the high-resolution matte from the denoised matte further comprises:
generating a luma image from the image data; and
upsampling, using a guided filter and the luma image, the denoised matte to the high-resolution matte.

17. The system of claim 12, wherein generating a shoulder/torso matte from the hole-filled inner matte further comprises:
dilating the inner matte to generate the hole-filled matte; and
eroding the hole-filled matte to generate the shoulder/torso matte.

18. The system of claim 12, wherein the inner matte includes depth data that is less than a depth threshold, the outer matte includes depth data that is less than the depth threshold or is unknown, and the depth threshold is determined by an average depth of a center region of the subject’s face detected in the image data and an offset to include the back of the subject’s head.

19. A system comprising:
a display;
a forward-facing camera;
a depth sensor;
one or more motion sensors;
one or more processors;
memory coupled to the one or more processors and storing instructions that, when executed by the one or more processors, cause the one or more processors to perform operations comprising:
presenting a preview on the display, the preview including sequential frames of preview image data captured by the forward-facing camera positioned in close range of a subject, the sequential frames of preview image data including close range image data of the subject and image data of a background behind the subject in a physical, real-world environment;
receiving a first user input to apply a virtual environment effect;
capturing, by the depth sensor, depth data indicating a distance of the subject from the forward-facing camera in the physical, real-world environment;
capturing, by the one or more sensors, orientation data indicating at least an orientation of the forward-facing camera in the physical, real-world environment;
generating, by the one or more processors, a camera transform based on the orientation data, the camera transform describing an orientation of a virtual camera in a virtual environment;
generating, by the one or more processors, a matte from the sequential frames of image data and the depth data, wherein generating the matte includes:
generating, by a neural network, a low-resolution matte, and
processing the low-resolution matte to remove artifacts in the low-resolution matte;
generating a high-resolution matte from the processed low-resolution matte, where the high-resolution matte has higher resolution than the low-resolution matte;
generating, by the one or more processors, composite sequential frames of image data, including the sequential frames of image data, the matte and virtual background content, the virtual background content selected from the virtual environment using the camera transform; and
displaying, on the display, the composite sequential frames of image data.

20. The system of claim 19, wherein processing the low-resolution matte to remove artifacts in the low-resolution matte further comprises:
generating an inner matte and an outer matte from at least one of a bounding box including a face of the subject or a histogram of the depth data;
generating a hole-filled matte from the inner matte;
generating a shoulder/torso matte from the hole-filled inner matte;
dilating the inner matte using a first kernel;
dilating the outer matte using a second kernel smaller than the first kernel;
generating a garbage matte from an intersection of the dilated inner matte and the dilated outer matte;
combining the low-resolution matte with the garbage matte to create a face matte;
combining the face matte and the shoulder/torso matte into a denoised matte; and
generating the high-resolution matte from the denoised matte.
`
