US007573475B2

(12) United States Patent
     Sullivan et al.

(10) Patent No.: US 7,573,475 B2
(45) Date of Patent: Aug. 11, 2009

(54) 2D TO 3D IMAGE CONVERSION

(75) Inventors: Steve Sullivan, San Francisco, CA (US);
               Alan D. Trombla, Fairfax, CA (US);
               Francesco G. Callari, San Francisco, CA (US)

(73) Assignee: Industrial Light & Magic, San Francisco, CA (US)

( * ) Notice: Subject to any disclaimer, the term of this patent is extended or
      adjusted under 35 U.S.C. 154(b) by 511 days.

(21) Appl. No.: 11/445,947

(22) Filed: Jun. 1, 2006

(65) Prior Publication Data
     US 2007/0279415 A1    Dec. 6, 2007

(51) Int. Cl.
     G06T 15/10 (2006.01)

(52) U.S. Cl. .......... 345/427; 345/419; 345/426; 345/581; 345/646; 345/647; 352/57

(58) Field of Classification Search .......... 345/419, 426, 427, 581, 646, 647
     See application file for complete search history.

(56) References Cited

     U.S. PATENT DOCUMENTS

     4,156,914 A      5/1979   Westell
     4,972,359 A     11/1990   Silver et al.
     5,613,048 A      3/1997   Chen et al.
     5,699,152 A *   12/1997   Fedor et al. .......... 356/240.1
     6,014,472 A      1/2000   Minami
     6,208,348 B1     3/2001   Kaye
     6,477,267 B1    11/2002   Richards
     6,515,659 B1     2/2003   Kaye et al.
     6,686,926 B1     2/2004   Kaye
     6,928,196 B1     8/2005   Bradley et al.
     7,102,633 B2     9/2006   Kaye et al.
     7,116,323 B2    10/2006   Kaye et al.
     7,116,324 B2    10/2006   Kaye et al.
     7,254,265 B2     8/2007   Naske
     2003/0128871 A1  7/2003   Naske et al.
     2004/0239670 A1 12/2004   Marks
     2005/0099414 A1  5/2005   Kaye et al.

     (Continued)

     FOREIGN PATENT DOCUMENTS

     EP    1 176 559    1/2002

     (Continued)

     OTHER PUBLICATIONS

Bao et al., "Non-linear View Interpolation," J. Visualization Comp. Animation, 1999, 10:233-241.

     (Continued)

Primary Examiner: Kimbinh T. Nguyen
(74) Attorney, Agent, or Firm: Fish & Richardson P.C.

(57) ABSTRACT

A method of creating a complementary stereoscopic image pair is described. The method includes receiving a first 2D image comprising image data, where the first 2D image is captured from a first camera location. The method also includes projecting at least a portion of the first 2D image onto computer-generated geometry. The image data has depth values associated with the computer-generated geometry. The method further includes rendering, using the computer-generated geometry and a second camera location that differs from the first camera location, a second 2D image that is stereoscopically complementary to the first 2D image, and infilling image data that is absent from the second 2D image.

38 Claims, 13 Drawing Sheets
U.S. PATENT DOCUMENTS (Continued)

2005/0104878 A1     5/2005   Kaye et al.
2005/0104879 A1     5/2005   Kaye et al.
2005/0117215 A1 *   6/2005   Lange .......... 359/462
2005/0146521 A1     7/2005   Kaye et al.
2005/0231505 A1    10/2005   Kaye et al.

FOREIGN PATENT DOCUMENTS (Continued)

KR    20070042989       4/2007
WO    WO 2005/084298    9/2005
WO    WO 2006/078237    7/2006
WO    WO 2006/078249    7/2006
WO    WO 2006/078250    7/2006

OTHER PUBLICATIONS (Continued)

Bao et al., "Non-linear View Interpolation," Computer Graphics and Applications, Pacific Graphics '98 Sixth Pacific Conference, 1998, pp. 61-69, 225.
Chen and Williams, "View Interpolation for Image Synthesis," Proc. of the 20th Annual Conference on Computer Graphics and Interactive Techniques, 1993, pp. 279-288.
Fu et al., "An Accelerated Rendering Algorithm for Stereoscopic Display," Comp. & Graphics, 1996, 20(2):223-229.
McMillan and Bishop, "Head-tracked stereoscopic display using image warping," Stereoscopic Displays and Virtual Reality Systems II, Fisher et al. (eds.), SPIE Proceedings 2409, San Jose, CA, Feb. 5-10, 1995, pp. 21-30.
Raskar, "Projectors: Advanced Graphics and Vision Techniques," SIGGRAPH 2004 Course 22 Notes, 166 pages.
Sawhney et al., "Hybrid Stereo Camera: An IBR Approach for Synthesis of Very High Resolution Stereoscopic Image Sequences," ACM SIGGRAPH 2001, Aug. 12-17, 2001, Los Angeles, CA, USA, pp. 451-460.
Deskowitz, "Chicken Little Goes 3-D With Help from ILM," VFX World, Nov. 7, 2005, 2 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, Non-final Office Action, mailed Feb. 27, 2008, 26 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, response to Feb. 27, 2008 Non-final Office Action, 14 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, Non-final Office Action, mailed Sep. 12, 2008, 10 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, response to Sep. 12, 2008 Non-final Office Action, 13 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, Final Office Action, mailed Nov. 26, 2008, 11 pages.
U.S. Appl. No. 11/446,576, filed Jun. 1, 2006, response to Nov. 26, 2008 Final Office Action, 14 pages.

* cited by examiner
[Sheet 1 of 13: FIG. 1, schematic diagram of an example system 100 for generating 3D image information; partially legible rotated labels include "2D image information," "storage," "left camera," "view," and "3D image information."]
[Sheet 2 of 13: FIG. 2, block diagram of an example 2D-to-3D converter shown in the system of FIG. 1.]
[Sheet 3 of 13: FIG. 3, flow chart of a method 300 for creating 3D image information. Legible steps: build left camera view (302); build geometry for left camera view (304); project left view on geometry (306); calculate left and right depth maps (308); calculate offset map (310); is pixel part of 2D scene element? (312); is pixel occluded?; execute blurring algorithm (314); create initial right camera images / execute infill algorithm; run compositing script to generate right camera view information; combine right camera view and left camera view information to create 3D film information.]
[Sheet 4 of 13: FIGS. 4A and 4B, schematic diagrams illustrating the generation of an offset map in a left camera view and a right camera view.]
[Sheet 5 of 13: FIG. 5, flow chart of a method 500 for generating an offset map. Legible steps: is the distance from the left camera view to the pixel >= the background threshold? (504); if so, set the left-right offset map entry for the pixel to "background"; otherwise compute a 3D point at that distance from the left camera center (508) and compute the corresponding 2D point in the field of view of the right camera (510); if the 2D point is outside the field of view (512), set the entry to "outside"; otherwise set the distance of the 2D point from the right camera center, set the coordinates of the 2D point to the closest integer, and, if the difference between the expected distance and the measured distance exceeds a predetermined threshold (522), set the entry to "occluded", else set the entry to a distance value.]
[Sheet 6 of 13: FIGS. 6A and 6B, perspective and top-down views of a difference between a presence of data in a left camera view and a right camera view.]
[Sheet 7 of 13: FIGS. 7A and 7B, two approaches for filling gaps in pixel data; the legible label "no valid data" marks the gap regions.]
[Sheet 8 of 13: FIGS. 8A and 8B, generation of pixel data in a right camera view where the left camera view contained no prior valid pixel data, with an exploded view.]
[Sheet 9 of 13: FIG. 9A, flow chart of a method 900 for mapping pixel values to right camera view pixels. Legible steps: receive next pixel of right camera view; if the pixel's offset value is "background", assign the right camera pixel color information from the pixel of the left view at the same 2D coordinates; if the offset value is "occluded" or "outside", assign the right camera pixel a value of "occluded"; otherwise assign the right camera pixel the color of the left camera pixel nearest to the location specified by the 2D coordinates of the right camera pixel plus the offset value; repeat while more pixels remain.]
[Sheet 10 of 13: FIG. 9B, flow chart of a method 950 for infilling occluded pixels. Legible steps: receive next pixel of right camera view (952); if the pixel's offset value is "occluded" (954), locate the closest unoccluded pixel to the left of the occluded pixel; if it exists, apply its color to the occluded pixel; otherwise locate the closest unoccluded pixel to the right and apply its color; if neither exists, report an error (966); repeat while more pixels remain.]
[Sheet 11 of 13: FIGS. 10A and 10B, schematic diagrams illustrating a difference in camera orientation.]
[Sheet 12 of 13: FIG. 11, flow chart of a method 1100 for correcting differences between camera orientations. Legible steps: use left eye camera view and uncorrected right camera view for the next frame (1102); define a keystone correction plane P for the left camera view (1104); compute line L, the intersection of P with the uncorrected right camera view (1106); compute the leftmost point A of the intersection of L with the right camera view frustum (1108); compute point B where P intersects the line from the optical center C of the right camera parallel to the optical axis of the left camera view (1110); compute the width f of the angle ACB (1112); define a "virtual" right camera view using f, C, and a parallel movement relative to the left camera view (1114); compute the homography between the image planes (1116); replace the uncorrected right camera view with the virtual right camera view (1118); linearly warp the offset using the homography (1120); repeat while more frames remain (1122).]
[Sheet 13 of 13: FIG. 12, block diagram of a general computer system.]
2D TO 3D IMAGE CONVERSION

CROSS REFERENCE TO RELATED CASES

This application is related to U.S. patent application Ser. No. 11/446,576, filed Jun. 1, 2006 by Davidson et al. and entitled "Infilling for 2D to 3D Image Conversion," the entirety of which is incorporated herein by reference.

TECHNICAL FIELD

This specification relates to systems and methods for 2D to 3D image conversion.
BACKGROUND

Moviegoers can view specially formatted films in three dimensions (3D), where the objects in the film appear to project from or recede into the movie screen. Formatting the 3D films can include combining a stereo pair of left and right images that are two-dimensional. When the stereo pair of images is combined, a viewer wearing polarized glasses perceives the resulting image as three-dimensional.

SUMMARY

The present specification relates to systems and methods for converting a 2D image into a stereoscopic image pair.

In a first general aspect, a method of creating a complementary stereoscopic image pair is described. The method includes receiving a first 2D image comprising image data, where the first 2D image is captured from a first camera location. The method also includes projecting at least a portion of the first 2D image onto computer-generated geometry. The image data has depth values associated with the computer-generated geometry. The method further includes rendering, using the computer-generated geometry and a second camera location that differs from the first camera location, a second 2D image that is stereoscopically complementary to the first 2D image, and infilling image data that is absent from the second 2D image.
In a second general aspect, a method of identifying an absence of image information when generating a complementary stereoscopic image pair is described. The method includes illuminating at least a portion of computer-generated geometry using a simulated light source positioned at a first camera location, designating a second camera at a second camera location having a viewing frustum that includes the at least a portion of the computer-generated geometry, and identifying an absence of valid image information at locations having shadows produced by the simulated light when the locations are viewed from the second camera.

In a third general aspect, a system for identifying an absence of valid pixel data in a stereoscopically complementary image is described. The system includes an interface to receive a first 2D image captured from a first camera position, a projector to project a portion of the first 2D image onto computer-generated geometry that corresponds to the portion, a rendering module to render from a second camera position a second 2D image that is stereoscopically complementary to the first 2D image using the projected portion and the computer-generated geometry, and a depth calculator to determine depths of image information in the first and second images for use in identifying an absence of valid image information in the second 2D image.
In another general aspect, a computer program product tangibly embodied in an information carrier is described. The computer program product includes instructions that, when executed, perform operations including aligning computer-generated geometry with a corresponding portion of a first 2D image captured from a first camera location, projecting the portion of the first 2D image onto the geometry, designating a second camera location that differs from the first camera location, using the geometry and the designated second camera location to render a second 2D image that is stereoscopically complementary to the first 2D image, and identifying an absence of valid image information in the second 2D image by comparing depth values of image information from the first and second 2D images.
In yet another general aspect, a method of identifying an absence of image information when generating a complementary stereoscopic image pair is described. The method includes projecting at least a portion of a first 2D image including image information onto computer-generated geometry from a first camera having a first viewing frustum at a first camera location, and designating a second camera having a second viewing frustum at a second camera location. The second viewing frustum includes at least a portion of the computer-generated geometry. The method also includes determining depth values associated with the image information in the first viewing frustum and in the second viewing frustum, wherein each of the depth values indicates a distance between one of the cameras and a point in the projected first 2D image in the viewing frustum for the associated camera, and identifying an absence of valid image information by comparing the depth values for corresponding points of the projected first 2D image in the first and second viewing frustums.
The systems and techniques described here may provide one or more of the following advantages. First, a system may effectively associate depth values with image information elements that lack a description of their three-dimensional geometry or position, which can increase the flexibility of the system and the speed of 2D-to-3D conversion processes. Second, a system can produce a visually compelling stereoscopic image pair even when the focal axes of the stereo cameras are not parallel. Third, a system can produce a visually compelling second view using unoccluded portions of an image that are associated with other portions of the image that are geometrically occluded when viewed from a first camera.

The details of one or more embodiments of systems and methods for 2D to 3D image conversion are set forth in the accompanying drawings and the description below. Other features and advantages of the embodiments will be apparent from the description and drawings, and from the claims.
DESCRIPTION OF DRAWINGS

FIG. 1 is a schematic diagram of an example of a system for generating 3D image information.
FIG. 2 is a block diagram of an example of a 2D-to-3D converter shown in the system of FIG. 1.
FIG. 3 is a flow chart illustrating a method for creating 3D image information.
FIGS. 4A and 4B are schematic diagrams illustrating the generation of an offset map in a left camera view and a right camera view, respectively.
FIG. 5 is a flow chart illustrating a method for generating an offset map.
FIG. 6A is a schematic diagram illustrating a perspective view of a difference between a presence of data in a left camera view and a right camera view.
FIG. 6B is a schematic diagram illustrating a top-down view of a difference between a presence of data in a left camera view and a right camera view.
FIGS. 7A and 7B are block diagrams illustrating two approaches for filling gaps in pixel data.
FIG. 8A is a block diagram illustrating a generation of pixel data in a right camera view where a left camera view contained no prior valid pixel data.
FIG. 8B is a block diagram illustrating an exploded view of the generation of pixel data shown in FIG. 8A.
FIGS. 9A and 9B show methods for mapping pixel values to pixels included in a right camera view.
FIG. 10 is a schematic diagram illustrating a difference in camera orientation.
FIG. 11 is a flow chart illustrating a method for correcting differences between camera orientations.
FIG. 12 is a block diagram of a general computer system.

Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION

FIG. 1 illustrates a system 100 for converting 2D image information 102 to a stereoscopic pair of images that may be perceived by a viewer as three-dimensional. The stereoscopic pair of images is referred to as 3D image information 104. The system 100 can generate a stereoscopic image pair by aligning computer-generated geometry that corresponds to objects shown in a first 2D image included in the 2D image information 102. The first 2D image can be projected on the geometry, and a second camera that is offset from the first camera can render a second 2D image that is complementary to the first 2D image. Image data projected from the first camera may not be visible from the second camera (e.g., due to occlusion of the image data by geometry); such missing data can be generated using infilling techniques described in more detail below. The first 2D image and the second 2D image can be viewed together as a stereoscopic image pair that can be consistent with an actual 3D scene.
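To make that flow concrete, the following toy sketch walks a one-dimensional scanline through the same three stages: associate each pixel with a depth, re-render from a horizontally offset camera, and infill the holes the shift exposes. The disparity model, names, and sample values are illustrative assumptions, not the patent's code.

```python
import numpy as np

def render_right_view(left_colors, left_depth, baseline, focal):
    """Re-render a 1D 'left' scanline from a camera offset to the right."""
    width = len(left_colors)
    right = [None] * width
    right_depth = np.full(width, np.inf)
    for x in range(width):
        # Nearer points shift farther: classic disparity = focal * baseline / depth.
        disparity = int(round(focal * baseline / left_depth[x]))
        xr = x - disparity                       # points move left in the right view
        if 0 <= xr < width and left_depth[x] < right_depth[xr]:
            right[xr] = left_colors[x]           # the nearest surface wins
            right_depth[xr] = left_depth[x]
    for x in range(width):                       # infill holes the shift exposed
        if right[x] is None:
            valid = [d for d in range(width) if right[d] is not None]
            right[x] = right[min(valid, key=lambda d: abs(d - x))]
    return right

colors = ["sky"] * 4 + ["tree"] * 2 + ["sky"] * 2
depths = np.array([50, 50, 50, 50, 5, 5, 50, 50], dtype=float)
print(render_right_view(colors, depths, baseline=2.0, focal=5.0))
# -> ['sky', 'sky', 'tree', 'tree', 'tree', 'tree', 'sky', 'sky']
```

The nearby tree shifts two pixels while the distant sky stays put, which both covers sky on one side and tears open a hole on the other: exactly the disocclusion that infilling must repair.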
In certain embodiments, the 2D image information 102 includes a first camera view, such as a left camera view 106, which is used to create a second camera view, such as a right camera view 108. Combining the left camera view 106 and the right camera view 108 can produce stereoscopic 3D image information 104. A 2D-to-3D converter 110 can receive 2D information 102 from a variety of sources, for example, external or removable storage 112, optical media 114, another workstation 116, or a combination thereof.

The 2D-to-3D converter 110 can use the 2D information 102 to generate the right camera view 108. In certain embodiments, generation of the right camera view 108 can result in areas that lack valid data. Infilling, which is illustrated in FIG. 7 and described in more detail in FIG. 8, can use valid pixel data from surrounding areas to fill in the areas that lack valid data.
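A minimal sketch of the scanline rule shown in FIG. 9B, where each occluded pixel takes the color of the closest unoccluded pixel to its left, falling back to the closest one on its right; the function name and sample data are illustrative:

```python
def infill_row(colors, occluded):
    """colors: one scanline of pixel values; occluded: parallel list of bools."""
    out = list(colors)
    for x, is_occ in enumerate(occluded):
        if not is_occ:
            continue
        # Closest unoccluded pixel to the left, if any...
        donor = next((d for d in range(x - 1, -1, -1) if not occluded[d]), None)
        if donor is None:
            # ...otherwise the closest unoccluded pixel to the right.
            donor = next((d for d in range(x + 1, len(colors)) if not occluded[d]), None)
        if donor is None:
            raise ValueError("no unoccluded pixel in row")  # cf. 'Report Error' in FIG. 9B
        out[x] = colors[donor]
    return out

print(infill_row(["r", "g", None, None, "b"], [False, False, True, True, False]))
# -> ['r', 'g', 'g', 'g', 'b']
```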
The 2D-to-3D converter 110 can combine the left camera view 106 and the right camera view 108 to create 3D image information 104. The 3D image information 104 can be used to generate stereoscopic images that appear in three dimensions when viewed with stereoscopic glasses, such as Real D's Cinema glasses, manufactured by Real D, Inc. of Beverly Hills, Calif.
FIG. 2 illustrates an example 200 of a 2D-to-3D converter 110, which receives 2D image information 102 and generates 3D image information 104. The 2D image information 102, shown in FIG. 2, can include left camera information 202, any number of final images 204, render elements 206, non-rendered elements 212, and, optionally, a compositing script 210, such as a script generated by Shake®, developed by Apple Computer Inc. (Cupertino, Calif.). The 2D image information 102 may also include or be supplemented with three-dimensional information, for example, the fundamental geometry 208. The 2D-to-3D converter 110 includes a geometry renderer 214, a left camera 216 that includes a projector 218, a depth calculator 220, a right camera 222 that includes a rendering module 224, and a compositing script module 226. These subsystems 214-226 can be used by the 2D-to-3D converter 110 to generate a scene that includes 3D image information 104.

Each 2D scene received by the 2D-to-3D converter 110 can include final images 204. Final images can include visual components of an image scene, for example, a protagonist or antagonist of a film, and the surrounding environment, such as buildings, trees, cars, etc. Each final image 204 can be associated with pieces of fundamental geometry 208. The fundamental geometry 208 can define the location of a computer-generated object in three-space and can include one or more rendered elements 206.
Each rendered element can include color and texture information. For example, a final image of a car can be associated with fundamental geometry describing the shape of the car. The fundamental geometry of the car can include rendered elements, such as a cube for the body and spheres for the wheels. The rendered body element can include properties, for example a metallic red texture, and the rendered wheel elements can include properties, such as a black matte rubber-like texture.

Final images can also be associated with non-rendered elements 212 that are not associated with fundamental geometry (e.g., hair or smoke), and thus may not be defined in three-dimensional space.
The compositing script module 210 can use the final images 204, the fundamental geometry 208, and the non-rendered elements 212 to generate timing information used to correctly composite a scene. Timing information can specify the rate at which different objects move within a scene and the objects' positions at various times. The systems and techniques described in this specification can improve the consistency of the second, or right, image when viewed over a series of sequential frames. This can decrease the appearance of jitter, or noise, between the frames when the series of frames is viewed as a film.

For example, a scene can include a person walking to the left of the scene, a person walking to the right of the scene, and an object falling from the top of the scene. If a scene designer wants both individuals to arrive simultaneously at the center of a frame so that the falling object hits them, the compositing script module 210 uses timing information included in the compositing script to coordinate the necessary placement of objects. If the object falls too quickly, the timing information associated with the falling object may be changed to slow the descent of the object. The rate at which either the person walking to the right or the person walking to the left moves can also be modified so that they meet at the same time the falling object reaches them.
In certain embodiments, a compositing script module 210, such as Shake®, is used. The compositing script module 210 can use a compositing script that can include a set of operations that, when executed, create the composited scene. The operations can include color correction, keying, and various image transform operations. Color correction is an operation that can allow the artist to modify the color of a scene, for instance adding more blue to a sky scene. Keying is a compositing technique that can combine two images; for instance, a sky image can be combined with an image of a car in front of a "blue screen" or "green screen" to create an image of the car in front of the sky image. Image transform operations can include image rotation, image scaling, and image repositioning. For instance, an image transform operation can scale the keyed image of the car and sky to twice its original size, or, in another example, rotate the keyed image 180 degrees so that the image is turned upside down.
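As a flavor of what one such operation does, here is a toy green-screen key using NumPy. The function, tolerance, and colors are illustrative assumptions and do not reproduce Shake's actual operators:

```python
import numpy as np

def key_over(foreground, background, key_color, tol=10):
    """Replace key-colored foreground pixels with the background image."""
    fg = foreground.astype(np.int32)
    # A pixel is "keyed out" when every channel is within tol of the key color.
    mask = np.all(np.abs(fg - np.array(key_color)) <= tol, axis=-1)
    out = foreground.copy()
    out[mask] = background[mask]
    return out

green = np.full((2, 2, 3), (0, 255, 0), dtype=np.uint8)   # all green-screen
sky = np.full((2, 2, 3), (120, 180, 240), dtype=np.uint8)
print(key_over(green, sky, key_color=(0, 255, 0))[0, 0])  # -> [120 180 240]
```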
Operations can be viewed in context, allowing the end results of an operation to be viewed while the user configures the properties of the operation. The compositing script module 210 may permit the final images 204 to be unaffected by changes in scene timing. For example, consider a scene consisting of two cars, where the first car is supposed to jump off a ramp and the second car is supposed to simultaneously drive underneath the jumping car. In certain timing structures, the jumping car may reach the ramp too quickly or too slowly, throwing off the desired effect of the scene. In cases where the scene timing is incorrect, the compositing script may be changed using the new timing information, and the final images 204 may not need to be re-rendered. Timing changes can come from a variety of sources, for example, at the request of a film director.
FIG. 3 illustrates a method 300 for creating 3D image information 104. The 2D-to-3D converter 110 can generate a left camera, as shown by step 302. In certain embodiments, this includes receiving the left camera information 202, such as the camera's aspect ratio, field of view, view vector, and position. The 2D-to-3D converter 110 can create the left camera 216 with the left camera information 202.

The 2D-to-3D converter 110 may optionally build fundamental geometry 208 for the camera view, as shown in step 304. In certain embodiments, however, the fundamental geometry 208 can be received by the 2D-to-3D converter 110, and thus does not need to be built. The projector 218 can project the left camera view's final images 204 onto the left camera view's fundamental geometry 208, as shown in step 306.
A right camera 222 can be generated by the 2D-to-3D converter 110 and positioned at a location offset relative to the left camera 216. In certain embodiments, this distance is substantially similar to the average interocular distance between human eyes, which is approximately 6.5 cm. In other embodiments, this distance can be set by an artist to achieve a desired 3D effect, such as 1/30th of the distance between the camera and the nearest object in the scene.
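A small sketch of that placement rule follows; the function name, the assumption that the left camera's "right" axis is available, and the meter units are all illustrative choices:

```python
def right_camera_position(left_position, right_axis,
                          nearest_object_dist=None, interocular=0.065):
    """Return the right camera center, offset along the left camera's right axis."""
    if nearest_object_dist is not None:
        offset = nearest_object_dist / 30.0   # artist rule: 1/30th of camera-to-object distance
    else:
        offset = interocular                   # average human interocular, ~6.5 cm
    return [p + offset * a for p, a in zip(left_position, right_axis)]

print(right_camera_position([0.0, 0.0, 0.0], [1.0, 0.0, 0.0]))        # -> [0.065, 0.0, 0.0]
print(right_camera_position([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 3.0))   # -> [0.1, 0.0, 0.0]
```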
The right camera 222 can use the rendering module 224 to render the 2D image information's fundamental geometry 208 using the right camera 222 information. Rendering can define an object through various techniques that measure the outgoing light from the surface of an object. In certain embodiments, the rendering module 224 can render using a ray tracing technique that casts a ray to each pixel in the scene to generate an image. In other embodiments, the rendering module 224 can render using a primitive-by-primitive method, where the object is defined as a set of primitives (e.g., triangles, polygons, etc.) and each primitive is then rasterized, which converts the vector data of the primitive into a form that can be displayed, for example, using vector information that defines a cube to generate the red, green, and blue (RGB) color values that are displayed on a monitor.
In certain embodiments, a depth map can be created, as shown in step 308, to help identify pixels that may be occluded or missing when the stereoscopically complementary image (e.g., the right camera view 108) is created based on a given image (e.g., the left camera view 106). The depth maps can be created using rasterization and depth-buffering algorithms. A depth map can be a two-dimensional array that has the same resolution as the current image scene. For example, for each pixel located at (X1, Y1) in the scene, there exists a location in the depth map referenced by the same (X1, Y1) coordinates.

Each location of the depth map can contain depth information for a pixel in the scene. This depth information can also be thought of as the pixel's distance coordinate in a three-dimensional polar coordinate system. For example, after referencing the depth map, a 2D pixel at (X1, Y1) can use the depth value Z1, retrieved from the depth map indexed at (X1, Y1), to define a 3D ray oriented from the projection center of the camera to the pixel, and a 3D point at distance Z1 from the optical center along the positive direction of that line. In some embodiments, the depth information may contain more than one depth value for a pixel. For example, if a transparent object includes a pixel, that pixel may be associated with multiple depths: one depth for images viewed through the object from one angle and another depth for images viewed through the object at another angle.
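A minimal sketch of that lookup, assuming a pinhole camera looking down the -Z axis with the image plane centered on the optical axis; the function name and all conventions here are illustrative choices, not the patent's:

```python
import numpy as np

def pixel_to_3d_point(depth_map, x, y, camera_center, focal_length, width, height):
    """Return the 3D point implied by the depth-map entry for pixel (x, y)."""
    z1 = depth_map[y, x]                        # distance stored for this pixel
    # Direction from the projection center through the pixel on the image plane.
    direction = np.array([x - width / 2.0, y - height / 2.0, -focal_length])
    direction /= np.linalg.norm(direction)      # unit-length ray direction
    return camera_center + z1 * direction       # point at distance Z1 along the ray

# Example: a 4x4 depth map holding a constant distance of 10 units.
depth_map = np.full((4, 4), 10.0)
print(pixel_to_3d_point(depth_map, x=2, y=1,
                        camera_center=np.zeros(3),
                        focal_length=2.0, width=4, height=4))
```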
The depth calculator 220 can obtain the pixel depth value of a pixel in the left camera view 106 or the right camera view 108 by referencing the two-dimensional depth map for the pixel in either the left camera view 106 or the right camera view 108, respectively.

The two sets of depth values, from the left camera view 106 depth map and the right camera view 108 depth map, can be used to identify a set of corresponding pixels from each camera view, where the pixels from one camera view correspond to pixels from the alternate camera view. In some embodiments, the identification proceeds as described in FIG. 5. Specifically, for each pixel of the left camera view whose distance coordinate is less than a pre-defined "background" threshold, a 3D point location is computed at step 508. A corresponding pixel location in the right camera view is computed at step 510 and compared with the right-camera field of view at step 512. If it is inside that field of view, and its distance from the right camera center is less than a pre-defined "background" threshold, then the distance itself is compared at step 522 with the distance recorded at the closest 2D pixel location in the depth map associated with the right camera. If the two distances are within a pre-determined "occlusion" threshold, the two pixel locations are said to be corresponding; otherwise both pixel locations are said to be "occluded."
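That comparison can be sketched as follows; the sentinel strings, threshold values, and the assumption that the reprojected location and expected distance have already been computed are all illustrative:

```python
import numpy as np

def classify_pixel(z_left, reprojected, right_depth_map,
                   background_thresh, occlusion_thresh):
    """Classify one left-view pixel per the FIG. 5 comparison described above.

    z_left: distance recorded for the pixel in the left-view depth map.
    reprojected: ((x, y), expected_dist), the pixel's 3D point as seen from the
        right camera: its 2D location there and its distance from that camera.
    """
    if z_left >= background_thresh:
        return "background"
    (x, y), expected = reprojected
    h, w = right_depth_map.shape
    xi, yi = int(round(x)), int(round(y))        # closest 2D pixel location
    if not (0 <= xi < w and 0 <= yi < h):
        return "outside"                          # outside the right field of view
    if expected >= background_thresh:
        return "background"
    measured = right_depth_map[yi, xi]            # distance recorded in the right map
    if abs(expected - measured) <= occlusion_thresh:
        return "corresponding"                    # depths agree: same surface
    return "occluded"                             # right camera sees a nearer surface

right_map = np.full((4, 4), 8.0)
print(classify_pixel(7.5, ((2.2, 1.1), 8.1), right_map,
                     background_thresh=100.0, occlusion_thresh=0.5))
# -> 'corresponding'
```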
After corresponding pixels are identified, the 2D-to-3D converter 110 can calculate an offset map, as shown in step 310. The offset map can be calculated by determining the change in position between a pixel in the left camera view 106 and the corresponding pixel in the right camera view 108. For example, if the coordinates of point P in the left camera view 106 are defined as (i, j), where point P is not occluded, and the right camera view 108 defines the coordinates of the point corresponding to P as (x, y), the offset map may contain the 2D vector o(x-j, y-i). This process is further explained in association with FIG. 5.
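Instantiating that stated convention directly, with illustrative names:

```python
def offset_vector(left_ij, right_xy):
    """Offset-map entry o(x - j, y - i) for a left pixel (i, j) matching right pixel (x, y)."""
    (i, j) = left_ij      # left-view pixel: first coordinate i, second coordinate j
    (x, y) = right_xy     # corresponding right-view pixel
    return (x - j, y - i)

print(offset_vector((5, 7), (9, 5)))  # -> (2, 0): a purely horizontal disparity
```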
In certain instances, pixels in the left camera view 106 may not share similar depth values with pixels in the right camera view 108. If the pixels do not share similar depth values, it may indicate that valid pixel data is missing or occluded. In certain embodiments, the 2D-to-3D converter 110 can execute a blurring algorithm to modify the offset map.

In certain embodiments, the offset maps do not contain any valid depth information for the non-rendered elements 212, such as smoke. However, the depth of the non-rendered elements 212 may be similar to the depth of nearby sections of fundamental geometry 208 or rendered elements 206. For example, the depth of a feather may be similar to the depth of a head if the feather is near the head in the left camera view's final image 204.
If a pixel in the left camera view 106 is part of a non-rendered 2D scene element 212, as shown in step 312, the 2D-to-3D converter 110 can execute a blurring algorithm, as shown in step 314. The blurring algorithm can blur the offset map to correct any perceptual inaccuracies associated with the
