(12) United States Patent
Tam et al.

(10) Patent No.: US 8,213,711 B2
(45) Date of Patent: Jul. 3, 2012
`
`(54) METHOD AND GRAPHICAL USER
`INTERFACE FOR MODIFYING DEPTH MAPS
`
(75) Inventors: Wa James Tam, Orleans (CA); Carlos
Vazquez, Gatineau (CA)

(73) Assignee: Her Majesty the Queen in Right of
Canada as represented by the Minister
of Industry, Through the
Communications Research Centre
Canada, Ottawa (CA)
`
( * ) Notice: Subject to any disclaimer, the term of this
patent is extended or adjusted under 35
U.S.C. 154(b) by 580 days.
`
(21) Appl. No.: 12/508,208

(22) Filed: Jul. 23, 2009

(65) Prior Publication Data

US 2010/0080448 A1    Apr. 1, 2010
`
(56) References Cited

U.S. PATENT DOCUMENTS

4,641,177 A      2/1987  Ganss ................... 358/3
4,925,294 A      5/1990  Geshwind et al. ......... 352/57
6,215,516 B1     4/2001  Ma et al. ............... 348/43
6,249,286 B1*    6/2001  Dyer .................... 345/419
6,590,573 B1     7/2003  Geshwind ................ 345/419
6,791,598 B1*    9/2004  Luken et al. ............ 348/36
6,927,769 B2*    8/2005  Roche, Jr. .............. 345/419
7,035,451 B2     4/2006  Harman et al. ........... 382/154
7,054,478 B2     5/2006  Harman .................. 382/154
7,148,889 B1*   12/2006  Ostermann ............... 345/419
7,180,536 B2*    2/2007  Wolowelsky et al. ....... 348/42
7,187,809 B2     3/2007  Zhao et al. ............. 382/285
7,262,767 B2     8/2007  Yamada .................. 345/419
7,477,779 B2     1/2009  Graves et al. ........... 382/162

(Continued)
`
OTHER PUBLICATIONS

K. T. Kim, M. Siegel, & J. Y. Son, "Synthesis of a high-resolution 3D
stereoscopic image pair from a high-resolution monoscopic image and a
low-resolution depth map," Proceedings of the SPIE: Stereoscopic
Displays and Applications IX, vol. 3295A, pp. 76-86, San Jose, Calif.,
U.S.A., 1998.

(Continued)
`
Related U.S. Application Data

(63) Continuation-in-part of application No. 12/060,978,
filed on Apr. 2, 2008.

(60) Provisional application No. 61/129,869, filed on Jul.
25, 2008; provisional application No. 60/907,475,
filed on Apr. 3, 2007.

Primary Examiner - Jose Couso
(74) Attorney, Agent, or Firm - Teitelbaum & MacLean;
Neil Teitelbaum; Doug MacLean
`
(51) Int. Cl.
     G06K 9/00 (2006.01)
(52) U.S. Cl. .....................................................
(58) Field of Classification Search ................ 382/100,
     382/154, 162, 165, 285, 294; 345/418-419,
     345/473; 348/36, 81; 352/57
     See application file for complete search history.
`
(57) ABSTRACT
`
The invention relates to a method and a graphical user interface for
modifying a depth map for a digital monoscopic color image. The
method includes interactively selecting a region of the depth map
based on color of a target region in the color image, and modifying
depth values in the thereby selected region of the depth map using a
depth modification rule. The color-based pixel selection rules for
the depth map and the depth modification rule selected based on one
color image from a video sequence may be saved and applied to
automatically modify depth maps of other color images from the same
sequence.
`
`
`19 Claims, 13 Drawing Sheets
`
`
U.S. PATENT DOCUMENTS (continued)

2005/0053276 A1   3/2005  Curti et al. ............ 382/154
2006/0056679 A1   3/2006  Redert et al. ........... 382/154
2006/0232666 A1  10/2006  Op de Beeck et al. ...... 348/51
2007/0024614 A1   2/2007  Tam et al. .............. 345/419
2007/0146232 A1   6/2007  Redert et al. ........... 345/6
2008/0247670 A1  10/2008  Tam et al. .............. 382/298
2008/0260288 A1  10/2008  Redert .................. 382/285
`
OTHER PUBLICATIONS (continued)

J. Flack, P. Harman, & S. Fox, "Low bandwidth stereoscopic image
encoding and transmission," Proceedings of the SPIE: Stereoscopic
Displays and Virtual Reality Systems X, vol. 5006, pp. 206-214, Santa
Clara, Calif., U.S.A., Jan. 2003.

L. McMillan, "An image based approach to three dimensional computer
graphics," Ph.D. dissertation, University of North Carolina, 1997.

L. Zhang & W. J. Tam, "Stereoscopic image generation based on depth
images for 3D TV," IEEE Transactions on Broadcasting, vol. 51, pp.
191-199, 2005.

W. J. Tam, "Human Factors and Content Creation for Three-Dimensional
Displays," Proceedings of the 14th International Display Workshops
(IDW '07), Dec. 2007, vol. 3, pp. 2255-2258.

Redert et al., "Philips 3D solutions: from content creation to
visualization," Proceedings of the Third International Symposium on
3D Data Processing, Visualization, and Transmission (3DPVT '06),
University of North Carolina, Chapel Hill, U.S.A., Jun. 14-16, 2006.

"Dynamic Digital Depth (DDD) and Real-time 2D to 3D conversion on the
ARM processor," DDD Group plc., White paper, Nov. 2005.
`
`* cited by examiner
`
[Sheet 1, FIG. 1: block diagram of a computer system 100, with a
processor 155 coupled to ROM 161, RAM 162, an image data source 166,
a mono display, a 3D display 185, storage 168, and a user input
device 169.]
`
`
`
[Sheet 2, FIG. 2: flowchart of the depth map modification method:
receive a color image and its depth map (10); display the color image
(110); the user selects a pixel (120) or defines a pixel selection
rule (130); identify the first color in the color image (15) and the
like-coloured pixels (20); display a region visualization image (RVI)
(25, 50); once the user selects an RVI, identify the corresponding
depth map region (140) and generate the modified depth map (35, 57);
optionally, DIBR produces a stereoscopic image pair.]
`
`
`
[Sheet 3, FIG. 3: general view of the graphical user interface.]
`
`
`
[Sheet 4, FIG. 4: illustration of region selection in an exemplary
color image (reference numerals 128, 132, 180).]
`
`
`
[Sheet 5, FIG. 5: flowchart: the user defines a first candidate depth
modification rule (DMR 1, 22), which is applied to generate a first
candidate depth map (CDM 1) and a first 3D image; a second candidate
rule (DMR 2, 23) likewise yields CDM 2 and a second 3D image (60);
the user selects a candidate depth map (62), and the candidate DMR
corresponding to the user-selected candidate depth map is selected
(65) to produce the modified depth map (140).]
`
`
`
[Sheet 6, FIG. 6: flowchart: obtain a sequence of color images and
depth maps; select the first color image and its associated depth map
(210); interactively identify a pixel selection rule (PSR) and
associated DAR based on color (215); select a depth modification rule
to obtain a modified depth map for the first image (220); save the
PSR and DMR (225); apply them to the other color images in the
sequence (230); output the sequence of color images and modified
depth maps (240); optionally, DIBR produces stereoscopic image
pairs.]
`
`
`
[Sheet 7, FIG. 7: GUI panel: frame sequence controls with First/Last
frame fields (392), an adjustments list with Edit, Up, Down, Del, and
Clear buttons (393), and a "Generate CDM" control (400).]
`
`
`
[Sheet 8, FIG. 8: GUI mask selection panel 350: area controls (whole
image, inclusive/exclusive, X1/X2/Y1/Y2 coordinates) (351),
red/green/blue colour sliders with a tolerance setting and a "Select
Pixel" button (352), and zoom/colour tools (353, 356).]
`
`
`
[Sheet 9, FIG. 9: greyscale adjustments panel: uniform, weighted, or
advanced factor options (364, 365), a Show Description control, and
Cancel and Add Changes buttons (361). FIG. 10: depth blur and grey
histogram adjustments panel 380: blur size and spread controls;
invert, equalize, normalize, and adjust options with min/max settings
(381); and CLAHE controls with clip limit, rows/cols, and uniform,
Rayleigh, or exponential distributions (383, 385).]
`
`
`
[Sheet 10, FIG. 11: RGB histogram panel 370: histogram display (372),
per-channel histogram equalization (375), normalization and
adjustment controls with min/max per channel (373, 374), and CLAHE
controls with red/green/blue selection, distribution options, clip
limit, and tile size (376, 378).]
`
`
`
[Sheet 11, FIG. 12: colour transformation/conversion diagram.]
`
`
`
`
`
[Sheet 12, FIG. 13.]
`
`
`
[Sheet 13, FIG. 14: the graphical user interface at the end of a
depth map modification process for an exemplary color image.]
`
`
`
METHOD AND GRAPHICAL USER INTERFACE FOR MODIFYING DEPTH MAPS
`
CROSS-REFERENCE TO RELATED APPLICATIONS

The present invention claims priority from U.S. Provisional Patent
Application No. 61/129,869 filed Jul. 25, 2008, entitled "Method and
Graphical User Interface for Modifying Depth Maps", which is
incorporated herein by reference, and is a continuation-in-part of
U.S. patent application Ser. No. 12/060,978, filed Apr. 2, 2008,
entitled "Generation Of A Depth Map From A Monoscopic Color Image For
Rendering Stereoscopic Still And Video Images", which claims priority
from U.S. Provisional Patent Application No. 60/907,475 filed Apr. 3,
2007, entitled "Methods for Generating Synthetic Depth Maps from
Colour Images for Stereoscopic and Multiview Imaging and Display",
which are incorporated herein by reference for all purposes.
`
TECHNICAL FIELD

The present invention generally relates to methods and systems for
generating depth information for monoscopic two-dimensional color
images, and more particularly relates to a computer-implemented
method and a computer program product for modifying depth maps based
on color information contained in monoscopic images.
`
`BACKGROUND OF THE INVENTION
`
`
Stereoscopic or three-dimensional (3D) television (3D-TV) is expected
to be a next step in the advancement of television. Stereoscopic
images that are displayed on a 3D-TV are expected to increase visual
impact and heighten the sense of presence for viewers. 3D-TV displays
may also provide multiple stereoscopic views, offering motion
parallax as well as stereoscopic information.

A successful adoption of 3D-TV by the general public will depend not
only on technological advances in stereoscopic and multi-view 3D
displays, but also on the availability of a wide variety of program
contents in 3D. One way to alleviate the likely lack of program
material in the early stages of 3D-TV rollout is to find a way to
convert two-dimensional (2D) still and video images into 3D images,
which would also enable content providers to re-use their vast
library of program material in 3D-TV.

In order to generate a 3D impression on a multi-view display device,
images from different view points have to be presented. This requires
either multiple input views consisting of camera-captured images or
rendered images based on some 3D or depth information. This depth
information can be either recorded, generated from multiview camera
systems or generated from conventional 2D video material. In a
technique called depth image based rendering (DIBR), images with new
camera viewpoints are generated using information from an original
monoscopic source image and its corresponding depth map containing
depth values for each pixel or groups of pixels of the monoscopic
source image. These new images then can be used for 3D or multiview
imaging devices. The depth map can be viewed as a gray-scale image in
which each pixel is assigned a depth value representing distance to
the viewer, either relative or absolute. Alternatively, the depth
value of a pixel may be understood as the distance of the point of
the three-dimensional scene represented by the pixel from a reference
plane that may for example coincide with the plane of the image
during image capture or display. It is usually assumed that the
higher the gray-value (lighter gray) associated with a pixel, the
nearer it is situated to the viewer.
A depth map makes it possible to obtain from the starting image a
second image that, together with the starting image, constitutes a
stereoscopic pair providing a three-dimensional vision of the scene.
The depth maps are first generated from information contained in the
2D color images and then both are used in depth image based rendering
for creating stereoscopic image pairs or sets of stereoscopic image
pairs for 3D viewing. In the rendering process, each depth map
provides the depth information for modifying the pixels of its
associated color image to create new images as if they were taken
with a camera that is slightly shifted from its original and actual
position. Examples of the DIBR technique are disclosed, for example,
in articles K. T. Kim, M. Siegel, & J. Y. Son, "Synthesis of a
high-resolution 3D stereoscopic image pair from a high-resolution
monoscopic image and a low-resolution depth map," Proceedings of the
SPIE: Stereoscopic Displays and Applications IX, Vol. 3295A, pp.
76-86, San Jose, Calif., U.S.A., 1998; J. Flack, P. Harman, & S. Fox,
"Low bandwidth stereoscopic image encoding and transmission,"
Proceedings of the SPIE: Stereoscopic Displays and Virtual Reality
Systems X, Vol. 5006, pp. 206-214, Santa Clara, Calif., U.S.A.,
January 2003; and L. Zhang & W. J. Tam, "Stereoscopic image
generation based on depth images for 3D TV," IEEE Transactions on
Broadcasting, Vol. 51, pp. 191-199, 2005.
Advantageously, based on information from the depth maps, DIBR
permits the creation of a set of images as if they were captured with
a camera from a range of viewpoints. This feature is particularly
suited for multiview stereoscopic displays where several views are
required.
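For illustration only, the following minimal Python sketch shows the
kind of horizontal pixel shifting that DIBR performs; it is not taken
from the patent or the cited references. It assumes an 8-bit depth
map in the lighter-is-nearer convention described above, and the
max_disparity parameter, the linear depth-to-disparity mapping, and
the crude row-wise filling of disocclusion holes are simplifying
assumptions.

    import numpy as np

    def render_deviated_view(image, depth_map, max_disparity=16, direction=1):
        """Warp a color image into a deviated view using its depth map.

        image:     H x W x 3 uint8 array, the monoscopic source image.
        depth_map: H x W uint8 array, 255 = nearest to the viewer.
        direction: +1 or -1, selecting which eye's view is synthesized.
        """
        h, w = depth_map.shape
        out = np.zeros_like(image)
        filled = np.zeros((h, w), dtype=bool)
        # Nearer pixels (higher grey values) get larger horizontal shifts.
        disparity = depth_map.astype(np.float32) / 255.0 * max_disparity
        for y in range(h):
            for x in range(w):
                nx = x + int(round(direction * disparity[y, x]))
                if 0 <= nx < w:
                    out[y, nx] = image[y, x]
                    filled[y, nx] = True
        # Crude disocclusion filling: copy the nearest pixel from the left.
        for y in range(h):
            for x in range(1, w):
                if not filled[y, x]:
                    out[y, x] = out[y, x - 1]
        return out

Warping the source image once with direction=+1 and once with
direction=-1 would yield the two views of a stereoscopic pair.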
One problem with conventional DIBR is that accurate depth maps are
expensive or cumbersome to acquire either directly or from a 2D
image. For example, a "true" depth map can be generated using a
commercial depth camera such as the ZCam™ available from 3DV Systems,
Israel, that measures the distance to objects in a scene using an
infra-red (IR) pulsed light source and an IR sensor sensing the
reflected light from the surface of each object. Depth maps can also
be obtained by projecting a structured light pattern onto the scene
so that the depths of the various objects could be recovered by
analyzing distortions of the light pattern. Disadvantageously, these
methods require highly specialized hardware and/or cumbersome
recording procedures, restrictive scene lighting and limited scene
depth.

Although many algorithms exist in the art for generating a depth map
from a 2D image, they are typically computationally complex and often
require manual or semi-automatic processing. For example, a typical
step in the 2D-to-3D conversion process may be to generate depth maps
by examining selected key frames in a video sequence and to manually
mark regions that are foreground, mid-ground, and background. A
specially designed computer software may then be used to track the
regions in consecutive frames to allocate the depth values according
to the markings. This type of approach requires trained technicians,
and the task can be quite laborious and time-consuming for a
full-length movie. Examples of prior art methods of depth map
generation which involve intensive human intervention are disclosed
in U.S. Pat. Nos. 7,035,451 and 7,054,478 issued to Harman et al.

Another group of approaches to depth map generation relies on
extracting depth from the level of sharpness, or blur, in different
image areas. These approaches are based on realization that there is
a relationship between the depth of an object, i.e., its distance
from the camera, and the amount of blur of that object in the image,
and that the depth information
in a visual scene may be obtained by modeling the effect that a
camera's focal parameters have on the image. Attempts have also been
made to generate depth maps from blur without knowledge of camera
parameters by assuming a general monotonic relationship between blur
and distance. However, extracting depth from blur may be a difficult
and/or unreliable task, as the blur found in images can also arise
from other factors, such as lens aberration, atmospheric
interference, fuzzy objects, and motion. In addition, a substantially
same degree of blur arises for objects that are farther away and that
are closer to the camera than the focal plane of the camera. Although
methods to overcome some of these problems and to arrive at more
accurate and precise depth values have been disclosed in the art,
they typically require more than one exposure to obtain two or more
images. A further disadvantage of this approach is that it does not
provide a simple way to determine depth values for regions for which
there is no edge or texture information and where therefore no blur
can be detected.
A recent U.S. patent application 2008/0247670, which is assigned to
the assignee of the current application and is by the same inventors,
discloses a method of generating surrogate depth maps based on one or
more chrominance components of the image. Although these surrogate
depth maps can have regions with incorrect depth values, the
perceived depth of the rendered stereoscopic images using the
surrogate depth maps has been judged to provide enhanced depth
perception relative to the original monoscopic image when tested on
groups of viewers. It was speculated that depth is enhanced because
in the original colour images, different objects are likely to have
different hues. Each of the hues has its own associated gray level
intensity when separated into its component color images and used as
surrogate depth maps. Thus, the colour information provides an
approximate segmentation of "objects" in the images, which are
characterized by different levels of grey in the color component
image. Hence the color information provides a degree of
foreground-background separation. In addition, slightly different
shades of a given hue would give rise to slightly different gray
level intensities in the component images. Within an object region,
these small changes would signal small changes in relative depth
across the surface of the object, such as the undulating folds in
clothing or in facial features. Because using color information to
substitute for depth can lead to depth inaccuracies, in some cases
the visual perception of 3D images generated using these surrogate
depth maps can be further enhanced by modifying these depth maps by
changing the depth values in selected areas.
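As a concrete illustration of the surrogate-depth idea, the sketch
below takes the Cr chrominance component of the image as a surrogate
grey-scale depth map and stretches it to the full grey range, so that
different hues land on different grey levels and give the rough
object segmentation described above. The choice of the Cr component
and the contrast stretch are assumptions for this example, not the
particular method claimed in application 2008/0247670.

    import numpy as np

    def surrogate_depth_from_cr(rgb):
        """Use the Cr (red-difference) chrominance component of an RGB
        image as a surrogate grey-scale depth map."""
        r = rgb[..., 0].astype(np.float32)
        g = rgb[..., 1].astype(np.float32)
        b = rgb[..., 2].astype(np.float32)
        # ITU-R BT.601 red-difference chrominance.
        cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
        # Stretch to 0..255 so the surrogate depth uses the full range.
        cr -= cr.min()
        if cr.max() > 0:
            cr *= 255.0 / cr.max()
        return cr.astype(np.uint8)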
Generally, regardless of the method used, depth maps generated from
2D images can contain objects and/or regions with inaccurate depth
information. For example, a tree in the foreground could be
inaccurately depicted as being in the background. Although this can
be corrected by a user through the use of a photo editing software by
identifying and selecting object/regions in the image and then
changing the depth contained therein, this task can be tedious and
time-consuming especially when this has to be done for images in
which there are many different minute objects or textures. In
addition, the need to manually correct all similar frames in a video
sequence can be daunting. Furthermore, even though commercially
available software applications for generating depth maps from
standard 2D images can be used for editing of depth maps, they
typically involve complex computations and require long computational
time. For example, one commercial software allows for manual seeding
of a depth value within an object of an image, followed by automatic
expansion of the area of coverage by the software to cover the region
considered to be within an "object," such as the trunk of a tree or
the sky; however, where and when to stop the region growing is a
computationally challenging task. Furthermore, for video clips the
software has to track objects over consecutive frames and this
requires further complex computations.

Furthermore, having an efficient method and tools for modifying depth
maps can be advantageous even when the original depth map
sufficiently reflects the real depth of the actual scene from which
the image or video was created, for example for creating striking
visual effects. For example, just as a director might use sharpness
to make a figure stand out from a blurred image of the background, a
director might want to provide more depth to a figure to make it
stand out from a receded background.

Accordingly, there is a need for efficient methods and systems for
modifying existing depth maps in selected regions thereof. In
particular, there is a need to reduce computational time and
complexity to enable the selection of pixels and regions to conform
to object regions such that they can be isolated and their depth
values adjusted, for improved contrast or accuracy. Being able to do
that manually for one image frame and then automatically repeat the
process for other image frames with similar contents is a challenge.

An object of the present invention is to provide a relatively simple
and computationally efficient method and a graphical user interface
for modifying existing depth maps in selected regions thereof for
individual monoscopic images and monoscopic video sequences.
`
`SUMMARY OF THE INVENTION
`
Accordingly, one aspect of the invention provides a method for
modifying a depth map of a two-dimensional color image for enhancing
a 3D image rendered therefrom. The method comprises: A) obtaining a
first color image and a depth map associated therewith containing
depth values for pixels of the first color image; B) displaying at
least one of the first color image and the depth map on a computer
display; C) selecting a depth adjustment region (DAR) in the depth
map for modifying depth values therein; and D) generating a modified
depth map by modifying the depth values in the DAR using a selected
depth modification rule. The step (D) of generating a modified depth
map includes: a) receiving a first user input identifying a first
pixel color within a range of colors of a target region in the first
color image; b) upon receiving a second user input defining a pixel
selection rule for selecting like-coloured pixels based on the first
pixel color, using said pixel selection rule for identifying a
plurality of the like-coloured pixels in the first color image; c)
displaying a region visualization image (RVI) representing pixel
locations of the plurality of like-coloured pixels; d) repeating
steps (b) and (c) to display a plurality of different region
visualization images corresponding to a plurality of different color
selection rules for selection by a user; and, e) identifying a region
in the depth map corresponding to a user selected region
visualization image from the plurality of different region
visualization images, and adopting said region in the depth map as
the DAR.
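Steps (a) to (e) can be pictured with the following minimal Python
sketch, which assumes the simplest kind of pixel selection rule, an
RGB box of user-set tolerance around the picked color, and a
constant-offset depth modification rule. The function names, the rule
forms, and the dimming used for the region visualization image are
illustrative assumptions, not language from the claims.

    import numpy as np

    def like_coloured_mask(image, picked_rgb, tolerance):
        """Pixel selection rule: pixels within +/- tolerance of the
        picked color on every RGB channel (step b)."""
        diff = np.abs(image.astype(np.int16) - np.asarray(picked_rgb, np.int16))
        return np.all(diff <= tolerance, axis=-1)

    def region_visualization_image(image, mask):
        """RVI (step c): show selected pixel locations by dimming the
        unselected pixels."""
        rvi = image.copy()
        rvi[~mask] //= 4
        return rvi

    def modify_depth(depth_map, mask, offset):
        """Depth modification rule (step D): offset the depth values
        inside the DAR, clipped to the 8-bit range."""
        out = depth_map.astype(np.int16)
        out[mask] += offset
        return np.clip(out, 0, 255).astype(np.uint8)

In use, the tolerance would be re-adjusted and the RVI re-displayed
(steps b-d) until the mask matches the target region, at which point
the mask is adopted as the DAR (step e) and the depth values within
it are modified.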
An aspect of the present invention further provides a method for
modifying depth maps for 2D color images for enhancing 3D images
rendered therewith, comprising: a) selecting a first color image from
a video sequence of color images and obtaining a depth map associated
therewith, wherein said video sequence includes at least a second
color image corresponding to a different frame from a same scene and
having a different depth map associated therewith; b) selecting a
first pixel color in the first color image within a target region; c)
determining pixel locations of like-coloured pixels of the first
color image using one or more color selection rules, the
like-coloured pixels having a pixel color the same as the first pixel
color or in a specified color tolerance range thereabout; d) applying
a selected depth modification rule to modify the depth map of the
first color image at depth map locations corresponding to the pixel
locations of the like-coloured pixels to obtain a modified depth map
of the first color image; e) applying the one or more color selection
rules and the selected depth modification rule to identify
like-coloured pixels in the second color image of the video sequence
and to modify the depth map of the second color image at depth map
locations corresponding to the pixel locations of the like-coloured
pixels in the second color image to obtain a modified depth map of
the second color image; and f) outputting the first and second color
images and the modified depth maps associated therewith for rendering
an enhanced video sequence of 3D images; wherein the one or more
color selection rules and the selected depth modification rule are
obtained based on the first color image.
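Continuing with the hypothetical helper functions from the previous
sketch, re-applying the rules saved from the first color image across
a sequence amounts to a simple loop:

    def modify_sequence(frames, depth_maps, picked_rgb, tolerance, offset):
        """Apply the color selection rule and depth modification rule
        chosen on the first frame to every frame of the sequence."""
        modified = []
        for frame, depth in zip(frames, depth_maps):
            mask = like_coloured_mask(frame, picked_rgb, tolerance)
            modified.append(modify_depth(depth, mask, offset))
        return modified

Because the rules depend only on color, an object that keeps its
color from frame to frame is re-selected automatically, which is what
makes the per-sequence workflow of this aspect practical.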
One feature of the present invention provides a graphical user
interface (GUI) for modifying depth maps of color images or sequences
of color images, which provides GUI tools for displaying the first
color image, the region visualization image, the depth map, and the
modified depth map on the computer screen, and for receiving the
first and second user inputs, and for saving the pixel selection rule
and the selected depth modification rule obtained using the first
color image for use in modifying depth maps of other color images.
`
BRIEF DESCRIPTION OF THE DRAWINGS

The invention will be described in greater detail with reference to
the accompanying drawings which represent preferred embodiments
thereof, wherein:

FIG. 1 is a general representation of a computer system that can be
used in the present invention;

FIG. 2 is a flowchart of a method of modifying a depth map according
to the present invention;

FIG. 3 is a general view of the graphical user interface for
modifying a depth map according to one embodiment of the present
invention;

FIG. 4 is a diagram illustrating the process of selecting the depth

FIG. 14 is a view illustrating an appearance of the graphical user
interface of FIG. 3 at the end of a depth map modification process
for an exemplary color image.
`
`DETAILED DESCRIPTION
`
The invention will be described in connection with a number of
exemplary embodiments. To facilitate an understanding of the
invention, many aspects of the invention are described in terms of
sequences of actions to be performed by functional elements of a
video-processing system. It will be recognized that in each of the
embodiments, the various actions including those depicted as blocks
in flow-chart illustrations and block schemes could be performed by
specialized circuits, for example discrete logic gates interconnected
to perform a specialized function, by computer program instructions
being executed by one or more processors, or by a combination of
both. Moreover, the invention can additionally be considered to be
embodied entirely within any form of a computer readable storage
medium having stored therein an appropriate set of computer
instructions that would cause a processor to carry out the techniques
described herein. Thus, the various aspects of the invention may be
embodied in many different forms, and all such forms are contemplated
to be within the scope of the invention.

In the context of the present specification the terms "monoscopic
color image" and "2D color image" or "two-dimensional color image"
are used interchangeably to mean a picture, typically digital and
two-dimensional planar, containing an image of a scene complete with
visual characteristics and information that are observed with one
eye, such as luminance intensity, colour, shape, texture, etc. Images
described in this specification are assumed to be composed of picture
elements called pixels and can be viewed as two-dimensional arrays or
matrices of pixels, where the term "array" is understood herein to
encompass matrices. A depth map is a two-dimensional array of pixels
each assigned a depth value indicating the relative or absolute
distance from a viewer or a reference plane to a part of an object in
the scene that is depicted by the corresponding pixel or block of
pixels. A depth map may be represented as a 2D grey-scale digital
image wherein grey-level intensity of each pixel represents a depth
value. The term "color component", when used with reference to a
color image, means a pixel array wherein each pixel is assigned a
value representing a partial color content of the color image. A
color component of a monoscopic color image can also be viewed as a
gray-scale image. Examples of color components include any one or any
combination of two of the RGB color components of the image, or a
chrominance component of the image in a particular color space. The
term "deviated image," with respect to a source image, means an image
with a different viewpoint from the source image of a given scene. A
deviated image and a source image may form a stereoscopic image pair;
two deviated images with different viewpoints may also form a
stereoscopic pair. The larger the difference in deviation the larger
the depth of objects will be depicted in the scene.
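The last statement is commonly made quantitative with the standard
parallel-camera relation, which is background knowledge rather than a
formula given in this patent: for pinhole cameras with focal length f
and baseline b, a point at depth Z appears with horizontal disparity

    d = \frac{f \, b}{Z}

so increasing the deviation (baseline) between two views scales every
disparity, and hence the strength of the depicted depth,
proportionally.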
The present invention provides a computer-implemented method for
selecting pixels and regions within a depth map (DM) for depth
modification based on color, or a range of a color tone or shade,
that is selected from an associated 2D color image of the same visual
scene. The intensity values of pixels in thereby selected areas of
the DM are then modified, so as to either correct the depth
information or to create a desired perceptual effect with
stereoscopic images that are generated with the depth map. As an
example, when there is an error in the depth map such that pixels
associated with an
`