US007181081B2

(12) United States Patent: Sandrew
(10) Patent No.: US 7,181,081 B2
(45) Date of Patent: Feb. 20, 2007

(54) IMAGE SEQUENCE ENHANCEMENT SYSTEM AND METHOD

(75) Inventor: Barry B. Sandrew, Encinitas, CA (US)

(73) Assignee: Legend Films Inc., San Diego, CA (US)

( * ) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 267 days.

(21) Appl. No.: 10/450,970

(22) PCT Filed: May 6, 2002

(86) PCT No.: PCT/US02/14192
     § 371 (c)(1), (2), (4) Date: Jun. 18, 2003

(87) PCT Pub. No.: WO 02/091302
     PCT Pub. Date: Nov. 14, 2002

(65) Prior Publication Data: US 2004/0131249 A1, Jul. 8, 2004

Related U.S. Application Data

(60) Provisional application No. 60/288,929, filed on May 4, 2001.

(51) Int. Cl.: G06K 9/40 (2006.01)

(52) U.S. Cl.: 382/254; 382/165; 382/282; 348/100; 348/586

(58) Field of Classification Search: 382/103, 104, 107, 171, 282, 283, 284, 165, 274; 358/538, 540, 453, 464; 348/584, 100, 586, 97. See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

3,619,051 A    11/1971   Wright
3,705,762 A    12/1972   Ladd et al.
4,021,841 A     5/1977   Weinger
4,149,185 A     4/1979   Weinger
4,606,625 A     8/1986   Geshwind
4,642,676 A     2/1987   Weinger
4,755,870 A     7/1988   Markle et al.
4,903,131 A     2/1990   Lingemann et al.
4,984,072 A     1/1991   Sandrew
5,038,161 A *   8/1991   Ki .................. 396/340
5,050,984 A     9/1991   Geshwind

(Continued)

Primary Examiner: Bhavesh M. Mehta
Assistant Examiner: Yosef Kassa
(74) Attorney, Agent, or Firm: Dalina Law Group P.C.

(57) ABSTRACT

Scenes from motion pictures to be colorized are broken up into separate elements, composed of backgrounds/sets or motion/onscreen-action. These background and motion elements are combined into single frame representations of multiple frames, as tiled frame sets or as a single frame composite of all elements (i.e., both motion and background) that then becomes a visual reference database that includes data for all frame offsets which are later used for the computer controlled application of masks within a sequence of frames. Each pixel address within the visual reference database corresponds to a mask/lookup table address within the digital frame and X, Y, Z location of subsequent frames that were used to create the visual reference database. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods. The gray scale determines the mask and corresponding color lookup from frame to frame as applied in a keying fashion.

49 Claims, 34 Drawing Sheets
(21 of 34 Drawing Sheet(s) Filed in Color)
U.S. PATENT DOCUMENTS (Continued)

5,093,717 A     3/1992   Sandrew
5,252,953 A    10/1993   Sandrew et al.
5,328,073 A     7/1994   Blanding et al.
5,534,915 A     7/1996   Sandrew
5,684,715 A    11/1997   Palmer
5,729,471 A     3/1998   Jain et al.
5,748,199 A     5/1998   Palm
5,767,923 A     6/1998   Coleman
5,778,108 A     7/1998   Coleman
5,784,175 A     7/1998   Lee
5,784,176 A     7/1998   Narita
5,835,163 A    11/1998   Liou et al.
5,841,512 A    11/1998   Goodhill
5,899,861 A     5/1999   Friemel et al.
5,912,994 A     6/1999   Norton et al.
5,920,360 A     7/1999   Coleman
5,959,697 A     9/1999   Coleman
5,982,350 A *  11/1999   Hekmatpour et al. .. 345/629
5,990,903 A *  11/1999   Donovan ............ 345/589
6,014,473 A     1/2000   Hossack et al.
6,025,882 A     2/2000   Geshwind
6,049,628 A     4/2000   Chen et al.
6,056,691 A     5/2000   Urbano et al.
6,067,125 A     5/2000   May
6,086,537 A     7/2000   Urbano et al.
6,102,865 A     8/2000   Hossack et al.
6,119,123 A     9/2000   Elenbaas et al.
6,132,376 A    10/2000   Hossack et al.
6,141,433 A *  10/2000   Moed et al. ........ 382/103
6,201,900 B1    3/2001   Hossack et al.
6,211,941 B1 *  4/2001   Erland ............. 352/45
6,222,948 B1    4/2001   Hossack et al.
6,226,015 B1    5/2001   Danneels et al.
6,228,030 B1    5/2001   Urbano et al.
6,263,101 B1    7/2001   Klein et al.
6,271,859 B1    8/2001   Asente
6,360,027 B1    3/2002   Hossack et al.
6,364,835 B1    4/2002   Hossack et al.
6,373,970 B1    4/2002   Dong et al.
6,390,980 B1    5/2002   Peterson et al.
6,416,477 B1    7/2002   Jago
6,445,816 B1 *  9/2002   Pettigrew .......... 382/162
6,707,487 B1 *  3/2004   Aman et al. ........ 348/169

* cited by examiner
[Drawing Sheets 1-34 of 34: figure images. Only the figure labels and flowchart text listed below are recoverable from this copy.]

Sheet 1:  FIG. 1 (reference numeral 14a).
Sheet 2:  FIG. 2; FIG. 4 ("Isolated Background").
Sheet 3:  FIGS. 5A-5B (grid of frame numbers 1-36; floating tool bar; note: "Key frame is the only frame fully masked with color lookup tables"; frame number).
Sheet 4:  FIGS. 6A-6B (floating child preview window overlaying sequential frames; controls for previewing underlying images in real-time motion or single frame stepping for quality control; tool bar; frame numbers).
Sheet 5:  FIGS. 7A-7B (key frame masks or individual masks are automatically copied to subsequent frames 2 through 36; frame numbers 1-36; floating tool bar).
Sheet 6:  FIG. 8.
Sheet 7:  FIGS. 9A-9B.
Sheet 8:  FIGS. 10A-10D (Reference Image; Reference Box (x0, y0); Search Box (x, y)).
Sheet 9:  FIGS. 11A-11C (Search Image Gradient Descent; Search Box 1; Search Box 2).
Sheet 10: FIG. 12.
Sheet 11: FIGS. 13-14.
Sheet 12: FIGS. 15-16.
Sheet 13: FIGS. 17-18.
Sheet 14: FIGS. 19-20.
Sheet 15: FIGS. 21-23.
Sheet 16: FIGS. 24-25.
Sheet 17: FIG. 26.
Sheet 18: FIGS. 27-28.
Sheet 19: FIGS. 29-30.
Sheet 20: FIGS. 31-32.
Sheet 21: FIGS. 33-35.
Sheet 22: FIG. 36.
Sheet 23: FIG. 37A, Mask Fit flowchart: initialize region and fit-grid parameters; calculate fit grid; interpolate mask on fit grid; cleanup.
Sheet 24: FIG. 37B, Calculate fit grid flowchart: calculate FitValue = fit(x, y, xfit, yfit) (= mean_squared_difference(x, y, xfit, yfit, BoxRadius)); calculate (gradx, grady) = fit_gradient(x, y, xfit, yfit); calculate fit(x, y, xx, yy).
Sheet 25: FIG. 37C, Interpolate mask on fit grid flowchart: get (i, j) for fit grid cell (x, y); calculate (xfit, yfit) = bilinear interpolation(FitGridA, FitGridB, FitGridC, FitGridD), with cells FitGridA[i, j], FitGridB[i+1, j], FitGridA[i, j+1], FitGridB[i+1, j].
Sheet 26: FIG. 38A, Extract Background flowchart: start, initialize background tool & dialog; generate FrameMask(i); increment progress bar; data cleanup.
Sheet 27: FIG. 38B flowchart: GetFrameShift(DVx[i], DVy[i]); GetFrameMargin(lMarg[i], rMarg[i]); test abs(DVx[i]) > DVxMax and/or abs(DVy[i]) > DVyMax; initialize composite image CompositeInit(Frame[0], DVxMax, DVyMax); CompositeFrames(Frame[i], DVx[i], DVy[i]); increment progress bar.
Sheet 28: FIG. 39A, Snap Bezier/polygon point flowchart: SnapToEdge(CTgaImg* Image, POINT* ImgPoint, POINT* SnapPoint, int Range); initialization (EdRect: snap range rectangle; EdgeImage: image of edges found; RangeImage: image of distances from original point); define snap range rectangle within original image; define image of edges found, EdgeDetect(Image, EdRect, &EdgeImage); define image of distances from original point, MakeRangeImage(&RangeImage, Range); find point to snap to, GetSnapPoint(&EdgeImage, &RangeImage, &NewPoint); clean up and exit (or exit with no edge found).
Sheet 29: FIG. 39B, EdgeDetect(Image, EdRect, &EdgeImage) flowchart: initialization (EdImage: image of averaged original image; Grad: gradient values array; GradImg: image of normalized gradient values); average filter fills EdImage pixels with average values of original luminance; gradient filter fills Grad elements with gradient amplitudes of average luminance; define dynamic range of Grad values; fill gradient image with normalized gradient values and calculate above/below half-level counters; compare above/below counter ratio against threshold to determine whether an edge is found (else no edge found); set pixels with row maximums from GradImg to EdgeImage; set pixels with column maximums from GradImg to EdgeImage; exit.
Sheet 30: FIG. 39C, GetSnapPoint(&EdgeImage, &RangeImage, &NewPoint) flowchart: x=0, y=0, MinDistance=255; for each edge pixel, MinDistance = RangeImage(x, y) and BestSnapPoint = (x, y); NewPoint = BestSnapPoint; cleanup and exit.
Sheet 31: FIGS. 40A-40B, relative bimodal threshold tool flowcharts: create image of light/dark cursor shape, MakeLightShape(&LightShape, x, y, &Shape, &OriginalImage, bool IsLight); apply light/dark shape to mask; x=0, y=0; MaskPixel(ImgX, ImgY) set to current mask index; exit.
Sheet 32: FIG. 40C, MakeLightShape(&NewShape, NewX, NewY, &Shape, &OriginalImage, bool IsLight) flowchart: initialization, initialize new shape image; x=0, y=0, Sum=0, Num=0; Threshold = Sum/Num; cleanup and exit.
Sheet 33: FIG. 41A, Calculate Fit Gradient flowchart: calculate FitValue1 = fit(x, y, xfit + BoxRadius, yfit); FitValue2 = fit(x, y, xfit - BoxRadius, yfit); FitValue3 = fit(x, y, xfit, yfit + BoxRadius); FitValue4 = fit(x, y, xfit, yfit - BoxRadius); gradient gradx = FitValue1 - FitValue2; grady = FitValue3 - FitValue4; rescale gradient so that length(gradx, grady) = 2 pixels.
Sheet 34: FIG. 41B, Calculate Fit Value flowchart: ii = -BoxRadius, jj = -BoxRadius, RMS_sum = 0; while (xfit+ii, yfit+jj) is bounded by ref_image, RMS_sum = RMS_sum + SquareDiff; RMS_sum = SQRT(RMS_sum / Sum_count); return FitValue = RMS_sum.
IMAGE SEQUENCE ENHANCEMENT SYSTEM AND METHOD
This application is the national stage entry of PCT application PCT/US02/14192, filed May 6th, 2002, deriving priority from U.S. Provisional application 60/288,929, filed May 4th, 2001.

BACKGROUND OF THE INVENTION

Prior art patents describing methods for the colorizing of black and white feature films involved the identification of gray scale regions within a picture followed by the application of a pre-selected color transform or lookup tables for the gray scale within each region defined by a masking operation covering the extent of each selected region, and the subsequent application of said masked regions from one frame to many subsequent frames. The primary difference between U.S. Pat. No. 4,984,072, System And Method For Color Image Enhancement, and U.S. Pat. No. 3,705,762, Method For Converting Black-And-White Films To Color Films, is the manner by which the regions of interest (ROIs) are isolated and masked, how that information is transferred to subsequent frames, and how that mask information is modified to conform with changes in the underlying image data. In the U.S. Pat. No. 4,984,072 system, the region is masked by an operator via a one-bit painted overlay and operator manipulated using a digital paintbrush method frame by frame to match the movement. In the U.S. Pat. No. 3,705,762 process, each region is outlined or rotoscoped by an operator using vector polygons, which are then adjusted frame by frame by the operator, to create animated masked ROIs.

In both systems the color transform lookup tables and regions selected are applied and modified manually to each frame in succession to compensate for changes in the image data which the operator detects visually. All changes and movement of the underlying luminance gray scale are subjectively detected by the operator, and the masks are sequentially corrected manually by the use of an interface device such as a mouse for moving or adjusting mask shapes to compensate for the detected movement. In all cases the underlying gray scale is a passive recipient of the mask containing pre-selected color transforms, with all modifications of the mask under operator detection and modification. In these prior inventions the mask information does not contain any information specific to the underlying luminance gray scale, and therefore no automatic position and shape correction of the mask to correspond with image feature displacement and distortion from one frame to another is possible.

SUMMARY OF THE INVENTION

In the system and method of the present invention, scenes to be colorized are classified into two separate categories: either background elements (i.e., sets and foreground elements that are stationary) or motion elements (e.g., actors, automobiles, etc.) that move throughout the scene. These background elements and motion elements are treated separately in this invention, similar to the manner in which traditional animation is produced.

Motion Elements: The motion elements are displayed as a series of sequential tiled frame sets or thumbnail images complete with background elements. The motion elements are masked in a key frame using a multitude of operator interface tools common to paint systems as well as unique tools such as relative bimodal thresholding, in which masks are applied selectively to contiguous light or dark areas bifurcated by a cursor brush. After the key frame is fully designed and masked, all mask information from the key frame is then applied to all frames in the display using mask fitting techniques that include:

1. Automatic mask fitting using Fast Fourier Transform and Gradient Descent calculations based on luminance and pattern matching, which references the same masked area of the key frame followed by all prior subsequent frames in succession.

2. Bezier curve animation with edge detection as an automatic animation guide.

3. Polygon animation with edge detection as an automatic animation guide.
In another embodiment of this invention, these background elements and motion elements are combined separately into single frame representations of multiple frames, as tiled frame sets or as a single frame composite of all elements (i.e., including both motion and backgrounds/foregrounds) that then becomes a visual reference database for the computer controlled application of masks within a sequence composed of a multiplicity of frames. Each pixel address within the reference visual database corresponds to a mask/lookup table address within the digital frame and X, Y, Z location of subsequent "raw" frames that were used to create the reference visual database. Masks are applied to subsequent frames based on various differentiating image processing methods such as edge detection combined with pattern recognition and other sub-mask analysis, aided by operator segmented regions of interest from reference objects or frames, and operator directed detection of subsequent regions corresponding to the original region of interest. In this manner, the gray scale actively determines the location and shape of each mask and corresponding color lookup from frame to frame that is applied in a keying fashion within predetermined and operator controlled regions of interest.
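The keying idea above is that the gray scale value under each masked pixel selects the color from that region's lookup table. A minimal sketch of that operation follows, assuming 8-bit grayscale frames held as NumPy arrays and a hypothetical 256-entry RGB lookup table per mask region; the array layout and function name are illustrative, not the patent's implementation.

```python
import numpy as np

def apply_color_luts(gray, mask, luts):
    """Colorize one frame in a keying fashion: the mask index selects a
    region's lookup table and the gray level selects the RGB entry in it.

    gray: (H, W) uint8 luminance frame.
    mask: (H, W) uint8 region indices (0 = unmasked).
    luts: dict mapping region index -> (256, 3) uint8 color lookup table.
    """
    out = np.stack([gray] * 3, axis=-1)  # default: leave pixels monochrome
    for region, lut in luts.items():
        sel = mask == region
        out[sel] = lut[gray[sel]]        # gray level keys the color transform
    return out

# Example: one region tinted sepia, driven entirely by luminance.
ramp = np.arange(256, dtype=np.float32)
sepia = np.stack([ramp, ramp * 0.85, ramp * 0.6], axis=-1).astype(np.uint8)
frame = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
regions = np.zeros((480, 640), dtype=np.uint8)
regions[100:300, 200:400] = 1            # a masked region of interest
colored = apply_color_luts(frame, regions, {1: sepia})
```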
Camera Pan Background and Static Foreground Elements: Stationary foreground and background elements in a plurality of sequential images comprising a camera pan are combined and fitted together using a series of phase correlation, image fitting and focal length estimation techniques to create a composite single frame that represents the series of images used in its construction. During the process of this construction the motion elements are removed through operator adjusted global placement of overlapping sequential frames.
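Phase correlation, the first of the registration techniques named above, recovers the translation between two overlapping pan frames from the normalized cross-power spectrum of their Fourier transforms. The sketch below is a minimal single-channel version under the assumption of pure translation; the function name and the use of NumPy's FFT are illustrative choices, not the patent's code.

```python
import numpy as np

def phase_correlation_shift(frame_a, frame_b):
    """Estimate the (dy, dx) translation that maps frame_b onto frame_a.

    Both inputs are 2-D float arrays of identical shape. The peak of the
    inverse FFT of the normalized cross-power spectrum sits at the shift.
    """
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross = fa * np.conj(fb)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.abs(np.fft.ifft2(cross))      # correlation surface
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Wrap shifts larger than half the frame into negative offsets.
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx
```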
The single background image representing the series of camera pan images is color designed using multiple color transform lookup tables, limited only by the number of pixels in the display. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. Once the background color design is completed, the mask information is transferred automatically to all the frames that were used to create the single composited image.

Image offset information relative to each frame is registered in a text file during the creation of the single composite image representing the pan, and is used to apply the single composite mask to all the frames used to create the composite image.

Since the foreground moving elements have been masked separately prior to the application of the background mask, the background mask information is applied wherever there is no pre-existing mask information.
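The rule just stated, that the background mask yields to any pre-existing motion-object mask, is a simple Boolean priority operation per pixel. A sketch under the same array conventions as above; the function name and the zero-means-unmasked convention are assumptions for illustration.

```python
import numpy as np

def compose_masks(motion_mask, background_mask):
    """Apply background mask information only where no pre-existing
    (motion object) mask information exists.

    Both masks are (H, W) uint8 region indices with 0 meaning 'unmasked'.
    Motion-object indices always win; background fills the remainder.
    """
    out = motion_mask.copy()
    empty = motion_mask == 0              # pixels with no pre-existing mask
    out[empty] = background_mask[empty]   # Boolean-priority fill
    return out
```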
Static Camera Scenes With and Without Film Weave, Minor Camera Following and Camera Drift: In scenes where there is minor camera movement or film weave resulting from the sprocket transfer from 35 mm or 16 mm film to digital format, the motion objects are first fully masked using the techniques listed above. All frames in the scene are then processed automatically to create a single image that represents both the static foreground elements and background elements, eliminating all masked moving objects where they both occlude and expose the background.

Wherever the masked moving object exposes the background or foreground, the instance of background and foreground previously occluded is copied into the single image with priority and proper offsets to compensate for camera movement. The offset information is included in a text file associated with each single representation of the background so that the resulting mask information can be applied to each frame in the scene with proper mask offsets.
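Applying the composite background mask back to the original frames therefore only needs the per-frame offsets recorded in that text file. A minimal sketch, assuming a whitespace-delimited file of "frame_index dx dy" rows and the mask conventions used earlier; the file format and both function names are illustrative assumptions, since the patent does not specify them.

```python
import numpy as np

def load_offsets(path):
    """Parse 'frame_index dx dy' rows registered during compositing."""
    offsets = {}
    with open(path) as fh:
        for line in fh:
            idx, dx, dy = line.split()
            offsets[int(idx)] = (int(dx), int(dy))
    return offsets

def mask_for_frame(composite_mask, offset, frame_shape):
    """Cut this frame's window out of the composite background mask.

    offset: (dx, dy) of the frame within the composite single image.
    """
    dx, dy = offset
    h, w = frame_shape
    return composite_mask[dy:dy + h, dx:dx + w]
```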
The single background image representing the series of static camera frames is color designed using multiple color transform lookup tables, limited only by the number of pixels in the display. Where the motion elements occlude the background elements continuously within the series of sequential frames, they are seen as black figures that are ignored and masked over. The black objects are ignored during the masking operation because the resulting background mask is later applied to all frames used to create the single representation of the background only where there is no pre-existing mask. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. Once the background color design is completed, the mask information is transferred automatically to all the frames that were used to create the single composited image.

BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.
FIG. 1 shows a plurality of feature film or television film frames representing a scene or cut in which there is a single instance or perspective of a background.

FIG. 2 shows an isolated background processed scene from the plurality of frames shown in FIG. 1 in which all motion elements are removed using various subtraction and differencing techniques. The single background image is then used to create a background mask overlay representing designer selected color lookup tables in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance.

FIG. 3 shows how a representative sample of each motion object (M-Object) in the scene receives a mask overlay that represents designer selected color lookup tables in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance as the M-Object moves within the scene.

FIG. 4 shows how all mask elements of the scene are then rendered to create a fully colored frame in which M-Object masks are applied to each appropriate frame in the scene followed by the background mask, which is applied only where there is no pre-existing mask, in a Boolean manner.

FIGS. 5A and 5B show a series of sequential frames loaded into display memory in which one frame is fully masked with the background (key frame) and ready for mask propagation to the subsequent frames via automatic mask fitting methods.

FIGS. 6A and 6B show the child window displaying an enlarged and scalable single image of the series of sequential images in display memory. The child window enables the operator to manipulate masks interactively on a single frame or in multiple frames during real time or slowed motion.

FIGS. 7A and 7B show a single mask (flesh) propagated automatically to all frames in the display memory.

FIG. 8 shows all masks associated with the motion object propagated to all sequential frames in display memory.

FIG. 9A shows a picture of a face.

FIG. 9B shows a close up of the face in FIG. 9A wherein the "small dark" pixels shown in FIG. 9B are used to calculate a weighted index using bilinear interpolation.
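Bilinear interpolation, as invoked for the weighted index in FIG. 9B, weights the four pixels surrounding a fractional coordinate by their proximity to it. A minimal sketch for a single sample point; treating it as a standalone helper, rather than part of the patent's mask-fitting code, is an assumption.

```python
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img at fractional location (x, y) by weighting the four
    neighboring pixels with the usual bilinear coefficients."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Each corner is weighted by the area of the opposite sub-pixel cell.
    return ((1 - fx) * (1 - fy) * img[y0, x0]
            + fx * (1 - fy) * img[y0, x0 + 1]
            + (1 - fx) * fy * img[y0 + 1, x0]
            + fx * fy * img[y0 + 1, x0 + 1])
```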
FIGS. 10A-10D show searching for a best fit on the error surface: an error surface calculation in the Gradient Descent Search method involves calculating mean squared differences of pixels in the square fit box centered on reference image pixel (x0, y0), between the reference image frame and the corresponding (offset) location (x, y) on the search image frame.
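This error value is the quantity the FIG. 37B and FIG. 41B flowcharts call fit(x, y, xfit, yfit): a root-mean-squared difference over a box of radius BoxRadius. A sketch, reusing the NumPy conventions above and assuming both boxes lie fully inside their images; the RMS form, rather than a plain mean-squared difference, follows the FIG. 41B flowchart.

```python
import numpy as np

def fit_value(ref_img, search_img, x0, y0, x, y, box_radius):
    """RMS difference between the fit box centered at (x0, y0) in the
    reference frame and the box centered at (x, y) in the search frame."""
    r = box_radius
    ref_box = ref_img[y0 - r:y0 + r + 1, x0 - r:x0 + r + 1].astype(np.float64)
    srch_box = search_img[y - r:y + r + 1, x - r:x + r + 1].astype(np.float64)
    diff = ref_box - srch_box
    return np.sqrt(np.mean(diff * diff))   # one point on the error surface
```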
FIGS. 11A-11C show a second search box derived from a descent down the error surface gradient (evaluated separately), for which the evaluated error function is reduced, possibly minimized, with respect to the original reference box (evident from visual comparison of the boxes with the reference box in FIGS. 10A-10D).

FIG. 12 depicts the gradient component evaluation. The error surface gradient is calculated as per the definition of the gradient. Vertical and horizontal error deviations are evaluated at four positions near the search box center position, and combined to provide an estimate of the error gradient for that position.
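Per the FIG. 41A flowchart, the four evaluations are taken one box radius to either side of the current offset, differenced per axis, and the resulting vector is rescaled to a fixed two-pixel step before the next descent iteration. A sketch building on the fit_value helper above; the loop structure and fixed iteration count are illustrative assumptions.

```python
import numpy as np

def fit_gradient(ref_img, search_img, x0, y0, x, y, r):
    """Estimate the error surface gradient from four nearby evaluations,
    as in FIGS. 12 and 41A, rescaled to a two-pixel step."""
    gradx = (fit_value(ref_img, search_img, x0, y0, x + r, y, r)
             - fit_value(ref_img, search_img, x0, y0, x - r, y, r))
    grady = (fit_value(ref_img, search_img, x0, y0, x, y + r, r)
             - fit_value(ref_img, search_img, x0, y0, x, y - r, r))
    length = np.hypot(gradx, grady) or 1.0
    return 2.0 * gradx / length, 2.0 * grady / length

def descend(ref_img, search_img, x0, y0, r, steps=20):
    """Walk the search box downhill on the error surface from (x0, y0)."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        gx, gy = fit_gradient(ref_img, search_img, x0, y0, int(x), int(y), r)
        x, y = x - gx, y - gy              # step against the gradient
    return int(round(x)), int(round(y))
```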
FIG. 13 shows a propagated mask in the first sequential instance where there is little discrepancy between the underlying image data and the mask data. The dress mask and hand mask can be clearly seen to be off relative to the image data.

FIG. 14 shows that by using the automatic mask fitting routine, the mask data adjusts to the image data by referencing the underlying image data in the preceding image.

FIG. 15 shows that the mask data in later images within the sequence shows marked discrepancy relative to the underlying image data. Eye makeup, lipstick, blush, hair, face, dress and hand image data are all displaced relative to the mask data.

FIG. 16 shows that the mask data is adjusted automatically based on the underlying image data from the previous mask and underlying image data.

FIG. 17 shows the mask data from FIG. 16 with appropriate color transforms after whole frame automatic mask fitting. The mask data is adjusted to fit the underlying luminance pattern based on data from the previous frame or from the initial key frame.

FIG. 18 shows polygons that are used to outline a region of interest for masking in frame one. The square polygon points snap to the edges of the object of interest. Using a Bezier curve, the Bezier points snap to the object of interest and the control points/curves shape to the edges.

FIG. 19 shows the entire polygon or Bezier curve carried to a selected last frame in the display memory, where the operator adjusts the polygon points or Bezier points and curves using the snap function, which automatically snaps the points and curves to the edges of the object of interest.
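The snap function, detailed in the FIG. 39A-39C flowcharts, looks for the strongest nearby edge within a search range and moves the control point to the closest edge pixel. A compact sketch, assuming a Sobel-style gradient stands in for the flowcharts' average-then-gradient filtering and that "closest" means Euclidean distance; both choices are illustrative.

```python
import numpy as np

def snap_to_edge(gray, point, rng, edge_thresh=32.0):
    """Move a polygon/Bezier control point to the nearest edge pixel
    within a (2*rng+1)^2 window, or return it unchanged if no edge found."""
    px, py = point
    h, w = gray.shape
    x0, x1 = max(px - rng, 1), min(px + rng, w - 2)
    y0, y1 = max(py - rng, 1), min(py + rng, h - 2)
    win = gray[y0 - 1:y1 + 2, x0 - 1:x1 + 2].astype(np.float64)
    # Gradient magnitude over the snap window (central differences).
    gx = win[1:-1, 2:] - win[1:-1, :-2]
    gy = win[2:, 1:-1] - win[:-2, 1:-1]
    mag = np.hypot(gx, gy)                 # mag[j, i] <-> pixel (x0+i, y0+j)
    ys, xs = np.nonzero(mag > edge_thresh)
    if len(xs) == 0:
        return point                       # no edge found: leave point alone
    dist = (xs + x0 - px) ** 2 + (ys + y0 - py) ** 2
    k = int(np.argmin(dist))
    return (int(xs[k] + x0), int(ys[k] + y0))
```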
FIG. 20 shows that if there is a marked discrepancy between the points and curves in frames between the two frames where there was an operator interactive adjustment, the operator will further adjust a frame in the middle of the plurality of frames where there is maximum error of fit.
FIG. 21 shows that when it is determined that the polygons or Bezier curves are correctly animating between the two adjusted frames, the appropriate masks are applied to all frames.

FIG. 22 shows the resulting masks from a polygon or Bezier animation with automatic point and curve snap to edges. The brown masks are the color transforms and the green masks are the arbitrary color masks.

FIG. 23 shows an example of two pass blending: the objective in two-pass blending is to eliminate moving objects from the final blended mosaic. This can be done by first blending the frames so the moving object is completely removed from the left side of the background mosaic. As shown in FIG. 23, the character is removed from the scene, but can still be seen in the right side of the background mosaic.

FIG. 24 shows the second pass blend. A second background mosaic is then generated, where the blend position and width are used so that the moving object is removed from the right side of the final background mosaic. As shown in FIG. 24, the character is removed from the scene, but can still be seen in the left side of the background mosaic. In the second pass blend as shown in FIG. 24, the moving character is shown on the left.

FIG. 25 shows the final background corresponding to FIGS. 23-24. The two passes are blended together to generate the final blended background mosaic with the moving object removed from the scene. As shown in FIG. 25, the final blended background has the moving character removed.
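Combining the two passes of FIGS. 23-24 into FIG. 25's result amounts to taking each pixel from whichever mosaic already excludes the mover. A schematic sketch, assuming the two mosaics are pre-aligned arrays and that a per-pixel boolean map marks where the mover survived pass one; that map is an illustrative stand-in for the patent's blend position and width bookkeeping.

```python
import numpy as np

def final_mosaic(left_clean, right_clean, mover_in_left_clean):
    """Combine the two blend passes of FIGS. 23-24 into FIG. 25's result.

    left_clean:  pass-1 mosaic, mover removed on the left side.
    right_clean: pass-2 mosaic, mover removed on the right side.
    mover_in_left_clean: boolean (H, W) map of where pass 1 still shows
        the mover; take pass-2 pixels there.
    """
    out = left_clean.copy()
    out[mover_in_left_clean] = right_clean[mover_in_left_clean]
    return out
```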
FIG. 26 shows an edit frame pair window.

FIG. 27 shows sequential frames representing a camera pan that are loaded into memory. The motion object (butler moving left to the door) has been masked with a series of color transform information, leaving the background black and white with no masks or color transform information applied.

In this case, the single representation of the complete background has been masked with color transforms in a manner similar to the motion objects. Note that outlines of removed foreground objects appear truncated and unrecognizable due to their motion across the input frame sequence interval, i.e., the black objects in the frame represent areas in which the motion objects (actors) never expose the background and foreground. The black objects are ignored during the masking operation because the resulting background mask is later applied to all frames used to create the single representation of the background only where there is no pre-existing mask.

FIG. 35 shows the sequential frames in the static camera scene cut after the background mask information has been applied to each frame with appropriate offsets and where there is no pre-existing mask information.

FIG. 36 shows a representative sample of frames from the static camera scene cut after the background information has been applied with appropriate offsets and where there is no pre-existing mask information.

FIGS. 37A-37C show embodiments of the mask fitting functions, including calculate fit grid and interpolate mask on fit grid.

FIGS. 38A-38B show embodiments of the extract background functions.

FIGS. 39A-39C show embodiments of the snap point functions.

FIGS. 40A-40C show embodiments of the bimodal threshold masking functions.

FIGS. 41A-41B show embodiments of the calculate fit value functions.
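Of these, the bimodal threshold tool (FIGS. 40A-40C) is compact enough to sketch here: per the flowcharts, the threshold is the mean luminance under the cursor shape (Threshold = Sum/Num in FIG. 40C), and the current mask index is then set on the shape's pixels that fall on the light or dark side of that threshold (FIG. 40B). The function below is a minimal reading of those flowcharts under the earlier array conventions; the names and the circular brush shape are illustrative assumptions.

```python
import numpy as np

def apply_bimodal_brush(gray, mask, cx, cy, radius, mask_index, is_light):
    """Relative bimodal threshold brush: within a circular cursor shape,
    threshold = mean luminance under the shape; mask only the pixels on
    the chosen (light or dark) side of that threshold."""
    h, w = gray.shape
    ys, xs = np.ogrid[:h, :w]
    shape = (xs - cx) ** 2 + (ys - cy) ** 2 <= radius ** 2
    threshold = gray[shape].mean()        # Threshold = Sum / Num
    side = gray >= threshold if is_light else gray < threshold
    mask[shape & side] = mask_index       # light or dark area under brush
    return mask
```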
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION
