US007333670B2

(12) United States Patent
Sandrew

(10) Patent No.: US 7,333,670 B2
(45) Date of Patent: *Feb. 19, 2008

(54) IMAGE SEQUENCE ENHANCEMENT SYSTEM AND METHOD

(75) Inventor: Barry B. Sandrew, Encinitas, CA (US)

(73) Assignee: Legend Films, Inc., San Diego, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 33 days.

This patent is subject to a terminal disclaimer.

(21) Appl. No.: 11/324,815

(22) Filed: Jan. 4, 2006

(65) Prior Publication Data
US 2006/0171584 A1    Aug. 3, 2006

Related U.S. Application Data

(62) Division of application No. 10/450,970, filed as application No. PCT/US02/14192 on May 6, 2002, now Pat. No. 7,181,081.

(60) Provisional application No. 60/288,929, filed on May 4, 2001.

(51) Int. Cl.
G06K 9/40    (2006.01)

(52) U.S. Cl. ...................... 382/254; 382/184; 382/212; 382/283; 358/506; 358/517

(58) Field of Classification Search ................ 382/184, 382/212, 254, 283, 293; 358/506, 517
See application file for complete search history.

(56) References Cited

U.S. PATENT DOCUMENTS

6,263,101 B1 *   7/2001   Klein ................... 382/162
6,364,835 B1 *   4/2002   Hossack et al. ...... 600/443
6,445,816 B1 *   9/2002   Pettigrew ............. 382/162
6,707,487 B1 *   3/2004   Aman et al. .......... 348/169
7,181,081 B2 *   2/2007   Sandrew .............. 382/254

* cited by examiner

Primary Examiner: Yosef Kassa
(74) Attorney, Agent, or Firm: Dalina Law Group, P.C.

(57) ABSTRACT

Motion picture scenes to be colorized are broken into separate elements, backgrounds/sets or motion/onscreen action. Background and motion elements are combined separately into single frame representations of multiple frames which becomes a visual reference database that includes data for all frame offsets used later for the computer controlled application of masks within a sequence of frames. Each pixel address within the database corresponds to a mask/lookup table address within the digital frame and X, Y, Z location of subsequent frames. Masks are applied to subsequent frames of motion objects based on various differentiating image processing methods, including automated mask fitting of all masks or single masks in an entire frame, bezier and polygon tracing of selected regions with edge detected shaping and operator directed detection of subsequent regions. The gray scale actively determines the mask and corresponding color lookup that is applied in a keying fashion within regions of interest.

29 Claims, 22 Drawing Sheets

[Front-page figure: a floating child preview window overlaying sequential frames, with a tool bar, frame numbers, and controls for previewing underlying images in real-time motion or single frame stepping to assure alignment and quality control.]

PRIME FOCUS EX 1004-1
PRIME FOCUS v LEGEND3D
U.S. Patent    Feb. 19, 2008    US 7,333,670 B2

[Drawing sheets 1 through 22 (the figures are images in the original; only labels and legible annotations are recoverable):]

Sheet 1: Figure 1
Sheet 2: Figures 2, 3, and 4
Sheet 3: Figure 5A. Annotations: "Key frame is the only frame fully masked with color lookup tables"; "Frame numbers" (a grid numbered 1 through 36); "Floating Tool Bar".
Sheet 4: Figures 6A and 6B. Annotations: "Floating child preview window overlaying sequential frames"; "Tool Bar"; "Controls for previewing underlying images in real-time motion or single frame stepping to assure alignment and quality control"; "Frame numbers".
Sheet 5: Figures 7A and 7B. Annotations: "Floating Tool Bar"; "Key frame masks or individual masks are automatically copied to subsequent frames (2 through 36)"; a frame-number grid (1 through 36).
Sheet 6: Figure 8
Sheet 7: Figures 9A and 9B
Sheet 8: Figures 10A, 10B, 10C, and 10D. Annotations: "Reference Image"; "Search Box (x, y)".
Sheet 9: Figures 11A, 11B, and 11C. Annotations: "Search Box 1"; "Search Box 2".
Sheet 10: Figure 12. Annotations: "Error value (x, y+dy)"; "Error value (x-dx, y)"; "Error value (x+dx, y)"; "Error value (x, y-dy)".
Sheet 11: Figure 13
Sheet 12: Figures 15 and 16
Sheet 13: Figures 17 and 18
Sheet 14: Figure 19
Sheet 15: Figures 21, 22, and 23
Sheet 16: Figures 24 and 25
Sheet 17: Figure 26
Sheet 18: Figures 27 and 28
Sheet 19: Figures 29 and 30
Sheet 20: Figures 31 and 32
Sheet 21: Figures 33 and 35
Sheet 22: Figure 36

`
IMAGE SEQUENCE ENHANCEMENT SYSTEM AND METHOD

This application is a divisional of U.S. patent application Ser. No. 10/450,970, entitled "Image Sequence Enhancement System and Method", filed Jun. 18, 2003, now U.S. Pat. No. 7,181,081, the specification of which is hereby incorporated herein by reference, which is a national stage entry of Patent Cooperation Treaty Application Serial No. PCT/US02/14192, filed May 6, 2002, the specification of which is hereby incorporated herein by reference, which takes priority from U.S. Provisional Patent Application 60/288,929, filed May 4, 2001, the specification of which is hereby incorporated herein by reference.

BACKGROUND OF THE INVENTION

Prior art patents describing methods for the colorizing of black and white feature films involved the identification of gray scale regions within a picture followed by the application of a pre-selected color transform or lookup tables for the gray scale within each region defined by a masking operation covering the extent of each selected region and the subsequent application of said masked regions from one frame to many subsequent frames. The primary difference between U.S. Pat. No. 4,984,072, System And Method For Color Image Enhancement, and U.S. Pat. No. 3,705,762, Method For Converting Black-And-White Films To Color Films, is the manner by which the regions of interest (ROIs) are isolated and masked, how that information is transferred to subsequent frames and how that mask information is modified to conform with changes in the underlying image data. In the U.S. Pat. No. 4,984,072 system, the region is masked by an operator via a one-bit painted overlay and operator manipulated using a digital paintbrush method frame by frame to match the movement. In the U.S. Pat. No. 3,705,762 process, each region is outlined or rotoscoped by an operator using vector polygons, which are then adjusted frame by frame by the operator, to create animated masked ROIs.

In both systems the color transform lookup tables and regions selected are applied and modified manually to each frame in succession to compensate for changes in the image data which the operator detects visually. All changes and movement of the underlying luminance gray scale are subjectively detected by the operator, and the masks are sequentially corrected manually by the use of an interface device such as a mouse for moving or adjusting mask shapes to compensate for the detected movement. In all cases the underlying gray scale is a passive recipient of the mask containing pre-selected color transforms, with all modifications of the mask under operator detection and modification. In these prior inventions the mask information does not contain any information specific to the underlying luminance gray scale, and therefore no automatic position and shape correction of the mask to correspond with image feature displacement and distortion from one frame to another is possible.

SUMMARY OF THE INVENTION

In the system and method of the present invention, scenes to be colorized are classified into two separate categories: either background elements (i.e., sets and foreground elements that are stationary) or motion elements (e.g., actors, automobiles, etc.) that move throughout the scene. These background elements and motion elements are treated separately in this invention, similar to the manner in which traditional animation is produced.

Motion Elements: The motion elements are displayed as a series of sequential tiled frame sets or thumbnail images complete with background elements. The motion elements are masked in a key frame using a multitude of operator interface tools common to paint systems as well as unique tools such as relative bimodal thresholding, in which masks are applied selectively to contiguous light or dark areas bifurcated by a cursor brush. After the key frame is fully designed and masked, all mask information from the key frame is then applied to all frames in the display using mask fitting techniques that include:

1. Automatic mask fitting using Fast Fourier Transform and Gradient Descent Calculations based on luminance and pattern matching which references the same masked area of the key frame followed by all prior subsequent frames in succession.
2. Bezier curve animation with edge detection as an automatic animation guide.
3. Polygon animation with edge detection as an automatic animation guide.

In another embodiment of this invention, these background elements and motion elements are combined separately into single frame representations of multiple frames, as tiled frame sets or as a single frame composite of all elements (i.e., including both motion and backgrounds/foregrounds) that then becomes a visual reference database for the computer controlled application of masks within a sequence composed of a multiplicity of frames. Each pixel address within the reference visual database corresponds to a mask/lookup table address within the digital frame and X, Y, Z location of subsequent "raw" frames that were used to create the reference visual database. Masks are applied to subsequent frames based on various differentiating image processing methods such as edge detection combined with pattern recognition and other sub-mask analysis, aided by operator segmented regions of interest from reference objects or frames, and operator directed detection of subsequent regions corresponding to the original region of interest. In this manner, the gray scale actively determines the location and shape of each mask and corresponding color lookup from frame to frame that is applied in a keying fashion within predetermined and operator controlled regions of interest.

Camera Pan Background and Static Foreground Elements: Stationary foreground and background elements in a plurality of sequential images comprising a camera pan are combined and fitted together using a series of phase correlation, image fitting and focal length estimation techniques to create a composite single frame that represents the series of images used in its construction. During the process of this construction the motion elements are removed through operator adjusted global placement of overlapping sequential frames.

The single background image representing the series of camera pan images is color designed using multiple color transform look up tables limited only by the number of pixels in the display. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. Once the background color design is completed the mask information is transferred automatically to all the frames that were used to create the single composited image.

Image offset information relative to each frame is registered in a text file during the creation of the single composite
image representing the pan and used to apply the single composite mask to all the frames used to create the composite image.

Since the foreground moving elements have been masked separately prior to the application of the background mask, the background mask information is applied wherever there is no pre-existing mask information.
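The fill rule above — background mask information is applied only where no foreground mask already exists — can be sketched as a boolean select. This is a minimal illustration; the mask-ID convention and array shapes are assumptions, not details from the patent:

```python
import numpy as np

# Convention (illustrative): 0 = unmasked; nonzero IDs index color-transform
# lookup tables.
foreground = np.array([[0, 0, 7],
                       [0, 7, 7],
                       [0, 0, 0]])          # motion-element mask, applied first
background = np.full((3, 3), 2)             # single background design mask

# Background mask information is applied wherever there is no
# pre-existing (foreground) mask information.
combined = np.where(foreground != 0, foreground, background)
print(combined.tolist())  # -> [[2, 2, 7], [2, 7, 7], [2, 2, 2]]
```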
Static Camera Scenes with and without Film Weave, Minor Camera Following and Camera Drift: In scenes where there is minor camera movement or film weave resulting from the sprocket transfer from 35 mm or 16 mm film to digital format, the motion objects are first fully masked using the techniques listed above. All frames in the scene are then processed automatically to create a single image that represents both the static foreground elements and background elements, eliminating all masked moving objects where they both occlude and expose the background.

Wherever the masked moving object exposes the background or foreground, the instance of background and foreground previously occluded is copied into the single image with priority and proper offsets to compensate for camera movement. The offset information is included in a text file associated with each single representation of the background so that the resulting mask information can be applied to each frame in the scene with proper mask offsets.

The single background image representing the series of static camera frames is color designed using multiple color transform look up tables limited only by the number of pixels in the display. Where the motion elements occlude the background elements continuously within the series of sequential frames they are seen as black figures that are ignored and masked over. The black objects are ignored during the masking operation because the resulting background mask is later applied to all frames used to create the single representation of the background only where there is no preexisting mask. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. Once the background color design is completed the mask information is transferred automatically to all the frames that were used to create the single composited image.
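The automatic mask fitting named above pairs Fast Fourier Transform pattern matching with Gradient Descent Calculations over a luminance error surface, evaluating the four neighboring error values (x±dx, y) and (x, y±dy) shown in Figure 12. The following is a minimal sketch of the descent step only; the function name, the greedy four-neighbor search, and the toy frames are illustrative assumptions, not the patented implementation:

```python
import numpy as np

def fit_mask_offset(reference_patch, frame, start_xy, max_steps=100):
    """Greedy descent on a luminance-error surface: from the previous mask
    position, step one pixel in whichever of the four directions lowers the
    sum-squared luminance error, stopping at a local minimum."""
    h, w = reference_patch.shape
    x, y = start_xy

    def error(x, y):
        window = frame[y:y + h, x:x + w]
        if window.shape != reference_patch.shape:   # window fell off the frame
            return np.inf
        return float(np.sum((window.astype(float) - reference_patch) ** 2))

    for _ in range(max_steps):
        candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        best = min(candidates, key=lambda p: error(*p))
        if error(*best) >= error(x, y):
            break                                   # local minimum reached
        x, y = best
    return x, y

# Toy demo: a bright 3x3 blob moves by (+2, +1) between frames.
key = np.zeros((12, 12)); key[4:7, 4:7] = 200
nxt = np.zeros((12, 12)); nxt[5:8, 6:9] = 200
patch = key[4:7, 4:7]
print(fit_mask_offset(patch, nxt, (4, 4)))  # -> (6, 5)
```

In practice an FFT-based correlation would supply the coarse starting position, with a descent like this refining it per mask.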
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION

Feature Film and TV Series Data Preparation for Colorization: Feature films are tele-cined or transferred from 35 mm or 16 mm film using a high resolution scanner such as a 10-bit Spirit DataCine or similar device to HDTV (1920 by 1080 24P), or data-cined on a laser film scanner such as that manufactured by Imagica Corp. of America at a larger format, 2000 lines to 4000 lines and up to 16 bits of grayscale. The high resolution frame files are then converted to standard digital files such as uncompressed TIF files or uncompressed TGA files, typically in 16 bit three-channel linear format or 8 bit three-channel linear format. If the source data is HDTV, the 10-bit HDTV frame files are converted to similar TIF or TGA uncompressed files at either 16 bits or 8 bits per channel. Each frame pixel is then averaged such that the three channels are merged to create a single 16 bit channel or 8 bit channel respectively.
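The channel-merge step above amounts to a per-pixel average across the three channels. A minimal sketch; the function name and sample values are illustrative:

```python
import numpy as np

def merge_to_single_channel(rgb, bits=16):
    """Average the three channels of a scanned frame into one grayscale
    channel, at 16-bit or 8-bit precision as described."""
    dtype = np.uint16 if bits == 16 else np.uint8
    # Mean over the channel axis; rounding keeps full-scale values intact.
    return np.round(rgb.astype(np.float64).mean(axis=2)).astype(dtype)

frame = np.array([[[65535, 65535, 65535],
                   [0, 30000, 60000]]], dtype=np.uint16)  # 1x2 RGB frame
print(merge_to_single_channel(frame).tolist())  # -> [[65535, 30000]]
```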
Digitization, Telecine and Format Independence: Monochrome elements of either 35 or 16 mm negative or positive film are digitized at various resolutions and bit depths within a high resolution film scanner such as that performed with a Spirit DataCine by Philips and Eastman Kodak, which transfers either 525 or 625 formats, HDTV, (TV) 1280x720/60 Hz progressive, 2K, DTV (ATSC) formats like 1920x1080/24 Hz/25 Hz progressive and 1920x1080/48 Hz/50 Hz segmented frame, or 1920x1080 50i as examples. The invention provides improved methods for editing film into motion pictures. Visual images are transferred from developed motion picture film to a high definition video storage medium, which is a storage medium adapted to store images and to display images in conjunction with display equipment having a scan density substantially greater than that of an NTSC compatible video storage medium and associated display equipment. The visual images are also transferred, either from the motion picture film or the high definition video storage medium, to a digital data storage format adapted for use with digital nonlinear motion picture editing equipment. After the visual images have been transferred to the high definition video storage medium, the digital nonlinear motion picture editing equipment is used to generate an edit decision list, to which the motion picture film is then conformed. The high definition video storage medium will be adapted to store and display visual images having a scan density of at least 1080 horizontal lines. Electronic or optical transformation may be utilized to allow use of visual aspect ratios that make full use of the storage formats used in the method. This digitized film data, as well as data already transferred from film to one of a multiplicity of formats such as HDTV, are entered into a conversion system such as the HDTV Still Store manufactured by Avica Technology Corporation. Such large scale digital buffers and data converters are capable of converting digital images to all standard formats such as 1080i HDTV, 720p, and 1080p/24. An Asset Management System server provides powerful local and server back ups and archiving to standard SCSI devices, C2-level security, streamlined menu selection and multiple criteria database searches.

During the process of digitizing images from motion picture film, the mechanical positioning of the film frame in the telecine machine suffers from an imprecision known as "film weave", which cannot be fully eliminated. However, various film registration and ironing or flattening gate assemblies are available, such as that embodied in Eastman Kodak Company's U.S. Pat. No. 5,328,073, Film Registration and Ironing Gate Assembly, which involves the use of a gate with a positioning location or aperture for focal positioning of an image frame of a strip film with edge perforations. Undersized first and second pins enter a pair of transversely aligned perforations of the film to register the image frame with the aperture. An undersized third pin enters a third perforation spaced along the film from the second pin and then pulls the film obliquely to a reference line extending between the first and second pins to nest against the first and second pins the perforations thereat and register the image frame precisely at the positioning location or aperture. A pair of flexible bands extending along the film edges adjacent the positioning location moves progressively into incrementally increasing contact with the film to iron it and clamp its perforations against the gate. The pins register the image frame precisely with the positioning location, and the bands maintain the image frame in precise focal position. Positioning can be further enhanced following the precision mechanical capture of images by methods such as that embodied in U.S. Pat. No. 4,903,131, Method For The Automatic Correction Of Errors In Image Registration During Film Scanning, by BTS Broadcast Television Systems.

To remove or reduce the random structure known as grain within exposed feature film that is superimposed on the image, as well as scratches or particles of dust or other debris
which obscure the transmitted light, various algorithms will be used such as that embodied in U.S. Pat. No. 6,067,125, Structure And Method For Film Grain Noise Reduction, and U.S. Pat. No. 5,784,176, Method Of Image Noise Reduction Processing.

Reverse Editing of the Film Element Preliminary to Visual Database Creation:
The digital movie is broken down into scenes and cuts. The entire movie is then processed sequentially for the automatic detection of scene changes including dissolves, wipe-a-ways and cuts. These transitions are further broken down into camera pans, camera zooms and static scenes representing little or no movement. All database references to the above are entered into an edit decision list (EDL) within the Legend Films database based on standard SMPTE time code or other suitable sequential naming convention. There exists a great deal of technology for detecting dramatic as well as subtle transitions in film content, such as:

U.S. Pat. No. 5,959,697, Sep. 28, 1999, Method And System For Detecting Dissolve Transitions In A Video Signal
U.S. Pat. No. 5,920,360, Jul. 6, 1999, Method And System For Detecting Fade Transitions In A Video Signal
U.S. Pat. No. 5,841,512, Nov. 24, 1998, Methods Of Previewing And Editing Motion Pictures
U.S. Pat. No. 5,835,163, Nov. 10, 1998, Apparatus For Detecting A Cut In A Video
U.S. Pat. No. 5,767,923, Jun. 16, 1998, Method And System For Detecting Cuts In A Video Signal
U.S. Pat. No. 5,778,108, Jul. 6, 1996, Method And System For Detecting Transitional Markers Such As Uniform Fields In A Video Signal
U.S. Pat. No. 5,920,360, Jun. 7, 1999, Method And System For Detecting Fade Transitions In A Video Signal

All cuts that represent the same content, such as in a dialog between two or more people where the camera appears to volley between the two talking heads, are combined into one file entry for later batch processing.
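The patents cited above describe specific detectors; purely as an illustration of the kind of automatic cut detection involved, a common histogram-difference heuristic for hard cuts can be sketched as follows (the threshold, bin count, and function name are arbitrary choices, not values from this patent):

```python
import numpy as np

def detect_cuts(frames, threshold=0.5):
    """Flag frame indices where the luminance-histogram difference with the
    previous frame exceeds a threshold -- a simple hard-cut heuristic; it
    does not handle dissolves or fades."""
    cuts = []
    prev_hist = None
    for i, frame in enumerate(frames):
        hist, _ = np.histogram(frame, bins=16, range=(0, 256))
        hist = hist / hist.sum()                    # normalize per frame
        if prev_hist is not None:
            # L1 distance between normalized histograms, in [0, 2]
            if np.abs(hist - prev_hist).sum() > threshold:
                cuts.append(i)
        prev_hist = hist
    return cuts

dark = np.full((8, 8), 20)       # frames from one shot
bright = np.full((8, 8), 230)    # frames after a hard cut
print(detect_cuts([dark, dark, bright, bright]))  # -> [2]
```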
An operator checks all database entries visually to ensure that:
1. Scenes are broken down into camera moves.
2. Cuts are consolidated into single batch elements where appropriate.
3. Motion is broken down into simple and complex depending on occlusion elements, number of moving objects and quality of the optics (e.g., softness of the elements, etc.).

Pre-Production Scene Analysis and Scene Breakdown for Reference Frame ID and Database Creation:
Files are numbered using sequential SMPTE time code or other sequential naming convention. The image files are edited together at 24-frame/sec speed (without the field-related 3/2 pull down which is used in standard NTSC 30 frame/sec video) onto a DVD using Adobe After Effects or similar programs to create a running video with audio of the feature film or TV series. This is used to assist with scene analysis and scene breakdown.

Scene and Cut Breakdown:
1. A database permits the entering of scene, cut, design, key frame and other critical data in time code format as well as descriptive information for each scene and cut.
2. Each scene cut is identified relative to camera technique: time codes for pans, zooms, static backgrounds, static backgrounds with unsteady or drifting camera and unusual camera cuts that require special attention.
3. Designers and assistant designers study the feature film for color clues and color references. Research is provided for color accuracy where applicable.
4. Single frames from each scene are selected to serve as design frames. These frames will be color designed to represent the overall look and feel of the feature film. Approximately 80 to 100 design frames are typical for a feature film.
5. In addition, single frames called key frames from each cut of the feature film are selected that contain all the elements within each cut that require color consideration. There may be as many as 1,000 key frames. These frames will contain all the color transform information necessary to apply color to all sequential frames in each cut without additional color choices.

Color Selection:
Historical reference, studio archives and film analysis provide the designer with color references. Using an input device such as a mouse, the designer masks features in a selected single frame containing a plurality of pixels and assigns color to them using an HSL color space model based on creative considerations and the grayscale and luminance distribution underlying each mask. One or more base colors are selected for image data under each mask and applied to the particular luminance pattern attributes of the selected image feature. Each color selected is applied to an entire masked object or to the designated features within the luminance pattern of the object based on the unique grayscale values of the feature under the mask.

A lookup table or color transform for the unique luminance pattern of the object or feature is thus created which represents the color-to-luminance values applied to the object. Since the color applied to the feature extends the entire range of potential grayscale values from dark to light, the designer can ensure that as the distribution of the gray-scale values representing the pattern changes homogeneously into dark or light regions within subsequent frames of the movie, such as with the introduction of shadows or bright light, the color for each feature also remains consistently homogeneous and correctly lightens or darkens with the pattern upon which it is applied.
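A luminance-keyed color transform of the kind described — one hue/saturation choice whose lightness tracks the underlying gray value, applied only under the mask — can be sketched as follows. The helper names and the sample hue are illustrative assumptions, not the patent's implementation:

```python
import numpy as np
import colorsys

def build_luminance_lut(hue, saturation):
    """Build a 256-entry color transform spanning the entire grayscale range:
    fixed hue and saturation, with lightness driven by luminance, so shadows
    and highlights in later frames stay consistently colored."""
    lut = np.zeros((256, 3))
    for v in range(256):
        # HSL-style mapping: lightness follows the underlying gray value.
        lut[v] = colorsys.hls_to_rgb(hue, v / 255.0, saturation)
    return lut

def apply_mask_color(gray, mask, lut):
    """Apply the lookup table in a keying fashion: only pixels under the
    mask are colorized from their own luminance; the rest stay gray."""
    rgb = np.repeat((gray / 255.0)[..., None], 3, axis=2)
    rgb[mask] = lut[gray[mask]]
    return rgb

gray = np.array([[0, 128], [255, 64]], dtype=np.uint8)
mask = np.array([[True, True], [True, False]])
tint = build_luminance_lut(hue=0.07, saturation=0.4)   # illustrative hue
out = apply_mask_color(gray, mask, tint)
```

Note that pure black (gray 0) and pure white (gray 255) map to black and white under any hue, which is what keeps the color behaving correctly as shadows or bright light enter the pattern.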
Propagation of Mask Color Transform Information from One Frame to a Series of Subsequent Frames:
The masks representing designed selected color transforms in the single design frame are then copied to all subsequent frames in the series of movie frames by one or more methods such as auto-fitting bezier curves to edges, automatic mask fitting based on Fast Fourier Transforms and Gradient Descent Calculation tied to luminance patterns in a subsequent frame relative to the design frame or a successive preceding frame, mask paint to a plurality of successive frames by painting the object within only one frame, auto-fitting vector points to edges, and copying and pasting individual masks or a plurality of masks to selected subsequent frames.

Single Frame Set Design and Colorization:
In the present invention camera moves are consolidated and separated from motion elements in each scene by the creation of a montage or composite image of the background from a series of successive frames into a single frame containing all background elements for each scene and cut. The resulting single frame becomes a representation of the entire common background of a multiplicity of frames in a movie, creating a visual database of all elements and camera offset information within those frames.

In this manner most set backgrounds can be designed and colorized in one pass using a single frame montage. Each montage is masked without regard to the foreground moving objects, which are masked separately. The background masks of the montage are then automatically extracted from
the single background montage image and applied to the subsequent frames that were used to create the single montage using all the offsets stored in the image data for correctly aligning the masks to each subsequent frame.

There is a basic formula in filmmaking that varies little within and between feature films (except for those films employing extensive hand-held or Steadicam shots). Scenes are composed of cuts, which are blocked for standard camera moves, i.e., pans, zooms and static or locked camera angles as well as combinations of these moves. Cuts are either single occurrences or a combination of cut-a-ways where there is a return to a particular camera shot such as in a dialog between two individuals. Such cut-a-ways can be considered a single scene sequence or single cut and can be consolidated in one image-processing pass.

Pans can be consolidated within a single frame visual database using special panorama stitching techniques but without lens compensation. Each frame in a pan involves:
1. The loss of some information on one side, top and/or bottom of the frame;
2. Common information in the majority of the frame relative to the immediately preceding and subsequent frames; and
3. New information on the other side, top and/or bottom of the frame.

By stitching these frames together based on common elements within successive frames, and thereby creating a panorama of the background elements, a visual database is created with all pixel offsets available for referencing in the application of a single mask overlay to the complete set of sequential frames.
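The phase correlation named earlier for fitting overlapping pan frames can be sketched with the normalized cross-power spectrum. This toy version recovers integer offsets only and ignores the focal-length estimation and lens considerations the text mentions; the function name and demo data are illustrative:

```python
import numpy as np

def phase_correlation_offset(a, b):
    """Estimate the integer (dy, dx) translation of frame b relative to
    frame a via the normalized cross-power spectrum: the correlation
    surface peaks at the shift common to both frames."""
    fa, fb = np.fft.fft2(a), np.fft.fft2(b)
    cross = fb * np.conj(fa)
    cross /= np.maximum(np.abs(cross), 1e-12)       # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrap-around peaks to signed shifts.
    if dy > a.shape[0] // 2: dy -= a.shape[0]
    if dx > a.shape[1] // 2: dx -= a.shape[1]
    return dy, dx

# Toy demo: frame b is frame a shifted 3 pixels right (one pan step).
rng = np.random.default_rng(0)
a = rng.random((32, 32))
b = np.roll(a, shift=3, axis=1)
print(phase_correlation_offset(a, b))  # -> (0, 3)
```

Accumulating these pairwise offsets across the pan is what gives each frame its pixel offset into the stitched background panorama.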
Creation of a Visual Database:
Since each pixel within a single frame visual database of a background corresponds to an appropriate address within the respective "raw" (unconsolidated) frame from which it was created, any designer determined mas
