BACKGROUND OF THE INVENTION

1. Field of the Invention

One or more embodiments of the invention are related to the field of image analysis and image enhancement (suggested class 382, subclass 254). More particularly, but not by way of limitation, one or more embodiments of the invention enable an image sequence depth enhancement system and method that allows for the rapid conversion of a sequence of two-dimensional images into three-dimensional images.

2. Description of the Related Art

Known methods for colorizing black and white feature films involved the identification of gray scale regions within a picture, followed by the application of a pre-selected color transform or lookup tables for the gray scale within each region defined by a masking operation covering the extent of each selected region, and the subsequent application of said masked regions from one frame to many subsequent frames. The primary difference between U.S. Pat. No. 4,984,072, System And Method For Color Image Enhancement, and U.S. Pat. No. 3,705,762, Method For Converting Black-And-White Films To Color Films, is the manner by which the regions of interest (ROIs) are isolated and masked, how that information is transferred to subsequent frames, and how that mask information is modified to conform with changes in the underlying image data. In the U.S. Pat. No. 4,984,072 system, the region is masked by an operator via a one-bit painted overlay and manipulated by the operator using a digital paintbrush method frame by frame to match the movement. In the U.S. Pat. No. 3,705,762 process, each region is outlined or rotoscoped by an operator using vector polygons, which are then adjusted frame by frame by the operator to create animated masked ROIs.

In both systems the color transform lookup tables and selected regions are applied and modified manually for each frame in succession to compensate for changes in the image data which the operator detects visually. All changes and movement of the underlying luminance gray scale are subjectively detected by the operator, and the masks are sequentially corrected manually by the use of an interface device such as a mouse for moving or adjusting mask shapes to compensate for the detected movement. In all cases the underlying gray scale is a passive recipient of the mask containing pre-selected color transforms, with all modifications of the mask under operator detection and modification. In these prior inventions the mask information does not contain any information specific to the underlying luminance gray scale, and therefore no automatic position and shape correction of the mask to correspond with image feature displacement and distortion from one frame to another is possible.

Existing systems that are utilized to convert two-dimensional images to three-dimensional images generally require the creation of wire frame models for objects in images. The creation of wire frame models is a large undertaking in terms of labor. These systems also do not utilize the underlying luminance gray scale of objects in the images to automatically position and correct the shape of the masks of the objects to correspond with image feature displacement and distortion from one frame to another. Hence, great amounts of labor are required to manually shape masks for applying depth or Z-dimension data to the objects. Motion objects that move from frame to frame thus require a great deal of human intervention. In addition, there are no known solutions for enhancing two-dimensional images into three-dimensional images that utilize composite backgrounds of multiple images in a frame for spreading depth information to background and masked objects. Hence there is a need for an image sequence depth enhancement system and method.

BRIEF SUMMARY OF THE INVENTION

Embodiments of the present invention classify scenes to be colorized and/or converted from two-dimensional to three-dimensional movies into two separate categories: either background elements (i.e., sets and foreground elements that are stationary) or motion elements (e.g., actors, automobiles, etc.) that move throughout the scene. These background elements and motion elements are treated separately in this invention, similar to the manner in which traditional animation is produced.

Motion Elements: The motion elements are displayed as a series of sequential tiled frame sets or thumbnail images complete with background elements. The motion elements are masked in a key frame using a multitude of operator interface tools common to paint systems as well as unique tools such as relative bimodal thresholding, in which masks are applied selectively to contiguous light or dark areas bifurcated by a cursor brush (sketched in the example following the list below). After the key frame is fully designed and masked, all mask information from the key frame is then applied to all frames in the display using mask fitting techniques that include:

1. Automatic mask fitting using Fast Fourier Transform and Gradient Descent calculations based on luminance and pattern matching, which references the same masked area of the key frame followed by all prior subsequent frames in succession.

2. Bezier curve animation with edge detection as an automatic animation guide.

3. Polygon animation with edge detection as an automatic animation guide.

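As a minimal sketch of the relative bimodal thresholding tool described above, the following assumes grayscale frames held as NumPy arrays; the neighborhood-mean bifurcation and the flood-fill growth are illustrative assumptions, not the patent's actual implementation.

```python
import numpy as np
from collections import deque

def bimodal_threshold_mask(gray, cx, cy, radius=15):
    """Select contiguous light or dark pixels around a cursor position.

    gray : 2-D luminance array. cx, cy : cursor brush center.
    The neighborhood under the brush is bifurcated into a light and a
    dark mode by its mean; the mode under the cursor wins, and a flood
    fill grows the mask over contiguous pixels of that mode.
    """
    h, w = gray.shape
    y0, y1 = max(cy - radius, 0), min(cy + radius + 1, h)
    x0, x1 = max(cx - radius, 0), min(cx + radius + 1, w)
    threshold = gray[y0:y1, x0:x1].mean()       # bifurcation point
    want_light = gray[cy, cx] >= threshold      # mode under the cursor

    mask = np.zeros((h, w), dtype=bool)
    queue = deque([(cy, cx)])
    while queue:
        y, x = queue.popleft()
        if mask[y, x] or (gray[y, x] >= threshold) != want_light:
            continue
        mask[y, x] = True
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                queue.append((ny, nx))
    return mask
```

The mask produced this way in the key frame is what the fitting techniques in items 1-3 above then carry to subsequent frames.
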
In another embodiment of this invention, these background elements and motion elements are combined separately into single frame representations of multiple frames, as tiled frame sets or as a single frame composite of all elements (i.e., including both motion and backgrounds/foregrounds), which then becomes a visual reference database for the computer controlled application of masks within a sequence composed of a multiplicity of frames. Each pixel address within the reference visual database corresponds to a mask/lookup table address within the digital frame and X, Y, Z location of subsequent "raw" frames that were used to create the reference visual database. Masks are applied to subsequent frames based on various differentiating image processing methods, such as edge detection combined with pattern recognition and other sub-mask analysis, aided by operator segmented regions of interest from reference objects or frames and operator directed detection of subsequent regions corresponding to the original region of interest. In this manner, the gray scale actively determines the location and shape of each mask (and corresponding color lookup from frame to frame for colorization projects, or depth information for two-dimensional to three-dimensional conversion projects) that is applied in a keying fashion within predetermined and operator controlled regions of interest.

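To make the pixel-address correspondence above concrete, here is a minimal sketch of one possible reference-database record; the field names and dictionary layout are assumptions for illustration only, not the patent's data layout.

```python
from dataclasses import dataclass

@dataclass
class ReferenceEntry:
    """What one pixel of the composite reference image points back to."""
    mask_id: int   # index into the mask/lookup table
    frame: int     # raw frame the pixel came from
    x: int         # pixel location within that raw frame
    y: int
    z: float       # depth assigned for 2D-to-3D conversion

# The reference database is a composite-resolution grid of such entries,
# so a mask painted once on the composite resolves, per pixel, to the
# frame, position, and lookup data needed to key the mask into every
# raw frame of the sequence.
reference_db: dict[tuple[int, int], ReferenceEntry] = {}
reference_db[(640, 360)] = ReferenceEntry(mask_id=3, frame=12, x=212, y=360, z=4.5)
```
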
Camera Pan Background and Static Foreground Elements: Stationary foreground and background elements in a plurality of sequential images comprising a camera pan are combined and fitted together using a series of phase correlation, image fitting and focal length estimation techniques to create a composite single frame that represents the series of images used in its construction. During the process of this construction the motion elements are removed through operator adjusted global placement of overlapping sequential frames.

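A minimal sketch of the phase correlation step used to register overlapping pan frames, assuming two equally sized grayscale NumPy arrays; a production fitter would add windowing, sub-pixel peak refinement and the image fitting and focal length estimation steps named above.

```python
import numpy as np

def phase_correlation_offset(frame_a, frame_b):
    """Estimate the (dy, dx) shift of frame_b relative to frame_a.

    The normalized cross-power spectrum of two frames has a sharp peak
    at their relative translation, which is how overlapping frames of a
    camera pan can be globally placed into one composite image.
    """
    fa = np.fft.fft2(frame_a)
    fb = np.fft.fft2(frame_b)
    cross_power = np.conj(fa) * fb
    cross_power /= np.abs(cross_power) + 1e-12   # keep phase, drop magnitude
    correlation = np.fft.ifft2(cross_power).real
    dy, dx = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint correspond to negative shifts (FFT wrap-around).
    if dy > frame_a.shape[0] // 2:
        dy -= frame_a.shape[0]
    if dx > frame_a.shape[1] // 2:
        dx -= frame_a.shape[1]
    return dy, dx
```
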
For colorization projects, the single background image representing the series of camera pan images is color designed using multiple color transform lookup tables limited only by the number of pixels in the display. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. For depth conversion projects (i.e., two-dimensional to three-dimensional movie conversion, for example), the single background image representing the series of camera pan images is utilized to set depths of the various items in the background. Once the background color/depth design is completed, the mask information is transferred automatically to all the frames that were used to create the single composited image. In this manner, color or depth design is performed once per scene instead of once per frame, with color/depth information automatically spread to individual frames via embodiments of the invention.

In one or more embodiments of the invention, image offset information relative to each frame is registered in a text file during the creation of the single composite image representing the pan, and is used to apply the single composite mask to all the frames used to create the composite image.

Since the foreground moving elements have been masked separately prior to the application of the background mask, the background mask information is applied wherever there is no pre-existing mask information, as sketched below.

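The following sketch combines the two rules above: per-frame offsets read from the registration text file position the composite mask, and background mask values land only where no foreground mask already exists. The "frame dy dx" file format and non-negative offsets into the composite are assumptions for illustration.

```python
import numpy as np

def load_offsets(path):
    """Parse 'frame dy dx' lines from the offset registration text file."""
    offsets = {}
    with open(path) as f:
        for line in f:
            frame, dy, dx = line.split()
            offsets[int(frame)] = (int(dy), int(dx))
    return offsets

def apply_background_mask(frame_mask, composite_mask, offset):
    """Copy the composite mask into a frame's mask, only where unmasked.

    frame_mask     : integer mask IDs for one frame, 0 where unmasked
    composite_mask : mask IDs painted once on the composite image
    offset         : (dy, dx) of this frame within the composite
    """
    dy, dx = offset
    h, w = frame_mask.shape
    window = composite_mask[dy:dy + h, dx:dx + w]  # composite region over frame
    empty = frame_mask == 0                        # foreground masks keep priority
    frame_mask[empty] = window[empty]
    return frame_mask
```

In use, each frame's mask would be updated in place, e.g. apply_background_mask(masks[i], composite, load_offsets("pan_offsets.txt")[i]), with the pre-existing motion-object masks untouched.
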
Static Camera Scenes With and Without Film Weave, Minor Camera Following and Camera Drift: In scenes where there is minor camera movement or film weave resulting from the sprocket transfer from 35 mm or 16 mm film to digital format, the motion objects are first fully masked using the techniques listed above. All frames in the scene are then processed automatically to create a single image that represents both the static foreground elements and background elements, eliminating all masked moving objects where they both occlude and expose the background.

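A minimal sketch of building that single representative image, assuming aligned frames and boolean motion-object masks; taking the per-pixel median of unmasked samples is one plausible realization of the elimination described above, not necessarily the patent's exact method.

```python
import warnings
import numpy as np

def build_clean_plate(frames, motion_masks):
    """Composite a static scene with masked moving objects removed.

    frames       : list of 2-D luminance arrays (already offset-aligned)
    motion_masks : matching list of boolean arrays, True on moving objects
    Pixels that a moving object covers in every frame stay at 0 (black)
    and are simply ignored when the background mask is later designed.
    """
    stack = np.stack(frames).astype(float)
    masks = np.stack(motion_masks)
    stack[masks] = np.nan                        # drop occluded samples
    with warnings.catch_warnings():
        warnings.simplefilter("ignore", RuntimeWarning)
        plate = np.nanmedian(stack, axis=0)      # per-pixel consensus
    return np.nan_to_num(plate, nan=0.0)         # never-exposed areas -> black
```
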
Wherever the masked moving object exposes the background or foreground, the instance of background and foreground previously occluded is copied into the single image with priority and proper offsets to compensate for camera movement. The offset information is included in a text file associated with each single representation of the background so that the resulting mask information can be applied to each frame in the scene with proper mask offsets.

The single background image representing the series of static camera frames is color designed using multiple color transform lookup tables limited only by the number of pixels in the display. Where the motion elements occlude the background elements continuously within the series of sequential frames, they are seen as black figures that are ignored and masked over. The black objects are ignored during the masking operation because the resulting background mask is later applied to all frames used to create the single representation of the background only where there is no pre-existing mask. This allows the designer to include as much detail as desired, including air brushing of mask information and other mask application techniques that provide maximum creative expression. Once the background color design is completed, the mask information is transferred automatically to all the frames that were used to create the single composited image. For depth projects, the distance from the camera to each item in the composite frame is automatically transferred to all the frames that were used to create the single composited image. By shifting masked background objects horizontally more or less, their precise depth is thus set in a secondary viewpoint frame that corresponds to each frame in the scene. Areas where no image data exists for a second viewpoint may be marked in one or more embodiments of the invention using a user defined color that allows for the creation of missing data, to ensure that no artifacts occur during the two-dimensional to three-dimensional conversion process. Any known technique may be utilized in embodiments of the invention to cover areas in the background where no image data exists, i.e., data that may not be borrowed from another scene/frame, for example. After assigning depths to objects in the composite background, a second viewpoint image may be created for each image in a scene in order to produce a stereoscopic view of the movie, for example a left eye view where the original frames in the scene are assigned to the right eye viewpoint.

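A minimal sketch of creating such a second viewpoint by horizontal shifting, assuming a per-pixel depth map and an illustrative depth-to-disparity conversion; unwritten pixels keep a marker value standing in for the user defined color, so that missing data remains explicit for later fill.

```python
import numpy as np

GAP = -1  # stand-in for the user defined missing-data marker color

def render_second_view(image, depth, disparity_scale=8.0):
    """Shift pixels horizontally by a depth-derived disparity.

    Nearer pixels (smaller depth) shift farther, producing a left-eye
    view when the source frame serves as the right-eye view. Target
    pixels never written keep the GAP marker.
    """
    h, w = image.shape
    view = np.full((h, w), GAP, dtype=float)
    zbuf = np.full((h, w), np.inf)     # nearest depth wins on collisions
    for y in range(h):
        for x in range(w):
            shift = int(round(disparity_scale / max(depth[y, x], 1e-6)))
            nx = x + shift
            if 0 <= nx < w and depth[y, x] < zbuf[y, nx]:
                view[y, nx] = image[y, x]
                zbuf[y, nx] = depth[y, x]
    return view
```
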
BRIEF DESCRIPTION OF THE DRAWINGS

The patent or application file contains at least one drawing executed in color. Copies of this patent or patent application publication with color drawing(s) will be provided by the Office upon request and payment of the necessary fee.

FIG. 1 shows a plurality of feature film or television film frames representing a scene or cut in which there is a single instance or perspective of a background.

FIG. 2 shows an isolated background processed scene from the plurality of frames shown in FIG. 1 in which all motion elements are removed using various subtraction and differencing techniques. The single background image is then used to create a background mask overlay representing designer selected color lookup tables in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance.

FIG. 3 shows that a representative sample of each motion object (M-Object) in the scene receives a mask overlay that represents designer selected color lookup tables in which dynamic pixel colors automatically compensate or adjust for moving shadows and other changes in luminance as the M-Object moves within the scene.

FIG. 4 shows that all mask elements of the scene are then rendered to create a fully colored frame in which M-Object masks are applied to each appropriate frame in the scene, followed by the background mask, which is applied only where there is no pre-existing mask, in a Boolean manner.

FIGS. 5A and 5B show a series of sequential frames loaded into display memory in which one frame is fully masked with the background (key frame) and ready for mask propagation to the subsequent frames via automatic mask fitting methods.

FIGS. 6A and 6B show the child window displaying an enlarged and scalable single image of the series of sequential images in display memory. The child window enables the operator to manipulate masks interactively on a single frame or in multiple frames during real time or slowed motion.

FIGS. 7A and 7B show a single mask (flesh) propagated automatically to all frames in the display memory.

FIG. 8 shows all masks associated with the motion object propagated to all sequential frames in display memory.

FIG. 9A shows a picture of a face.

FIG. 9B shows a close up of the face in FIG. 9A wherein the "small dark" pixels shown in FIG. 9B are used to calculate a weighted index using bilinear interpolation.

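The weighted index of FIG. 9B relies on standard bilinear interpolation. A minimal sketch of sampling luminance at a fractional position, assuming the position lies in the image interior:

```python
import numpy as np

def bilinear_sample(gray, x, y):
    """Weighted value at fractional position (x, y) from its 4 neighbors."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    fx, fy = x - x0, y - y0
    # Each neighbor is weighted by the area of the opposing sub-rectangle.
    return ((1 - fx) * (1 - fy) * gray[y0, x0]
            + fx * (1 - fy) * gray[y0, x0 + 1]
            + (1 - fx) * fy * gray[y0 + 1, x0]
            + fx * fy * gray[y0 + 1, x0 + 1])
```
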
FIGS. 10A-D show searching for a best fit on the error surface: an error surface calculation in the Gradient Descent Search method involves calculating mean squared differences of pixels in the square fit box centered on reference image pixel (x0, y0), between the reference image frame and the corresponding (offset) location (x, y) on the search image frame.

FIGS. 11A-C show a second search box derived from a descent down the error surface gradient (evaluated separately), for which the evaluated error function is reduced, possibly minimized, with respect to the original reference box (evident from visual comparison of the boxes with the reference box in FIGS. 10A, B, C and D).

FIG. 12 depicts the gradient component evaluation. The error surface gradient is calculated as per the definition of the gradient. Vertical and horizontal error deviations are evaluated at four positions near the search box center position and combined to provide an estimate of the error gradient for that position.

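A minimal sketch of the error surface and gradient machinery of FIGS. 10-12, assuming grayscale NumPy frames and fit boxes that stay inside the image; the mean squared difference matches the figure text, while the step size and iteration count are illustrative assumptions.

```python
import numpy as np

def box_error(ref, search, x0, y0, x, y, half=10):
    """Mean squared difference between the fit box centered at (x0, y0)
    in the reference frame and the box at (x, y) in the search frame."""
    a = ref[y0 - half:y0 + half + 1, x0 - half:x0 + half + 1].astype(float)
    b = search[y - half:y + half + 1, x - half:x + half + 1].astype(float)
    return float(np.mean((a - b) ** 2))

def descend_error_surface(ref, search, x0, y0, steps=50, rate=0.5):
    """Gradient descent over the error surface to find the best-fit offset."""
    x, y = float(x0), float(y0)
    for _ in range(steps):
        xi, yi = int(round(x)), int(round(y))
        # Horizontal and vertical error deviations near the box center
        # (central differences) estimate the error surface gradient.
        gx = (box_error(ref, search, x0, y0, xi + 1, yi)
              - box_error(ref, search, x0, y0, xi - 1, yi)) / 2.0
        gy = (box_error(ref, search, x0, y0, xi, yi + 1)
              - box_error(ref, search, x0, y0, xi, yi - 1)) / 2.0
        # rate may need scaling to the image's dynamic range.
        x, y = x - rate * gx, y - rate * gy
    return int(round(x)), int(round(y))
```
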
FIG. 13 shows a propagated mask in the first sequential instance where there is little discrepancy between the underlying image data and the mask data. The dress mask and hand mask can be clearly seen to be off relative to the image data.

FIG. 14 shows that, by using the automatic mask fitting routine, the mask data adjusts to the image data by referencing the underlying image data in the preceding image.

FIG. 15 shows that the mask data in later images within the sequence shows marked discrepancy relative to the underlying image data. Eye makeup, lipstick, blush, hair, face, dress and hand image data are all displaced relative to the mask data.

FIG. 16 shows that the mask data is adjusted automatically based on the underlying image data from the previous mask and underlying image data.

FIG. 17 shows the mask data from FIG. 16 with appropriate color transforms after whole frame automatic mask fitting. The mask data is adjusted to fit the underlying luminance pattern based on data from the previous frame or from the initial key frame.

FIG. 18 shows polygons that are used to outline a region of interest for masking in frame one. The square polygon points snap to the edges of the object of interest. Using a Bezier curve, the Bezier points snap to the object of interest and the control points/curves shape to the edges.

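One plausible reading of the snap behavior in FIG. 18 is a local search for the strongest luminance gradient; the window size and gradient measure below are assumptions, sketched for a single polygon or Bezier point away from the image border.

```python
import numpy as np

def snap_to_edge(gray, px, py, radius=5):
    """Move point (px, py) to the strongest edge within a small window."""
    gy, gx = np.gradient(gray.astype(float))
    strength = np.hypot(gx, gy)              # edge magnitude per pixel
    y0, x0 = py - radius, px - radius        # window assumed inside image
    window = strength[y0:y0 + 2 * radius + 1, x0:x0 + 2 * radius + 1]
    dy, dx = np.unravel_index(np.argmax(window), window.shape)
    return x0 + dx, y0 + dy
```
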
FIG. 19 shows that the entire polygon or Bezier curve is carried to a selected last frame in the display memory, where the operator adjusts the polygon points or Bezier points and curves using the snap function, which automatically snaps the points and curves to the edges of the object of interest.

FIG. 20 shows that if there is a marked discrepancy between the points and curves in frames between the two frames where there was an operator interactive adjustment, the operator will further adjust a frame in the middle of the plurality of frames where there is maximum error of fit.

FIG. 21 shows that when it is determined that the polygons or Bezier curves are correctly animating between the two adjusted frames, the appropriate masks are applied to all frames.

FIG. 22 shows the resulting masks from a polygon or Bezier animation with automatic point and curve snap to edges. The brown masks are the color transforms and the green masks are the arbitrary color masks.

FIG. 23 shows an example of two pass blending: the objective in two-pass blending is to eliminate moving objects from the final blended mosaic. This can be done by first blending the frames so the moving object is completely removed from the left side of the background mosaic. As shown in FIG. 23, the moving character is removed from the scene but can still be seen in the right side of the background mosaic.

FIG. 24 shows the second pass blend. A second background mosaic is then generated, where the blend position and width are chosen so that the moving object is removed from the right side of the final background mosaic. As shown in FIG. 24, the moving character is removed from the scene but can still be seen in the left side of the background mosaic; in the second pass blend the moving character is shown on the left.

FIG. 25 shows the final background corresponding to FIGS. 23-24. The two passes are blended together to generate the final blended background mosaic with the moving object removed from the scene.

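A minimal sketch of the two-pass blend of FIGS. 23-25, assuming two aligned background mosaics and an operator-chosen seam column; the hard cut at the seam (rather than a feathered blend of adjustable width) is a simplifying assumption.

```python
import numpy as np

def two_pass_blend(mosaic_left_clean, mosaic_right_clean, seam_x):
    """Combine two blends so the moving object vanishes from both sides.

    mosaic_left_clean  : pass 1, moving object removed from the left side
    mosaic_right_clean : pass 2, moving object removed from the right side
    seam_x             : column where the two passes meet
    """
    final = np.empty_like(mosaic_left_clean)
    final[:, :seam_x] = mosaic_left_clean[:, :seam_x]
    final[:, seam_x:] = mosaic_right_clean[:, seam_x:]
    return final
```
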
FIG. 26 shows an edit frame pair window.

FIG. 27 shows sequential frames representing a camera pan that are loaded into memory. The motion object (butler moving left to the door) has been masked with a series of color transform information, leaving the background black and white with no masks or color transform information applied.

FIG. 28 shows six representative sequential frames of the pan above, displayed for clarity.

FIG. 29 shows the composite or montage image of the entire camera pan that was built using phase correlation techniques. The motion object (butler) is included as a transparency for reference by keeping the first and last frame and averaging the phase correlation in two directions. The single montage representation of the pan is color designed using the same color transform masking techniques as used for the foreground object.

FIG. 30 shows the sequence of frames in the camera pan after the background mask color transforms of the montage have been applied to each frame used to create the montage. The mask is applied where there is no pre-existing mask, thus retaining the motion object mask and color transform information while applying the background information with appropriate offsets.

FIG. 31 shows a selected sequence of frames in the pan, for clarity, after the color background masks have been automatically applied to the frames where there are no pre-existing masks.

FIG. 32 shows a sequence of frames in which all moving objects (actors) are masked with separate color transforms.

FIG. 33 shows a sequence of selected frames, for clarity, prior to background mask information. All motion elements have been fully masked using the automatic mask-fitting algorithm.

FIG. 34 shows the stationary background and foreground information minus the previously masked moving objects. In this case, the single representation of the complete background has been masked with color transforms in a manner similar to the motion objects. Note that outlines of removed foreground objects appear truncated and unrecognizable due to their motion across the input frame sequence interval, i.e., the black objects in the frame represent areas in which the motion objects (actors) never expose the background and foreground. The black objects are ignored during the masking operation because the resulting background mask is later applied to all frames used to create the single representation of the background only where there is no pre-existing mask.

FIG. 35 shows the sequential frames in the static camera scene cut after the background mask information has been applied to each frame with appropriate offsets and where there is no pre-existing mask information.

FIG. 36 shows a representative sample of frames from the static camera scene cut after the background information has been applied with appropriate offsets and where there is no pre-existing mask information.

FIGS. 37A-C show embodiments of the mask fitting functions, including calculate fit grid and interpolate mask on fit grid.

FIGS. 38A-B show embodiments of the extract background functions.

FIGS. 39A-C show embodiments of the snap point functions.

FIGS. 40A-C show embodiments of the bimodal threshold masking functions.

FIGS. 41A-B show embodiments of the calculate fit value functions.

FIG. 42 shows two image frames, separated in time by several frames, of a person levitating a crystal ball, wherein the various objects in the image frames are to be converted from two-dimensional objects to three-dimensional objects.

FIG. 43 shows the masking of the first object in the first image frame that is to be converted from a two-dimensional image to a three-dimensional image.

FIG. 44 shows the masking of the second object in the first image frame.

FIG. 45 shows the two masks in color in the first image frame, allowing for the portions associated with the masks to be viewed.

FIG. 46 shows the masking of the third object in the first image frame.

FIG. 47 shows the three masks in color in the first image frame, allowing for the portions associated with the masks to be viewed.

FIG. 48 shows the masking of the fourth object in the first image frame.

FIG. 49 shows the masking of the fifth object in the first image frame.

FIG. 50 shows a control panel for the creation of three-dimensional images, including the association of layers and three-dimensional objects with masks within an image frame, specifically showing the creation of a Plane layer for the sleeve of the person in the image.

FIG. 51 shows a three-dimensional view of the various masks shown in FIGS. 43-49, wherein the mask associated with the sleeve of the person is shown as a Plane layer that is rotated toward the left and right viewpoints on the right of the page.

FIG. 52 shows a slightly rotated view of FIG. 51.

FIG. 53 shows another slightly rotated view of FIG. 51.

FIG. 54 shows a control panel specifically showing the creation of a sphere object for the crystal ball in front of the person in the image.

FIG. 55 shows the application of the sphere object to the flat mask of the crystal ball, which is shown within the sphere and as projected to the front and back of the sphere to show the depth assigned to the crystal ball.

FIG. 56 shows a top view of the three-dimensional representation of the first image frame, showing that the Z-dimension assigned to the crystal ball places the crystal ball in front of the person in the scene.

FIG. 57 shows the sleeve plane rotating about the X-axis to make the sleeve appear to come farther out of the image.

FIG. 58 shows a control panel specifically showing the creation of a Head object for application to the person's face in the image, i.e., to give the person's face realistic depth without requiring a wire frame model, for example.

FIG. 59 shows the Head object in the three-dimensional view, too large and not aligned with the actual person's head.

FIG. 60 shows the Head object in the three-dimensional view, resized to fit the person's face and aligned, e.g., translated to the position of the actual person's head.

FIG. 61 shows the Head object in the three-dimensional view, with the Y-axis rotation shown by the circle and Y-axis originating from the person's head, thus allowing for the correct rotation of the Head object to correspond to the orientation of the person's face.

FIG. 62 shows the Head object also rotated slightly clockwise about the Z-axis to correspond to the person's slightly tilted head.

FIG. 63 shows the propagation of the masks into the second and final image frame.

FIG. 64 shows the original position of the mask corresponding to the person's hand.

FIG. 65 shows the reshaping of the mask, which can be performed automatically and/or manually, wherein any intermediate frames get the tweened depth information between the first image frame masks and the second image frame masks.

FIG. 66 shows the missing information for the left viewpoint highlighted in color on the left side of the masked objects in the lower image when the foreground object, here a crystal ball, is translated to the right.

FIG. 67 shows the missing information for the right viewpoint highlighted in color on the right side of the masked objects in the lower image when the foreground object, here a crystal ball, is translated to the left.

FIG. 68 shows an anaglyph of the final depth enhanced first image frame, viewable with Red/Blue 3-D glasses.

FIG. 69 shows an anaglyph of the final depth enhanced second and last image frame, viewable with Red/Blue 3-D glasses; note the rotation of the person's head, the movement of the person's hand and the movement of the crystal ball.

FIG. 70 shows the right side of the crystal ball with fill mode "smear", wherein the pixels with missing information for the left viewpoint, i.e., on the right side of the crystal ball, are taken from the right edge of the missing image pixels and "smeared" horizontally to cover the missing information.

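A minimal sketch of the "smear" fill mode of FIG. 70, assuming a single-channel view in which missing pixels carry a sentinel value standing in for the user defined color; each gap is covered by repeating the valid pixel at its right edge leftward across the gap.

```python
import numpy as np

GAP = -1  # stand-in for the user defined missing-data color

def smear_fill_from_right(view):
    """Fill missing pixels by smearing the nearest valid pixel leftward.

    Matches the FIG. 70 case: the missing data sits on the right side of
    the crystal ball for the left viewpoint, so the pixel at the right
    edge of each gap is repeated horizontally to cover it.
    """
    filled = view.copy()
    h, w = filled.shape
    for y in range(h):
        for x in range(w - 2, -1, -1):     # sweep each row right to left
            if filled[y, x] == GAP:
                filled[y, x] = filled[y, x + 1]
    return filled
```
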
DETAILED DESCRIPTION OF A PREFERRED EMBODIMENT OF THE INVENTION

Feature Film and TV Series Data Preparation for Colorization/Depth Enhancement: Feature films are tele-cined or transferred from 35 mm or 16 mm film using a high resolution scanner, such as a 10-bit Spirit DataCine or similar device, to HDTV (1920 by 1080 24P), or data-cined on a laser film scanner such as that manufactured by Imagica Corp. of America at a larger format of 2000 lines to 4000 lines and up to 16 bits of grayscale. The high resolution frame files are then converted to standard digital files such as uncompressed TIF files or uncompressed TGA files, typically in 16 bit three-channel linear format or 8 bit three-channel linear format. If the source data is HDTV, the 10-bit HDTV frame files are converted to similar TIF or TGA uncompressed files at either 16 bits or 8 bits per channel. Each frame pixel is then averaged such that the three channels are merged to create a single 16 bit channel or 8 bit channel respectively, as sketched below. Any other scanning technology capable of scanning an existing film to digital format may be utilized. Currently, many movies are generated entirely in digital format, and thus may be utilized without scanning the movie.

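A minimal sketch of the channel merge described above, assuming frames held as height x width x 3 NumPy arrays; the rounding choice is an assumption.

```python
import numpy as np

def merge_to_single_channel(frame_rgb):
    """Average the three color channels into one grayscale channel,
    preserving the source bit depth (e.g., uint16 or uint8)."""
    merged = frame_rgb.astype(np.float64).mean(axis=2)
    return merged.round().astype(frame_rgb.dtype)
```
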
Digitization, Telecine and Format Independence: Monochrome elements of either 35 or 16 mm negative or positive film are digitized at various resolutions and bit depths within a high resolution film scanner, such as is performed with a Spirit DataCine by Philips and Eastman Kodak, which transfers either 525 or 625 formats, HDTV 1280×720/60 Hz progressive, 2K, DTV (ATSC) formats like 1920×1080/24 Hz/25 Hz progressive and 1920×1080/48 Hz/50 Hz segmented frame, or 1920×1080 50i, as examples. The invention provides improved methods for editing film into motion pictures. Visual images are transferred from developed motion picture film to a high definition video storage medium, which is a storage medium adapted to store images and to display images in conjunction with display equipment having a scan density substantially greater than that of an NTSC compatible video storage medium and associated display equipment. The visual images are also transferred, either from the motion picture film or the high definition video storage medium, to a digital data storage format adapted for use with digital nonlinear motion picture editing equipment. After the visual images have been transferred to the high definition video storage medium, the digital nonlinear motion picture editing equipment is used to generate an edit decision list, to which the motion picture film is then conformed. The high definition video storage medium is generally adapted to store and display visual images having a scan density of at least 1080 horizontal lines. Electronic or optical transformation may be utilized to allow use of visual aspect ratios that make full use of the storage formats used in the method. This digitized film data, as well as data already transferred from film to one of a multiplicity of formats such as HDTV, is entered into a conversion system such as the HDTV Still Store manufactured by Avica Technology Corporation. Such large scale digital buffers and data converters are capable of converting digital images to all standard formats, such as the 1080i, 720p and 1080p/24 HDTV formats. An Asset Management System server provides powerful local and server back ups and archiving to standard SCSI devices, C2-level security, streamlined menu selection and multiple criteria database searches.

`
`During the process of digitizing images from motion picture film the mechanical
`positioning of the film frame in the telecine machine suffers from an imprecision known as
`“film weave”, which cannot be fully eliminated. However various film registration and
`ironing or flattening gate assemblies are available such as that embodied in Eastman Kodak
`Company's U.S. Pat. No. 5,328,073, Film Registration and Ironing Gate Assembly, which
`involves the use of a gate with a positioning location or aperture for focal positioning of an
`image frame of a strip film with edge perforations. Undersized first and second pins enter a
`pair of transversely aligned perforations of the film to register the image frame with the
`aperture. An undersized third pin enters a third perforation spaced along the film from the
`second pin and then pulls the film obliquely to a reference line extending between the first
`and second pins to nest against the first and second pins the perforations thereat and
`register the image frame precisely at the positioning location or aperture. A pair of flexible
`bands extending along the film edges adjacent the positioning location moves
`progressively into incrementally increasing contact with the film to iron it and clamp its
`perforations against the gate. The pins register the image frame precisely with the
`positioning location, and the bands maintain the image frame in precise focal position.
`Positioning can be further enhanced following the precision mechanical capture of im
