United States Patent [19]
Geshwind et al.

[11] Patent Number: 4,925,294
[45] Date of Patent: May 15, 1990

[54] METHOD TO CONVERT TWO DIMENSIONAL MOTION PICTURES FOR THREE-DIMENSIONAL SYSTEMS
[76] Inventors: David M. Geshwind, [address illegible]; Anthony H. Handal, Blue Chip La., Westport, Conn. 06880
[21] Appl. No.:
[22] Filed: Dec. 17, 1986
[51] Int. Cl.5 .................... G03B 35/14
[52] U.S. Cl. .................... 352/57; 352/86; 355/40
[58] Field of Search ............ 355/40, 52, 77; 352/43, 57, 85, 86, 129; 358/88, 89, 22, 81, 160
[56] References Cited
U.S. PATENT DOCUMENTS
3,772,465  11/1973  Vlahos et al. .......... 355/40
3,824,336  7/1974  Gould et al. .......... 355/52
4,606,625  8/1986  Geshwind .......... 352/85 X
4,809,065  2/1989  Harris et al. .......... 358/88
Primary Examiner: L. T. Hix
Assistant Examiner: D. Rutledge
[57] ABSTRACT
The present invention relates to the computer-assisted processing of standard two-dimensional motion pictures to generate processed image sequences which exhibit some three-dimensional depth effects when viewed under appropriate conditions.

44 Claims, 2 Drawing Sheets
[Front-page drawing: FIG. 2 block diagram. Legible labels: 2-D film or video input 10; scanner, display, memory and computer interface sections of a frame buffer; CRT video display 20; pen 35; tablet 30; computer/CPU 50; keyboard 55; output interface 60; 3-D VTR 70; 3-D film recorder 75.]
Legend3D, Inc. Ex. 2022-0001
IPR2016-01243
U.S. Patent   May 15, 1990   Sheet 1 of 2   4,925,294

[Drawing sheet: FIGURE 1]
U.S. Patent   May 15, 1990   Sheet 2 of 2   4,925,294

[Drawing sheet: FIGURE 2 (system block diagram; legible labels include a frame buffer with memory, keyboard 55, output interface 60, 3-D VTR 70 and 3-D film recorder 75)]
METHOD TO CONVERT TWO DIMENSIONAL MOTION PICTURES FOR THREE-DIMENSIONAL SYSTEMS
TECHNICAL FIELD
The invention relates to a method for converting existing film or videotape motion pictures to a form that can be used with three-dimensional systems for broadcast or exhibition.

BACKGROUND ART
With the advent of stereophonic sound, various techniques were developed to convert or 're-process' existing monophonic programs for stereophonic broadcast or recording systems. These included modifying the equalization, phase or tonal qualities of separate copies of the monophonic program for the left and right channels. While true stereophonic or binaural effects may not have been achieved, the effects were much improved over feeding the identical monophonic signal to both channels.

Similarly, with the almost universal use of color production, exhibition and broadcast systems for motion pictures and television, systems have been developed to convert existing monochrome or black and white materials to color programs. Such a system is described in applicant Geshwind's Pat. No. 4,606,625, issued Aug. 19, 1986. The results of these colorized products, while not always identical to true color motion pictures, are more suitable than black and white for color systems.

There have been a number of systems for exhibition or display of left- and right-eye pairs of binocular motion pictures. Early systems required two completely redundant projection or display systems; e.g., two film projectors or CRT television displays, each routed to one eye via mirrors. Other systems require either complicated and expensive projection or display systems, or expensive 'glasses' to deliver two separate images. For example:
red- and green-tinted monochrome images are both projected or displayed to be viewed through glasses with left and right lenses tinted either red or green;
two full-color images are projected through mutually perpendicular polarized filters and viewed through glasses with lenses that are also polarized in the same manner;
left and right images are displayed on alternate odd and even fields (or frames) of a standard (or high scan rate) television CRT and are viewed through 'glasses' with shutters (either rotating blades or flickering LCDs, for example) that alternate the view of the left and right eyes in synchrony with the odd or even fields of the CRT.
Of the above systems, the second is not at all usable with standard home television receivers, the third requires very expensive 'glasses' and may flicker with standard home receivers, and the first produces only strangely tinted monochrome images. Further, none of the systems may be broadcast over standard television for unimpeded viewing without special glasses.

Thus, until now, compatible (i.e., viewable as two-dimensional, without glasses) home reception of 3-D images was not possible. However, a new system, which takes advantage of differential processing of left- and right-eye images in the human perceptual system, delivers a composite image on a standard home television receiver that can be viewed as a normal 2-D picture without glasses. Very inexpensive glasses, with one light and one dark lens, accentuate the differential processing of the image, as viewed by each eye, to produce a 3-D depth effect.

Practical, inexpensive, compatible (with standard TV) 3-D television may now become widespread. In addition to materials specifically produced for the new system (or other 3-D systems), conversion of standard 2-D programs to 3-D format would provide additional product to broadcast using the new compatible system (or for other 3-D projection systems).
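Of the encoding schemes listed above, the tinted-anaglyph case is the simplest to illustrate: one color channel is taken from the left-eye image and the remaining channels from the right-eye image, so tinted lenses route a different view to each eye. The fragment below is a minimal sketch under stated assumptions; the function name and the red/cyan channel assignment (rather than the red/green of the example above) are illustrative choices, not taken from the patent.

```python
import numpy as np

def encode_anaglyph(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Combine a left/right RGB pair (H x W x 3) into one anaglyph frame.

    The red channel comes from the left-eye view; green and blue come
    from the right-eye view, so tinted glasses deliver one view per eye.
    """
    if left.shape != right.shape:
        raise ValueError("left and right frames must have the same shape")
    out = right.copy()
    out[..., 0] = left[..., 0]  # red channel taken from the left-eye image
    return out
```

Viewed through glasses with a red left lens and a cyan right lens, each eye then receives predominantly one of the two source images.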
SUMMARY OF THE INVENTION
Applicant Handal's previous application, No. 479,679, filed Mar. 28, 1983 and now abandoned, relates to a process and apparatus for deriving left- and right-eye pairs of binocular images from certain types of two-dimensional film materials. In particular, the materials must consist of separate foreground and background elements, such as cartoon animation cells and background art. By taking into account the parallax between scenes as viewed by the left and right eyes, two images are prepared where foreground elements are shifted with respect to background elements by an amount that indicates their depth in the third dimension. Two-dimensional motion pictures that consist of a series of single composite images could not be converted to three-dimensional format, by this technique, without first being separated into various background and foreground elements.

Once committed to two-dimensional motion pictures, the separation and depth information for various scene elements, in the third dimension, are lost. Thus, the separation of two-dimensional image sequences into individual image elements and the generation of three-dimensional depth information for each such image element are not simple or trivial tasks, and are the further subject of the instant invention.

In accordance with the invention, standard two-dimensional motion picture film or videotape may be converted or processed, for use with three-dimensional exhibition or transmission systems, so as to exhibit at least some three-dimensional or depth characteristics. Separation of a single 2-D image stream into diverse elements is accomplished by a computer-assisted, human-operated system. (If working from discrete 2-D film sub-components, such as animation elements, the separation step may be omitted.) Depth information is assigned to various elements by a combination of human decisions and/or computer analyses, and resulting images of three-dimensional format are produced under computer control.
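The core operation described above, shifting a separated foreground element with respect to the background by an amount that encodes its depth, can be sketched in a few lines. This is an illustrative reconstruction, not the patent's apparatus: the function name, the boolean-mask representation of a separated element, the simple paste-over compositing, and the wrap-around shift at the frame edges are all simplifying assumptions.

```python
import numpy as np

def make_stereo_pair(background, foreground, mask, disparity):
    """Synthesize a left/right image pair from separated 2-D elements.

    background, foreground: H x W arrays; mask: boolean H x W array
    marking the foreground element's pixels; disparity: total horizontal
    parallax offset in pixels (a larger offset reads as closer).
    """
    def composite(shift):
        frame = background.copy()
        shifted = np.roll(foreground, shift, axis=1)  # horizontal shift (wraps at edges)
        m = np.roll(mask, shift, axis=1)
        frame[m] = shifted[m]  # foreground obscures the background
        return frame

    # opposite half-disparity shifts produce the two eye views
    half = disparity // 2
    return composite(half), composite(-half)
```

A real conversion system would also fill the holes uncovered by the shifted element, as the detailed description discusses.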

BRIEF DESCRIPTION OF DRAWINGS
A method for carrying out the invention is described in the accompanying drawings in which:
FIG. 1 is a diagram illustrating the relationship of a 2-D source film image, left and right image pairs, and a composite 3-D image frame.
FIG. 2 is a schematic diagram of a system for carrying out an implementation of the present invention.
DETAILED DESCRIPTION
An immense amount of standard, 2-D motion picture material exists in the form of film and videotape. In addition, certain materials exist in the form of discrete
2-D image sub-components, such as animated cells and background paintings. As the use of 3-D exhibition and broadcast systems becomes more widespread, the conversion of existing 2-D programs to a format that will exhibit at least some 3-D or depth effects, when used with 3-D systems, is desired.

Extracting individual image elements or 3-D depth information from a 2-D film frame, or synthesizing 3-D information for those elements, entirely by computer equipped with artificial intelligence, is not now practical. Therefore, the embodiment of the invention as described herein employs a high degree of human interaction with the computer. However, as artificial intelligence progresses, a predominantly or completely automated system may become practical and is within the intended scope of the invention.

FIG. 1 shows a frame 20 of a standard 2-D film or video motion picture, consisting of a cross-hatched background plane 21, a large white circle 22 in the mid-ground, and a small black square 23 in the foreground. It is a 2-D representation of original 3-D scene 10 comprising elements 11, 12 and 13, which is not available for direct 3-D photography. After human identification of individual image elements and depth assignment, the computer generates, from frame 10, a pair of coordinated left 30 and right 40 images with backgrounds 31 and 41, circles 32 and 42, and squares 33 and 43 respectively. Note that the relative positions of the square and circle are different in the left and right images; this situation is similar to the parallax that might result between left- and right-eye views if one were to have viewed the original scene 10 directly. Three-dimensional format frame 50 is generated by encoding the information in the left 30 and right 40 image pair, in a manner consistent with any one of a number of existing 3-D systems. The specific operation of these various 3-D systems is not the subject of the instant invention. Alternately, the steps of generating and encoding 3-D information may be combined such that 3-D format frame 50 may be processed directly from 2-D frame 20 without generating left 30 and right 40 image pairs. In either case, 3-D format frame 50 when viewed by human 60 through 3-D glasses 70 is perceived as 3-D scene 80, containing elements 81, 82 and 83, which has at least some of the 3-D characteristics of original 3-D scene 10.

Various systems for the encoding, display, projection, recording, transmission or viewing of 3-D images exist, and new systems may be developed. Specifically, various techniques for specifying, encoding and viewing 3-D information may now, or come to, exist, which do not make use of parallax offset and/or left and right image pairs and/or viewing glasses, or which embody new techniques or changes and improvements to current systems. Further, such systems may integrate information from more than one 2-D source frame 20 into a single resultant 3-D frame 50. The specifics of operation of such systems is not the subject of the instant invention; however, preparation of 2-D program material for such systems is.

The offsets shown for elements 31, 32 and 33 in left frame 30 and elements 41, 42 and 43 in right frame 40 are meant to be illustrative and do not necessarily follow the correct rules for image parallax. In fact, depending upon where viewer attention is meant to be centered, different rules may apply. For example, one technique is to give no parallax offset to far background elements and to give progressively more parallax offset to objects as they get closer. Alternately, attention may be centered in the mid-ground with no parallax offset to mid-range objects, some parallax offset to close-range objects and reverse parallax offset to far-range objects. The particular placement of objects and attention point in the 3-D scene is as much an art as a science and is critical to the enjoyment of 3-D programs and, in any event, is not meant to be the subject of this invention.

FIG. 2 shows a schematic of a system to implement the instant invention. A 2-D film or video image 10 is input to a video monitor 20 and to the scanning portion 41 of frame buffer 40. Video monitor 20 is capable of displaying either the 2-D image being input 10 or the output from display portion 42 of frame buffer 40.

Frame buffer 40 consists of an image scanner 41 which can convert the input image into digital form to be stored in a portion of frame buffer memory section 44, a display section 42 which creates a video image from the contents of a portion of memory section 44, and a computer interface section 43 which allows the computer CPU 50 to read from and write to the memory section 44.

Graphic input tablet 30 and stylus 35 allow the human operator to input position information to the computer 50 which can indicate the outline of individual image elements, choice of a specific image element or part of an image element, depth specification, choice of one of a number of functions offered on a 'menu', or other information. An image cursor can be displayed on monitor 20 by frame buffer 40 to visually indicate the location or status of the input from the tablet 30 and pen 35. Text and numeric information can also be input by the operator on keyboard 55.

Computer CPU 50 is equipped with software which allows it to interpret the commands and input from the human operator, and to process the digitized 2-D information input from 2-D frame 10 into digital 3-D frame information, based on said human commands and input. Said digital 3-D frame information is then output by output interface 60 (which may be similar to frame buffer 40 or of some other design) to a videotape recorder 70 or to a film recorder 75, capable of recording 3-D format frames.

The system as described above operates as follows. A frame of the 2-D program is displayed on the video monitor for viewing by the human operator. With the tablet and stylus the operator outlines various 2-D image areas to indicate to the computer the boundaries of various image elements to be separately processed. (For materials, such as animation components, that already exist as separate elements, the previous stage of the process may be skipped.) Depth position information, in the third dimension, is determined for each image element, by a combination of operator input and computer analysis. Left and right image pairs or a 3-D composite image is processed by the computer, from the 2-D input image, based on the computer software and operator instructions. Depending upon the particular 3-D system to be used, left- and right-image pairs may or may not be the final stage or an intermediate stage or bypassed entirely. Further, for some 3-D systems, information from more than one 2-D source frame may be combined into one 3-D frame (or frame pair). In any event, the final 3-D information is then collected on a videotape or film recorder. The process is then repeated for additional 2-D frames.

Each image element may be given a uniform depth designation which may cause the perception of 'card-
board cut-out' characters. Alternately, different portions of a single image element may be given different depth designations, with the computer interpolating depth coordinates over the entire element. For example, an image element positioned diagonally in a frame may have its right edge designated to be closer than its left edge. Alternately, one feature of an image element, say a person's nose in a close-up, might be designated as being closer than the rest of the image element. In such manner, the computer would be instructed to interpolate and process depth information over the entire image element. Such processing of the image element, in accordance with varying depth information, may result in the stretching, skewing or other distortion of the image element.

Depth interpolation may also be carried out over time, between frames. For example, an image element might be designated to be close in one frame and to be far away in a later frame. The depth of the image element may be interpolated for the frames between the two designated frames. Further, the position of the image element in the two dimensions of the film frame, and the steps of the outline separating the image element from the rest of the source image, may also be interpolated over time, between frames. Linear and non-linear interpolation techniques are well known and may be readily applied.

It is also possible to add random noise to the depth information to eliminate the appearance of flat objects moving in space and to help achieve greater realism.

For a particular image element, depth position may be derived by the computer, alone or with operator input, by measuring the size of the image element in a frame. For example, once a person were outlined, his overall height in the 2-D film frame might be extracted by the computer as an indication of depth distance. Using knowledge, or making assumptions, about the object's real size, and the characteristics of the camera and lens, the distance of the object from the camera may be calculated by applying well known principles of perspective and optics. Alternately, if overall size cannot be easily extracted by the computer, key points might be indicated by the operator for the computer to measure. Comparing the change of size of an image element in several frames will provide information about the movement of the object in the third dimension.

It should be noted that horizontal parallax offsets are, by far, more obvious, due to the fact that our eyes are separated in the horizontal direction. However, vertical offset differences between left- and right-eye views may also be appropriate.

As various image elements are separated and assigned depth values, a situation develops where diverse objects exist in a 'three-dimensional space' within the computer. It should be noted that, in order to display a realistic representation of the entire scene, forwardmost objects must obscure all or part of rearmost objects with which they overlap (except in the case where a forwardmost object were transparent). When generating left- and right-eye views, the pattern of overlap of image elements, and thus the pattern of obscuring of image elements will, in general, be different. Further, for image elements that have been assigned non-uniform depth values (e.g., image elements that are not flat or not in a plane perpendicular to the third dimension) the intersection between these image elements, for the purpose of one obscuring the other, may be non-trivial to calculate. However, there are well known techniques, that have been developed for computer image synthesis, that allow for the sorting, intersection and display of diverse, overlapping 3-D image elements.

As image elements are separated from background scenes or each other, holes may develop. The computer software will compensate for this by filling in missing information in one particular 2-D frame with the equivalent part of the image element from earlier or later frames whenever possible. Alternately, material to 'plug holes' may be created by the operator or by the computer from adjacent areas or may be newly synthesized. Further, as two image elements are separated from each other, the forwardmost of the two may be moved closer to the viewer; hence, it may be appropriate to make it larger. Thus, the increased size may cover the gaps in the rearmost element of the two.

As part of the processing to be performed on the 2-D source image, individual image elements or composite images, additional effects may be programmed into the computer to heighten the sense of depth. For example, shadows cast by one image element on another element or the background may be calculated and inserted; far distant landscape elements may be made hazy or bluish to indicate remoteness; different image elements may be blurred or sharpened to simulate depth-of-field; or, the distance between close objects may be exaggerated to emphasize their separation. There are, of course, other technical or artistic techniques that can be used to indicate depth in an image and which may also be incorporated into the image processing programs and would therefore be part of the invention as described herein. Therefore, the above examples are illustrative and should not be construed as limiting the scope of the invention. Alternately, depth information may be intentionally distorted for effect or for artistic purposes.

Improvements may be made to the final product by including new image elements that were not part of the original 2-D source image. These could include 2-D image elements that are then assigned depth values, 3-D image elements created by 3-D photography and then entered into the computer as left- and right-image pairs, for example, or synthetic 3-D computer generated graphics. In particular, since computer generated image elements can be created with depth information, they can be easily integrated into the overall 3-D scene with vivid effect. For example, a 3-D laser blast could be created by computer image synthesis such that it would in turn obscure and be obscured by other image elements in an appropriate manner and might even be created so as to appear to continue beyond the front of the screen into 'viewer space'.

Animated film components usually have relatively few background paintings, which are kept separate from the animated characters in the foreground. For these, or for live 2-D filmed scenes, once the foreground elements have been separated from the background, the flat 2-D backgrounds may be replaced by 3-D backgrounds. The 3-D backgrounds might consist of computer generated graphics, in which case depth information for the various elements of the background would be available at the time of the background creation. Alternatively, 3-D backgrounds might be created using 3-D photography, in which case depth information for the background elements may be derived, by the computer, from the comparison of the left- and right-image pairs of the 3-D background photographs and applying known image processing and pattern rec-
ognition techniques. Alternately, depth information to create 3-D backgrounds may be specified otherwise by operator input and/or computer processing. Once depth information is available for the various background elements, it may be compared to depth information for the other image elements with which it is to be combined in each frame. Thus, the 3-D background may itself have some depth and, in effect, be a 'set' within which other image elements may be positioned in front of, behind or intersecting various background elements for added realism.

It should be noted that, intended purpose and software algorithms aside, the design and operation of the 3-D conversion system described herein has many similarities with the black and white footage colorization system described in applicant Geshwind's Pat. No. 4,606,625. They are both computer-aided systems which display standard film images to an operator, allow the operator to input information separating various image elements within the frame, allow the operator to specify attributes (color in one case, depth in the other) for the image elements, and cause the computer to process new image frames from the original, based on the operator input. Thus, the processing of 2-D black and white footage to add both color and depth information would be much more efficient than implementing each process separately.

The scope of the instant invention is the conversion or processing of 2-D program material for use with any 3-D exhibition or distribution system now in use or later developed, but not the specific method of operation of any particular 3-D system.

It will thus be seen that the objects set forth above, among those made apparent from the preceding description, are efficiently attained and certain changes may be made in carrying out the above method and in the construction set forth. Accordingly, it is intended that all matter contained in the above description or shown in the accompanying figures shall be interpreted as illustrative and not in a limiting sense.

Now that the invention has been described, what is claimed as new and desired to be secured by Letters Patent is:

1. A method of converting a two-dimensional image frame into a three-dimensional image frame consisting of the steps of:
a. inputing a frame of a two-dimensional image into a computer;
b. specifying at least two individual image elements in the two-dimensional image;
c. separating the two-dimensional image into said image elements;
d. specifying three-dimensional information for at least one of said image elements;
e. processing at least one of said image elements to incorporate said three-dimensional information and create at least one processed image element;
f. generating at least one processed image frame comprising at least one of said processed image elements.
2. A method as in claim 1 wherein said step f results in the generation of a left and right pair of processed image frames.
3. A method as in claim 2 comprising the additional step of:
g. combining said left and right image pair into a single processed image.
4. A method as in claim 2 comprising the additional step of:
g. encoding said left and right processed image pair for viewing, by coloring each of said pair different colors.
5. A method as in claim 2 comprising the additional step of:
g. encoding said left and right processed image pair for viewing, by passing each of said pair through mutually perpendicularly polarized filters.
6. A method as in claim 2 comprising the additional step of:
g. encoding said left and right processed image pair for viewing, by incorporating said image pair into a video signal suitable for displaying each of said pair alternately on a video display.
7. A method as in claim 1, wherein said step f results in a processed image frame such that, when viewed through glasses with one dark and one light lens, 3-dimensional effects are perceived.
8. A method as in claim 1 comprising the additional step of:
g. recording said processed image frame.
9. A method as in claim 7 comprising the additional step of:
g. recording said processed image frame.
10. A method as in claim 1 wherein said steps are applied to successive frames in a motion picture sequence.
11. A method as in claim 2 wherein said steps are applied to successive frames in a motion picture sequence.
12. A product produced by the method described in claim 7.
13. A product produced by the method described in claim 10.
14. A product produced by the method described in claim 11.
15. A product produced by the method described in claim 1.
16. A method as in claim 1 wherein at least one of said processed image elements produced in step e is a shadow element.
17. A method as in claim 1 wherein at least one of said image elements in step e is processed to include additional two-dimensional image information not contained in the original unprocessed two-dimensional image.
18. A method as in claim 17 wherein said additional two-dimensional image information is derived from another image.
19. A method as in claim 1 wherein said processed image elements in step f are combined with at least one additional 3-D image element not derived from the source image to create said processed image frame.
20. A method as in claim 19 wherein said additional 3-D image element is derived from a 3-D photograph.
21. A method as in claim 19 wherein said additional 3-D image element is derived from a computer generated 3-D image.
22. A method as in claim 1 wherein said three-dimensional information for at least one of said image elements in step d is specified only at certain points and is interpolatively derived for other points on said image element.
23. A method as in claim 10 wherein said three-dimensional information for at least one of said image elements in step d is specified only for certain frames
and is temporally interpolated for frames between said certain frames.
24. A method as in claim 10 wherein said specification of at least one of said image elements in step b is specified only for certain frames and is temporally interpolated for frames between said certain frames.
25. A method as in claim 1 wherein random noise is added to the three-dimensional information specified in step d.
26. A method as in claim 1 wherein at least some of said three-dimensional information specified in step d is derived from the measurement of at least one aspect of an image element.
27. A method as in claim 26 wherein said aspect pertains to geometry.
28. A method as in claim 26 wherein said aspect pertains to illumination.
29. A method as in claim 10 wherein at least some of said three-dimensional information specified in step d is derived from the measurement of the change of at least one aspect of an image element in successive frames.
30. A method as in claim 29 wherein said aspect pertains to geometry.
31. A method as in claim 29 wherein said aspect pertains to illumination.
32. A method as in claim 1 wherein said two-dimensional image frame is a black and white image frame, said image elements are black and white image elements, and said processing includes the process of adding color to at least one of said black and white image elements.
33. A method as in claim 32 wherein said steps are applied to successive frames in a motion picture sequence.
34. A product produced by the method of claim 33.
35. An apparatus for converting a two-dimensional image frame into a three-dimensional image frame com-
d. a means for specifying three-dimensional information for said individual image elements;
e. a means for processing said individual image elements to create processed image elements;
f. a means for combining said processed image elements into a processed image sequence;
g. a means for outputing said three-dimensional image sequence;
h. a means for recording said three-dimensional image sequence.
37. A method of converting a two-dimensional image frame into a three-dimensional image frame consisting of the steps of:
a. inputing frames of a two-dimensional image sequence into a computer;
b. specifying at least two individual image elements in the two-dimensional image sequence;
c. separating the two-dimensional images into said image elements;
d. specifying three-dimensional information for at least one of said image elements;
e. processing at least one of said image elements to incorporate said three-dimensional information and create at least one processed image element;
f. generating a sequence of processed image frames comprising at least one of said processed image elements, said generation to be of such a nature so as to exhibit three-dimensional depth characteristics when viewed using a complementary display system.
38. A method as in claim 37 comprising the additional step:
g. transmission of said three-dimensional image sequence.
39. A method as in claim 33 comprising the additional step:
g. recording of said three-dimensional image sequence.
40. A method as in claim 37 wherein said two-

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket