(19) United States
(12) Patent Application Publication    (10) Pub. No.: US 2005/0151743 A1
     Sitrick                           (43) Pub. Date: Jul. 14, 2005

(54) IMAGE TRACKING AND SUBSTITUTION SYSTEM AND METHODOLOGY FOR AUDIO-VISUAL PRESENTATIONS

(76) Inventor: David H. Sitrick, Highland Park, IL (US)

    Correspondence Address:
    SITRICK & SITRICK
    8340 N LINCOLN AVENUE SUITE 201
    SKOKIE, IL 60077 (US)

(21) Appl. No.: 11/045,949

(22) Filed: Jan. 28, 2005

Related U.S. Application Data

(62) Division of application No. 09/723,169, filed on Nov. 27, 2000, now abandoned.

Publication Classification

(51) Int. Cl.7: G06T 15/70; G09G 5/00
(52) U.S. Cl.: 345/473; 345/629; 345/624; 345/636; 345/670; 345/474

(57) ABSTRACT

The present invention relates to a system and method for processing a video input signal providing for tracking a selected portion in a predefined audiovisual presentation and integrating selected user images into the selected portion of the predefined audiovisual presentation.
[Representative drawing, FIG. 8: tracking subsystem and frame database/storage subsystem (reference numerals 800, 810, 811, 815, 820, 830, 840, 850, 860, 870).]
[Drawing sheet 1 of 8: FIG. 1 (reference numerals 100-196).]
[Drawing sheet 4 of 8: FIG. 5 (composite and mask subsystem, output) and FIG. 6 (images 671-677, output).]
[Drawing sheet 5 of 8: FIG. 7, tracking subsystem 700 with outputs 720 position information, 730 rotation/orientation information, 740 mesh geometry information, 750 video composite information, and 760 other data; FIG. 8, tracking subsystem 830 and database/storage subsystem (800, 820).]
[Drawing sheet 6 of 8: FIG. 9A, tracking subsystem and region of interest; FIG. 9B, database of previously identified poses/expressions; FIG. 9C, attribute/feature recognition and identification; FIG. 10, frame differencing & motion detection subsystem.]
[Drawing sheet 7 of 8: FIG. 11, final composited output 1101, storage, video input 1115 (1105); FIG. 12, final composited output 1201, video input, storage.]
[Drawing sheet 8 of 8: FIG. 13, video 1315, wire frame models, final composite output 1399, display (1344).]
IMAGE TRACKING AND SUBSTITUTION SYSTEM AND METHODOLOGY FOR AUDIO-VISUAL PRESENTATIONS

RELATED APPLICATIONS
[0001] This application is a divisional application of co-pending U.S. patent application Ser. No. 09/723,169, filed Nov. 27, 2000.
FEDERALLY SPONSORED RESEARCH OR DEVELOPMENT

[0002] Not Applicable.
BACKGROUND OF THE INVENTION
[0003] This invention relates to predefined video and audiovisual presentations such as movies and video games, and more particularly to a system and process for image tracking and smooth integration substitution of user-created images into a predefined video or audiovisual presentation, including, but not limited to, a character image tracking system and methodology for smooth integration of user-created video graphics into a predefined video, movie, or game system, etc. The system provides for the utilization of a user selected visual image as a pre-selected character segment, such that the user selected visual image is incorporated into the audiovisual presentation of the movie or video game in place of a tracked image within the predefined presentation.
[0004] Subsequent to the invention of U.S. Pat. No. 4,521,014, video games have been created which utilized predefined digitized images in the video game which supplement the otherwise cartoon-like character and imagery of the game. Additionally, digital and analog video data have been merged with video games and movies to get broadcast quality video for certain aspects of the video display.
[0005] It is therefore an object of the present invention to provide a system which tracks an image within the predefined presentation, utilizes an image generated by an external source (of video and/or audio and/or computer generated), and integrates that image into, and as part of, a pre-existing audiovisual work (such as from a video game system or a movie or animation) in place of the tracked image, thereby utilizing the user's image in the video game play or movie, or as a synthetic participating user image in the predefined audiovisual presentation.
[0006] It is an additional object to provide a system and methodology for orderly tracking of a selected portion of the predefined presentation and integration of the user selected or created visual image, or images, into the predefined audiovisual presentation.
[0007] It is a further object of the present invention to provide various means for selecting, tracking, and substituting portions of the predefined audiovisual presentation with the user's selected visual image.
[0008] User image integration into a predefined audiovisual presentation has had limited usage, such as in video games. For example, some amusement parks provide video entertainment by playing old movie clips incorporating select audience members. A live camera captures the audience member in front of a blue background. The blue color is filtered out of the signal from the audience member camera and the signal is combined with the video signal of the old movie clip. This gives the impression that the audience member is acting in the old movie clip. All of this is typically done in real-time.
[0009] A problem with this approach is that a complete set-up is needed (a video camera, a blue screen, a compositing computer system, etc.) and the incorporation of the audience member is crude in that the audience member's image overlays the movie clip and is not blended into the movie. Using this approach, there can be no realistic interaction between the audience member and the cast in the movie clip. Further, there is no continuity in the integration within the presentation and there is no tracking for substitution. There is a resulting need for an entertainment system that facilitates realistically integrating a user's image into a video or audiovisual presentation.
SUMMARY OF THE INVENTION
[0010] An audiovisual source provides audio and video signals received by a controller for integration into an audiovisual presentation. The controller analyzes the audio and video signals and modifies the signals to integrate the user image into the audiovisual presentation. This enables the user image to participate in the audiovisual presentation as a synthetic actor.
[0011] In accordance with one aspect of the present invention, a user selected image is selectively integrated into a predefined audiovisual presentation in place of a tracked portion of the predefined audiovisual presentation. A user can create a video or other image utilizing any one of a plurality of input device means. The user created image is provided in a format and through a medium by which the user created or selected image can be communicated and integrated into the predefined audiovisual presentation. The tracking and integration means provide for tracking and mapping the user image data into the predefined audiovisual presentation structure such that the user image is integrated into the presentation in place of the tracked image.
[0012] The user image can be provided by any one of a number of means, such as by original creation by the user by any means (from audio analysis to a graphics development system, by user assembly of predefined objects or segments, by digitization scan of an external object such as of a person by video camera or of a photograph or document (by a scanner, etc.), or supplied by a third party to the user). The user image creation system creates a mappable (absolute or virtual) link of the user defined images for integration into other graphics and game software packages, such as where the user defined or created visual images are utilized in the video presentation of a movie or of the video game as a software function, such as one or more of the pre-selected character imagery segment(s) associated with the user's play of the game, or as a particular character or other video game software function in the game (e.g., hero, villain, culprit, etc.), and/or a particular portion and/or perspective view of a particular character, such that one or more of the user visual images and/or sounds is incorporated into the audiovisual presentation and play of the resulting video game.
[0013] An analysis system analyzes the signals associated with the selected portion of the predefined audiovisual presentation and associates them with the user selected images, and selectively tracks the selected portion to substitute therefor the data signals for the user selected images, whereby the user selected image is associated with the selected portion so that the user selected image is incorporated into the otherwise predefined audiovisual presentation.
[0014] These and other aspects and attributes of the present invention will be discussed with reference to the following drawings and accompanying specification.
BRIEF DESCRIPTION OF THE DRAWINGS

[0015] FIG. 1 is a system block diagram of the present invention;

[0016] FIG. 2 represents a system block diagram of an alternate embodiment of the present invention;

[0017] FIG. 3 is a system block diagram of another alternate embodiment of the present invention;

[0018] FIG. 4 is a system block diagram of another alternate embodiment of the present invention;

[0019] FIG. 5 is a system block diagram of a user image video processing and image integration subsystem of the present invention;

[0020] FIG. 6 shows a system block diagram of an alternate embodiment of the user image video processing and integration subsystem of the present invention;

[0021] FIG. 7 is a block diagram of a tracking subsystem of a preferred embodiment of the present invention;

[0022] FIG. 8 is a block diagram of a tracking subsystem of another preferred embodiment of the present invention;

[0023] FIG. 9A is a representation of a region of interest of a preferred embodiment of the present invention;

[0024] FIG. 9B is a representation of a database as in another preferred embodiment of the present invention;

[0025] FIG. 9C is a representation of a reference object as in a preferred embodiment of the present invention;

[0026] FIG. 10 is a representation of a frame difference as in another preferred embodiment of the present invention;

[0027] FIG. 11 is a detailed block diagram of a preferred embodiment of the system of the present invention comprising a compositing means within a three dimensional (3D) graphics engine;

[0028] FIG. 12 is a detailed block diagram of a preferred embodiment of the system of the present invention comprising a compositing means within a frame buffer; and

[0029] FIG. 13 is a detailed block diagram of a preferred embodiment of the system of the present invention implemented with a general purpose computer performing the compositing.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
[0030] While this invention is susceptible of embodiment in many different forms, there is shown in the drawings, and will be described herein in detail, specific embodiments thereof, with the understanding that the present disclosure is to be considered as an exemplification of the principles of the invention and is not intended to limit the invention to the specific embodiments illustrated.
[0031] FIG. 1 is a system block diagram of the present invention, showing a user image video processing and integration subsystem 100. Coupled to the subsystem 100 is an external source of program content 110 and an external source of user image content 130. The external source of program content 110 is further comprised of other program data 115 and program video 120. In the figure, representations of two people, a first person 123 and a second person 127, are visible in the program video 120. The external source of user image content 130 is further comprised of other user data 132 and user image data 135; the user image data 135 is further comprised of a user specified image 137. In the figure, 137 appears as a single image of a face. The subsystem 100 processes the sources 110 and 130, producing the output content 170. The output content 170 is comprised of other output data 180 and output video 190. The other output data 180 is further comprised of data from the other program data 115 output as 182, data from the other user data 132 output as 184, and processed data produced by the subsystem 100 output as data 187. The output video 190 consists of a processed version of the program video 120 selectively processed by the subsystem 100 such that the representation 123 has been replaced by the user specified image 137, producing the output 194. The input image 127 is unmodified by the system and output as representation 196 in the output video 190.
[0032] Of note is that not all data present in other program data 115 or other user data 132 is necessarily present in output other data 180. Further, data generated by the subsystem 100 or processed by the subsystem 100 may be additionally output within the other output data 180.
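By way of illustration only, the content structures recurring in FIGS. 1 through 4 (program content as other data plus video, user image content as other data plus image data, output content as other data plus video) can be sketched as simple containers. The Python names below are hypothetical and are not part of the patent:

    from dataclasses import dataclass, field
    from typing import Any, List

    @dataclass
    class ProgramContent:              # external source 110
        other_data: dict               # other program data 115
        video_frames: List[Any]        # program video 120

    @dataclass
    class UserImageContent:            # external source 130
        other_data: dict               # other user data 132
        image_data: Any                # user image data 135 (e.g., face 137)

    @dataclass
    class OutputContent:               # output content 170
        other_data: dict = field(default_factory=dict)         # 180
        video_frames: List[Any] = field(default_factory=list)  # 190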
[0033] FIG. 2 represents a system block diagram of an alternate embodiment of this invention. With respect to FIG. 2, there is a user image video processing and integration subsystem 200. Coupled to said subsystem is an external source of program content 210 and a plurality of external sources of user image content 230, 240, and 250. The external source of program content 210 is further comprised of other program data 215 and program video 220. In the figure, the program video 220 contains representations of three persons 222, 225, and 227. The first external source of user image content 230 is comprised of other user data 232 and user image data 235. The user image data 235 is further comprised of user specified image 237, indicated as a representation of a face. External source of user image content 240 is comprised of other user data 242 and user image data 245. The user image data 245 is further comprised of user specified image 247, representing a different face. In an analogous manner, external source of user image content 250 is further comprised of other user data 252 and user image data 255. The user image data 255 is further comprised of a user specified image 257, representing a still different face.
[0034] The subsystem 200 processes the inputs 210, 230, 240, and 250, producing the output content 270. The output content 270 is comprised of other output data 280 and output video 290. The other output data 280 is further comprised of data elements 281, representing data originally supplied to the subsystem by the external source of program content 210 as other program data 215. Additionally, output 283 represents other user data 232 supplied to the subsystem via external source of user image content 230. Output 285 represents other user data 242 supplied to the subsystem via external source of user image content 240, and in an analogous manner, output 287 represents other user data 252 supplied to the subsystem via external source of user image content 250. Also output as part of other data 280 are data elements 289, the elements 289 being data generated by or processed by the subsystem 200. The figure shows representations of persons 222, 225, and 227 being selectively processed by the subsystem 200, producing representations 292, 295, and 297, respectively. In the illustrated example, representation 222 has been replaced by the user specified image 247. Representation 227 has been replaced by the user specified image 237. In the illustrated example, representation 225 is not modified by the subsystem and is output as representation 295. In the illustrated example, the user specified image 257 is not used. This figure shows that not all of the external sources of user image content are necessarily used simultaneously or continuously. At specific times in the operation of the subsystem 200, selected ones of the external sources of user image content 230, 240, and 250 may be used to produce the outputs 270.
[0035] FIG. 3 is a system block diagram of another alternate embodiment of the present invention. FIG. 3 shows a first user image video processing and integration subsystem 300 and a second user image video processing and integration subsystem 370. The first subsystem 300 accepts an external source of program content 310 and a plurality of external sources of user image content 320, 324, and 328. The external source of program content 310 is further comprised of other program data 311 and program video 312. The program video has representations of persons 315, 316, and 317. The external source of user image content 320 is further comprised of other user data 321 and user image data 322. User image data 322 is shown to be comprised of a user specified image 323. In an analogous manner, external sources of user image content 324 and 328 are comprised of other user data 325 and 329 and user image data 326 and 330, each said user image data further comprised of a user specified image 327 and 331, respectively. Inputs 310, 320, 324, and 328 are supplied to the subsystem 300, producing output content 330. Output content 330 is comprised of other output data 335 and output video 340. The output video 340 is further comprised of representations of persons 345, 346, and 347. The output content 330 is coupled to the second subsystem 370 as an external source of program content. Additionally coupled to the second subsystem 370 are a plurality of new external sources of user image content 350, 360 comprised respectively of, and in a manner analogous to previous examples, other user data 352, 362 and user image data 354, 364, comprised of user specified images 357, 367. The output content 330 and the new inputs 350 and 360 are all coupled to the subsystem 370, producing output content 380. The output content 380 is further comprised of other output data 385 and output video 390. The output video 390 is further comprised of representations of persons 395, 396, and 397. As shown in the figure, the processing performed by the first subsystem 300 selectively replaces representations 315 and 317 with the user supplied images 327 and 323, producing respectively representations 345 and 347. Representation 316 is not modified by the subsystem 300 and is output as representation 346.
[0036] The second subsystem 370 then accepts representations 345, 346, and 347 and performs further processing. The further processing in the example illustrated selectively replaces the representation 345 with the user specified image 357, producing an output representation 395. The representation 346 is output unmodified as representation 396, and the representation 347 is output unmodified as representation 397. Elaborating further on the other output data 335 and 385, it should be noted that data element 336, part of other data output 335, can be, and in this example is, discarded by the second subsystem 370. Additionally, the subsystem 370 produces or synthesizes or processes additional output data 387, as well as coupling selected portions of the other user data 352 and 362, respectively, as the outputs 386. FIG. 3 shows that the output 330 of a first subsystem 300 may be used as an input to a second subsystem 370, wherein any processing performed by the first subsystem 300 is subsequently and serially additionally processed by the second subsystem 370.
[0037] FIG. 4 is a system block diagram of another alternate embodiment of the present invention. In FIG. 4, user image video processing and integration subsystem 400 has coupled to it an external source of program content 405, an external source of user image content 407, and an optional plurality of additional external sources of user image content 408. The subsystem 400 produces output content 410 comprised of other output data 415 and output video 420, in this example having representations of people 425, 426, and 427. The output content 410 is coupled to second level user image video processing and integration subsystems 430 and 450. Each of the subsystems 430 and 450 has coupled to it respective external sources of user image content 440 and 460, and an optional additional plurality of respective external sources of user image content 449 and 469. The subsystems 430 and 450 produce outputs 470 and 480, respectively. As shown in the figure, the output content 470 comprises other output data 472 and output video 473, the output video 473 further comprised of the representations 474, 475, and 476. In an analogous manner, output content 480 is comprised of other output data 482 and output video 483, said output video comprised of representations 484, 485, and 486. It should be noted that, in this example, representation 427 passes unmodified through both subsystems 430 and 450, producing respectively representations 476 and 486. However, representation 426 is only passed unmodified through subsystem 430, producing representation 475. In the illustrated example of the representation 426 being processed by the subsystem 450, the user specified image 467 is used to provide the representation 485 in the output content. Further of note is that representation 425 is processed by the subsystem 430 using the user specified image 447, producing the representation 474.
[0038] External source of user image content 440 is further comprised of other user data 442 and user image data 444, said user image data further comprised of a user specified image 447. In an analogous manner, external source of user image content 460 is further comprised of other user data 462 and user image data 464, said user image data further comprised of user specified image 467.
[0039] In general, of note in FIG. 4 is that each level of user image video processing and integration subsystem that operates on program content produces a derivative version of that program content, which can then be further processed by additional and serially subsequent user image video processing and integration subsystems. As illustrated in FIG. 4, this technique can be extrapolated indefinitely by including subsequent processing stages 490, consisting of additional user image video processing and integration subsystems 493, each including additional external sources of user image content 495 and producing output contents 497. Thus, the number of simultaneously created derivative works of the original program content 405 is limited only by the number and arrangement of processing subsystems and the number and assignment of the external sources of user image content to those subsystems.
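To make this serial chaining concrete, the following is a minimal sketch under assumed names (chain, Stage, and Frame are illustrative, not the patent's): each integration subsystem is modeled as a callable that maps program frames to derivative frames, and stages are applied one after another.

    from typing import Callable, List, Sequence

    Frame = List[List[int]]                       # stand-in for a video frame
    Stage = Callable[[List[Frame]], List[Frame]]  # one integration subsystem

    def chain(stages: Sequence[Stage], program: List[Frame]) -> List[Frame]:
        # Each stage consumes the derivative output of the one before it,
        # as with subsystems 300 -> 370 (FIG. 3) and 400 -> 430/450 (FIG. 4).
        for stage in stages:
            program = stage(program)
        return program

    # Example with a trivial stage; a real stage would substitute a tracked
    # region with a user image rather than brighten pixels.
    brighten: Stage = lambda frames: [[[p + 1 for p in row] for row in f]
                                      for f in frames]
    derivative = chain([brighten, brighten], [[[0, 0], [0, 0]]])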
[0040] FIG. 5 is a system block diagram of a user image video processing and image integration subsystem. The processing subsystem 500 is comprised of a transform mesh subsystem 510, a wrap texture subsystem 520, and a composite and mask subsystem 530. The transform mesh subsystem is coupled to the wrap texture subsystem via the bus 515; the wrap texture subsystem is coupled to the composite and mask subsystem via the bus 525. The output of the user image video processing and integration subsystem 540 is comprised of the output of the composite and mask subsystem 580 and the output of the transform mesh subsystem 590. The inputs to the subsystem 500 are comprised of other program data 550 and program video 560. The other program data 550 is further comprised of various kinds of information, including position information 552, rotation and orientation information 554, mesh geometry information 556, and mask information 558. Other program data 550 is coupled to the transform mesh subsystem 510. Additionally, mask information 558 is coupled to the composite and mask subsystem 530. The program video 560 is also coupled to the composite and mask subsystem 530. An external source of user image content 570 is coupled to the wrap texture subsystem 520. In FIG. 5, the external source of user image content is shown representative of user image data comprising a texture map. The operation of the system shown in FIG. 5 is to use the position, rotation and orientation, and mesh geometry information present in the external program content to transform the mesh geometry information in the subsystem 510, producing a transformed mesh output on buses 515 and 590. The transformed mesh is supplied to the wrap texture subsystem 520, where the texture map 570 is applied to the transformed mesh, producing a rendered image output on bus 525. The rendered image supplied to the composite and mask subsystem is then composited or combined with the program content 560 and masked by the mask data 558, producing a video output 580. The use of the transform mesh subsystem coupled with the wrap texture subsystem allows the subsystem 500 to recreate the appearance of the user from virtually any orientation or position by mapping the texture map onto the transformed mesh geometry. The compositing and masking operation replaces a selected portion of the program video 560 with the rendered image 525.
[0041] The transform mesh operation is a straightforward process documented in numerous texts on computer graphics and easily implemented on a general purpose computer programmed to do the task. The mesh geometry primarily consists of coordinates of points which may be used to describe polygons, or triangles, or a wire frame representation, or patches based on B-splines or NURBS, all of which can be used to describe the 3-dimensional geometry of all or a portion of a user's body or head. Once this geometric description is created, the transformation process is very straightforward: the system takes the coordinates of the points that define those various entities and produces transformed versions that are correctly rotated, positioned, and have perspective or aspect ratio or field of view operations applied to them. The equations and example programs for implementing the transform mesh subsystem's functions on a general purpose computer are published in such places as Foley and van Dam's Computer Graphics, and in standard computer graphics texts by Hearn or Watt.
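As an illustration only (not the patent's implementation), the rotate/position/project step described above can be sketched in a few lines of Python with NumPy; the function and parameter names are assumptions, and a simple pinhole-perspective projection stands in for the perspective, aspect ratio, and field-of-view operations:

    import numpy as np

    def transform_mesh(vertices: np.ndarray, R: np.ndarray,
                       t: np.ndarray, f: float) -> np.ndarray:
        # vertices: (N, 3) mesh points; R: 3x3 rotation; t: translation;
        # f: focal length for the perspective divide (z assumed > 0).
        cam = vertices @ R.T + t             # rotate, then position
        return f * cam[:, :2] / cam[:, 2:3]  # x' = f*x/z, y' = f*y/z

    # Example: a unit triangle rotated 30 degrees about z, 5 units away.
    theta = np.radians(30)
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    mesh = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
    projected = transform_mesh(mesh, R, t=np.array([0.0, 0.0, 5.0]), f=2.0)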
[0042] Similarly, once the mesh geometry has been transformed appropriately, the process of taking a texture map and wrapping that texture map around the transformed mesh is a straightforward process that is documented in the same literature. The wrap texture subsystem can also easily be implemented on a general purpose computer programmed to do the task. In addition to just about any standard commodity personal computer available from the usual vendors (Apple, IBM, etc.), there are also special purpose hardware and hardware/software combinations that are sold by vendors to accommodate doing the operations of the transform mesh subsystem and the wrap texture subsystem in a hardware assisted manner to produce a very cost effective and rapid result. These devices fall generically in the category of 3-D accelerators, commonly sold for personal computers by vendors such as Apple, Matrox, Diamond, S3, and a multitude of others.
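For illustration, a naive software version of the wrap-texture step for a single triangle might look like the sketch below, assuming the transformed mesh carries per-vertex (u, v) texture coordinates in [0, 1] and that the triangle is non-degenerate; a real system would hand this to a 3-D accelerator, and every name here is hypothetical:

    import numpy as np  # image and texture are NumPy arrays

    def texture_triangle(image, texture, tri_xy, tri_uv):
        # Rasterize triangle tri_xy into image, sampling texture at
        # barycentrically interpolated tri_uv coordinates.
        h, w = image.shape[:2]
        th, tw = texture.shape[:2]
        (x0, y0), (x1, y1), (x2, y2) = tri_xy
        denom = (y1 - y2) * (x0 - x2) + (x2 - x1) * (y0 - y2)
        for y in range(h):
            for x in range(w):
                a = ((y1 - y2) * (x - x2) + (x2 - x1) * (y - y2)) / denom
                b = ((y2 - y0) * (x - x2) + (x0 - x2) * (y - y2)) / denom
                c = 1.0 - a - b
                if a < 0 or b < 0 or c < 0:
                    continue              # pixel lies outside the triangle
                u = a * tri_uv[0][0] + b * tri_uv[1][0] + c * tri_uv[2][0]
                v = a * tri_uv[0][1] + b * tri_uv[1][1] + c * tri_uv[2][1]
                image[y, x] = texture[int(v * (th - 1)), int(u * (tw - 1))]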
[0043] The operations of the composite and mask subsystem are easily performed by a general purpose computer, insofar as the process necessary to implement the operation is documented in the literature, including Keith Jack's Video Demystified sections on luma- and chroma-keying. However, the amount of data that has to be processed generally implies that this step needs to be performed by a hardware assisted or special purpose circuit. Such circuits are readily available from a variety of vendors, including, for example, the Ultimatte compositing and masking subsystem which is available from Ultimatte Corporation.
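A minimal sketch of the per-pixel composite-and-mask operation follows, with a chroma-key mask in the spirit of the blue-screen example in the background section; the array shapes, names, and tolerance value are assumptions for illustration (frames are H x W x 3 floats in [0, 1], the mask is H x W in [0, 1], and soft mask edges blend the two sources):

    import numpy as np

    def composite(program: np.ndarray, rendered: np.ndarray,
                  mask: np.ndarray) -> np.ndarray:
        m = mask[..., np.newaxis]        # broadcast mask over color channels
        return m * rendered + (1.0 - m) * program

    def chroma_mask(frame: np.ndarray, key=(0.0, 0.0, 1.0),
                    tol: float = 0.3) -> np.ndarray:
        # 1.0 where the frame is far from the key color, 0.0 where it matches.
        dist = np.linalg.norm(frame - np.asarray(key), axis=-1)
        return (dist > tol).astype(float)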
[0044] The creation of the texture map is not a necessary part of this invention. The texture map may simply be supplied to the invention from an external source. Systems to assist the generation of the texture map are available from a number of different vendors that specialize in scanning of three dimensional objects. The scanning process can take anywhere from fractions of a second to tens of seconds with most commercial systems, and the resulting texture maps are generally produced once and then used over and over.
[0045] FIG. 6 shows a system block diagram of an alternate embodiment of the user image video processing and integration subsystem. In FIG. 6, the subsystem 600 is comprised of a transform model subsystem 610, an image selection subsystem 620, a morphing subsystem 630, and a composite and masking subsystem 640. A collection of other program data 650 is further comprised of position information 652, rotation and orientation information 654, mesh geometry information 656, and masking information 658. The other program data 650 is supplied to the transform model subsystem 610, producing outputs 612, 614, and 616. Output 612 is coupled to the image selection subsystem 620. Database 670 is comprised of a series of individual images 671, 672, 673, 674, 675, 676, 677. These images are supplied to the image selection subsystem 620. On the basis of information supplied to the image selection subsystem 620 via the bus 612, one or more of the plurality of images 670 are then supplied via bus 625 to the morph subsystem 630.
[0046] The morph subsystem 630 uses image data supplied on 625 and transform model information supplied on 614 to produce a morphed image on output 632, which is then supplied to the composite and mask subsystem 640. The program video 660 is also supplied to the composite and mask subsystem 640, and the mask 658 is also supplied to the composite and mask subsystem. The composite and mask subsystem then produces an output 680 consisting of output video. Additionally, data outputs from the transform model subsystem and from the morph subsystem are output respectively on 616 and 634, producing output bus 690. Outputs 680 and 690 collectively are the output content bus 645. In this alternate embodiment, one or more of the images in the database 670 are morphed or otherwise blended together by the morph subsystem to produce an output image 632 representative of what the mesh geometry information would look like. This is a less computationally intensive way of producing the appearance of a rendered texture map wrapped around a mesh geometry, at the expense of having to store a plurality of individual images in the database 670. The selection of best images can be performed by a general purpose computer running a very simple algorithm, such as selecting the best fit image, or the two physically adjacent best fit images, and blending the two images together, or performing a morph operation on them, wherein the output image is akin to a linear interpolation between the two input images from the database. The transform model subsystem can be implemented on a general purpose computer, using algorithms previously disclosed as being part of standard computer graphics texts.
Similarly ...
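The select-and-blend path just described can be illustrated with the following sketch, which picks the two database poses nearest a tracked orientation angle and cross-dissolves between them; a full morph would also warp feature points, and all names and the pose parameterization here are assumptions:

    import numpy as np

    def select_and_blend(angle, poses, images):
        # poses: orientation angle stored with each database image;
        # images: equal-sized arrays such as the images 671-677 of FIG. 6.
        order = np.argsort([abs(p - angle) for p in poses])[:2]
        i, j = sorted(order, key=lambda k: poses[k])
        span = poses[j] - poses[i]
        w = 0.0 if span == 0 else (angle - poses[i]) / span
        w = float(np.clip(w, 0.0, 1.0))
        # Linear interpolation approximating the appearance at `angle`.
        return (1.0 - w) * images[i] + w * images[j]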
