Case 2:21-cv-00040-JRG Document 149-5 Filed 12/03/21 Page 1 of 18 PageID #: 6094

Exhibit T

US008878949B2

(12) United States Patent
     Pryor

(10) Patent No.:      US 8,878,949 B2
(45) Date of Patent:  *Nov. 4, 2014

(54) CAMERA BASED INTERACTION AND INSTRUCTION

(71) Applicant: Gesture Technology Partners, LLC, Sylvania, OH (US)

(72) Inventor: Timothy R. Pryor, Sylvania, OH (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days. This patent is subject to a terminal disclaimer.

(21) Appl. No.: 13/961,452

(22) Filed: Aug. 7, 2013

(65) Prior Publication Data
     US 2014/0028855 A1    Jan. 30, 2014

Related U.S. Application Data

(63) Continuation of application No. 13/459,670, filed on Apr. 30, 2012, now Pat. No. 8,654,198, which is a continuation of application No. 12/891,480, filed on Sep. 27, 2010, now Pat. No. 8,189,053, which is a continuation of application No. 11/376,158, filed on Mar. 16, 2006, now Pat. No. 7,804,530, which is a continuation of application No. 09/568,552, filed on May 11, 2000, now Pat. No. 7,015,950.

(60) Provisional application No. 60/133,671, filed on May 11, 1999.

(51) Int. Cl.
     H04N 5/232   (2006.01)
     G06F 3/01    (2006.01)
     G06F 3/038   (2013.01)
     H04N 5/222   (2006.01)

(52) U.S. Cl.
     CPC .... H04N 5/23296 (2013.01); G06F 3/017 (2013.01); G06F 3/0386 (2013.01); H04N 5/222 (2013.01); H04N 5/232 (2013.01); H04N 5/23219 (2013.01)
     USPC .... 348/211.99; 348/211.4

(58) Field of Classification Search
     CPC .... H04N 5/23238; H04N 5/247; H04N 5/3415
     USPC .... 348/211.4, 211.5, 211.8, 211.9, 222.1, 239
     See application file for complete search history.

(56) References Cited

     U.S. PATENT DOCUMENTS

3,909,002 A    9/1975   Levy
4,219,847 A    8/1980   Pinkney et al.
4,339,798 A    7/1982   Hedges et al.
4,631,676 A   12/1986   Pugh
4,791,589 A   12/1988   Blazo et al.
4,843,568 A    6/1989   Krueger et al.
          (Continued)

Primary Examiner: Tuan Ho
(74) Attorney, Agent, or Firm: Warner Norcross & Judd LLP

(57) ABSTRACT

Disclosed are methods and apparatus for instructing persons using computer-based programs and/or remote instructors. One or more video cameras obtain images of the student or other participant. In addition, images are analyzed by a computer to determine the locations or motions of one or more points on the student. This location data is fed to a computer program which compares the motions to known desired movements, or alternatively provides such movement data to an instructor, typically located remotely, who can aid in analyzing student performance. The invention preferably is used with a substantially life-size display, such as a projection display can provide, in order to make the information displayed a realistic partner or instructor for the student. In addition, other applications are disclosed to sports training, dance, and remote dating.

18 Claims, 7 Drawing Sheets

     U.S. PATENT DOCUMENTS (Continued)

4,908,704 A    3/1990   Fujioka et al.
4,988,981 A    1/1991   Zimmerman et al.
5,008,946 A    4/1991   Ando
5,088,938 A    2/1992   Chan
5,227,986 A    7/1993   Yokota et al.
5,249,053 A    9/1993   Jain
5,297,061 A    3/1994   Dementhon et al.
5,365,597 A   11/1994   Holeva
5,376,796 A   12/1994   Chan et al.
5,388,059 A    2/1995   DeMenthon
5,454,043 A    9/1995   Freeman
5,491,507 A    2/1996   Umezawa et al.
5,534,921 A    7/1996   Sawanobori
5,572,251 A   11/1996   Ogawa
5,581,276 A   12/1996   Cipolla et al.
5,594,469 A    1/1997   Freeman et al.
5,616,078 A    4/1997   Oh
5,624,117 A    4/1997   Ohkubo et al.
5,781,647 A    7/1998   Fishbine et al.
5,781,650 A    7/1998   Lobo et al.
5,828,770 A   10/1998   Leis et al.
5,845,006 A   12/1998   Sumi et al.
5,853,327 A   12/1998   Gilboa
5,878,174 A    3/1999   Stewart et al.
5,904,484 A    5/1999   Burns
5,926,168 A    7/1999   Fan
5,940,126 A    8/1999   Kimura
5,982,352 A   11/1999   Pryor
5,999,840 A   12/1999   Grimson et al.
6,052,132 A    4/2000   Christian et al.
6,098,458 A    8/2000   French et al.
6,108,033 A    8/2000   Ito et al.
6,148,100 A   11/2000   Anderson et al.
6,160,899 A   12/2000   Lee et al.
6,204,852 B1   3/2001   Kumar et al.
6,252,598 B1   6/2001   Segen
6,342,917 B1   1/2002   Amenta
6,346,929 B1   2/2002   Fukushima et al.
6,359,647 B1   3/2002   Sengupta et al.
6,363,160 B1   3/2002   Bradski et al.
6,373,472 B1   4/2002   Palalau et al.
6,442,465 B2   8/2002   Breed et al.
6,508,709 B1   1/2003   Karmarkar
6,529,617 B1   3/2003   Prokoski
6,597,817 B1   7/2003   Silverbrook
6,663,491 B2  12/2003   Watabe et al.
6,750,848 B1   6/2004   Pryor
6,775,361 B1   8/2004   Arai et al.
6,788,336 B1   9/2004   Silverbrook
6,911,972 B2   6/2005   Brinjes
7,489,863 B2   2/2009   Lee
7,564,476 B1*  7/2009   Coughlan et al. ........ 348/14.08

* cited by examiner

[Drawing Sheets 1 through 7 (U.S. Patent, Nov. 4, 2014, US 8,878,949 B2) are not reproduced in this text extraction. Legible block labels on Sheet 3 read "COMPUTER", "POSE ANALYSIS" and "CAMERA CONT." with reference numerals 220, 250, 255 and 265; Sheet 5 shows a block labeled "COMPUTER" with numerals 600 and 660.]

CAMERA BASED INTERACTION AND INSTRUCTION

Method and apparatus are disclosed to enhance the quality and usefulness of picture taking for pleasure, commercial, or other business purposes. In a preferred embodiment, stereo photogrammetry is combined with digital image acquisition to acquire or store scenes and poses of interest, and/or to interact with the subject in order to provide data to or from a computer. Other preferred embodiments illustrate applications to control of display systems.

BACKGROUND

Representative of U.S. patents on digital cameras are U.S. Pat. Nos. 5,534,921 and 5,249,053 and many others, which describe use of matrix array (CCD or otherwise) based cameras to take pictures of humans or other objects. The images taken are generally comprised of 400,000 or more pixels, which are often compressed to smaller record sizes for data storage, for later retrieval and display. Video cameras or camcorders are also increasingly able to take still photographs as well, and record or transmit them to computers.

Aside from exposure control (to keep the light reaching the detector array within the dynamic range of same) and rangefinding (to effect the best lens focus given the object distance in question), there are few cases known to the inventor where the camera taking the picture actually determines some variable in the picture and uses it for the process of obtaining the picture.

One such example, which does not take a picture of humans but rather of data, is exemplified by U.S. Pat. No. 4,791,589, where a certain waveform signature on an oscilloscope is searched for by processing the digital camera image, and when it is seen, the image is stored.

More apropos the function of "Picture Taking" as the general public knows it, and of interest as the primary focus of the instant invention, is U.S. Pat. No. 5,781,650 by Lobo, et al., which describes analysis after the fact of recorded images to determine facial content and thus the age of the subject. This disclosure also alludes to a potential point-and-shoot capability based on the age classification of the individuals whose picture is desired.

There is no picture taking reference known to me based on object position and orientation with respect to the camera, or other objects.

SUMMARY OF THE INVENTION

High resolution digital still cameras employing matrix photodetector array chips to scan the image produced by the camera lens are now commonplace, and will be even more so in a few years as chips and memories become very inexpensive and pixel density approaches 2000x2000 pixels, rivaling photographic film. Even today, camcorders having 700x500 pixel image chips are common for video based data and stills.

This invention is aimed at improvements in utilization of these cameras and others which make use of a computer based camera's ability to analyze, in real time if desired, the images obtained. Indeed, a picture taking system may be composed of a combination of cameras, some used for purposes other than the recording of the picture proper.

It is a goal of the invention to provide a method for taking pictures when certain poses of objects, sequences of poses, motions of objects, or any other states or relationships of objects are represented. It is also a goal to allow this to be done in a self-timer-like mode, when desired scene situations or specific dates or other circumstances exist. In some cases, information as to what is desired may be entered remotely, even over the internet or radio telephone.

It is also a goal of the invention to provide a method for selecting, from a digital or other picture memory, pictures obtained when certain pre-programmed poses of objects, sequences of poses, or relationships of objects are represented.

It is a further goal of the invention to provide means by which users engaged in digital camera based activities, or other activities, using a computer can have their pictures taken.

It is a still further goal to provide all such functions in a 2D or 3D context, and using simple equipment capable of widespread use.

It is another goal of the invention to feed back data to a subject or subjects having his or her, or their, picture taken, in order that they assume another pose or engage in another activity, or juxtaposition of subject positions.

While this invention is primarily aimed at the general picture taking public at large, it is realized that commercial photographers and cine-photographers, for example in the coming trend to digital "Hollywood" movie making, may benefit greatly from the invention herein, as it potentially allows more cost effective film production by giving the director the ability to expose the camera to the presence of masses of data, but only saving or taking that data which is useful, and if desired, to signal the creation of further data based on data obtained. All this with little or no human intervention as desired, thus saving on the cost of direction, film crews, and other labor or venue related costs.

DRAWINGS DEPICTING PREFERRED EMBODIMENTS OF THE INVENTION

FIG. 1 illustrates means by which users engaged in digital camera based activities, or other activities, using a computer can have their pictures taken.

FIGS. 2A-2D illustrate a method for taking pictures when certain pre-programmed poses of objects, sequences of poses, or relationships of objects are represented.

FIG. 3 illustrates a self-timer-like mode, or when specific dates or other circumstances exist, including a system embodiment for taking pictures in shopping malls or other locales and providing instant print or other hardcopy capability (e.g. on a tee shirt).

FIG. 4 illustrates means to provide all such functions in a 2D or 3D context, using simple equipment capable of widespread use. Various retroreflective artificial target configurations are also disclosed.

FIG. 5 illustrates a method to feed back data to a subject having his or her picture taken, in order that the subject assumes another pose or engages in another activity.

FIG. 6 illustrates a commercial version of the invention useful for police departments and real estate agents, among others.

FIG. 7 illustrates an embodiment of the invention used for photography of stage performances.

FIG. 8 illustrates an embodiment of the invention used for ballet instruction and other teaching and interaction activities, also with remotely located instructors or players.

EMBODIMENTS OF THE INVENTION

FIG. 1

Illustrated in FIG. 1 of the invention is means by which users engaged in digital camera based activities, or other activities, using a computer can have their pictures taken, and in this context, FIG. 1 resembles that of co-pending referenced application 9 above. A single camera, or a set, such as a stereo pair, is employed to see portions of an object, such as a person, a part of a person such as a hand, leg, foot, fingers, or head, and/or to view datums on an object, portion of an object, or an object held by the person or with which the person interacts. In addition, multiple persons and objects can be seen.

Where a single camera is employed, 2D measurements of object location relative to the camera (x and y perpendicular to the camera axis) are all that is possible, unless datums of known shape or spacing are used on the object viewed. Where a stereo pair or more of cameras is employed, 3D (xyz) data of a single point can be provided, for example retro-reflector 50 on the head 52 of person 51. In both cases, where 3 or more datums are used on an object, 6 degree of freedom data can be obtained, allowing object orientation in 3 angular axes as well as range in 3 axes to be obtained. With two or more cameras, such 3D data may also be obtained using other features of objects, such as edges of arms and the like, using known photogrammetric techniques.

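To make the stereo geometry concrete, the following is a minimal sketch of parallel-axis triangulation for a single datum such as retro-reflector 50, assuming two cameras with a known baseline and focal length. The function name, pinhole camera model, and numeric values are illustrative assumptions, not taken from the patent.

```python
def triangulate_point(u_left, v_left, u_right, baseline_m, focal_px, cx, cy):
    """Parallel-axis stereo sketch: recover (x, y, z) of one datum, in metres,
    from its pixel coordinates in the left and right images.
    All parameter names and the camera model are illustrative assumptions."""
    disparity = u_left - u_right              # pixel disparity between the two views
    if disparity <= 0:
        raise ValueError("datum must lie in front of both cameras")
    z = focal_px * baseline_m / disparity     # range along the camera axis
    x = (u_left - cx) * z / focal_px          # lateral offset from the optical axis
    y = (v_left - cy) * z / focal_px          # vertical offset from the optical axis
    return x, y, z

# Example: cameras 101 and 102 half a metre apart, 800 px focal length,
# principal point at (320, 240); the reflector appears 40 px apart in the
# two images, giving a range of 10 m.
print(triangulate_point(360.0, 200.0, 320.0, baseline_m=0.5,
                        focal_px=800.0, cx=320.0, cy=240.0))
```

With three or more datums located this way on a rigid object, a subsequent pose-fitting step over the known datum geometry yields the 6 degree of freedom data referred to above.
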
The cameras used may also be used to take pictures of an object, or another specialized camera may be used for that purpose in conjunction with those used to determine the location of object features. Both examples are illustrated in this application.

As shown in this figure, two cameras 101 and 102 are used as a stereo pair, with each camera located at opposite sides of a TV monitor 105, used for either computer or television display or both. This is a desirable configuration commercially and is discussed in the co-pending application references above. In this particular case, an additional camera 110 is shown in the middle of the other two, said added camera used for picture taking, internet telephony and/or other purposes. An optional auxiliary LED light source 115 (or 116 or 117) for illuminating a user 60 or other object is also shown.

All three cameras are connected to the computer 130 by means of a USB (Universal Serial Bus) daisy chain, or IEEE 1394 FireWire connections (faster). Each is accessed, as needed, for position and orientation determination, or picture taking.

Even using a single camera in two dimensions (as is normal today), some position and orientation data, or sequences of same, can be achieved using modern image processing techniques. (See for example the invention disclosed in U.S. Pat. No. 4,843,568 of Myron Krueger.) However, accurate sensing and control of systems such as the cameras herein is difficult today with processors cost effective enough to be used by the public at large, and artificial target augmentation of image points is often desirable.

It is thus possible using the invention to be taking pictures of users of interactive computer systems for whatever purpose. This allows one to automatically capture images of children at play, for example with a computer system such as a computer game. It also enables many other functions which are described below. And it can be used in the field, where the computer, stereo position sensing, and picture taking camera may be co-located together in the same housing.

It is noted that where retro-reflectors are used (as opposed to choosing, for example, less contrasting datums, such as natural object features like edges of fingers or clothing features, or targets such as colored dots), then each of the two cameras for stereo location determination needs lights, substantially co-located with the camera axes, to illuminate the retro-reflectors. These lights can alternatively provide general lighting for any other camera or cameras to use in taking photographs or other purposes.

It is noted that cameras 101 and 102 need not have the image of the retro-reflector or other discernible target be in precise focus; indeed it is often helpful to have some blur due to defocusing so as to aid sub-pixel position solution of datum location. If the LEDs or other light sources are in the near infrared, and the camera lenses are focused in the visible, this occurs naturally, unless the lens is also near-infrared chromatic corrected.

An optional laser pointer (or other suitable illumination source), comprised of a diode laser and collimating optics 150, is also usable with the invention to illuminate object portions from which 3D data is desired (such as the neck region of person 51 as shown), or in the simpler case to designate which areas of a picture are to be focused on, zoomed in on, or transmitted or recorded, with or without consideration of 3-D position data of the object. This can be fixed as shown, or optionally hand held by the user, for example in the left hand (dotted lines), and used by him or her to designate the point to be measured in 3D location (see also references above). In addition, a person taking pictures, such as a photographer, can, without looking through the viewfinder of the camera, point to a point on the subject, which is then dealt with by the camera, typically by focusing the lens system such that the point is in the desired state of focus (usually, but not necessarily, when the laser spot on the subject appears smallest in diameter and/or of highest contrast). Such a system is particularly useful for cameras with wide fields of view, or those mounted on pan-tilt mechanisms, where the mechanism can also be activated to position the camera axis to take the picture with the laser spot, for example, centered in the camera field.

In the laser designated case, it is generally the laser spot or other indication on the surface that is imaged (although one can also instruct, for example using voice recognition software in computer 130 inputted via voice activated microphone 135, the camera processor to obtain and store, if desired, the image of the area around the spot projected onto the object as well, or alternatively), and if the spot is desired, it is often useful that cameras 101 and 102 have band-pass filters which pass the laser wavelength, and any LED illumination wavelengths used for retro-reflector illumination for example, but block other wavelengths to the extent possible at low cost. It is noted that the discrimination in an image can also be made on color grounds, i.e. with red diode lasers and red LEDs the system can analyze the image areas containing reds in the image, for example, with the knowledge that the answer can't lie at any shorter wavelengths (e.g. green, yellow, blue).

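As a rough software analogue of that color-based discrimination, the sketch below keeps only pixels whose red channel clearly dominates green and blue; the thresholds and function name are illustrative assumptions rather than values from the patent.

```python
import numpy as np

def red_dominant_mask(rgb_image, min_red=120, margin=40):
    """Return a boolean mask of pixels plausibly lit by a red laser spot or red LED.

    rgb_image: H x W x 3 uint8 array. Thresholds are illustrative only; a real
    system would tune them to the laser/LED wavelengths and exposure in use.
    """
    r = rgb_image[..., 0].astype(np.int16)
    g = rgb_image[..., 1].astype(np.int16)
    b = rgb_image[..., 2].astype(np.int16)
    # Red must be bright AND exceed both shorter-wavelength channels by a margin,
    # mirroring the rule that the answer cannot lie at shorter wavelengths.
    return (r >= min_red) & (r - g >= margin) & (r - b >= margin)
```
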
By using two cameras 101 and 102, a superior ranging system for the laser spot location on the subject results, since the baseline distance "BL" separating the cameras for triangulation based ranging purposes can be sufficient to provide accurate measurement of distance to the object.

FIGS. 2A-2D

As we begin to consider the apparatus of FIG. 1, it is clear one could do much more to enhance picture taking ability than heretofore described and contained in the prior art. And it can be done with apparatus capable of field use.

FIGS. 2A-2D, for example, illustrate a method for taking pictures when certain pre-programmed or otherwise desired poses of objects, sequences of poses, or relationships of objects are represented. No such ability is available to photographers today.

Consider still camera system 201, patterned after that of FIG. 1 and comprising 3 cameras and associated image scanning chips. The central camera, 202, is for picture taking and has high resolution and color accuracy. The two cameras on either side, 210 and 211, may be lower resolution (allowing lower cost, and higher frame rate, as they have fewer pixels to scan in a given frame time), with little or no accurate color capability, as they are used simply to see object positions or special datum positions on objects (which may be distinguished, however, by taught colors, for example as taught in some of my co-pending inventions).

Cost-wise the distinction between cameras is important. Today, low cost CMOS chips and lenses capable of providing the stereo measurements as described above are $15 or less. High quality CCD color detector arrays and lenses for high quality photo images are over $100, and in many cases $1000 or more.

An optical viewfinder 215 is one of many ways to indicate to the user what scene information is being gathered by the camera system. The user can, in this invention, specify with a viewfinder based readout the area of the field that is desired. Use of the viewfinder in this manner, whether looked through or displayed on a screen, is for example an alternative to designating an area on the actual object using a laser pointer for the purpose.

The camera system 201 further contains a computer 220 which processes the data from cameras 210 and 211 to get various position and/or orientation data concerning a person (or other object, or persons plural, etc.). Integral light sources as described in FIG. 1 above may also be provided, such as LED arrays 240 and 245 and xenon flash 246.

In general, one can use the system to automatically "shoot" pictures, for example, when any or all of the following occur, as determined by the position and orientation determining system of the camera of the invention (a sketch of such trigger logic follows the list):

1. Subject in a certain pose.
2. Subject in a sequence of poses.
3. Portion of subject in a sequence of poses (e.g. gestures).
4. Subject or portion(s) in a specific location or orientation.
5. Subject in position relative to another object or person. For example, this could be bride and groom kissing in a wedding, boy with respect to cake on a birthday, and sports event sequences of every description (where the camera can even track the object datums in the field and, if desired, adjust shutter speed based on relative velocity of camera to subject).
6. Ditto all of the above with respect to both persons in certain poses or gesture situations.
7. When a subject undertakes a particular signal comprising a position or gesture, i.e. a silent command to take the picture (this could be programmed, for example, to correspond to raising one's right hand).

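The following is a minimal sketch of how such programmed shoot conditions might be checked each frame. The pose labels, distance threshold, and gesture name are hypothetical stand-ins for whatever the pose and position analysis actually reports; none of these identifiers come from the patent.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameState:
    pose_label: str                 # e.g. "arms_raised", from a pose classifier
    subject_to_object_m: float      # range between tracked subject and a reference object
    gesture: Optional[str] = None   # e.g. "right_hand_raised" silent command

def should_shoot(state: FrameState,
                 wanted_pose: str = "arms_raised",
                 proximity_limit_m: float = 0.15,
                 shoot_gesture: str = "right_hand_raised") -> bool:
    """Return True when any of the programmed criteria above is met this frame."""
    return (state.pose_label == wanted_pose                     # criteria 1 and 4
            or state.subject_to_object_m <= proximity_limit_m   # criterion 5
            or state.gesture == shoot_gesture)                  # criterion 7

# Sequences of poses (criteria 2, 3 and 6) would be handled by keeping a short
# history of FrameState values and matching it against a taught sequence.
```
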
In addition, it is noted that the invention acts as a rangefinder, finding range to the subject, and even to other subjects around the subject, or to all parts of interest on an extensive subject. This allows a desired lens focus to be set based on any or all of this data, as desired. It also allows a sequence of pictures to be taken of different objects or object portions, at different focal depths, or focus positions. The same holds true for exposure of these locations as well.

It is also possible to use the above criteria for other purposes, such as determining what to record (beyond the recording that is implicit in taking pictures), or in determining what to transmit. The latter is important vis-a-vis internet activity, where available internet communication bandwidth limits what can be transmitted (at least today). In this case, video telephony with the invention comprehends obtaining only those images you really care about in real time. So instead of transmitting low resolution image data at 20 frames a second, you can transmit, say, 5 (albeit asynchronously gathered) frames of high resolution preferred data. (This doesn't solve flicker problems, but it does mean that poor quality or extraneous material isn't sent.) Criteria such as degree of image motion blur or image focus can also be used in making transmission decisions.

FIG. 2B illustrates a block diagram showing a pose analysis software or hardware module 250 analyzing processed image data (for example utilizing camera image data processed by VisionBloks software from Integral Vision Corp.) from the computer 220 (which may be the same physical microprocessor, such as an Intel Pentium II in a Dell Inspiron 3500 laptop computer, or different) and determining from same when a certain pose, for example, has been seen. When this occurs, a signal is sent to the camera control module 255 to hold the last frame taken by camera 202, and to display it to the photographer, digitally store it, or transmit it to someone else, or to another data store or display. Such transmission can be by data link, internet, cell phone, or any other suitable means.

Another criterion could be that two or more preselected poses were seen one after the other, with a time delay between them, also pre-selected if desired.

FIG. 2C illustrates a specific case whereby a point on one person, say hand 260 of man 265 having head 271, is determined, and a picture is taken by camera system 201 of the invention when this point comes within a distance of approximately 6 inches (or any other desired amount, including contact, i.e. zero distance) from another person or object, say the head 270 of woman 275. To obtain the data, one can look for hand or head indications in the image using known machine vision techniques, and/or in a simpler case put a target marker such as colored triangle 285 or other type on the hand or head or both and look for it.

The use of the natural features of the subjects' heads, which are distinguishable by shape and size in a known field containing two persons, is now illustrated. For example, image morphology or template matching in the image field of the solid state TV camera 202 can be used to distinguish the head shapes from background data and data concerning the rest of the features such as hands, etc. of subjects 265 and 275 (or conversely, hand shapes if desired can be found and heads excluded, or the hand of the right person versus the head of the left, and so forth).

As shown in FIG. 2D, the image field 287 of camera 202 after processing contains the two head images, 290 and 291, spaced a distance "W". When W is not within a tolerance D, the picture is not taken; whereas if the heads are close enough, within D as illustrated in dotted lines, the picture is taken.

Criteria as mentioned can include proximity of other parts of the body, or objects associated with the subjects (which themselves can be objects). In addition, the motion or relative motion of objects can be the criterion. For example, one could program the device to take the picture when on two successive frames the condition shown in FIG. 2D exists, where the heads are apart in frame 1, but closer in frame 2 (probably corresponding to a movement, say, of the boy to kiss the girl). Clearly other sequences are possible as well, such as movement taking place in several frames followed by a sequence of frames in which no movement occurs. Other means to determine motion in front of the camera can also be used in this context, such as ultrasonic sensors.

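A sketch of the FIG. 2D test, including the two-frame "approach" variant just described, might look like the following. Head centroid extraction is assumed to have been done upstream, and the tolerance and example values are illustrative only.

```python
import math

def head_spacing(c1, c2):
    """Pixel distance W between two head centroids given as (x, y) tuples."""
    return math.hypot(c1[0] - c2[0], c1[1] - c2[1])

def within_tolerance(curr_centroids, tolerance_px):
    """Static FIG. 2D rule: shoot when the spacing W falls within tolerance D."""
    return head_spacing(*curr_centroids) <= tolerance_px

def approach_trigger(prev_centroids, curr_centroids, tolerance_px):
    """Two-frame rule: heads apart in frame 1 but within D in frame 2."""
    return (head_spacing(*prev_centroids) > tolerance_px
            and head_spacing(*curr_centroids) <= tolerance_px)

# Example: heads 120 px apart, then 40 px apart, with D = 50 px, so the
# picture is taken on the second frame.
print(approach_trigger(((100, 200), (220, 200)), ((150, 200), (190, 200)), 50))
```
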
It is also noted that the actual position or movement desired can be "taught" to the computer 220 of the picture taking system. For example, a boy and girl in a wedding could approach each other and kiss beforehand. The sequence of frames of this activity (a "gesture" of sorts by both parties) is recorded, and the speed of approach, the head positions, and any other pertinent data determined. When the photographer thinks the picture is right, the computer of the camera system is instructed to take the picture; for example, it could be at the instant when, after a suitable approach, two head images become joined into one, easily recognizable with machine vision processing software under uniform background conditions. Then in the future, when such a condition is reached in the camera field of view, pictures are taken and stored, or transmitted. This allows a camera to free run whose image field, for example, takes in the head table at a wedding party, taking only the shots thought to be of most interest. Numerous conditions might be programmed in, or taught. Another, at the same party, would be anyone at the head table proposing a toast to the bride and groom, with arm and glass raised. If video is taken, it might be taken from the point at which the arm rises, until after it comes down. Or, with suitable voice recognition, when certain toast type words are heard, for example.

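One way to realize that teach-in behavior is sketched below, under the assumption that head spacing (or any other single measured feature) is the quantity being taught; the class name, tolerance, and teach/match interface are hypothetical, not from the patent.

```python
class TaughtTrigger:
    """Teach-in sketch: record the feature value (e.g. head spacing in pixels)
    at the instant the photographer marks as 'right', then fire whenever a
    live frame comes within a tolerance of that taught value. Approach speed
    or full pose sequences could be taught in the same way."""

    def __init__(self, tolerance: float = 10.0):
        self.taught_value = None
        self.tolerance = tolerance

    def teach(self, feature_value: float) -> None:
        """Called once during the rehearsal, at the chosen instant."""
        self.taught_value = feature_value

    def matches(self, feature_value: float) -> bool:
        """Called every live frame; True means take, store, or transmit the picture."""
        if self.taught_value is None:
            return False
        return abs(feature_value - self.taught_value) <= self.tolerance
```
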
Application to "3-D" Pictures

Where it is desired to take "3-D" pictures, it can be appreciated that each camera, 210 and 211, can take images of the scene in place of camera 202, and that both cameras 210 and 211 outputs can be stored for later presentation in a 3D viewing context, using known display techniques with appropriate polarized glasses or switchable LCD goggles, for example. In this case the camera outputs can serve double duty if desired, each both recording picture data as well as determining position of one or more points on the object or objects desired.

In addition, or alternatively, one can use in this 3D picture case the camera 202 (or even a stereo camera pair in place of 202) as a means for determining position and orientation independently from the stereo picture taking cameras.

If not used for immediate position information, camera 202 does not have to be digital and could employ film or other media to record information.

desired effect resulting in a picture). The effect desired can be changed in midstream to adjust for changing wants as well, by changing the program of the computer (which could be done using hardware switches, inserting a disc, or otherwise entered as a command). In addition, as mentioned above, the gesture or pose desired can be taught to the system, by first photographing a variety of acceptable positions or sequences, and putting bounds on how close to these will be accepted for photographing.

A specialized case is shown in FIG. 3, a self taking instant picture or printout device for use in a shopping mall kiosk or other venue. In this case two sweethearts 300 and 310 are on a bench 315 in front of the digital or other camera 320. When the computer 330 detects from processing the image (or images) of the invention that their faces are in close proximity (for example using the centroid of mass of their heads as the position indicator, or even facial features such as described in the Lobo et al. patent reference), the computer then instructs the camera to record the picture. A push button or other selector on the device allows the subjects to select what criteria they want, for example when their heads are together for 5 seconds or more, or not together, or hands held, or whatever. Or when their faces are within a certain distance criterion, such as one inch.

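The "heads together for 5 seconds or more" criterion can be viewed as a dwell-time condition. The sketch below assumes a per-frame face-distance estimate and frame interval are available from the kiosk's image processing; the class name and default values are illustrative assumptions.

```python
class DwellTrigger:
    """Fire once the measured face-to-face distance has stayed within a limit
    for a chosen dwell time (e.g. within about one inch for 5 seconds)."""

    def __init__(self, max_distance_m: float = 0.025, dwell_s: float = 5.0):
        self.max_distance_m = max_distance_m
        self.dwell_s = dwell_s
        self.elapsed_s = 0.0

    def update(self, face_distance_m: float, dt_s: float) -> bool:
        """Call once per processed frame; returns True when the picture should be taken."""
        if face_distance_m <= self.max_distance_m:
            self.elapsed_s += dt_s
        else:
            self.elapsed_s = 0.0   # proximity broken; restart the dwell timer
        return self.elapsed_s >= self.dwell_s
```
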
Alternatively, camera 320 may be a video camera and recorder which streams in hundreds or even thousands of frames of image data, and the selection of a group is made automatically by the invention in rapid fashion afterwards, with the subjects selecting their prints from the pre-selected (or taught as above) images as desired. Or the machine itself can make the final selection from the group, sort of as a random slot machine for
