Sengupta et al.

(10) Patent No.: US 6,359,647 B1
(45) Date of Patent: Mar. 19, 2002
`
`(54) AUTOMATED CAMERA HANDOFF SYSTEM
`FOR FIGURE TRACKING IN A MULTIPLE
`CAMERA SYSTEM
`
`(75) Inventors: Soumitra Sengupta, Stamford, CT
`(US); Damian Lyons, Putnam Valley,
`NY (US); Thomas Murphy,
`Manchester, NH (US); Daniel Reese,
`Landisville, PA (US)
(73) Assignee: Philips Electronics North America Corporation, New York, NY (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/131,243
(22) Filed: Aug. 7, 1998

(51) Int. Cl.7 ........................... H04N 7/18
(52) U.S. Cl. ........................... 348/154; 348/143; 348/153; 348/159; 348/169
(58) Field of Search .................... 348/143, 152, 153, 154, 159, 169; 382/103
`
`(56)
`
`References Cited
`U.S. PATENT DOCUMENTS
`... 34.0/534
`4,511,886 A * 4/1985 Rodriguez ...
`5,164.827. A 11/1992 Paff ........................... 348/143
`
`5,699,444 A 12/1997 Palm .......................... 382/106
`5,729,471. A
`3/1998 Jain et al. ..................... 348/13
`5,745,126 A
`4/1998 Jain et al. ..................... 348/42
`6,002.995 A * 12/1999 Suzuki et al................ 702/188
`FOREIGN PATENT DOCUMENTS
`O529317 A1 * 3/1993 ............ HO4N/7/18
`EP
`SR A. to
`City,
`E.
`WO97/04428
`2/1997 ......... GO8E/13/196
`WO
`* cited by examiner
`Primary Examiner Vu Le
`(57)
`ABSTRACT
`The invention provides for the automation of a multiple
`camera System based upon the location of a target object in
`Egili. First Eric
`about throughout multiple cameras potential fields of view.
`When the figure approaches the bounds of a Selected cam
`era's field of view, the system determines which other
`
`camera's potential field of view contains the figure, and
`
`adjusts that other camera's actual field of view to contain the
`figure. When the figure is at the bounds of the selected
`camera's field of View, the System automatically Selects the
`other camera. The System also contains predictive location
`determination algorithms. By assessing the movement of the
`figure, the System Selects and adjusts the next camera based
`upon the predicted Subsequent location of the figure.
`18 Claims, 9 Drawing Sheets
`
[Drawing sheets 1 through 9, reproduced here as placeholders:]

[Sheet 1 of 9, FIG. 1: camera handoff system for a secured area, showing the cameras, switch, and the location determinator with figure tracking system, predictor, and secured area database.]
[Sheet 2 of 9, FIG. 2: graphic representation (floor plan) of the secured area.]
[Sheet 3 of 9, FIG. 3a: field of view polygon for camera 102 (vertices 221-229).]
[Sheet 4 of 9, FIG. 3b: field of view polygon for camera 103.]
[Sheet 5 of 9, FIG. 3c: field of view polygon for camera 104 (vertices 240-256).]
[Sheet 6 of 9, FIG. 4: three-dimensional field of view polyhedron.]
[Sheet 7 of 9, FIGS. 5a-5c: association between a figure in a camera image and the physical representation of the secured area.]
[Sheet 8 of 9, FIG. 6a: flowchart for determining the target location P from a camera image, either by triangulating the lines of sight of two cameras or by ranging along a single camera's line of sight, followed by processing (filtering, prediction) of the target position.]
[Sheet 9 of 9, FIG. 6b: flowchart for selecting and adjusting a camera, for a user-selected alternate camera or a remote alarm, by marking each camera whose field of view contains the target coordinates, selecting a marked camera, adjusting it to the line of sight to the target, and updating the figure tracking system.]
`AUTOMATED CAMERA HANDOFF SYSTEM
`FOR FIGURE TRACKING IN A MULTIPLE
`CAMERA SYSTEM
`
`BACKGROUND OF THE INVENTION
`
`1. Field of the Invention
This invention relates to a system for controlling multiple video cameras. This invention allows for an automated camera handoff for selecting and directing cameras within a multi-camera system, as might be used in a security system or a multi-camera broadcasting system. The automation is provided by tracking a figure within the image from an individual camera, coupled with an area representation of the fields of view of each of the other cameras.
`2. Description of Related Art
Security systems for airports, casinos, and the like typically employ a multitude of cameras that provide images of selected areas to a control station. The images from each of these cameras, or a subset of these cameras, are displayed on one or more monitors at the control station. The operator of the control station is provided an ability to select any one of the cameras for a display of its image on a primary monitor, and, if the camera is adjustable, to control the camera's field of view. Such control systems are also utilized for selecting from among multiple cameras at an event being broadcast, for example, multiple cameras at a sports arena or studio.
The selection and control of the cameras is typically accomplished by controlling a bank of switches, or by selecting from amongst a list of cameras on a computer terminal. To view a particular area, the operator selects the camera associated with that area. If the camera is adjustable, the operator subsequently adjusts the selected camera's field of view by adjusting its rotation about a vertical axis (pan) or a horizontal axis (tilt), or its magnification (zoom). The entire span of view of an adjustable camera is termed herein the camera's potential field of view, whereas the view resulting from the particular pan, tilt, and zoom settings is termed the camera's actual field of view.
Image processing algorithms are available which allow for the identification of a particular pattern, or figure, within an image, and the identification of any subsequent movement of that figure. Coupled with a security control system, such image processing algorithms allow for the automated adjustment of a camera so as to keep the figure in the center of the camera's actual field of view. When the figure travels beyond the potential field of view of the camera, the operator selects another camera whose potential field of view contains the figure at its new location, adjusts the camera, identifies the figure in the camera's actual field of view, and thereafter continues the automated tracking until the figure exits that camera's potential field of view.
In the conventional camera selection scenario, the operator must be familiar with the layout of the secured area, as well as the correspondence between the displayed image and this layout. That is, for example, if a figure is seen exiting through one of several doorways, the operator must be able to quickly determine to which other area that particular doorway leads, and must further determine which camera includes that other area.
`
`SUMMARY OF THE INVENTION
It is an object of this invention to provide for the automation of a multiple camera system, so as to provide for a multi-camera figure tracking capability. The preferred system will allow for the near continuous display of a figure as the figure moves about throughout the multiple cameras' potential fields of view.
The approximate physical location of a figure is determined from the displayed image, the identification of the figure within this image by the figure tracking system, and a knowledge of the location and actual field of view of the camera producing the displayed image. If the figure exits a selected camera's field of view, another camera containing the figure within its field of view is selected. The bounds of each camera's potential field of view are contained in the system. The system determines which cameras' potential fields of view contain the figure by determining whether the figure's determined physical location lies within the bounds of each camera's field of view.
In a preferred embodiment, when the figure approaches the bounds of the selected camera's potential field of view, the system determines which other camera's potential field of view contains the figure, then adjusts that other camera's actual field of view to contain the figure. When the figure is at the bounds of the selected camera's field of view, the system automatically selects another camera and communicates the appropriate information to the figure tracking process to continue the tracking of the figure using this other camera.
In a further embodiment of the invention, the system also contains predictive location determination algorithms. By assessing the movement of the figure, the selection and adjustment of the next camera can be effected based upon the predicted subsequent location of the figure. Such predictive techniques are effective for tracking a figure in a secured area in which the cameras' fields of view are not necessarily overlapping, and also for selecting from among multiple cameras containing the figure in their potential fields of view.
By associating the displayed image to the physical locale of the secured area, the operator need not determine the potential egress points from each camera's field of view, nor need the operator know which camera or cameras cover a given area, nor which areas are adjacent each other.
In another embodiment, the selection of a target is also automated. Security systems often automatically select a camera associated with an alarm, for the presentation of a view of the alarmed area to the operator. By associating a target point with each alarm, for example the entry way of a door having an alarm, the system can automatically select and adjust the camera associated with the alarm to contain that target point, and identify the target as those portions of the image which exhibit movement. Thereafter, the system will track the target, as discussed above.
`BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 illustrates an example multi-camera security system in accordance with this invention.

FIG. 2 illustrates an example graphic representation of a secured area with a multi-camera security system, in accordance with this invention.

FIGS. 3a, 3b and 3c illustrate example field of view polygons associated with cameras in a multi-camera security system, in accordance with this invention.

FIG. 4 illustrates an example three dimensional representation of a secured area and a camera's field of view polyhedron, in accordance with this invention.

FIGS. 5a, 5b and 5c illustrate an example of the association between a figure in an image from a camera and the
physical representation of the secured area, in accordance with this invention.

FIGS. 6a and 6b illustrate example flowcharts for the automated camera handoff process in accordance with this invention.
`DESCRIPTION OF THE PREFERRED
`EMBODIMENTS
FIG. 1 illustrates a multi-camera security system. The system comprises video cameras 101, 102, 103 and 104-106 (shown in FIG. 2). Cameras 101 and 102 are shown as adjustable, pan/tilt/zoom, cameras. The cameras 101, 102, 103 provide an input to a camera handoff system 120; the connections between the cameras 101, 102, 103 and the camera handoff system 120 may be direct or remote, for example, via a telephone connection. In accordance with this invention, the camera handoff system 120 includes a controller 130, a location determinator 140, and a field of view determinator 150. The controller 130 effects the control of the cameras 101, 102, 103 based on inputs from the sensors 111, 112, the operator station 170, and the location determinator 140 and field of view determinator 150.
An operator controls the security system via an operator's station 170 and controller 130. The operator typically selects from options presented on a screen 180 to select one of the cameras 101, 102, 103, and controls the selected camera to change its line of sight, via pan and tilt adjustments, or magnification factor, via zoom adjustments. The image from the selected camera's field of view is presented to the operator for viewing via the switch 135.
The optional alarm sensors 111, 112 provide for automatic camera selection when an alarm condition is sensed. Each alarm sensor has one or more cameras associated with it; when the alarm is activated, an associated camera is selected and adjusted to a predefined line of sight, and the view is displayed on the screen 180 for the operator's further assessment and subsequent security actions.
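A minimal sketch, in Python, of how such an alarm-to-camera association might be represented. The sensor and camera identifiers, the preset pan/tilt/zoom values, and the function names are illustrative assumptions, not part of the original disclosure; the camera-pointing and display-selection operations are left as callables supplied by the surrounding system.

from dataclasses import dataclass

@dataclass
class Preset:
    camera_id: int
    pan_deg: float    # predefined pan angle toward the alarmed area
    tilt_deg: float   # predefined tilt angle
    zoom: float       # predefined magnification

# Each alarm sensor (111, 112, ...) has one or more associated camera presets.
ALARM_PRESETS = {
    111: [Preset(camera_id=101, pan_deg=40.0, tilt_deg=-10.0, zoom=2.0)],
    112: [Preset(camera_id=102, pan_deg=185.0, tilt_deg=-5.0, zoom=1.5)],
}

def on_alarm(sensor_id, point_camera, select_for_display):
    """When an alarm is sensed, adjust and select an associated camera."""
    for preset in ALARM_PRESETS.get(sensor_id, []):
        point_camera(preset.camera_id, preset.pan_deg, preset.tilt_deg, preset.zoom)
        select_for_display(preset.camera_id)   # route this camera's image to the screen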
The field of view determinator 150 determines the field of view of each camera based upon its location and orientation. Non-adjustable camera 103 has a fixed field of view, whereas the adjustable cameras 101, 102 each have varying fields of view, depending upon the current pan, tilt, and zoom settings of the camera. To facilitate the determination of each camera's field of view, the camera handoff system 120 includes a database 160 that describes the secured area and the location of each camera. The database 160 may include a graphic representation of the secured area, for example, a floor plan as shown in FIG. 2. The floor plan is created and entered in the control system when the security system is installed, using for example Computer Aided Design (CAD) techniques well known to one skilled in the art. Each wall and obstruction is shown, as well as the location of each of the cameras 101-106.
The location determinator 140 determines the location of an object within a selected camera's field of view. Based upon the object's location within the image from the selected camera, and the camera's physical location and orientation within the secured area, the location determinator 140 determines the object's physical location within the secured area. The controller 130 determines which cameras' fields of view include the object's physical location and selects the appropriate camera when the object traverses from one camera's field of view to another camera's field of view. The switching from one camera to another is termed a camera handoff.
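The data flow just described can be sketched as a small control loop. This is a sketch only; the function names and their signatures are assumptions standing in for the location determinator, field of view determinator, and switch, and are not taken from the original disclosure.

def handoff_loop(get_tracked_image_position, to_physical_location,
                 cameras_containing, switch_to, selected_camera):
    """One iteration: locate the target and hand off to another camera if needed.

    get_tracked_image_position(camera_id) -> (x, y) of the figure in the image, or None
    to_physical_location(camera_id, image_xy) -> (X, Y) in the secured-area plan
    cameras_containing(point) -> camera ids whose field of view contains the point
    switch_to(camera_id) -> select that camera's image for display and tracking
    """
    image_xy = get_tracked_image_position(selected_camera)
    if image_xy is None:                      # figure no longer visible in this camera
        return selected_camera
    target = to_physical_location(selected_camera, image_xy)
    candidates = cameras_containing(target)
    if selected_camera not in candidates and candidates:
        switch_to(candidates[0])              # camera handoff
        return candidates[0]
    return selected_camera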
In a preferred embodiment, the camera handoff is further automated via the use of the figure tracking system 144 within
`15
`
`25
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`US 6,359,647 B1
`
`4
the location determinator 140. In FIG. 2, line segments P1 through P5 represent the path of a person (not shown) traversing the secured areas. The operator of the security system, upon detecting the figure of the person in the image of camera 105, identifies the figure to the figure tracking system 144, typically by outlining the figure on a copy of the image from camera 105 on the video screen 180. Alternatively, automated means can be employed to identify moving objects in an image that conform to a particular target profile, such as size, shape, speed, etc. Camera 105 is initially adjusted to capture the figure, and the figure tracking techniques continually monitor and report the location of the figure in the image produced from camera 105. The figure tracking system 144 associates the characteristics of the selected area, such as color combinations and patterns, to the identified figure, or target. Thereafter, the figure tracking system 144 determines the subsequent location of this same characteristic pattern, corresponding to the movement of the identified target as it moves about the camera's field of view.
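The disclosure does not specify a particular pattern-matching algorithm. As one common approach, a color histogram of the selected region can be matched against candidate windows in later frames. The sketch below uses plain NumPy; the window size, bin count, and scanning step are illustrative assumptions, not the patent's method.

import numpy as np

def color_signature(patch, bins=8):
    """Characterize a region by a normalized RGB color histogram."""
    hist, _ = np.histogramdd(patch.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.ravel()
    return hist / (hist.sum() + 1e-9)

def locate_target(frame, signature, win_h, win_w, step=8, bins=8):
    """Return the (row, col) of the window whose histogram best matches the target."""
    best, best_pos = -1.0, None
    for r in range(0, frame.shape[0] - win_h, step):
        for c in range(0, frame.shape[1] - win_w, step):
            cand = color_signature(frame[r:r + win_h, c:c + win_w], bins)
            score = np.minimum(cand, signature).sum()   # histogram intersection
            if score > best:
                best, best_pos = score, (r, c)
    return best_pos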
Manual figure tracking by the operator may be used in addition to, or in lieu of, the automated figure tracking system 144. In a busy scene, the operator may be better able to distinguish the target. In a manual figure tracking mode, the operator uses a mouse or other suitable input device to point to the target as it traverses the image on the display 180.
If camera 105 is adjustable, the controller 130 adjusts camera 105 to maintain the target figure in the center of the image from camera 105. That is, camera 105's line of sight and actual field of view will be adjusted to continue to contain the figure as the person moves along path P1 within camera 105's potential field of view. Soon after the person progresses along path P2, the person will no longer be within camera 105's potential field of view.
In accordance with this invention, based upon the determined location of the person and the determined field of view of each camera, the controller 130 selects camera 106 when the person enters camera 106's potential field of view. In a preferred embodiment that includes a figure tracking system 144, the figure tracking techniques will subsequently be applied to continue to track the figure in the image from camera 106. Similarly, the system in accordance with this invention will select camera 103, then camera 102, then camera 104, and then camera 102 again, as the person proceeds along the P3-P4-P5 path.
To effect this automatic selection of cameras, the camera handoff system 120 includes a representation of each camera's location and potential field of view, relative to each other. For consistency, the camera locations are provided relative to the site plan of the secured area that is contained in the secured area database 160. Associated with each camera is a polygon or polyhedron outlining each camera's potential field of view. FIG. 3a illustrates the polygon associated with camera 102. FIG. 3b illustrates the polygon associated with camera 103. Camera 102 is a camera having an adjustable field of view, and thus can view any area within a full 360 degree arc, provided that it is not blocked by an obstruction. Camera 103 is a camera with a fixed field of view, as represented by the limited view angle 203. Camera 102's potential field of view is the polygon bounded by vertices 221 through 229. Camera 103's field of view is the polygon bounded by vertices 230-239. As shown, the field of view polygon can include details such as the ability to see through passages in obstructions, such as shown by the vertices 238 and 239 in FIG. 3b. Also associated with each camera is the location of the camera, shown for example as 220, 230, 240 in FIGS. 3a, 3b, 3c. The polygon
representing the field of view of camera 104 is shown in FIG. 3c, comprising vertices 240 through 256. As shown in FIG. 3c, the field of view polygon can omit details, as shown by the use of vertices 244-245, omitting the actual field of view vertices 264-265. The level of detail of the polygons is relatively arbitrary; typically, one would provide the detail necessary to cover the maximum surveillance area within the secured area. If one area is coverable by multiple cameras, the need is minimal for identifying the fact that a particular camera can also view that area by viewing through a doorway. Conversely, if the only view of an area is through such a doorway, the encoding of the polygon to include this otherwise uncovered area may be worthwhile. Similarly, although an unobstructed view of a camera is infinite, polygon bounds can be defined to merely include the area of interest, as shown for example in FIG. 3c, where the bounds 249-250 and 253-254 are drawn just beyond the perimeter of the area being secured.
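A field-of-view polygon of this kind can be tested for containment with a standard ray-casting point-in-polygon routine. This is a sketch assuming two-dimensional floor-plan coordinates; the example polygon and units are invented for illustration and do not correspond to the figures.

def point_in_polygon(point, polygon):
    """Ray-casting test: is the 2D point inside the field-of-view polygon?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does a horizontal ray from (x, y) cross the edge (x1, y1)-(x2, y2)?
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

# Illustrative only: a camera's potential field-of-view polygon in floor-plan units.
CAMERA_FOV = {
    102: [(0, 0), (12, 0), (12, 9), (4, 9), (0, 5)],
}
print(point_in_polygon((6.0, 4.0), CAMERA_FOV[102]))   # True for this example polygon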
The site map may also be represented as a three dimensional model, as shown in FIG. 4. In a three dimensional model, the cameras' fields of view are represented by polyhedra, to include the three-dimensional nature of a camera's field of view. The polyhedron associated with camera 104 is shown in FIG. 4, and is represented by the vertices 441 through 462. As discussed above, the detail of the polyhedron model is dependent upon the level of precision desired. For example, vertices 449 through 454 model the view through the portal 480 as a wedge shaped area, whereas vertices 455 through 462 model the view through the portal 481 as a block shaped area. Three dimensional modeling will provide for greater flexibility and accuracy in the determination of the actual location of the target, but at increased computational cost. For ease of understanding, two dimensional modeling will be discussed hereafter. The techniques employed are equally applicable to three dimensional site maps, as would be evident to one skilled in the art.
The coordinate system utilized for encoding the camera locations and orientations can be any convenient form. Actual dimensions, relative to a reference such as the floor plan, may be used; or scaled dimensions, such as screen coordinates, may be used. Techniques for converting from one coordinate system to another are well known to one skilled in the art, and different coordinate systems may be utilized as required. Combinations of three dimensional modeling and two dimensional modeling may also be employed, wherein, for example, the cameras at each floor of a multistoried building are represented by a two dimensional plan, and each of these two dimensional plans has a third, elevation, dimension associated with it. In this manner, the computationally complex process of associating an image to a physical locale can operate in the two dimensional representation, and the third dimension need only be processed when the target enters an elevator or stairway.
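One way to realize this mixed representation is to keep a separate two-dimensional plan per floor and attach the floor's elevation only when a full three-dimensional point is needed. The data structure and values below are illustrative assumptions only.

# Hypothetical sketch: per-floor 2D site plans with an associated elevation.
FLOOR_PLANS = {
    # floor id: (elevation in meters, {camera id: 2D field-of-view polygon})
    1: (0.0, {102: [(0, 0), (12, 0), (12, 9), (0, 9)]}),
    2: (3.5, {104: [(0, 0), (12, 0), (12, 9), (0, 9)]}),
}

def locate_3d(floor_id, xy):
    """Lift a 2D floor-plan location to 3D only when needed (e.g., stairway, elevator)."""
    elevation, _cameras = FLOOR_PLANS[floor_id]
    return (xy[0], xy[1], elevation)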
FIGS. 5a-5c demonstrate the association of a figure in an image to a target in the physical coordinate system, in accordance with this invention. An image 510 from camera 502 (shown in FIG. 5c), containing a figure 511, is shown in FIG. 5a. As discussed above, figure tracking processes are available that determine a figure's location within an image and allow a camera control system to adjust camera 502's line of sight so as to center the figure in the image, as shown in FIG. 5b. The controller 130 in accordance with this invention will maintain the camera 502's actual line of sight, in terms of the physical site plan, for subsequent processing. If the camera is not adjustable, the line of sight from the camera to the figure is determined by the angular distance the figure is offset from the center of the image. By adjusting
`15
`
`25
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`6
the camera to center the figure, a greater degree of accuracy can be achieved in resolving the actual line of sight to the figure. With either an adjustable or non-adjustable camera, the direction of the target from the camera, in relation to the physical site plan, can thus be determined. For ease of understanding, the line of sight is used herein as the straight line between the camera and the target in the physical coordinate site plan, independent of whether the camera is adjusted to effect this line of sight.
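As a sketch of the line-of-sight determination for a camera that is not re-aimed, the figure's horizontal offset from the image center can be converted to an angular offset from the camera's known heading. The roughly linear pixel-to-angle mapping and the parameter values are assumptions for illustration.

import math

def line_of_sight(camera_heading_deg, image_width_px, horizontal_fov_deg, figure_x_px):
    """Approximate bearing (degrees, in the site-plan frame) from camera to figure.

    camera_heading_deg: direction of the image center in site-plan coordinates.
    figure_x_px: horizontal pixel position of the figure in the image.
    Assumes the angular offset scales roughly linearly with the pixel offset.
    """
    offset_px = figure_x_px - image_width_px / 2.0
    offset_deg = offset_px * (horizontal_fov_deg / image_width_px)
    return camera_heading_deg + offset_deg

# Example: a figure 120 px right of center in a 640 px wide image with a 48 degree FOV.
print(line_of_sight(90.0, 640, 48.0, 440))   # 90 + 120 * 48 / 640 = 99 degrees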
FIG. 5c illustrates the physical representation of a secured area, as well as the location of camera 502, the line of sight 580 to the target, and the camera's actual field of view, as bounded by rays 581 and 582 about an angle of view 585.

To determine the precise location of the target along the line of sight 580, two alternative techniques can be employed: triangulation and ranging. In triangulation, if the target is along the line of sight of another camera, the intersection of the lines of sight will determine the target's actual location along these lines of sight. This triangulation method, however, requires that the target lie within the field of view of two or more cameras. Alternatively, with auto-focus techniques being readily available, the target's distance (range) from the camera can be determined by the setting of the focus adjustment to bring the target into focus. Because the distance of the focal point of the camera is directly correlated to the adjustment of the focus control on the camera, the amount of focus control applied to bring the target into focus will provide sufficient information to estimate the distance of the target from the location of the camera, provided that the correlation between focus control and focal distance is known. Any number of known techniques can be employed for modeling the correlation between focus control and focal distance. Alternatively, the camera itself may contain the ability to report the focal distance directly to the camera handoff system. Or, the focal distance information may be provided based upon independent means, such as radar or sonar ranging means associated with each camera.
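The triangulation alternative can be sketched as the intersection of two rays in the floor plan, one per camera, each defined by the camera's plan location and its line-of-sight bearing. The coordinates and bearings in the example are invented for illustration.

import math

def intersect_lines_of_sight(cam1_pos, bearing1_deg, cam2_pos, bearing2_deg):
    """Intersect two lines of sight in floor-plan coordinates; returns (x, y) or None."""
    d1 = (math.cos(math.radians(bearing1_deg)), math.sin(math.radians(bearing1_deg)))
    d2 = (math.cos(math.radians(bearing2_deg)), math.sin(math.radians(bearing2_deg)))
    # Solve cam1_pos + t * d1 == cam2_pos + s * d2 for t.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None          # lines of sight are (nearly) parallel
    dx, dy = cam2_pos[0] - cam1_pos[0], cam2_pos[1] - cam1_pos[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (cam1_pos[0] + t * d1[0], cam1_pos[1] + t * d1[1])

# Example: two cameras observing the same target from different positions.
print(intersect_lines_of_sight((0.0, 0.0), 45.0, (10.0, 0.0), 135.0))  # about (5.0, 5.0)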
In the preferred embodiment, the correlation between focus control and focal distance is modeled as a polynomial, associating the angular rotation x of the focus control to the focal distance R as follows:

R = a0 + a1*x + a2*x^2 + . . . + an*x^n

The degree n of the polynomial determines the overall accuracy of the range estimate. In a relatively simple system, a two degree polynomial (n=2) will be sufficient; in the preferred embodiment, a four degree polynomial (n=4) is found to provide highly accurate results. The coefficients a0 through an are determined empirically. At least n+1 measurements are taken, adjusting the focus x of the camera to focus upon an item placed at each of n+1 distances from the camera. Conventional least squares curve fitting techniques are applied to this set of measurements to determine the coefficients a0 through an. These measurements and curve fitting techniques can be applied to each camera, to determine the particular polynomial coefficients for each camera; or, a single set of polynomial coefficients can be applied to all cameras having the same auto-focus mechanism. In a preferred embodiment, the common single set of coefficients is provided as the default parameters for each camera, with a capability of subsequently modifying these coefficients via camera specific measurements, as required.
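A sketch of this calibration using NumPy's least-squares polynomial fit, with a degree-4 polynomial as in the preferred embodiment. The focus readings and measured distances below are made-up calibration data, not values from the disclosure.

import numpy as np

# Hypothetical calibration data: focus-control rotation x versus measured distance R (meters).
focus_x = np.array([0.10, 0.25, 0.40, 0.55, 0.70, 0.85])
range_R = np.array([1.0, 2.0, 4.0, 7.0, 12.0, 20.0])

# Fit R as a degree-4 polynomial in x (at least n + 1 = 5 measurements are needed).
coeffs = np.polyfit(focus_x, range_R, deg=4)

def estimate_range(x):
    """Estimate the target's distance from the camera for a given focus setting x."""
    return float(np.polyval(coeffs, x))

print(round(estimate_range(0.5), 2))   # interpolated range for an intermediate focus setting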
If the camera is not adjustable, or is fixed focus, alternative techniques can also be employed to estimate the range of the target from the camera. For example, if the target to be
tracked can be expected to be of a given average physical size, the size of the figure of the target in the image can be used to estimate the distance, using the conventional square law correlation between image size and distance. Similarly, if the camera's line of sight is set at an angle to the surface of the secured area, the vertical location of the figure in the displayed image will be correlated to the distance from the camera. These and other techniques are well known in the art for estimating an object's distance, or range, from a camera.
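As one illustration of a size-based estimate for a fixed camera, a pinhole-style relation between an assumed real target height and the figure's height in the image gives a distance estimate. The focal length and height values are assumptions, and this is only one of the known techniques the text alludes to, not the patent's prescribed method.

def range_from_figure_height(figure_height_px, focal_length_px=800.0,
                             assumed_target_height_m=1.7):
    """Pinhole approximation: distance grows as the figure's image height shrinks."""
    return focal_length_px * assumed_target_height_m / figure_height_px

print(round(range_from_figure_height(170), 2))   # about 8.0 m for a 170 px tall figure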
Given the estimated distance from the camera, and the camera's position and line of sight, the target location P, in the site plan coordinate system, corresponding to the figure location in the displayed image from the camera, can be determined. Given the target location P, the cameras within whose fields of view the location P lies can be determined, because the cameras' fields of view are modeled in this same coordinate system. Additionally, the cameras whose fields of view are in proximity to the location P can also be determined.
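Combining the pieces above, the target's plan location P follows from the camera position, the line-of-sight bearing, and the estimated range, and the containing cameras can then be found with a point-in-polygon test such as the one sketched earlier. The names here are again illustrative assumptions.

import math

def target_location(camera_pos, bearing_deg, range_m):
    """Project the estimated range along the line of sight to obtain location P."""
    return (camera_pos[0] + range_m * math.cos(math.radians(bearing_deg)),
            camera_pos[1] + range_m * math.sin(math.radians(bearing_deg)))

def cameras_containing(point, fov_polygons, point_in_polygon):
    """Ids of all cameras whose potential field-of-view polygon contains the point."""
    return [cam_id for cam_id, poly in fov_polygons.items()
            if point_in_polygon(point, poly)]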
Optionally, each of the cameras whose potential field of view includes the target point can be automatically adjusted to center the target point in its respective field of view, independent of whether the camera is selected as the camera utilized for figure tracking. In the preferred embodiment, all cameras which contain the target in their potential field of view, and which are not allocated to a higher priority task, are automatically redirected to contain the target in their actual field of view.
Note that while automated figure tracking software is utilized in the preferred embodiment, the techniques presented herein are also applicable to a manual figure tracking scenario as well. That is, for example, the operator points to a figure in the image from a camera, and the system determines the line of sight and range as discussed above. Thereafter, knowing the target location, the system displays the same target location from the other cameras, automatically. Such a manual technique would be useful, for example, for managing multiple cameras in a sports event, such that the operator points to a particular player, and the other cameras having this player in their field of view are identified for alternative selection and/or redirection to also include this player.
A variety of techniques may be employed to determine whether to select a different camera from the one currently selected for figure tracking, as well as techniques to select among multiple cameras. Selection can be maintained with the camera containing the figure until the figure tracking system 144 reports that the figure is no longer within the view of that camera; at that time, one of the cameras which had been determined to have contained the target in its prior location P can be selected. That camera will be positioned to this location P and the figure tracking system 144 will be directed to locate the figure in the image from this camera. The assumption in this scenario is that the cameras are arranged to have overlapping fields of view, and the edges of these fields of view are not coincident, such that the target cannot exit the fields of view of two cameras simultaneously.
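A sketch of that handoff rule: stay with the current camera until the tracker reports the figure lost, then pick one of the cameras previously determined to contain the last known location P and restart tracking there. The callable parameters are assumptions standing in for the figure tracking system and camera controls.

def handle_camera_exit(current_camera, last_location_P, cameras_containing,
                       point_camera_at, restart_tracking):
    """Select a successor camera once the figure leaves the current camera's view."""
    candidates = [c for c in cameras_containing(last_location_P) if c != current_camera]
    if not candidates:
        return current_camera          # no overlapping camera covers P; keep the old one
    successor = candidates[0]
    point_camera_at(successor, last_location_P)   # aim the new camera at P
    restart_tracking(successor, last_location_P)  # tell the figure tracker to reacquire
    return successor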
In a preferred system, rather than utilizing the prior location P, the camera handoff system includes a predictor 142 that estimates a next location Q, based upon the motion (sequence of prior locations) of the figure. A linear model can be used, wherein the next location is equal to the prior location plus the vector distance the target traveled from its next-prior location. A non-linear model can be used, wherein the next location is dependent upon multiple prior locations, so as to model both velocity and acceleration. Typically, the figure tracking system 144 locations exhibit jitter, or sporadic deviations, because the movement of a figure such as
`15
`
`25
`
`35
`
`40
`
`45
`
`50
`
`55
`
`60
`
`65
`
`US 6,359,647 B1
`
`8
a person, comprising arbitrarily moving appendages and relatively unsharp edges, is difficult to determine absolutely. Data smoothing techniques can be applied so as to minimize the jitter in the predicted location Q, whether determined using a linear or non-linear model. These and other techniques of motion estimation and location prediction are well known to those skilled in the art.
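A sketch of the linear predictor with simple smoothing: the next location Q is the last reported location plus the exponentially smoothed displacement between recent locations, which damps the jitter described above. The smoothing factor is an illustrative assumption.

def predict_next_location(locations, alpha=0.6):
    """Constant-velocity prediction Q from a sequence of prior (x, y) locations.

    The per-step displacement is exponentially smoothed (factor alpha) to reduce the
    jitter in the reported figure locations.
    """
    if len(locations) < 2:
        return locations[-1]
    vx, vy = 0.0, 0.0
    for (x0, y0), (x1, y1) in zip(locations, locations[1:]):
        vx = alpha * (x1 - x0) + (1 - alpha) * vx
        vy = alpha * (y1 - y0) + (1 - alpha) * vy
    last_x, last_y = locations[-1]
    return (last_x + vx, last_y + vy)

print(predict_next_location([(0, 0), (1, 0.1), (2, 0.0), (3.1, 0.1)]))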
Given a predicted location Q, in the site map coordinate system, the cameras containing the point Q within their potential fields of view can be determined. If the predicted location Q lies outside the limits of the current camera's