US005444478A

United States Patent                                   [11] Patent Number:  5,444,478
Lelong et al.                                          [45] Date of Patent: Aug. 22, 1995

[54] IMAGE PROCESSING METHOD AND DEVICE FOR CONSTRUCTING AN IMAGE FROM ADJACENT IMAGES

[75] Inventors: Pierre Lelong, Nogent-sur-Marne, France; Govert Dalm, Veldhoven; Jan Klijn, Breda, both of Netherlands

[73] Assignee: U.S. Philips Corporation, New York, N.Y.

[21] Appl. No.: 174,091

[22] Filed: Dec. 28, 1993

[30] Foreign Application Priority Data
     Dec. 29, 1992 [FR] France ........................ 92 15836

[51] Int. Cl.6 ..................................... H04N 7/18
[52] U.S. Cl. ................................. 348/39; 348/38
[58] Field of Search ... 348/36, 37, 38, 39, 383, 580; 382/41

[56] References Cited

     U.S. PATENT DOCUMENTS

     4,660,157   4/1987  Beckwith
     4,677,576   6/1987  Berlin, Jr. et al.
     4,740,839   4/1988  Phillips .................. 358/108
     4,772,942   9/1988  Tuck ...................... 348/38
     5,023,725   6/1991  McCutchen ................. 348/38
     5,130,794   7/1992  Ritchey ................... 348/383
     5,185,667   2/1993  Zimmermann ................ 348/36
     5,187,571   2/1993  Braun et al. .............. 348/39
     5,200,818   4/1993  Neta et al. ............... 348/36
     5,262,867  11/1993  Kojima .................... 348/39

Primary Examiner-Tommy P. Chin
Assistant Examiner-A. Au
Attorney, Agent, or Firm-Edward W. Goodman

[57] ABSTRACT
A method of processing images for constructing a target image (Io) from adjacent images having a fixed frame line and referred to as source images (I1, . . . , Ii, Ij, . . . , In), the source and target images having substantially common view points. This method includes the steps of: digitizing the images; determining, for one of the pixels of the target image (Io), the address (Aq) of a corresponding point in one of all the source images (Ij); determining the luminance value (F) at this corresponding point; assigning the luminance value (F) of this corresponding pixel to the initial pixel in the target image (Io); and repeating these steps for each pixel of the target image (Io). A device for performing this method includes a system of n fixed real cameras (C1, . . . , Cn) which provide n adjacent source images (I1, . . . , In) covering a wide-angle field of view and which have common view points (P), and an image reconstruction system (100) simulating a mobile camera, referred to as virtual camera (Co), for providing a sub-image, referred to as target image (Io), of the wide-angle field of view, constructed on the basis of source images having the same view point (P).

17 Claims, 6 Drawing Sheets
[Drawing sheets 1 to 6, U.S. Patent 5,444,478, Aug. 22, 1995:
Sheet 1: FIGS. 1A, 1B, 1C and 1D.
Sheet 2: FIGS. 1E and 1F (labels Io, Ioi, Ioj).
Sheet 3: FIG. 2 (real cameras C1 . . . Cn, image reconstruction system 100, display, control system) and FIG. 3 (address computer 200, user interface).
Sheet 4: FIG. 4 and FIGS. 5A, 5B.
Sheet 5: FIG. 6 (virtual camera modeler MCo, stores, inverse perspective transformer, direct perspective transformer).
Sheet 6: FIGS. 7B and 7C.]

IMAGE PROCESSING METHOD AND DEVICE FOR CONSTRUCTING AN IMAGE FROM ADJACENT IMAGES

BACKGROUND OF THE INVENTION

1. Field of the Invention

The invention relates to a method of processing images for constructing a target image from adjacent images having a fixed frame line and referred to as source images, said source and target images having substantially common view points.

The invention also relates to an image processing device comprising:

a system of n fixed real cameras arranged in such a way that their individual fields of view merge so as to form a single wide-angle field of view for observation of a panoramic scene,

an image construction system simulating a mobile, virtual camera continuously scanning the panoramic scene so as to form a sub-image referred to as target image corresponding to an arbitrary section of the wide-angle field of view and constructed from adjacent source images furnished by the n real cameras, said virtual camera having a view point which is common with or close to that of the real cameras.

The invention is used in the field of telemonitoring or in the field of television, where shots covering large fields are necessary, for example when recording sports events. The invention is also used in the field of automobile construction for realizing peripheral and panoramic rear-view means without a blind angle.

2. Description of the Related Art

An image processing device is known from Patent Application WO 92-14341, corresponding to U.S. Pat. No. 5,187,571. This document describes an image processing system for television. This device comprises a transmitter station including a plurality of fixed cameras arranged adjacent to each other so that their fields of view merge and form a wide-angle field of view. This system also comprises a processing station including means for generating a composite video signal of the overall image corresponding to the wide-angle field of view, and means for selecting a sub-image from this composite image. This system also comprises means, such as a monitor, for displaying this sub-image. This sub-image corresponds to a field of view having an angle which is smaller than that of the composite image and is referred to as sub-section of the wide-angle field of view.

This image processing device is solely suitable for conventional television systems in which the image is formed line by line by means of a scanning beam.

The processing station enables a user to select the sub-section of the wide-angle field of view. The corresponding sub-image has the same dimension as the image furnished by an individual camera. The user selects this sub-image by varying the starting point of the scan with respect to the composite image corresponding to the wide-angle field of view. The wide-angle field of view has an axis which is parallel to the video scan, with the result that the starting point for the video scan of the sub-image may be displaced arbitrarily and continuously parallel to this axis.

The angle of the field of view to which the sub-image corresponds may be smaller than that of a real camera. However, the localization of the sub-image does not include a displacement perpendicular to the scan; its localization only includes displacements parallel to this scan. The formation of the sub-image does not include the zoom effect with respect to the composite image, i.e. the focal change of the sub-image with respect to the focal length of the image pick-up cameras.

The image processing station thus comprises means for constructing the selected video sub-image line after line. These means essentially include a circuit for controlling the synchronization of the video signals from the different cameras.

SUMMARY OF THE INVENTION

It is an object of the present invention to provide a device which is capable of simulating a mobile camera scanning the wide-angle field of view covered by the n fixed cameras whose fields of view merge.

A particular object of the present invention is to provide such a device simulating a camera which is provided with all the facilities of a real existing mobile camera, i.e., from a stationary observer, possibilities of horizontal angular displacements towards the left or the right of a panoramic scene to be observed or to be monitored, possibilities of vertical angular displacements to the top or the bottom of this scene, possibilities of rotation, and also possibilities of zooming in on a part of the surface area of this scene.

This object is achieved by means of a method of processing images for constructing a target image from adjacent images having a fixed frame line and referred to as source images, said source and target images having substantially common view points, characterized in that the method comprises the steps of:

digitizing the images,

determining, for one of the pixels of the target image, the address of a corresponding point in one of all the source images,

determining the luminance value at this corresponding point,

assigning the luminance value of this corresponding pixel to the initial pixel in the target image,

repeating these steps for each pixel of the target image.
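The method summarized above is, in effect, a per-pixel resampling loop over the target image. The following Python-style sketch is only one possible reading of these steps; the helper functions compute_source_address and sample_luminance are hypothetical placeholders for the address computer and the luminance determination means described in the remainder of the specification.

    # Minimal sketch of the per-pixel construction loop summarized above.
    # The two helpers are placeholders: the device itself uses an address
    # computer (perspective transforms) and an interpolator instead.

    def compute_source_address(ao, virtual_camera, real_cameras):
        # Placeholder: map the target pixel straight to camera 0 at the same address.
        return 0, ao

    def sample_luminance(source_image, aq):
        # Placeholder: nearest-neighbour read of the digitized source image.
        u, v = int(round(aq[0])), int(round(aq[1]))
        return source_image[v][u]

    def build_target_image(source_images, virtual_camera, real_cameras, height, width):
        target = [[0] * width for _ in range(height)]
        for y in range(height):                  # repeat for every pixel of the target image
            for x in range(width):
                ao = (x, y)                      # pixel address (Ao) in the target image
                j, aq = compute_source_address(ao, virtual_camera, real_cameras)
                f = sample_luminance(source_images[j], aq)   # luminance value (F) at (Aq)
                target[y][x] = f                 # assign F to the initial pixel at (Ao)
        return target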
According to the invention, for performing this method, an image processing device is also proposed, which device includes:

a system of n fixed real cameras arranged in such a way that their individual fields of view merge so as to form a single wide-angle field of view for observation of a panoramic scene,

an image construction system simulating a mobile, virtual camera continuously scanning the panoramic scene so as to form a sub-image referred to as target image corresponding to an arbitrary section of the wide-angle field of view and constructed from adjacent source images furnished by the n real cameras, said virtual camera having a view point which is common with or close to that of the real cameras, characterized in that this image processing device is a digital device and in that the system (100) for constructing the target image Io includes:

an address computer for causing a point at an address in one of the source images to correspond to a pixel at an address in the target image,

means for computing the luminance value of the point at the address found in the source image and for assigning this luminance value to the initial pixel at the address in the target image.

Thus, the device according to the invention provides the possibility of constructing a target image like the one furnished by a supposed camera which is being displaced in a continuous manner; this target image is formed from several adjacent source images, each provided by one camera from a group of cameras arranged in a fixed manner with respect to the scene to be observed, and, based on this construction, this device may furnish, by way of display on the screen, or by way of recording:

either a sequential image-by-image read-out of partitions of the observed scene, possibly with a zoom effect,

or a continuous read-out by scanning the scene observed with the sight and azimuth effect or with rotation.

In a particular embodiment, this device is characterized in that the target image reconstruction system comprises:

first means for storing the parameters relating to the virtual camera for supplying the address computer with the scale factor and the orientation of the optical axis of the virtual camera in a fixed orthonormal landmark which is independent of the cameras, i.e. the azimuth angle, the angle of sight and the angle of rotation;

second means for storing the parameters relating to the real cameras for supplying the address computer with the scale factor and the orientation of the optical axis of each real camera, i.e. their azimuth angle, the angle of sight and the angle of rotation in said fixed landmark;

an address generator for generating, pixel by pixel, the addresses (Ao) of the pixels of the target image so as to cover the entire target image, the address computer determining the particular source image and the point at the address (Aq) in this source image which corresponds to each pixel of the target image, on the basis of the parameters of the virtual camera and the real cameras.

Another technical problem is posed by the construction of the target image. It is supposed that a plurality of cameras is arranged adjacent to one another and that no zone of the panoramic scene to be constructed is beyond the field covered by each camera: it is thus supposed that all the data for constructing the target image are provided. Nevertheless, at each boundary between the cameras, where an image from one camera passes to another image of an adjacent camera, the viewing angle difference between these two cameras for two adjacent zones of the scene recorded by these two different cameras causes great distortions of the image. The result is that the partitions which are realized on and at both sides of the two zones of the scene recorded by two different cameras are very difficult to display and completely lack precision.

It is another object of the invention to provide a construction of the target image whose image distortion at the boundary between two cameras is corrected so that this (these) boundary(ies) is (are) completely invisible to the user.

This object is achieved by means of an image processing device as described hereinbefore, which is characterized in that the address computer comprises:

first means for constructing a model (MCo) of the virtual camera with a projection via the view point,

second means for constructing models (MC1-MCn) of the real cameras with a projection via the view point and with corrections of distortions and perspective faults.

In a particular embodiment, this device is characterized in that the address computer comprises:

first means for computing the geometrical transform for applying a geometrical transform referred to as inverse "perspective transform" (Ho-1) to each pixel at an address (Ao) of the image of the virtual camera, in which transform the model (MCo) of the virtual camera provided by the first construction means and the parameters for the azimuth angle, the angle of sight, the angle of rotation and the scale factor of this virtual camera provided by the first storage means are taken into account for determining, on the basis of this inverse perspective transform (Ho-1), the positioning in said landmark of the light ray passing through this pixel and the view point,

means for storing the position of the light ray obtained by the inverse perspective transform (Ho-1),

means for selecting the particular source image traversed by this light ray,

second means for computing the geometrical transform for applying a geometrical transform referred to as "direct perspective transform" (H1-Hn) to this light ray in said landmark, in which transform the models (MC1-MCn) of the real cameras provided by the second construction means, the parameters for the azimuth angle, the angle of sight, the angle of rotation and the scale factor of the corresponding real camera provided by the second storage means are taken into account,

and storage means for supplying, on the basis of this direct perspective transform (H1-Hn), the address (Aq) in the particular source image which corresponds to the light ray and thus to the pixel at the address (Ao) in the target image.
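The address computer just described maps a target pixel to a source-image address in three stages: the inverse perspective transform of the virtual camera turns the pixel (Ao) into a light ray through the common view point, the source image traversed by that ray is selected, and the direct perspective transform of the selected real camera projects the ray to the address (Aq). The sketch below illustrates that chain under simplifying assumptions (ideal pinhole cameras with a common view point, no distortion correction); the camera model, rotation convention and function names are illustrative, not the patent's notation.

    import math

    # Hedged sketch of the three-stage address computation described above.
    # Angles follow the text: azimuth (theta) about the vertical axis, sight
    # (phi) about a horizontal axis, rotation (psi) about the optical axis.

    def mat_mul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

    def mat_vec(m, v):
        return [sum(m[i][k] * v[k] for k in range(3)) for i in range(3)]

    class Camera:
        def __init__(self, theta, phi, psi, focal, width, height):
            self.theta, self.phi, self.psi = theta, phi, psi
            self.focal, self.width, self.height = focal, width, height

        def rotation(self):
            # Landmark-from-camera rotation, assumed here as Ry(theta) Rx(phi) Rz(psi).
            ct, st = math.cos(self.theta), math.sin(self.theta)
            cp, sp = math.cos(self.phi), math.sin(self.phi)
            cr, sr = math.cos(self.psi), math.sin(self.psi)
            ry = [[ct, 0, st], [0, 1, 0], [-st, 0, ct]]
            rx = [[1, 0, 0], [0, cp, -sp], [0, sp, cp]]
            rz = [[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]]
            return mat_mul(mat_mul(ry, rx), rz)

    def inverse_perspective(cam, ao):
        # Pixel (Ao) in the virtual camera -> unit light ray through the view point P.
        x = ao[0] - cam.width / 2.0
        y = ao[1] - cam.height / 2.0
        ray = mat_vec(cam.rotation(), [x, y, cam.focal])    # ray in the fixed landmark
        n = math.sqrt(sum(c * c for c in ray))
        return [c / n for c in ray]

    def direct_perspective(cam, ray):
        # Light ray in the landmark -> address (Aq) in this real camera,
        # or None if the ray does not traverse its image plane.
        r = cam.rotation()
        rt = [[r[j][i] for j in range(3)] for i in range(3)]  # transpose = inverse rotation
        rc = mat_vec(rt, ray)
        if rc[2] <= 0:
            return None
        u = cam.focal * rc[0] / rc[2] + cam.width / 2.0
        v = cam.focal * rc[1] / rc[2] + cam.height / 2.0
        if 0 <= u < cam.width and 0 <= v < cam.height:
            return (u, v)
        return None

    def compute_source_address(ao, virtual_cam, real_cams):
        ray = inverse_perspective(virtual_cam, ao)
        for j, cam in enumerate(real_cams):      # select the source image traversed by the ray
            aq = direct_perspective(cam, ray)
            if aq is not None:
                return j, aq
        return None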
With this device, the user who monitors a panoramic scene obtains exactly the same convenience of use and the same service as a user of a mobile camera with zoom and with mechanical means for realizing the variation of the orientation of the optical axis, i.e., for realizing variations of sight and azimuth, as well as rotations around the optical axis of the camera. The advantage is that the mechanical means are not necessary. These mechanical means, which include mechanical motors for rotating the azimuth angle and the angle of sight and a motor for zoom control, always have drawbacks: first, they may get blocked, and then the generated displacements are very slow. Moreover, they are very expensive. As they are most frequently installed externally, they will rapidly degrade because of poor weather conditions. The electronic image processing means according to the invention obviate all these drawbacks because they are very precise, reliable, very rapid and easy to control. Moreover, they may be installed internally and thus be sheltered from bad weather. The electronic means are also easily programmable for an automatic function. Finally, they are less costly than the mechanical means.

With the means according to the invention, the user thus obtains an image which is free from distortions, has a greater precision and offers an easier way of carrying out the sighting operations than with mechanical means. Moreover, a panoramic scene of a larger field may be observed, because fields of 180° or even 360°, dependent on the number of cameras used, can be observed. The operations can also be easily programmed.
Great progress is achieved as regards surveillance. As for realizing panoramic rear-view means for automobiles, this progress is also very important.

The fact that several cameras are used for acquiring the data which are necessary for constructing the target image is not a disadvantage, because such an assembly of fixed CCD cameras has become less difficult to handle than the mechanical devices for varying the sight, azimuth and rotation, as well as the zoom, for a single real mobile camera.

In a particular embodiment, this system is characterized in that the means for determining the luminance comprise:

an interpolator for computing a most probable value of a luminance function (F) at the address (Aq) found by the address computer in the source image furnished by the selection means;

third storage means for assigning the luminance value (F) corresponding to the point at the address (Aq) found in the source image to the initial pixel in the target image at the address (Ao) furnished by the address generator, and in that the system for reconstructing the target image also comprises:

an interface for enabling a user to define the parameters of the virtual camera, which parameters include the scale factor and the orientation of the optical axis.
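The address (Aq) returned by the address computer generally falls between pixel centres of the source image, which is why an interpolator, rather than a direct read-out, supplies the luminance value (F). The excerpt does not spell out the interpolation rule at this point; the sketch below assumes simple bilinear interpolation of the four surrounding pixels as one plausible way of computing such a "most probable" value.

    def interpolate_luminance(source_image, aq):
        # Hedged sketch: bilinear interpolation of the luminance function F at
        # the (generally non-integer) address Aq = (u, v) in a digitized source
        # image. Bilinear weighting is an assumption, not the patent's rule.
        u, v = aq
        u0, v0 = int(u), int(v)
        u1 = min(u0 + 1, len(source_image[0]) - 1)
        v1 = min(v0 + 1, len(source_image) - 1)
        du, dv = u - u0, v - v0
        top = (1 - du) * source_image[v0][u0] + du * source_image[v0][u1]
        bottom = (1 - du) * source_image[v1][u0] + du * source_image[v1][u1]
        return (1 - dv) * top + dv * bottom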
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiments described hereinafter.

BRIEF DESCRIPTION OF THE DRAWINGS

In the drawings:

FIG. 1A is a plan view showing the traces of the different image planes in the horizontal plane of the landmark in the case where the real cameras have image planes which are perpendicular to this horizontal plane;

FIG. 1B shows the landmark Px, Py, Pz viewed in projection in the horizontal plane;

FIG. 1C is an elevational view of a source image plane with its particular system of coordinate axes;

FIG. 1D is an elevational view of the target image plane with its particular system of coordinate axes;

FIG. 1E represents the effect of limiting a section of the wide-angle field of view of two adjacent real cameras by means of parameters chosen by the user for the virtual camera for constructing a sub-image of a panoramic scene;

FIG. 1F shows the target image constructed by the virtual camera defined by these parameters, this target image being composed of a first image part constructed on the basis of the source image furnished by the first of the two real cameras and of a second image part constructed on the basis of the source image furnished by the second of these cameras;

FIG. 1G shows an arrangement of three adjacent real cameras for covering a field of view of 180°;

FIG. 2 shows, in the form of functional blocks, the image processing device with the system for constructing the target image, the real cameras, the user interface and the system for displaying the target image;

FIG. 3 shows the image processing device in the form of functional blocks in greater detail than in FIG. 2;

FIG. 4 illustrates the computation of a value of a luminance function relative to an address in a source image;

FIG. 5A illustrates the models of the real and virtual cameras;

FIG. 5B illustrates, in projection on the horizontal plane of the landmark, the perspective and distortion effects on the positions of the corresponding points having the same luminance in the target image and in the source image traversed by the same light ray passing through these points;

FIG. 6 shows, in the form of functional blocks, the address computer which computes the address of the point in the source image corresponding to a pixel at an address in the target image;

FIG. 7A shows a first digital source image formed by a first real fixed camera, and FIG. 7B shows a second source image formed by a second real fixed camera adjacent to the first camera;

FIG. 7C shows a digital target image reconstructed in the same manner as in the case of FIG. 1F, showing the distortion and perspective faults between the first target image part constructed on the basis of the first source image and the second target image part constructed on the basis of the second source image; and

FIG. 7D shows the digital target image of FIG. 7C after treatment by the image processing device, in which the distortion and perspective faults have been eliminated.

DESCRIPTION OF THE PREFERRED EMBODIMENT

I/ The image pick-up system.

FIG. 1G shows a possible arrangement of several real fixed cameras for recording the data relating to a scene through an angle of 180°. This panoramic scene is recorded with three fixed cameras C1, C2, C3. The cameras have such optical fields that, absolutely, all the details of the panoramic scene are recorded by the one or the other camera, so that no object under surveillance is left out. The cameras are arranged to have a common view point P or very close view points.

The axes PZ1, PZ2, PZ3 represent the optical axes of the cameras C1, C2, C3, respectively, and the points O1, O2, O3 represent the geometrical centers of the images I1, I2, I3, respectively, in the image planes on the optical axes.

A horizontal surveillance through 360° can be carried out by suitably arranging 6 fixed cameras. However, a vertical surveillance, or a surveillance in both directions, may also be carried out. Those skilled in the art will be able to realize any type of system for observation of a panoramic scene, so that a more detailed description of the various mutual arrangements of the fixed cameras is not necessary.
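The passage above gives two example rigs: three cameras covering 180° and six cameras covering a full 360° horizontal field. A minimal sketch of how such evenly spaced azimuth angles might be laid out follows; the even spacing and the helper name are illustrative assumptions, not a layout prescribed by the patent.

    import math

    def ring_azimuths(n_cameras, total_field_deg=360.0):
        # Hypothetical helper: evenly spaced azimuth angles (in radians) for n
        # fixed cameras whose merged fields of view cover total_field_deg.
        step = total_field_deg / n_cameras
        return [math.radians(i * step) for i in range(n_cameras)]

    # Six cameras for 360-degree coverage -> one camera every 60 degrees;
    # three cameras for a 180-degree panorama -> one camera every 60 degrees as well.
    print([round(math.degrees(a), 1) for a in ring_azimuths(6)])         # [0.0, 60.0, ..., 300.0]
    print([round(math.degrees(a), 1) for a in ring_azimuths(3, 180.0)])  # [0.0, 60.0, 120.0]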
With reference to FIG. 1A, the image pick-up device comprises a plurality of n fixed real cameras having known and fixed focal lengths and being arranged adjacent to one another so that their individual fields of view merge to cover a wide-angle field of view. The n adjacent fixed cameras furnish n adjacent fixed images, so that this image pick-up device can monitor a panoramic scene. The cameras have such optical fields that all the details of the panoramic scene are recorded by the one or the other camera, so that no object under surveillance is left out.

To obtain this result, these n adjacent fixed cameras are also arranged in such a way that their optical centers P, referred to as view points, coincide. The view point of a camera is defined as the point at which each ray emitted from a luminous source and passing through
this point traverses the optical system of the camera without any deviation.

The view points of the n cameras need not coincide physically. However, it will hereinafter be assumed that the condition of coincidence is fulfilled sufficiently if the distance separating each of these view points is small as regards their distance to the filmed panoramic scene, for example, if their respective distance is 5 cm or 10 cm and the distance to the panoramic scene is 5 m. The condition of coincidence is thus estimated to be fulfilled if the ratio between these distances is of the order of, or is more than, 50 and, according to the invention, it is not necessary to use costly optical mirror systems, which are difficult to adjust, for achieving a strict coincidence of the view points.

II/ Formation of the images by the cameras.

It is an object of the invention to provide a system for reconstructing a digital image which simulates a mobile camera which, with the settings selected by a user, is capable of furnishing a digital image of any part, or sub-image, of the panoramic scene recorded by the n fixed cameras.

The n cameras are numbered C1, . . . , Ci, Cj, . . . , Cn, supplying digital source images I1, . . . , Ii, Ij, . . . , In, respectively. For example, the source images Ii and Ij formed by two adjacent fixed real cameras Ci and Cj will be considered hereinafter.

These fixed real cameras Ci and Cj form respective images of the panoramic scene in adjacent source image planes Ii and Ij. In FIG. 1A, the axes PZi and PZj passing through the geometrical centers Oi and Oj of the source images Ii and Ij, respectively, represent the optical axes of the fixed real cameras Ci and Cj.

With reference to FIG. 1B, a landmark Px, Py, Pz of orthogonal axes is defined in which the axes Px and Pz are horizontal and the axis Py is vertical.

The source images, such as the images Ii and Ij, are numbered, and each pixel m of these images is marked by way of its coordinates in the image plane. As is shown in FIG. 1C, a mark of rectangular coordinates (OiXi, OiYi) and (OjXj, OjYj) is defined in each image plane, in which the axes OiXi or OjXj are horizontal, i.e., in the plane of the landmark Px, Pz. The image planes defined by (OiXi, OiYi) and (OjXj, OjYj) are perpendicular to the optical axes PZi and PZj and have respective geometrical centers Oi and Oj.

Once these individual marks relating to each image plane of the cameras are established, these fixed source image planes may be related to the landmark by means of:

their azimuth angle (or pan angle) θi, θj,

their angle of sight (or tilt angle) φi, φj.

The azimuth angle θi or θj is the angle formed by the vertical plane containing the optical axis PZi or PZj with the horizontal axis Pz of the landmark. Thus, this is a horizontal angle of rotation about the vertical axis Py.

The angle of sight φi or φj is the angle formed by the optical axis PZi or PZj with the horizontal plane (Px, Pz). Thus, this is a vertical angle of rotation about a horizontal axis, the axis OiXi or OjXj of each image plane.

For reasons of simplicity, it has hereinafter been assumed, by way of example with reference to FIG. 1A, that the source image planes Ii, Ij furnished by the fixed cameras Ci, Cj are vertical, i.e., their angles of sight φi, φj are zero.

For similar reasons of simplicity, the same reference in FIG. 1A denotes the trace of the planes and the axes and the corresponding planes and axes for both the source images and for the target image described hereinafter.

FIG. 1A, which is a diagrammatic plan view of the images formed, thus only shows the traces Ii and Ij of the fixed source image planes, represented by segments in the horizontal plane Px, Pz.

FIG. 1E shows, for example, the contiguous images Ii and Ij of the panoramic scene, furnished by two adjacent fixed cameras Ci and Cj. In FIG. 1E, both images Ii and Ij are projected in the same plane for the purpose of simplicity, whereas in reality these images constitute an angle between them which is equal to that of the optical axes of the fixed cameras. In these images, the user may choose to observe any sub-image bounded by the line Jo, more or less to the left or to the right, more or less to the top or to the bottom, with the same magnification as the fixed cameras or with a larger magnification, or possibly with a smaller magnification.

The simulated mobile camera is capable of constructing a target image Io from parts of the source images Si, Sj bounded by the line Jo in FIG. 1E. This camera, denoted by Co hereinafter, is referred to as the virtual camera because it simulates a camera which does not really exist. Evidently, this simulated mobile camera is not limited to scanning the two images Ii, Ij. It may scan all the source images from I1 to In.

This virtual camera Co can be defined in the same manner as the fixed real cameras by means of:

its azimuth angle θo,

its angle of sight φo,

its angle of rotation ψo,

and its magnification (zoom effect) defined by its focal length POo, denoted as zo,

with its view point P being common with the view points P of the fixed real cameras, while Oo is the geometrical center of the target image Io. The view point of the virtual camera is common with the approximate view point as defined above for the real cameras.

FIG. 1A shows the trace, denoted by Io, of the image plane of the virtual camera in the horizontal plane and its optical axis PZo passing through the geometrical centre Oo of the target image Io.

In the definition of this mobile virtual camera Co, the azimuth angle θo is the angle made by the vertical plane containing its optical axis PZo with the horizontal axis Pz of the landmark; the angle of sight φo is the angle made by its optical axis PZo with the horizontal plane Px, Pz of the landmark; its angle ψo is the angle of rotation of the virtual camera about its own optical axis, the latter being fixed; and finally, its focal length POo is variable, so that the magnification of this target image with respect to that of the source images can be changed (zoom effect).
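The four parameters θo, φo, ψo and zo (the focal length POo) fully describe the virtual camera Co, and panning, tilting, rolling and zooming the simulated mobile camera amounts to updating these values before the target image is rebuilt. A minimal, hypothetical parameter record is sketched below; the field and function names are illustrative and are not taken from the patent.

    from dataclasses import dataclass

    @dataclass
    class VirtualCamera:
        theta: float   # azimuth angle (pan), rotation about the vertical axis
        phi: float     # angle of sight (tilt), rotation about a horizontal axis
        psi: float     # angle of rotation (roll) about the optical axis PZo
        zoom: float    # magnification, defined by the focal length POo

    def pan(cam: VirtualCamera, d_theta: float) -> VirtualCamera:
        # Simulated horizontal angular displacement of the mobile camera.
        return VirtualCamera(cam.theta + d_theta, cam.phi, cam.psi, cam.zoom)

    def zoom_in(cam: VirtualCamera, factor: float) -> VirtualCamera:
        # Zoom effect: change the magnification, i.e. the focal length.
        return VirtualCamera(cam.theta, cam.phi, cam.psi, cam.zoom * factor)

    # Example: pan about 10 degrees to the right and double the magnification,
    # then rebuild the target image with the new settings (see the loop sketch above).
    cam = VirtualCamera(theta=0.0, phi=0.0, psi=0.0, zoom=1.0)
    cam = zoom_in(pan(cam, 0.1745), 2.0)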
`with the horizontal axis Pz of the landmark. Thus, this 55 By varying the azimuth angle 8 0 and the angle of
`is a horizontal angle of rotation about the vertical axis
`sight 40, the angle of rotation $0 and the focal length
`POo, the virtual camera is entirely similar to a mobile
`PY.
`The angle of sight 4i or 4 j is the angle formed by the
`camera which scans the wide-angle field of view
`optical axis PZi PZj with the horizontal plane (Px, Pz).
`formed by the merged fields of view of the different
`Thus, this is a vertical angle of rotation about a horizon- 60 fixed real cameras C1 to Cn.
`It is to be noted that the virtual camera Co can view
`tal axis, the axis OiXi or OjXj of each image plane.
`For reasons of simplicity, it has hereinafter been as-
`a small part (or subsection) bounded by Jo of the wide-
`sumed, by way of example with reference to FIG. lA,
`angle field of view and by realizing a magnified image
`that the source image planes Ii, Ij furnished by the fixed
`10, for example, of the same final dimension as each of
`cameras Ci, Cj are vertical, i.e. their angles of sight 4i, 65 the images 11, . . . , In furnished by each real camera C1,
`. . . , Cn by varying its variable focal length POo.
`4 j are zero.
`For similar reasons of simplicity, the same reference
`It is also to be noted that the displacement of the field
`in FIG. 1A denotes the trace of the planes and the axes
`of view of the virtual camera Co may be continuous and
`
`VALEO EX. 1013_011
`
`

`

arbitrary; this field of view corresponding to Jo may be on or at both sides of the two parts (Si, Sj) of the contiguous images Ii and Ij at Lo, furnished by two adjacent cameras Ci and Cj.

In this case, the image Io constructed by the virtual camera Co contains two different image parts, one part Ioi being constructed on the basis of information Si in the digital image Ii and the other part Ioj being constructed on the basis of information Sj in the digital image Ij. In FIG. 1A, Ioi and Ioj represent the traces of the target images Ioi and Ioj in the horizontal plane.

Likewise as for the source images, a mark of rectangular coordinates (OoXo, OoYo) will now be defined with reference to FIG. 1D in the digital target image plane Io, in which mark the axis OoXo is horizontal, i.e., in the horizontal plane of the landmark Px, Pz. The pixel Oo is the geometrical center of the target image Io and is also situated on the optical axis PZo of the virtual camera Co. Each pixel m' of the target image plane Io is thus marked by its coordinates in this system of rectangular coordinates.

[. . .]

This method comprises a second step in which:

the most probable luminance value is evaluated at said point m in the source image,

subsequently this luminance value is assigned to the pixel m' in the target image.

These steps are carried out for all the pixels m' in the target image.

The processing means may give the constructed target image all the qualities of an image obtained by an observer using a conventional mobile camera:

absence of distortions, adjustment of perspectives,

absence of straight interrupting lines at the boundary between two or more adjacent images.

The problem thus is to render these straight boundaries invisible.

IV/ Essential elements of the image processing device.

FIG. 2 shows the different elements of the image processing device according to the invention in the form of functional blocks.
