`November 19, 2013
`
`
`
`
`Certification
`
`
`
`Park IP Translations
`
`
`
This is to certify that the attached translation is, to the best of my knowledge and belief, a true and accurate translation from Japanese into English of Japanese Patent Application 2002-74339.
`
`
`
`_______________________________________
`
`Abraham I. Holczer
`
`Project Manager
`
`
`
`
`
`
`
`Park Case # 39809
`
`134 W. 29th Street 5th Floor New York, N.Y. 10001
`Phone: 212-581-8870 Fax: 212-581-5577
`
`VALEO EX. 1013
`
`
`
(19) Japan Patent Office (JP)

(12) Unexamined Patent Gazette (A)

(11) Patent Application Publication No. 2002-74339 (P2002-74339A)
(43) Publication Date: March 15, 2002

(51) Int. Cl.7: G06T 1/00; G08G 1/16; H04N 7/18
FI: G06T 1/00 330 B; G08G 1/16 C; H04N 7/18 E, J
Theme Codes (Reference): 5B057, 5C054, 5H180

Request for Examination: Not Yet Made
No. of Claims: 4  OL (12 pages total [in original])

(21) Application No.: 2000-263733 (P2000-263733)
(22) Application Date: August 31, 2000
(71) Applicant: 000005108, Hitachi, Ltd., 4-6 Kanda Surugadai, Chiyoda-ku, Tokyo, Japan
(72) Inventor: Furusawa, Isao, c/o Hitachi, Ltd. Automotive Systems Group, 2520 Takaba, Hitachinaka-shi, Ibaraki-ken, Japan
(72) Inventor: Moji, Takahiko, c/o Hitachi, Ltd. Automotive Systems Group, 2520 Takaba, Hitachinaka-shi, Ibaraki-ken, Japan
(74) Agent: 100078134, Take, Genjiro, Patent Agent

Continued on final sheet
(54) [Title of the Invention] Automotive Image Capture Apparatus
(57) [Abstract]
[Problem] To provide an automotive image capture apparatus capable of automatically accommodating displacements of the image capture optical axis, field angle, and the like relative to design values.
[Means of Solution] A camera 1 configured to be capable of capturing an area wider than the required area is used; optical axis displacement, field angle displacement, viewing field rotation, and the like of the camera 1 are detected from images of 2 marks 3, disposed on the right and left so as to be included within the viewing field thereof; and by adjusting the image capture area of the camera 1 in accordance with the detection results, displacement of the optical axis and field angle and rotation of the viewing field can be automatically corrected without moving the camera 1.
`
`
`
`
`
`Fig. 2
`
`1: Camera
`3: Mark
`
`
`
`
`
`
`[Claims]
1. An automotive image capture apparatus of a vehicle driving environment recognition method using image data taken from image capture means, wherein image capture means having an image capture viewing field wider than the image capture viewing field required for taking the image data is employed; 2 photographic subjects providing characteristic images are disposed on one part of the vehicle within the viewing field of the image capture means, located on the left and right in a horizontal direction within the viewing field; and adjustments are made to either the field angle or the image rotation with the optical axis of the image capture means by detecting changes in either the field angle or the image rotation with the optical axis of the image capture means from images of the 2 photographic subjects.
`
`2. An automotive image capture apparatus of vehicle driving environment recognition method
`using image data taken from image capture means, comprising
`means for detecting 2 photographic subjects mounted on a part of the vehicle, and changes in
`either field angle or image rotation with the optical axis of the image capture means from
images of the 2 photographic subjects, configured to provide characteristic images on the left and
`right in a horizontal direction within the viewing field of the image capture means, and
`employing image capture means having an image capture viewing field wider than the image
`capture viewing field required for taking the image data;
`means for calculating a correction amount corresponding to change detection results; and
`means for changing an area used to take image data from the image capture means within the
`area for which image data is capable of being captured in accordance with the correction
`amount;
`wherein either the field angle or the image rotation can be adjusted with the optical axis of the
`image capture means.
`
`3. An automotive image capture apparatus of vehicle driving environment recognition method
`using image data taken from image capture means, configured so that the image capture
`viewing field angle of the image capture means can be changed by changing the area used to
`take image data in the area for which image data capture is possible by the image capture
means, employing image capture means having an image capture viewing field wider than the
`image capture viewing field required for taking the image data.
`
4. An automotive image capture apparatus, wherein in the invention of Claim 3, changes in the volume of information in the image data due to changes in the image capture viewing field angle of the image capture means are reduced by skipping pixels in the image data in accordance with changes in the image capture viewing field angle of the image capture means.
`
`[Detailed Description of the Invention]
`[0001]
`[Field of the Invention] The present invention pertains to an image capture apparatus for
`recognizing a vehicle driving environment, and relates in particular to an automotive image
`capture apparatus well-suited to use in support of safe driving in automobiles.
`
`
`
`
`
`[0002]
[Prior Art] Attention has focused in recent years on devices that support safe vehicle driving by
`using image data to recognize the driving environment for automobiles and other vehicles. In
`addition, one example of such a device is the environment recognition apparatus disclosed in
`Unexamined Japan Patent H3-203000.
`
`[0003] As it happens, this driving environment recognition apparatus is configured to take in
`image data from a video camera using a CCD (charge-coupled device) or other image capture
`device, for example. But in this case, when a change is made to the attachment position or the
`attachment angle of the video camera used to acquire image data (image capture means), or to
`the field angle (image capture range), the image data itself also changes.
`
`[0004] Accordingly, in this type of apparatus, position adjustment and field angle adjustment of
`the camera (video camera) represents an important element. These cameras need to be
`adjusted to a correct mounting state, and that state needs to be maintained, but in the prior art,
`manual methods are the main ones adopted for adjusting the camera platform or other camera
`attachment mechanism.
`
`[0005] Consequently, in the case of the prior art, complicated and bothersome tasks were
`required to adjust the camera attachment, requiring considerable work time and expense, and
`even after the attachment adjustment was complete and the camera fixed in place, the camera
`could come out of adjustment due to vibration or the like, and so there was not necessarily any
`guarantee of the precision of the adjustment.
`
`[0006] In order to shorten the task time, a method was disclosed of using a stereo camera and
`adjusting the optical axis of the camera by determining characteristic points of 3 locations of a
typical scene from images from each camera in order to perform adjustments, as in Unexamined
`Japan Patent H11-259632, for example. A method was disclosed in Unexamined Japan Patent
`H8-16999 of confirming attachment position displacement and using a single location mark for
`adjustment of the attachment position, aimed at single-lens cameras.
`
`[0007]
[Problem the Invention Is to Solve] The prior art described above cannot be said to have taken into account all of the necessary requirements for camera adjustment, namely the problems of improving performance and of maintaining high performance.
`
`[0008] For instance, in the prior art of extracting 2 characteristic points in a typical scene, since
`characteristic point extraction and the like involved numerous processing steps and enormous
`processing time, it was able to accommodate only very slight optical axis displacement. If the
`optical axis was off by very much, it was impossible to find corresponding points for the camera
`left and right. Furthermore, because the technology was specialized for stereo cameras, it was
`difficult to easily convert the technology for devices using single-lens cameras, which was
`problematic.
`
`
`
`
`
`[0009] Also, in addition to being able to accommodate only slight optical axis displacements,
`the prior art for extracting 1 characteristic point also had a problem in that it was unable to
`detect rotation displacement. The present invention was formulated taking note of the problem
`points discussed above, and an aim thereof is to provide an automotive image capture
`apparatus configured to be capable of automatically accommodating displacements of image
`capture optical axis and field angle and the like in relation to design values.
`
`[0010]
`[Means for Solving the Problem] The aim stated above can be achieved by an automotive image
`capture apparatus of vehicle driving environment recognition method using image data taken
`from image capture means, wherein image capture means having an image capture viewing
field wider than the image capture viewing field required for taking the image data is
`employed, 2 photographic subjects providing characteristic images are disposed on one part of
the vehicle within the viewing field of the image capture means, located on the left and right in a horizontal direction within the viewing field, and adjustments are made to either field angle or
`image rotation with an optical axis of the image capture means by detecting changes in either
`field angle or image rotation with the optical axis of the image capture means from images of
`the 2 photographic subjects.
`
`[0011] Similarly, the aim stated above can be achieved by an automotive image capture
`apparatus of vehicle driving environment recognition method using image data taken from
`image capture means, comprising means for detecting 2 photographic subjects mounted on a
`part of the vehicle, and changes in either field angle or image rotation with the optical axis of
the image capture means from images of the 2 photographic subjects, configured to provide
`characteristic images on the left and right in a horizontal direction within the viewing field of
`the image capture means, and employing image capture means having an image capture
`viewing field wider than the image capture viewing field required for taking the image data;
`means for calculating a correction amount corresponding to change detection results, and
`means for changing an area used to take image data from the image capture means within the
`area for which image data is capable of being captured in accordance with the correction
`amount, wherein either the field angle or the image rotation can be adjusted with the optical
`axis of the image capture means.
`
`[0012] Furthermore, the aim stated above can also be achieved by an automotive image
`capture apparatus of vehicle driving environment recognition method using image data taken
`from image capture means, configured so that the image capture viewing field angle of the
`image capture means can be changed by changing the area used to take image data in the area
`for which image data capture is possible by the image capture means, employing image capture
means having an image capture viewing field wider than the image capture viewing field
`required for taking the image data.
`
[0013] At this time, changes in the information volume of the image data due to changes in the
image capture viewing field angle of the image capture means can be reduced by skipping pixels in
`the image data in relation to changes in the image capture viewing field angle of the image
`capture means.
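
The patent gives no implementation for this pixel skipping, but as a rough illustrative sketch (Python with NumPy; the sizes, the helper name widen_with_skipping, and the frame stand-in are all hypothetical assumptions, not from the patent), widening the read-out window while keeping only every n-th pixel holds the output data volume roughly constant:

```python
import numpy as np

def widen_with_skipping(frame_a0, scale, out_h=480, out_w=640):
    """Take a window `scale` times wider/taller than the nominal used area,
    then keep every `scale`-th pixel so the output stays out_h x out_w.

    A wider viewing field angle is thus obtained from the same sensor while
    the volume of image data handed to later processing is unchanged.
    """
    h, w = out_h * scale, out_w * scale
    top = (frame_a0.shape[0] - h) // 2
    left = (frame_a0.shape[1] - w) // 2
    window = frame_a0[top:top + h, left:left + w]
    return window[::scale, ::scale]          # pixel skipping (decimation)

# Stand-in for one frame from the capture-capable image area A0.
frame = np.arange(1024 * 1280, dtype=np.uint32).reshape(1024, 1280)
wide = widen_with_skipping(frame, scale=2)   # 2x wider field, same data volume
assert wide.shape == (480, 640)
```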
`
`[0014] This description pertains to one example of a vehicle driving environment recognition
`apparatus, but the present invention can of course be applied to general environment
`recognition apparatuses using a camera or other image capture means.
`
`
[0015]
[Embodiments of the Invention] The following section will describe in detail embodiments of the present invention illustrated in the drawings. First, the basic concept of detecting and adjusting either the field angle or the image rotation in relation to the optical axis of the camera according to the present invention will be described. In the present invention, rather than acquiring image data using the entire optoelectric conversion surface of the camera, in a camera having an image capture lens 10 and an image capture device 11 as shown in Fig. 1, when the required image data is posited as an area indicated by arrow A (hereafter referred to as used image area A1), the focal distance of the lens 10 and the size of the optoelectric conversion surface of the image capture device 11 are selected, and the area from which the image data is acquired on the optoelectric conversion surface is moved, so that an image area wider than area A1 is obtained (hereafter referred to as capture-capable image area A0), as indicated by the broken line in the drawing.

[0016] Here, the capture-capable image area A0 is the entire area of the optoelectric conversion surface on which acquisition of image data from the camera used at this time is possible, and is determined by the photographic angle of the lens provided with the camera and the size of the optoelectric conversion surface. Used image area A1 is the area within the capture-capable image area A0 actually used for image data acquisition. Consequently, in order for the relationship "A0 > A1" to be true, a wide-angle lens is used as the lens 10, and a CCD sensor with a large optoelectric conversion surface is used as the image capture device 11.

[0017] At this time, the embodiment of the present invention takes advantage of the fact that the optoelectric conversion surfaces of CCDs and other widely used image capture devices are made up of a plurality of optoelectric conversion elements known as pixels. If the pixels to perform data acquisition are selected partially from among these pixels, the image data acquisition area – that is, the used image area A1 – can be changed at will without actually moving the optoelectric conversion surface of the image capture device 11. The optical axis and field angle of the camera can be altered according to how the pixels are selected, and image rotation can be obtained.
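
As an illustrative sketch only (not taken from the patent; the array sizes and the helper extract_used_area are assumptions), the pixel-selection idea of paragraphs [0015] through [0017] amounts to reading out a movable sub-window A1 from the full sensor array A0:

```python
import numpy as np

# Hypothetical sizes: A0 is the full optoelectric conversion surface,
# A1 is the smaller window actually used for image data.
A0_H, A0_W = 1024, 1280
A1_H, A1_W = 480, 640

def extract_used_area(frame_a0, origin):
    """Select the used image area A1 from the capture-capable area A0.

    Moving `origin` shifts the effective optical axis without physically
    moving the camera, which is the core idea of the pixel selection.
    """
    top, left = origin
    if not (0 <= top <= A0_H - A1_H and 0 <= left <= A0_W - A1_W):
        raise ValueError("window falls outside the capture-capable area A0")
    return frame_a0[top:top + A1_H, left:left + A1_W]

# Initially A1 sits at the exact center of A0 (cf. Fig. 5(a)).
center = ((A0_H - A1_H) // 2, (A0_W - A1_W) // 2)
frame = np.zeros((A0_H, A0_W), dtype=np.uint8)  # stand-in for one CCD frame
a1 = extract_used_area(frame, center)
assert a1.shape == (A1_H, A1_W)
```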
`
`
`
[0018] Fig. 2 shows an embodiment in which the automotive image capture apparatus according to the present invention is used as a driving environment recognition apparatus, as applied to ACC (auto-cruise control) of an automobile. In the drawing, M is an automobile, and the camera 1 is attached facing forward on the roof of the cabin of the automobile M.

[0019] AE represents the image capture visual field (area) of the camera 1. The image capture area based thereon is the aforementioned used image area A1, and at this time, marks 3 are disposed at 2 locations on the left and right of the hood MB of the automobile M.

[0020] Disposed inside the automobile M are an ACC control unit 4, a display device 5, a brake drive device 6, a steering drive device 7, and a throttle drive device 8.

[0021] In addition, the camera 1 is connected to the ACC control unit 4 by a camera signal line 40, and the ACC control unit 4 is connected respectively to the display device 5 by a display signal line 50, to the brake drive device 6 by a brake drive signal line 60, to the steering drive device 7 by a steering drive signal line 70, and to the throttle drive device 8 by a throttle drive signal line 80.
`
`
`
[0022] Fig. 3 is a detailed view of the camera 1. As shown in the drawing, the camera is broadly divided into an image input unit 1A and an image operation unit 1B. The image input unit 1A has the lens 10 and the image capture device 11, along with a CCD output signal conversion processing unit 12, and performs the actions of capturing images, mainly of the forward driving direction of the automobile M, and outputting the captured digital images.

[0023] Here, a 1.0-megapixel CCD sensor is used as the image capture device 11, but a CMOS sensor may also be used. As explained in reference to Fig. 1, the focal distance of the lens 10 and the size of the image capture device 11 are selected at this time so that an image capture visual field AE corresponding to the capture-capable image area A0, which is wider than the used image area A1, is obtained.
`
`
[0024] Next, the image operation unit 1B is configured having an image memory 13 for taking in and storing image data in pixel units, an image data correction unit 14 for converting the image data, an image processing unit 15 for performing the operation of recognizing the relative position of the automobile in relation to the external environment, a camera position change decision unit 16 for detecting positional displacement of the image input unit 1A from the initial set position and making decisions based thereon, a danger avoidance decision unit 17 for deciding to issue warnings and drive the brake, steering, and throttle, and a data I/O unit 18 for outputting decision results and inputting vehicle speed and other data.

[0025] In addition, the image processing unit 15 is configured having a forward vehicle recognition unit 151, a vehicle distance calculation unit 152, a same-vehicle lane recognition unit 153, and a same-vehicle lane positional relationship calculation unit 154. Next, the camera position change decision unit 16 is configured having an initial mark position storage unit 161, a mark extraction unit 162, a mark position detection unit 163, and a comparison unit 164. Also, the danger avoidance decision unit 17 is configured having a danger level assessment unit 171, a warning issuance decision unit 172, and an actuator drive decision unit 173.
`
`
`
[0026] Next, the operation of this embodiment will be described, starting with the operation of the driving environment recognition device in the ACC control of the automobile. The same-vehicle forward image data captured using the image capture device 11, which is configured so that the capture-capable image area A0 is obtained using a 1.0-megapixel CCD, for example, is converted into digital data by the CCD output signal conversion processing unit 12 and stored in the image memory 13.

[0027] In addition, the data stored in the image memory 13 is input into the image processing unit 15, and based on this data, forward vehicle recognition is first performed by the forward vehicle recognition unit 151. Next, vehicle distance calculation is performed by the vehicle distance calculation unit 152. Also, in parallel with this [operation], same-vehicle lane recognition and same-vehicle lane positional relationship calculation are performed by the same-vehicle lane recognition unit 153 and the same-vehicle lane positional relationship calculation unit 154. The results of the calculations performed in the image processing unit 15 are input into the danger avoidance decision unit 17.

[0028] In the danger avoidance decision unit 17, in addition to the calculation results discussed above, data on the vehicle speed of the automobile M and the steering angle and the like is input via the data I/O unit 18 from the ACC control unit 4, and the state of approach to forward vehicles and the state of same-vehicle lane positioning are assessed by the danger level assessment unit 171 based on the foregoing calculation results and data.
`
`
`
`
`
`
[0029] In addition, when the same-vehicle approaches abnormally close to a forward vehicle or is about to deviate from the same-vehicle lane, a determination of a danger condition is made by the warning issuance decision unit 172. Prescribed display data is sent to the display device 5 via the data I/O unit 18, resulting in display of content informing the driver of the danger condition.

[0030] Also, when the distance to the forward vehicle changes, an amount of throttle control or brake control needed to maintain proper vehicle distance with the forward vehicle is calculated by the actuator drive decision unit 173, and the control data is sent from the data I/O unit 18 to the ACC control unit 4.

[0031] Furthermore, when the same-vehicle drives outside the proximity of the center of the same-vehicle lane, an amount of steering control required to steer back to the proximity of the center of the same-vehicle lane is calculated by the actuator drive decision unit 173, and the control data is similarly sent via the data I/O unit 18 to the ACC control unit 4.

[0032] The foregoing describes the typical operation of the ACC in this embodiment. Next, the operation of the camera position change decision unit 16, which is a distinctive feature of this embodiment, will be described using Fig. 4 and Fig. 5. In Fig. 4, once the process of the flowchart is initiated, first, in Step S1, initial settings are entered for the position of the camera 1.

[0033] With these initial settings, the tasks are performed of adjusting and fixing the attachment state of the camera 1 so the image capture viewing field AE is obtained correctly. Fig. 5(a) shows the result of the initial settings; as shown in the drawing, the image capture viewing field AE of the camera 1 is set so the used image area A1 is formed in the exact center of the capture-capable image area A0.
`
`
`
[0034] Next, in Step S2, the position of a mark 3 is stored in the initial mark position storage unit 161 as an initial mark position X. This initial mark position X is stored by placing the camera 1 in its image capture operational state and incorporating and processing the image data. That is, a mark 3 is detected in the used image area A1 shown in Fig. 5(a), and the position X thereof may be stored in the initial mark position storage unit 161 as an initial mark 3A.

[0035] Here, the initial mark position X subsequently is used repeatedly without change until reset. Consequently, rewritable ROM or RAM with backup power is used so that the initial mark position storage unit 161 can retain the stored data even if the power supply to the apparatus as a whole is shut off.
`
`
`
[0036] The discussion here has focused on the task processes executed up through Step S2 when installing the camera 1 on the automobile M. The subsequent processes operate in parallel with the routine operation of the driving environment recognition device, with a prescribed microcomputer executing a prescribed program, for example. The processes from Step S3 onward begin when an accessory power supply for the automobile M is turned on, and thereafter are executed every time 1 frame of image data is written to the image memory 13.
`
`
`
[0037] First, in Step S3, the mark extraction unit 162 and the mark position detection unit 163 detect the position of the mark 3 based on image data incorporated into the image memory 13. The result is treated as a current mark 3B, and a process is executed treating this position as a current mark position Y. Next, in Step S4, the comparison unit 164 compares the current mark position Y with the data stored in the initial mark position storage unit 161 – in other words, the initial mark position X.

[0038] First of all, in the case in which the detected position of the mark 3 is identical to the initial mark position – in other words, when X = Y – a determination is made in Step S5 that no change has occurred in the position of the camera 1, and control proceeds to Step S6. Then, after the processes of Steps S6 through S8 are executed, control returns to Step S3, and the process transitions to the next frame. Here, the processes of Steps S6 through S8 are for the purpose of routine operation as the driving environment recognition device.
`
`
`
[0039] However, if the result of Step S5 is "No" (negative) – in other words, when X != Y – then a disparity exists between the initial mark 3A and the current mark 3B, as shown in Fig. 5(b) and (c). At this time, control proceeds to the processes in Step S9 and beyond. First, in Step S9, a determination is made whether or not the amount of disparity between the initial mark 3A and the current mark 3B is within the correctable range. This determination – in other words, the determination whether or not the amount of disparity is within the correctable range – is made in this embodiment by deciding whether or not the current mark 3B position Y is within the capture-capable image area A0.
`
`
`
`
`
[0040] In addition, when the decision result in this Step S9 is Yes (affirmative), first, in Step S10, a process is executed for calculating the amount of correction required to correct the disparity. At this time, in the case in which the disparity state is a translational disparity, as shown in Fig. 5(b), a translational correction amount calculation process is executed, while in the case of rotational disparity, as shown in Fig. 5(c), a calculation process for the amount of rotational correction is performed.
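
The patent does not spell out how the translational and rotational correction amounts are computed in Step S10; one plausible sketch (hypothetical names, and the convention that mark positions are pixel coordinates is an assumption) derives the translation from the shift of the midpoint of the 2 marks and the rotation from the change in angle of the line joining them:

```python
import math

def correction_from_marks(initial, current):
    """Estimate translational and rotational disparity of the camera from
    the initial mark positions X and the current mark positions Y.

    Returns (dx, dy, dtheta): the midpoint shift plus the rotation of the
    line joining the left and right marks (cf. Fig. 5(b) and (c)).
    """
    (x1, y1), (x2, y2) = initial
    (u1, v1), (u2, v2) = current
    dx = (u1 + u2) / 2 - (x1 + x2) / 2           # translational disparity
    dy = (v1 + v2) / 2 - (y1 + y2) / 2
    theta0 = math.atan2(y2 - y1, x2 - x1)        # line through the 2 marks
    theta1 = math.atan2(v2 - v1, u2 - u1)
    dtheta = theta1 - theta0                     # rotational disparity
    return dx, dy, dtheta

# Example: both marks shifted 3 px right and 1 px down, no rotation.
dx, dy, dth = correction_from_marks(((100, 200), (540, 200)),
                                    ((103, 201), (543, 201)))
assert (dx, dy) == (3.0, 1.0) and abs(dth) < 1e-9
```

Note that a single mark cannot distinguish rotation from translation; using 2 marks is what makes the rotational term recoverable, as paragraph [0046] points out.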
`
`
`
[0041] Next, in Step S11, the image data is corrected by the image data correction unit 14 based on the calculated amount of correction. Once the process of Step S11 is completed, control converges with the processes of Steps S6 through S8. At this time, the routine process as the driving environment recognition device is executed based on the corrected image data. However, when the decision result of the Step S9 is No – in other words, when the amount of disparity is too great and the required amount of correction falls outside the capture-capable image area A0 – control proceeds to Step S12.

[0042] As a result, in Step S12 a determination is made that correction is impossible, and then in Step S13, a signal is output from the data I/O unit 18 representing the fact that correction is impossible. At this time, a prescribed display signal is supplied from the ACC control unit 4 to the display device 5, and as a result, the driver is notified that disparity correction cannot be achieved.
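
Reading Steps S3 through S13 of Fig. 4 as a per-frame decision loop, the branching can be condensed as follows (an illustrative sketch only; the function names and the reduction of mark detection, correction, and recognition to simple stand-ins are assumptions):

```python
A0 = (1024, 1280)                      # capture-capable image area (rows, cols)

def within_a0(p):
    """Step S9: the disparity is correctable only if the current mark
    position Y still falls inside the capture-capable image area A0."""
    return 0 <= p[0] < A0[0] and 0 <= p[1] < A0[1]

def process_frame(current_y, initial_x):
    """Condensed control flow of Steps S3-S13 in Fig. 4.

    `current_y` stands for the mark position detected in this frame (S3);
    the heavy lifting (mark extraction, image correction, recognition) is
    reduced to returned labels so that only the branching is visible.
    """
    if current_y == initial_x:                 # S4/S5: X == Y, no change
        return "routine recognition (S6-S8)"
    if within_a0(current_y):                   # S9: within correctable range?
        dy = (current_y[0] - initial_x[0],     # S10: correction amount
              current_y[1] - initial_x[1])
        return f"correct image by {dy}, then S6-S8 (S10-S11)"
    return "correction impossible, notify driver (S12-S13)"

print(process_frame((200, 320), (200, 320)))   # no displacement
print(process_frame((205, 320), (200, 320)))   # small translational disparity
print(process_frame((2000, 320), (200, 320)))  # outside A0: cannot correct
```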
`
`
[0043] Consequently, according to this 1st embodiment, the position of the mark 3 disposed on the hood MB of the automobile M is detected in the serially acquired image data and compared with the initial position to detect position change of the camera 1. Accordingly, it is possible to always reliably detect position change of the camera 1 and to make proper corrections in the image data without being affected by changes in the driving environment.

[0044] In other words, in the case of this embodiment, because a mark disposed on the vehicle is detected instead of detecting a characteristic point within the general scenery to serve as a mark, consistent camera axis adjustment is possible irrespective of the driving environment.

[0045] Also, according to this 1st embodiment, optical axis correction and image rotation correction of the camera 1 are automatically and serially provided without moving the camera 1. As a result, it is possible to automatically respond to disparities from design values arising during camera manufacture or installation, or to changes in the optical axis or field angle occurring over time, and to always accurately comprehend the driving environment.
`
`
`
`
`
`
[0046] In addition, since no mechanism is required for rotating the camera 1, it is possible to reduce the weight, cost, and size of the apparatus. Furthermore, since 2 marks are used, it is possible to respond not only to translational disparity (translational horizontal or vertical disparity) but also to rotational direction disparity. Moreover, by using an image capture device larger than the usage range together with a wide-angle lens, a large disparity amount can be accommodated.

[0047] In this 1st embodiment, marks disposed in 2 locations on the right and left of the automobile hood are used, but of course it is possible to achieve the same effect using mascots mounted on the hood, stickers attached to the windshield glass, or any such items attached to the automobile in 2 locations for use as marks.
`
`
[0048] In this 1st embodiment, the process of Step S2 in Fig. 4 employs a method in which the position of the mark 3 is detected by the mark extraction unit 162 based on actually captured image data and then stored in the initial mark position storage unit 161 as the initial mark position X, as shown by the broken line in Fig. 3.

[0049] However, instead of this method, it is acceptable to use a standard model camera, calculate the initial mark position X in advance, write the data to ROM, and use this ROM as the initial mark position storage unit 161. This configuration then becomes a 2nd embodiment of the present invention.
`
`
[0050] Also, in the case of this 2nd embodiment, aside from the fact that the process of the Step S2 shown in Fig. 4 becomes unnecessary, the processes have the same configuration as in the case of the 1st embodiment described above, and the effect of the 2nd embodiment is as described below.

[0051] In the case of the 1st embodiment, it was only possible to deal with disparities occurring over time after installing the camera 1 on the vehicle. With the 2nd embodiment, however, by comparing the data in ROM with the mark locations on the vehicle, it is also possible to correct optical axis disparities that occurred during manufacturing or during installation.
`
`
[0052] Next, a 3rd embodiment of the present invention will be described. In the case of the 3rd embodiment, the overall configuration is the same as that of the 1st embodiment described in Fig. 1 through Fig. 3. The points of difference are in the processes shown in Fig. 6. Even in this Fig. 6, aside from the fact that Steps S14 and S15 are added, the processes are the same as in the case of the 1st embodiment shown in Fig. 4.

[0053] Consequently, even in this 3rd embodiment, in the case in which the mark 3 image position deviates from the initial position, the image data is corrected according to the amount of the deviation, the same as in the 1st embodiment. However, in the 1st embodiment, when the optical axis of the camera 1 is displaced, the Step S11 image correction process is added every time the processes are executed, which increases the processing time.
`
`
[0054] In contrast, in the 3rd embodiment shown in Fig. 6, when the mark position is displaced from the initial position, the displacement is determined to be correctable in Step S9, and when the image data correction amount is calculated in Step S10, that result is assessed in Step S14 as to whether or not it is only a case of translational displacement, as shown in Fig. 7(a). If the result is Yes, instead of image data correction, the used image area A1 and the initial mark 3A position X are changed, as shown in Fig. 7(b).

[0055] The change in the used image area A1 and the initial mark 3A position X may be executed by partially selecting the pixels from which data is actually acquired from among the pixels in the optoelectric conversion surface of the image capture device 11. As a result, when the next set of image data is acquired, the position Y of the current mark 3B will match the initial mark position X, making the result of Step S5 Yes, and so no image data correction is performed, and the processes from Steps S6 to S8 are executed with the acquired images unchanged.
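
A minimal sketch of this idea (hypothetical names and coordinate convention; positions are assumed to be given in capture-capable area A0 coordinates) shifts both the used image area A1 origin and the stored initial mark position X by the detected offset, so that the next frame's comparison in Step S5 succeeds without per-frame correction:

```python
def absorb_translation(origin_a1, initial_x, current_y):
    """3rd-embodiment idea (Fig. 7): for a purely translational displacement,
    shift the used image area A1 and the stored initial mark position X by
    the same offset instead of correcting every subsequent frame.

    After the shift, the next frame's mark position Y matches X again, so
    Step S5 answers Yes and the per-frame correction of Step S11 is skipped.
    """
    dy = (current_y[0] - initial_x[0], current_y[1] - initial_x[1])
    new_origin = (origin_a1[0] + dy[0], origin_a1[1] + dy[1])
    new_x = current_y                  # stored X tracks the shifted mark
    return new_origin, new_x

origin, x = absorb_translation(origin_a1=(272, 320),
                               initial_x=(200, 300), current_y=(204, 310))
assert origin == (276, 330) and x == (204, 310)
```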
`
`
`
[0056] On the other hand, when the displacement is not limited to translational displacement, the determination result in Step S14 becomes No, and so the image data correction process of Step S11 is executed, just as in the case of the 1st embodiment in Fig. 4. Consequently, in the case of th