Driving Environment Recognition for Active Safety

Toshihiko Suzuki**
Yoshiyuki Nakayama*
Yukinori Yamada*
Masato Kume**
`
Perceptual enhancement/advisory systems, which provide perceptual enhancements and warnings of hazards for drivers, are expected to contribute to active safety. This paper describes three types of peripheral recognition techniques which have been researched and developed by TOYOTA Motor Corp. since the 1980s: millimeter-wave radar and laser radar based on active sensing, and CCD image processing based on passive sensing. Both millimeter-wave radar and laser radar feature excellent weather resistance and provide range detection for relatively far objects. The CCD image processing system adopts the template-matching method to perform lane-line recognition and approaching-vehicle detection by stereo vision and optical-flow detection.
`
1. Introduction

It is well known that driving is performed by the driver in three steps: perception/recognition, decision making, and control/response. Along with the growing complexity of the driving environment due to increasing traffic in recent years, the driver's load for perception/recognition and decision making has been increasing. Japanese highway accident statistics show that collisions with roadside structures and rear-end collisions account for over 50% of total accidents. Most fatal accidents could therefore be avoided by preventing departure from the traveling lane and rear-end collisions.

One conceivable method for preventing these is to install electronic perceptual enhancement/advisory systems for active safety: that is, to give vehicles the intelligence to recognize the driving environment and inform drivers of the surrounding conditions and any possible danger. Such a perceptual enhancement/advisory system, however, provides the driver with only the information required for safe driving, and the driver must assume the final responsibility for driving operation. Sufficient discussion may be required to obtain social consensus on the system reliability and the resulting change in the driver's safety consciousness, while clarifying the scope of responsibility. To make perceptual enhancement/advisory systems more reliable, infrastructure such as roadside monitoring and vehicle/roadway communication systems should be implemented. Intelligent vehicle systems can be used more efficiently when they are well coordinated with the roadside infrastructure.

Fig. 1 shows four definite types of perceptual enhancement/advisory systems that may be put into practical use. The peripheral recognition technologies for such systems must suffer minimal degradation of detecting performance due to changes in weather and other environmental conditions, and must impose little cost burden on the user. Manufacturers have been studying various methods, but none have been established as technologies for recognition of vehicle peripheral conditions.

Toyota Motor Corp. has been studying and developing the millimeter-wave radar, laser radar and image processing technologies shown in Table 1 as peripheral recognition technologies for perceptual enhancement/advisory systems. While active-sensing systems detect the electromagnetic wave (beam) emitted from the built-in device and reflected from targets, passive-sensing systems detect reflected electromagnetic waves existing in the ordinary state or the electromagnetic waves radiated from targets. Because an active-sensing system irradiates the wave itself, the S/N of the received signal is high; compared with a passive-sensing system, it is less affected by changes in weather conditions such as rain and fog. Existing active-sensing systems, however, have problems such as insufficient resolution for accurately locating targets and difficulty of mounting on vehicles.

This paper describes the results of our studies on various peripheral recognition methods and the themes to be studied in the future.
`
*Research & Development Div. III
**Research & Advanced Development Planning Div.
`
Fig. 1 Perceptual Enhancement/Advisory System (four systems: a collision warning system, using the distance to the vehicle ahead and the relative speed; a run-off road warning system, using the lane marker (white line) position and road shape; a lane-change warning system, using the distance to a vehicle approaching from behind (or a motorcycle at a right/left turn) and the relative speed; and a crossing-path warning system, using the distance to an approaching vehicle, motorcycle or pedestrian and the relative speed)
`
2. Millimeter-Wave Radar(1)(2)

This system allows relatively long-range detection with little influence from environmental conditions such as rain, fog and snow.
`
The FM-CW type millimeter-wave radar we have been developing detects the relative speed by sensing the variation of the phase according to the moving speed of the target. For actual application to the collision warning system, however, electromagnetic interference with the radars on other vehicles is a big problem. To solve this problem, we have developed a system using 45° polarization, which prevents interference with opposing vehicles through the 90° difference in polarization.
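The paper does not give the radar's signal-processing details or parameters, so as a hedged illustration of how an FM-CW radar recovers range and relative speed, the sketch below uses the textbook triangular-sweep formulation: the beat frequencies on the rising and falling sweeps contain a range term and a Doppler term that can be separated by their sum and difference. The carrier (60 GHz), sweep deviation (75 MHz) and modulation frequency (500 Hz) are illustrative assumptions, not Toyota's design values.

```python
# Hedged sketch: range and relative speed from the up/down beat frequencies
# of a triangular-sweep FM-CW radar. All numeric parameters are assumptions.

C = 299_792_458.0  # speed of light (m/s)

def fmcw_range_speed(f_up, f_down, f_c=60e9, delta_f=75e6, f_m=500.0):
    """Return (range_m, closing_speed_mps) from the two beat frequencies.

    f_up, f_down : beat frequencies (Hz) on the rising and falling sweeps
    f_c          : carrier frequency (Hz)           -- assumed value
    delta_f      : peak-to-peak sweep deviation (Hz) -- assumed value
    f_m          : triangular modulation frequency (Hz) -- assumed value
    """
    f_range = (f_up + f_down) / 2.0     # range-induced beat component
    f_doppler = (f_down - f_up) / 2.0   # Doppler component (>0: approaching)
    rng = C * f_range / (4.0 * delta_f * f_m)
    speed = C * f_doppler / (2.0 * f_c)
    return rng, speed
```

Separating the two sweep directions is what lets one measurement yield both distance and relative speed, which is the property the collision warning system needs.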
`
Table 1 Recognition Methods for Perceptual Enhancement/Advisory System

  Active methods
    Millimeter-wave radar .... Collision warning system
    Laser radar .............. Collision warning system
                               Lane-change warning system
  Passive methods (image processing)
    Template matching ........ Run-off road warning system
    Stereo vision ............ Collision warning system
                               Lane-change warning system
    Optical flow detection ... Collision warning system
                               Lane-change warning system
                               Crossing-path warning system

TOYOTA Technical Review Vol. 43 No.1 Sep. 1993
`
`TOVOTA Technical Review Vol. 43 No.1 Sep. 1993
`
`
`
Fig. 2 Millimeter-wave Radar (receiving and transmitting antennas, MIC mixer, directional coupler, Gunn oscillator, circulator)
`
Fig. 2 shows the exterior view of the millimeter-wave radar in V shape for 45° polarization. Fig. 3 shows the method of experiment and an example of evaluation results of the effect of suppressing interference from other vehicles by adoption of this system. Fig. 4 shows the case where the radar on the vehicle running in parallel is directed to the same target vehicle. The experimental results indicate almost no electromagnetic interference with the radars on other vehicles.

Future themes to be studied are how to improve the signal processing method and antenna shape for better recognition performance, and how to facilitate installation on vehicles.
`
`
Fig. 3 Electromagnetic Interference (from Vehicle Running in Opposite Direction) Evaluation Result (measuring vehicle and oncoming vehicle at R = 10 m, θ = 5°; the distance signal is recorded while the radar on the opposing vehicle is switched ON/OFF)

Fig. 4 Electromagnetic Interference (from Vehicle Running in Same Direction) Evaluation Result (measuring vehicle, target vehicle and parallel-running vehicle at R = 10 m, r = 10 m, with θ = 20° and θ = 5°; the distance signal is recorded while the radar on the vehicle running in the same direction is switched ON/OFF)
`
3. Laser Radar(3)

Compared with millimeter-wave radar, this system allows a smaller and lighter active-sensing range monitoring system. It is not subject to frequency regulation, its signal processing is easy, and narrow-range detection is possible by concentrating the beam irradiation.
`
`
Fig. 5 shows the principle of detection. The pulse method is adopted to increase the measurable distance and to improve the reliability. This system involves possible lowering of the detecting precision due to variation in the receiving level, resulting from changes in the laser beam reflection factor of the target and environmental changes such as rainfall. We have, therefore, improved the AGC (automatic gain control) and STC (sensitivity time control) circuits to reduce the error to within ±2 m for a detecting distance of 100 m.
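The pulse time-of-flight relation of Fig. 5 (R = t × c / 2) can be sketched in a few lines; this is only the geometric conversion, not the AGC/STC signal conditioning described above.

```python
# Hedged sketch of the pulse time-of-flight calculation of Fig. 5:
# the measured round-trip time t of the laser pulse gives R = t * c / 2.

C = 299_792_458.0  # velocity of light (m/s)

def tof_distance(round_trip_s):
    """Distance (m) to the target from the round-trip pulse time (s)."""
    return round_trip_s * C / 2.0
```

For the 100 m detecting distance quoted in the text, the pulse returns after roughly 0.67 microseconds, which shows why sub-nanosecond timing resolution matters for the ±2 m error budget.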
`
Fig. 5 Principle of Detection by Laser Radar (a GaAs laser, wavelength 0.904 µm, emits the transmitting light pulse; a PIN photodiode receives the light pulse reflected from the vehicle; distance R = t × c / 2, where t is the round-trip time and c is the velocity of light)
`
4. Image Recognition(4)

For image recognition, feature extraction from the image data is necessary. The methods for feature extraction can roughly be classified into the boundary extraction method and the region segmentation method, as shown in Fig. 6. We have adopted the template-matching method, which is one of the simplest region segmentation methods. Fig. 7 shows the basic principle. This method is capable of simultaneous processing of feature extraction and initial recognition, which simplifies the whole processing, as it searches for the region having the highest correlation with the template image.

Table 2 shows three characteristics that can be detected by using the template-matching method. This is because stereo vision and optical flow can be computed in the same way as template matching. The template-matching method has the merit of enabling a single processor to perform multiple types of recognition jobs. We have made a prototype compact high-speed processor board using a special IC for calculating the correlation between images, which would otherwise require a high operation load. (Photo. 1)
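The correlation of Fig. 7 can be sketched directly in software. The special correlation IC on the prototype board performs this operation in hardware; the plain NumPy version below, using the sum-of-absolute-differences form of D(i, j), is only an illustration of the principle.

```python
# Hedged sketch of the template-matching correlation of Fig. 7:
# D(i, j) = sum over (m, n) of |t(m, n) - g(m+i, n+j)|, and the offset
# (i, j) minimizing D inside the search window g is taken as the match.
import numpy as np

def match_template(search, template):
    """Return the (row, col) offset of the best match of `template`
    inside the search window `search`, by exhaustive SAD correlation."""
    M, N = template.shape
    H, W = search.shape
    best, best_pos = None, None
    for i in range(H - M + 1):
        for j in range(W - N + 1):
            d = np.abs(search[i:i+M, j:j+N].astype(int)
                       - template.astype(int)).sum()
            if best is None or d < best:
                best, best_pos = d, (i, j)
    return best_pos
```

The exhaustive double loop is exactly the "high operation load" the text mentions, which is why a dedicated IC (or today, a vectorized/FFT formulation) is needed for real-time use.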
`
`In comparison with other recognition methods, the image
`processing system using a CCD camera as the input device
`has the following features:
`
`
Fig. 6 Feature Extraction Method for Image Recognition (boundary extraction method: extraction of the boundary line where the light intensity changes, by differentiating the image; region segmentation method, including the template-matching method: extraction of regions where the intensity of light or color is the same)
`
`
`
`
Fig. 7 Principle of Template-matching Method (the M × N template t(m, n) is shifted over the search window g(i, j); the correlation value is D(i, j) = Σ_{m=0}^{M−1} Σ_{n=0}^{N−1} | t(m, n) − g(m+i, n+j) |, and the offset (i, j) minimizing D is taken as the match)
`
Table 2 Application of Template-matching Method

  Template matching ... Object detection/recognition
  Stereo vision ....... Detection of 3-dimensional depth (distance)
  Optical flow ........ Detection of motion vector of moving object
`
`
`
`Photo. 1 Image Recognition Board
`
`
(1) Capability of lane marker and road sign recognition
(2) Concurrent high-speed pick-up of various image signals over a wide range
(3) Relatively low cost, because of construction using general parts for consumer products
Basically, drivers operate according to visual information. Thus the image recognition system has wide applicability beyond the perceptual enhancement/advisory systems shown in Fig. 1.
`
4.1 Lane-Line Recognition by Template-Matching

Lane recognition by the lane line provides the most basic information in peripheral recognition, both for run-off (lane departure) detection and for obstacle recognition. One of the important factors in lane-line recognition is robustness against changes in shade/brightness of the road surface and in lane-line shape at curved portions.
Fig. 8 shows the template image update method we have developed. Knowledge of the continuous variation of the position and shape of the lane line on the road surface is used as a basic premise.

Fig. 8 Template Image Updating Method (S.W.i: i-th search window; T.I.i: i-th template; the windows are stacked from the vehicle side toward the far side)
The area to be searched in front of the vehicle is divided into plural narrow search windows for speedy processing and minimization of the influence of changes in shape. In each search window, the template moves only in the horizontal direction to search for the position where the correlation is maximized. To cope with any change in the lane-line shape, the search windows are searched in ascending order of distance from the vehicle. The lane-line image obtained in the search window nearer to the vehicle is used for updating the template for use in the next window. To stabilize the lane-line position in the updated template image, a predefined image is used as the template for the search window nearest to the
vehicle.
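The window-by-window procedure above can be sketched as follows. This is a hedged illustration of the scheme in Fig. 8, not Toyota's implementation: the data layout (one 2-D array per search window, nearest first), the helper `sad`, and the horizontal-only scan are assumptions chosen to mirror the description in the text.

```python
# Hedged sketch of the Fig. 8 search-window scheme: windows are scanned in
# ascending order of distance from the vehicle, the template slides only
# horizontally within each window, and the best-matching patch found in one
# window becomes the template for the next (template image updating).
import numpy as np

def sad(a, b):
    """Sum of absolute differences between two equal-shape patches."""
    return np.abs(a.astype(int) - b.astype(int)).sum()

def track_lane_line(windows, initial_template):
    """windows: list of 2-D arrays, nearest-to-vehicle first.
    Returns the horizontal lane-line position found in each window."""
    template = initial_template
    positions = []
    for win in windows:
        h, w = template.shape
        # horizontal-only search inside this window
        x = min(range(win.shape[1] - w + 1),
                key=lambda j: sad(win[:h, j:j+w], template))
        positions.append(x)
        # update: the patch just found becomes the next window's template
        template = win[:h, x:x+w].copy()
    return positions
```

Carrying the freshly matched patch forward is what lets the tracker follow a curving lane line whose appearance changes gradually from window to window.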
`
The intensity transformation that copes with changes in shade and light intensity on the road surface is performed only within each search window. The basic algorithm uses a linear transformation so that the intensity distribution in the histogram of pixels in the search window matches that in the template image.

As a result of the intensity transformation, the robustness against changes in light intensity has been improved greatly. In a test evaluating the effect by manually changing the electronic shutter speed, lane-line detection was possible over the light intensity variation range corresponding to shutter speeds between 1/125 and 1/4000 sec (a factor of 32).
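The per-window linear intensity transformation can be sketched as below. The paper says only that a linear transformation matches the window's histogram to the template's; matching the mean and standard deviation is one simple way to realize such a linear map, and is an assumption here, not the paper's stated fitting method.

```python
# Hedged sketch of the per-window intensity transformation: linearly remap
# the search window so its intensity statistics agree with the template's,
# making the subsequent correlation insensitive to shade/exposure changes.
import numpy as np

def normalize_window(window, template):
    """Linear map: match the window's mean/std to the template's."""
    w = window.astype(float)
    scale = template.std() / max(w.std(), 1e-6)  # guard against flat windows
    out = (w - w.mean()) * scale + template.mean()
    return np.clip(out, 0, 255)
```

Because the map is fitted independently in each narrow window, a shadow covering only part of the road (as in Photo. 2) perturbs only the windows it falls on.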
Owing to the template image update control, stabilized lane-line detection was possible up to 50 m ahead at a corner with a radius of curvature of 80 m. Photo. 2 shows an example of the test for evaluating the robustness of recognition on a shady road. Accurate recognition in rainy weather, which involves strong reflection from the road surface, is the next task to be accomplished.

Photo. 2 Lane-line Recognition on Shady Road
`
`4.2 Vehicle Recognition by Stereo Vision
`
`3—dimensional depth image is effective information for the
`collision and lane change warning systems. One depth (dis-
`tance) measuring technology using image processing is the
`stereo vision method for obtaining the distance distribution
`inthe whole screen from the images picked up by two cameras
`according to the principle of triangulation.
`
`48
`
`
`
Fig. 9 Principle of Distance Measurement by Stereo Vision (right-hand image division into small blocks → one block selection → searching for the corresponding point in the left-hand image by template matching → distance calculation → execution for all blocks)
`
Compared with millimeter-wave and laser radars, stereo vision features excellent spatial resolution, enabling the position and size of each object to be obtained from the depth map and resulting in more accurate recognition of an obstacle.

Fig. 9 shows the principle of the depth map acquisition method. After the image on one side is divided into small blocks, the block having the highest correlation with each block is searched from the image on the other side by the template-matching method. The positional deviation between the corresponding blocks in the two images is called the disparity, and the distance is obtained from this value according to equation (1):

    Distance = (Focal length × Base length) / (Pixel pitch width × Disparity)   (1)
`
The depth map can be obtained by executing this processing for all blocks.

Obstacle recognition is performed by converting the depth map into a 3-dimensional arrangement of external objects. Then only the blocks existing in the monitored area are extracted. If adjacent blocks have almost equal distance information, they are collectively recognized as one obstacle.
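Equation (1) can be sketched directly. The focal length, camera base length and pixel pitch below are illustrative assumptions (the paper does not give the prototype's optics), chosen only to show the unit handling.

```python
# Hedged sketch of equation (1): distance by triangulation from the block
# disparity found by template matching. All optical parameters are assumed
# illustrative values, not the prototype's specifications.

def stereo_distance(disparity_px, focal_mm=7.5, baseline_mm=100.0,
                    pixel_pitch_mm=0.017):
    """Distance (m) = (focal length x base length)
                      / (pixel pitch width x disparity)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    distance_mm = (focal_mm * baseline_mm) / (pixel_pitch_mm * disparity_px)
    return distance_mm / 1000.0
```

Note the inverse proportionality: halving the disparity doubles the distance, so range resolution degrades quadratically with distance, one reason the text calls for a high-speed processor to search correspondences finely.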
Photos. 3 to 5 show the result of application to the lane-change warning system for ensuring safety when overtaking on a highway. Here, two cameras are installed on and beneath the door mirror. Photo. 3 shows the image picked up by the upper camera, and Photo. 4 the depth map obtained from it. Photo. 5 shows the result of recognizing only the vehicle in the adjacent lane.
The stereo vision method is effective for peripheral recognition, but development of a high-speed processor is a future task, as a tremendous work load is required for searching for the corresponding points.
`
4.3 Approaching Vehicle Recognition through Optical Flow

It is necessary to detect only the approaching vehicle in the lane-change warning system and the crossing-path warning system. In the collision warning system, on the other hand, detection of the moving direction of a vehicle is important for detecting a vehicle cutting in or running out of the present lane.

As an image processing technique for detecting the movement of objects in the image, detection of the optical flow is the most effective. Optical flow detection is possible by the template-matching method: the moving distance on the image can be calculated by finding, in the image after a lapse of a certain time period, the block having the highest correlation with each block.

The time interval between the images used for matching is important for accurate recognition of an approaching vehicle by optical flow detection. Fig. 10 shows the simulated size of the horizontal component of the flow vector of an object that is approaching at a speed of 20 km/h.
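The geometry behind the Fig. 10 simulation can be sketched with a pinhole-camera model using the figure's stated parameters (300 msec interval, f = 7.5 mm, 512 horizontal pixels on a 1/2-inch CCD). The sensor width of 6.4 mm is an assumption inferred from the 1/2-inch format; the paper does not state it.

```python
# Hedged sketch of the flow-calculation model of Fig. 10: the horizontal
# image displacement, over one calculation interval, of a point object
# approaching along the optical (Z) axis, under a pinhole-camera model.
# sensor_w_mm = 6.4 is an assumed width for a 1/2-inch CCD.

def horizontal_flow_px(x_m, z_m, speed_kmh=20.0, dt_s=0.3,
                       f_mm=7.5, n_px=512, sensor_w_mm=6.4):
    """Horizontal flow (pixels) of a point at lateral offset x, depth z."""
    dz = speed_kmh / 3.6 * dt_s        # distance travelled along Z in dt
    pitch_mm = sensor_w_mm / n_px      # pixel pitch
    u0 = f_mm * x_m / z_m              # image x before the interval (mm)
    u1 = f_mm * x_m / (z_m - dz)       # image x after the interval (mm)
    return (u1 - u0) / pitch_mm
```

Evaluating this for a fixed lateral offset shows the effect the text describes next: the flow is tens of pixels per interval near the vehicle but only a few pixels beyond about 30 m, so a single matching time interval cannot serve both ranges.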
Photo. 3 Original Image Picked up by Upper Camera in Stereo Vision
Photo. 4 Depth Map by Stereo Vision
Photo. 5 Vehicle Recognition by Stereo Vision

Fig. 10 Flow Calculation Model and Calculation Results (model: an object at lateral offset X approaching along the Z (optical) axis at 20 km/h; plot: horizontal component of the flow vector, in pixels, versus distance along the Z-axis from 0 to 35 m; flow calculation time interval: 300 msec; 1/2-inch CCD, f = 7.5 mm; number of horizontal pixels: 512)

As apparent from the result, a very wide searching area is necessary near the vehicle, and no flow vector having a sufficient size can be detected in a far range. Thus we have adopted a configuration for parallel operation of multiple correlation calculation boards at varied time intervals. A dominant cause
`
of noise generation in the flow detected by the matching method is mismatching in images of the road surface and of vehicle side faces, which have few distinctive features. To solve this problem, the stability over time of the size and direction of the flow detected in each block is used for evaluating the reliability of the flow. Compared with the method which evaluates similarity with the flow detected in adjacent blocks, this method is effective for detecting a vehicle at a far location or a vehicle traveling at a low relative speed. Photos. 6 to 8 show a vehicle, a motorcycle and a bicycle, respectively, approaching from the right at a trifurcated road with poor visibility.
The results of several studies concerning methods for obtaining the direction, moving speed and position of a moving object from the flow vector have been reported. In optical flow detection from images picked up from a traveling vehicle, flow of the background due to the motion of the vehicle itself occurs in addition to the flow of moving objects. The first problem to be solved is to extract only the flow of the object to be watched by compensating for the vector of the vehicle motion.
`
5. Conclusion

The peripheral recognition capabilities of the millimeter-wave radar, laser radar and image processing technology fall far short of the driver's perception and judgment capabilities. In limited situations, however, they have equal or superior detection capabilities, and the possibility of realizing perceptual enhancement systems for peripheral recognition in such situations is high.

Finally, the important directions of the intelligence to be provided to the vehicle by the perceptual enhancement/advisory system can be summarized as follows:
• New human interface system
• Driving environment recognition/judgment with danger prediction

With regard to the former, the navigation system and many other intelligent functions using communication means have been developed in recent years to provide the driver with many types of information. It will, therefore, be indispensable to study methods for providing only the necessary information to the driver, easily and as required.

With regard to the latter, danger prediction during driving naturally leads to the question of how the vehicle should be driven. Some examples are judgment of the vehicle-to-vehicle distance and vehicle speed for safe driving under the given traffic environment and weather conditions, and warning the driver of possible danger by selecting the important points to be watched while approaching an intersection or a pedestrian crossing. An important task in the future will be providing vehicles with higher levels of intelligence by integrating the various intelligent functions and systems being adopted and developed for vehicles.
`
`
`
`
`
`
Photo. 6 Optical Flow (Vehicle)
Photo. 7 Optical Flow (Motorcycle)
Photo. 8 Optical Flow (Bicycle)
`
We would like to express our deep gratitude to Prof. Hirochika Inoue of the Faculty of Engineering, the University of Tokyo, for his kind advice on our study of image recognition technologies.
`
■ References

(1) M. Kotaki, Y. Kakimoto, E. Akutu, Y. Fujita, H. Fukuhara et al.; "Development of millimeter wave automotive sensing technology", IEEE MTT-S Digest, pp. 709-712, 1992
(2) Report of the Millimeter-wave Sensing System Study and Research Committee (in Japanese), Radio Equipment Inspection and Certification Institute, Tokyo, March 1990 and March 1991
(3) T. Teramoto, K. Fujimura, Y. Fujita; "Study of Laser Radar", National Highway Traffic Safety Administration, The Twelfth Technical Conference on Experimental Safety Vehicles, Paper No. 89-4b-O-020, 1989
(4) Suzuki, Tachibana, Aoki, Inoue; "An Automated Highway Vehicle System Using Computer Vision", JSAE Autumn Convention Proceedings 924, Vol. 1, 1992
`
■ Authors

T. SUZUKI
Y. NAKAYAMA
Y. YAMADA
M. KUME