On-Road Vehicle Detection Using Optical Sensors: A Review

Zehang Sun (1), George Bebis (2) and Ronald Miller (3)
(1) eTreppid Technologies, LLC, Reno, NV
(2) Computer Vision Laboratory, University of Nevada, Reno, NV
(3) Vehicle Design R & A Department, Ford Motor Company, Dearborn, MI
(zehang,bebis)@cs.unr.edu, rmille47@ford.com

Abstract—As one of the most promising applications of computer vision, vision-based vehicle detection for driver assistance has received considerable attention over the last 15 years. There are at least three reasons for the blooming research in this field: first, the startling losses, both in human lives and in financial terms, caused by vehicle accidents; second, the availability of feasible technologies accumulated within the last 30 years of computer vision research; and third, the exponential growth of processor speed, which has paved the way for running computation-intensive video-processing algorithms in real time even on a low-end PC. This paper provides a critical survey of recent vision-based on-road vehicle detection systems that have appeared in the literature (i.e., systems in which the cameras are mounted on the vehicle rather than being static, as in traffic/driveway monitoring systems).

I. INTRODUCTION

Every minute, on average, at least one person dies in a vehicle crash. Auto accidents also injure at least 10 million people each year, two or three million of them seriously. Hospital bills, damaged property, and other costs are expected to add up to 1%-3% of the world's gross domestic product [1]. With the aim of reducing injury and accident severity, pre-crash sensing is becoming an area of active research among automotive manufacturers, suppliers, and universities. Vehicle accident statistics disclose that the main threats a driver faces come from other vehicles. Consequently, developing on-board automotive driver assistance systems that alert a driver about the driving environment and possible collisions with other vehicles has attracted a lot of attention. In these systems, robust and reliable vehicle detection is the first step: a successful vehicle detection algorithm paves the way for vehicle recognition, vehicle tracking, and collision avoidance. This paper provides a survey of on-road vehicle detection systems using optical sensors. More general overviews of intelligent driver assistance systems can be found in [2].

II. VISION-BASED INTELLIGENT VEHICLE RESEARCH WORLDWIDE

With the ultimate goal of building autonomous vehicles, many government institutions have launched various projects worldwide, involving a large number of research units working cooperatively. These efforts have produced several prototypes and solutions, based on rather different approaches [2]. In Europe, the PROMETHEUS program (Program for European Traffic with Highest Efficiency and Unprecedented Safety) pioneered this exploration. More than 13 vehicle manufacturers and several research institutes from 19 European countries were involved, and several prototype vehicles and systems (e.g., VaMoRs, VITA, VaMP, MOB-LAB, GOLD) were designed as a result of this project. Although the first research efforts on developing intelligent vehicles were seen in Japan in the 70s, significant research activities were triggered in Europe in the late 80s and early 90s. MITI, Nissan, and Fujitsu pioneered the research in this area by joining forces in the project "Personal Vehicle System" [3]. In 1996, the Advanced Cruise-Assist Highway System Research Association (AHSRA) was established among automobile industries and a large number of research centers [2]. In the US, many initiatives have been launched to address this problem. In 1995, the US government established the National Automated Highway System Consortium (NAHSC) [4], and it launched the Intelligent Vehicle Initiative (IVI) in 1997. Several promising prototype vehicles/systems have been investigated and demonstrated within the last 15 years [5]. In March 2004, worldwide attention was drawn to the "grand challenge" organized by DARPA [6]. In this competition, 15 fully autonomous vehicles attempted to independently navigate a 250-mile (400 km) desert course within a fixed time period, with no human intervention whatsoever: no driver, no remote control, just pure computer-processing and navigation horsepower, competing for a $1 million cash prize. Although even the best vehicle (the "Red Team" entry from Carnegie Mellon) covered only about 7 miles, the event was a very big step towards building autonomous vehicles.

III. ACTIVE VS. PASSIVE SENSORS

The most common approach to vehicle detection is to use active sensors such as lasers, lidar, or millimeter-wave radars. They are called active because they detect the distance of an object by measuring the travel time of a signal emitted by the sensor and reflected by the object. Their main advantage is that they can measure certain quantities (e.g., distance) directly, requiring limited computing resources. Prototype vehicles employing active sensors have shown promising results. However, active sensors have several drawbacks, such as low spatial resolution and slow scanning speed. Moreover, when a large number of vehicles are moving simultaneously in the same direction, interference among sensors of the same type poses a big problem.

Optical sensors, such as normal cameras, are usually referred to as passive sensors because they acquire data in a non-intrusive way. One advantage of passive sensors over active sensors is cost. With the introduction of inexpensive cameras, a vehicle can carry both forward- and rearward-facing cameras, enabling a nearly 360° field of view. Optical sensors can more effectively track cars entering a curve or moving from one side of the road to another. Also, visual information can be very important in a number of related applications, such as lane detection, traffic sign recognition, or object identification (e.g., pedestrians, obstacles), without requiring any modifications to the road infrastructure. On the other hand, vehicle detection based on optical sensors is very challenging due to the huge within-class variability. For example, vehicles may vary in shape, size, and color, and a vehicle's appearance depends on its pose and is affected by nearby objects. Illumination changes, complex outdoor environments, unpredictable interactions between traffic participants, and cluttered backgrounds are difficult to control.

To address some of the above issues, more powerful optical sensors are currently being investigated, such as cameras operating under low light (e.g., the Ford proprietary low-light camera [7]) or cameras operating in the non-visible spectrum (e.g., infrared (IR) cameras [8]). Building cameras with internal processing power (i.e., vision chips) has also attracted great attention. In conventional vision systems, data processing takes place on a host computer. Vision chips have many advantages over conventional vision systems, for instance higher speed, smaller size, and lower power consumption. The main idea is to integrate photo-detectors with processors on a single very large scale integration (VLSI) chip [9].

IV. THE TWO STEPS OF VEHICLE DETECTION

In driver assistance applications, vehicle detection algorithms need to process the acquired images in real time or close to real time. Searching the whole image to locate potential vehicle locations is not realistic. The majority of methods reported in the literature therefore follow two basic steps: (1) Hypothesis Generation (HG), where the locations of potential vehicles in an image are hypothesized, and (2) Hypothesis Verification (HV), where tests are performed to verify the presence of a vehicle in an image (see Fig. 1).

Fig. 1. Illustration of the two-step vehicle detection strategy.

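In code, the strategy amounts to a cheap proposal stage followed by a more expensive filtering stage. The following minimal sketch shows the control flow only; hypothesize() and verify() are hypothetical placeholders for any of the HG and HV methods surveyed in Sections V and VI, not part of any cited system.

    # Minimal sketch of the two-step strategy of Fig. 1. hypothesize() and
    # verify() are hypothetical stand-ins for concrete HG and HV methods.
    def detect_vehicles(frame, hypothesize, verify):
        candidates = hypothesize(frame)   # HG: fast, image-wide search for candidate regions
        return [roi for roi in candidates
                if verify(frame, roi)]    # HV: per-candidate verification test
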
V. HYPOTHESIS GENERATION

The objective of the HG step is to find candidate vehicle locations in an image quickly for further exploration. HG approaches can be classified into one of the following three categories: (1) knowledge-based, (2) stereo-vision based, and (3) motion-based.

A. Knowledge-based methods

Knowledge-based methods employ a priori knowledge to hypothesize vehicle locations in an image. We review below some approaches using information about symmetry, color, shadow, corners, horizontal/vertical edges, texture, and vehicle lights.

A.1 Symmetry

Vehicle images observed from the rear or the front are in general symmetrical in the horizontal and vertical directions. This observation was used as a cue for vehicle detection in the early 90s [10]. An important issue that arises when computing symmetry from intensity, however, is the presence of homogeneous areas, where symmetry estimation is sensitive to noise. In [11], information about edges was included in the symmetry estimation to filter out homogeneous areas. When searching for local symmetry, two issues must be considered carefully. First, we need a rough indication of where a vehicle is probably present. Second, even when using both intensity and edge maps, symmetry as a cue is still prone to false detections, such as symmetrical background objects or partly occluded vehicles.

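As a concrete illustration, a simple intensity-based symmetry score can be computed by mirroring a candidate patch about its vertical axis. The sketch below is a generic measure under that idea, not the exact estimator of [10] or [11]; the normalization against the patch's standard deviation is an assumption, and it also hints at why homogeneous areas (near-zero spread) are problematic.

    import numpy as np

    def horizontal_symmetry_score(patch):
        # Score left-right symmetry of a grayscale patch (higher = more symmetric).
        # Hypothetical measure for illustration; homogeneous patches score high
        # regardless of content, which is why [11] adds edge information.
        patch = patch.astype(np.float32)
        mirrored = patch[:, ::-1]                    # flip about the vertical axis
        asymmetry = np.abs(patch - mirrored).mean()  # mean absolute left-right difference
        spread = patch.std() + 1e-6                  # avoid division by zero on flat areas
        return 1.0 - asymmetry / (2.0 * spread)
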
A.2 Color

Although few existing systems use color information to its full extent for HG, it is a very useful cue for obstacle detection, lane/road following, etc. Several prototype systems have investigated the use of color information as a cue to follow lanes/roads or to segment vehicles from the background [12]. Similar methods could be used for HG, because non-road regions within a road area are potentially vehicles or obstacles. The scarcity of color information in HG is largely due to the difficulties that color-based object detection and recognition methods face in outdoor settings. The color of an object depends on illumination, the reflectance properties of the object, the viewing geometry, and sensor parameters. Consequently, the apparent color of an object can be quite different at different times of the day, under different weather conditions, and under different poses.

A.3 Shadow

Using shadow information as a sign pattern for vehicle detection was initially discussed in [13]. By investigating image intensity, it was found that the area underneath a vehicle is distinctly darker than any other area on an asphalt-paved road. A first attempt to exploit this observation can be found in [14], though there was no systematic way to choose appropriate threshold values. The intensity of the shadow depends on the illumination of the image, which in turn depends on weather conditions; the thresholds are therefore by no means fixed. In [15], a normal distribution was assumed for the intensity of the free driving space, and the mean and variance of the distribution were estimated using Maximum Likelihood (ML). It should be noted that this assumption about the distribution of road pixels might not always hold. For example, rainy weather or bad illumination will make road pixels dark, causing this method to fail.

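Under the normality assumption of [15], a shadow threshold follows directly from the estimated road statistics. The sketch below assumes a mask for the free driving space is already available; the multiplier k is an illustrative choice, not a value from the paper.

    import numpy as np

    def shadow_candidates(gray, road_mask, k=3.0):
        # Hypothesize under-vehicle shadow pixels in the spirit of [15]:
        # pixels significantly darker than the modeled road intensity.
        road = gray[road_mask].astype(np.float32)
        mu, sigma = road.mean(), road.std()  # ML estimates of the normal parameters
        threshold = mu - k * sigma           # "darker than the road" cut-off (k assumed)
        return gray < threshold              # boolean map of shadow candidates
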
A.4 Corners

Exploiting the fact that vehicles in general have a rectangular shape, Bertozzi et al. proposed a corner-based method to hypothesize vehicle locations [16]. Four templates, each corresponding to one of the four corners, were used to detect all the corners in an image, followed by a search method to find the matching corners. For example, a valid upper-left corner should have a matched lower-right corner.

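A corner-detection stage of this kind can be approximated with normalized template correlation, as in the sketch below. The four corner templates and the matching threshold are assumptions supplied by the caller, and the geometric pairing of matched corners is only indicated in a comment; this is not the exact procedure of [16].

    import cv2
    import numpy as np

    def corner_hypotheses(gray, corner_templates, thresh=0.8):
        # corner_templates: dict mapping 'ul', 'ur', 'll', 'lr' to small
        # grayscale corner patterns (illustrative assumption).
        hits = {}
        for name, tmpl in corner_templates.items():
            response = cv2.matchTemplate(gray, tmpl, cv2.TM_CCOEFF_NORMED)
            hits[name] = np.argwhere(response > thresh)  # (y, x) hits per corner type
        # A valid upper-left corner should have a matching lower-right corner
        # down and to the right of it; that pairing search is omitted here.
        return hits
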
A.5 Vertical/horizontal edges

Different views of a vehicle, especially rear views, contain many horizontal and vertical structures, such as the rear window, bumper, etc. Using constellations of vertical and horizontal edges has been shown to be a strong cue for hypothesizing vehicle presence. Matthews et al. [17] first applied a horizontal edge detector to the image; the response in each column was then summed to construct a profile, which was smoothed using a triangular filter. By finding the local maximum and minimum peaks, they claimed that they could find the horizontal position of a vehicle on the road. A shadow method, similar to that in [15], was used to find the bottom of the vehicle. Goerick et al. [18] proposed a method called Local Orientation Coding (LOC) to extract edge information. Handmann et al. [19] also used LOC, together with shadow information, for vehicle detection. Parodi et al. [20] proposed to extract the general structure of a traffic scene by first segmenting an image into four regions using edge grouping: the pavement, the sky, and two lateral regions. Groups of horizontal edges on the detected pavement were then considered for hypothesizing the presence of vehicles. Betke et al. [21] utilized edge information to detect distant cars. They proposed a coarse-to-fine search method looking for rectangular objects by analyzing vertical and horizontal profiles. In [22], vertical and horizontal edges were extracted separately using the Sobel operator. Then, a set of edge-based constraint filters, derived from prior knowledge about vehicles, was applied on those edges to segment vehicles from the background. Assuming that lanes have been successfully detected, Bucher et al. [23] hypothesized vehicle presence by scanning each lane starting from the bottom, trying to find the lowest strong horizontal edge.

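A profile-based search along these lines can be sketched in a few lines. A Bartlett (triangular) window stands in for the triangular filter mentioned in [17]; the Sobel kernel size and smoothing length are assumed values.

    import cv2
    import numpy as np

    def edge_profiles(gray):
        # Column and row edge-energy profiles, in the spirit of [17], [21].
        # Peaks in col_profile suggest a vehicle's lateral extent; peaks in
        # row_profile suggest its top/bottom.
        dx = np.abs(cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3))  # vertical edges
        dy = np.abs(cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3))  # horizontal edges
        kernel = np.bartlett(15)
        kernel /= kernel.sum()                                   # triangular smoothing filter
        col_profile = np.convolve(dx.sum(axis=0), kernel, 'same')
        row_profile = np.convolve(dy.sum(axis=1), kernel, 'same')
        return col_profile, row_profile
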
Utilizing horizontal and vertical edges as cues can be very effective. However, an important issue to be addressed, especially in the case of on-line vehicle detection, is how the choice of various parameters affects system robustness. These parameters include the threshold values for the edge detectors, the threshold values for picking the most important vertical and horizontal edges, and the threshold values for choosing the best maxima (i.e., peaks) in the profile images. Although a set of parameter values might work perfectly well under some conditions, it might fail in other environments. The problem is even more severe for an on-road vehicle detection system, since the dynamic range of the acquired images is much bigger than that of an indoor vision system. A multi-scale driven method was investigated in [7] to address this problem. Although it did not root out the parameter-setting problem, it did alleviate it to some extent.

A.6 Texture

The presence of vehicles in an image causes local intensity changes. Due to general similarities among all vehicles, the intensity changes follow a certain pattern, referred to as texture in [24]. This texture information can be used as a cue to narrow down the search area for vehicle detection. Entropy was first used as a measure for texture detection. Another texture-based segmentation method suggested in [24] used co-occurrence matrices. The co-occurrence matrix contains estimates of the probabilities of co-occurrences of pixel pairs under predefined geometrical and intensity constraints. Using texture for HG can introduce many false detections. For example, when driving outdoors, especially on downtown streets, the background is very likely to contain texture.

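The co-occurrence construction is easy to state concretely. The sketch below uses a single horizontal pixel offset and a coarse quantization, both illustrative assumptions, and returns the matrix entropy, which tends to be high in textured regions.

    import numpy as np

    def cooccurrence_entropy(gray, levels=32):
        # Entropy of a gray-level co-occurrence matrix as a texture cue,
        # after the idea in [24]; offset (0, 1) and 'levels' are assumed.
        q = (gray.astype(np.int64) * levels) // 256         # quantize to 'levels' bins
        left, right = q[:, :-1].ravel(), q[:, 1:].ravel()   # horizontal neighbor pairs
        glcm = np.zeros((levels, levels), np.float64)
        np.add.at(glcm, (left, right), 1.0)                 # accumulate pair counts
        p = glcm / glcm.sum()
        p = p[p > 0]
        return float(-(p * np.log2(p)).sum())               # high entropy = textured
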
A.7 Vehicle lights

Most of the cues discussed above are not helpful for night-time vehicle detection: it would be difficult or impossible to detect shadows, horizontal/vertical edges, or corners in images acquired at night. Vehicle lights represent a salient visual feature at night. Cucchiara et al. [25] used morphological analysis for detecting vehicle light pairs in a narrow inspection area.

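A basic blob-based variant of this idea can be sketched as follows. The brightness threshold, structuring element, and minimum blob area are illustrative assumptions, and the row-alignment test that pairs left and right lights is only indicated in a comment; this is inspired by, not taken from, [25].

    import cv2

    def light_blobs(gray, min_area=20):
        # Bright-blob candidates for headlights/tail lights at night.
        _, bright = cv2.threshold(gray, 220, 255, cv2.THRESH_BINARY)
        kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
        bright = cv2.morphologyEx(bright, cv2.MORPH_OPEN, kernel)  # remove speckle
        n, _, stats, centroids = cv2.connectedComponentsWithStats(bright)
        # Light pairs lie at roughly the same image row with similar areas;
        # the pairing test itself is omitted here.
        return [tuple(centroids[i]) for i in range(1, n)
                if stats[i, cv2.CC_STAT_AREA] >= min_area]
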
B. Stereo-vision based methods

There are two types of methods using stereo information for vehicle detection. One uses a disparity map, while the other uses an anti-perspective transformation, i.e., Inverse Perspective Mapping (IPM).

B.1 Disparity map

The difference in position between corresponding pixels in the left and right images is called disparity, and the disparities of all the image points form the so-called disparity map. If the parameters of the stereo rig are known, the disparity map can be converted into a 3-D map of the viewed scene. Computing the disparity map, however, is very time consuming. Hancock [26] proposed a method employing the power of disparity while avoiding some of the heavy computations. In [27], Franke et al. argued that, for solving the correspondence problem, area-based approaches were too computationally expensive, while disparity maps from feature-based methods were not dense enough. A local feature extractor, "structure classification", was proposed to make the correspondence problem easier to solve.

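For reference, a dense area-based disparity map of the kind discussed above can be obtained with a standard block matcher. This is a generic sketch, not the method of [26] or [27], and the matcher parameters are illustrative assumptions, not values tuned for driving scenes.

    import cv2
    import numpy as np

    def disparity_map(left_gray, right_gray):
        # Dense block-matching disparity from an 8-bit rectified stereo pair.
        # With known rig parameters, depth follows as Z = f * B / d
        # (focal length f, baseline B, disparity d).
        matcher = cv2.StereoBM_create(numDisparities=64, blockSize=15)
        disp = matcher.compute(left_gray, right_gray)  # fixed-point, scaled by 16
        return disp.astype(np.float32) / 16.0
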
B.2 Inverse perspective mapping

The term "Inverse Perspective Mapping" does not correspond to an actual inversion of perspective mapping [28], which is mathematically impossible. Rather, it denotes an inversion under the additional constraint that inversely mapped points should lie on the horizontal plane. Assuming a flat road, Zhao et al. [29] used stereo vision and IPM to predict the image seen by the right camera, given the left image. Specifically, they used the IPM to transform every point in the left image to world coordinates and re-projected those points back onto the right image, which was then compared against the actual right image. In this way, they were able to find the contours of objects above the ground plane. Instead of warping one image onto the other, Bertozzi et al. [30] computed the inverse perspective map of both the right and left images. Although only two cameras are required to find the range and elevated pixels in an image, there are several advantages to using more than two cameras [31]. Williamson et al. investigated a trinocular system [32]. Due to the additional computational costs, however, binocular systems are preferred in driver assistance systems.

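Under the flat-road constraint, the IPM itself reduces to a plane-to-plane homography. The sketch below assumes the caller supplies four image points whose ground-plane coordinates are known from the rig's calibration; it is a generic formulation, not the specific mapping of [29] or [30]. Elevated objects violate the flat-road assumption, so their warped pixels disagree between the two views, which is what these methods exploit.

    import cv2
    import numpy as np

    def inverse_perspective_map(image, src_pts, dst_pts, out_size):
        # src_pts: four pixel coordinates of ground-plane landmarks.
        # dst_pts: their bird's-eye (road plane) coordinates.
        # out_size: (width, height) of the warped output.
        H = cv2.getPerspectiveTransform(np.float32(src_pts),
                                        np.float32(dst_pts))
        return cv2.warpPerspective(image, H, out_size)
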
In general, stereo-vision based methods are accurate and robust only if the stereo parameters have been estimated accurately, which is hard to guarantee in the on-road scenario. Since the stereo rig is mounted on a moving vehicle, vibrations from car motion can shift the cameras, while the height of the cameras keeps changing due to the suspension. Suwa et al. [33] proposed a method to adjust the stereo parameters to compensate for the error caused by camera shifting. Broggi et al. [34] analyzed the parameter drifts and argued that vibrations affect mostly the extrinsic camera parameters and not the intrinsic ones. A fast self-calibration method was investigated in that study.

C. Motion-based methods

All the cues discussed so far use spatial features to distinguish between vehicles and the background. Another important cue that can be used is the relative motion obtained via the calculation of optical flow, which can provide strong information for HG. Vehicles approaching from the opposite direction produce a diverging flow, which can be quantitatively distinguished from the flow caused by the car's ego-motion [35]. On the other hand, departing or overtaking vehicles produce a converging flow. Giachetti et al. [35] developed first-order and second-order differential methods and applied them to a typical image sequence taken from a vehicle moving along a flat and straight road. The results were discouraging. Three factors causing the poor performance were summarized in [35]: (a) displacement between consecutive frames, (b) lack of texture, and (c) shocks and vibrations. Given the difficulties of the moving-camera scenario, obtaining a reliable dense optical flow is not an easy task. Giachetti et al. [35] managed to re-map the corresponding points between two consecutive frames by minimizing a distance measure. Kruger et al. [36] estimated the optical flow from spatio-temporal derivatives of the grey-value image using a local approach, and further clustered the estimated optical flow to eliminate outliers. In contrast to dense optical flow, "sparse optical flow" utilizes image features, such as corners [37], local minima and maxima [38], or "color blobs" [39]. Although they only produce a sparse flow, feature-based methods can provide sufficient information for HG. In contrast to pixel-based optical flow estimation methods, where pixels are processed independently, feature-based methods utilize higher-level information. Consequently, they are less sensitive to noise.

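A sparse, feature-based flow of this kind is straightforward to set up. The sketch below tracks corner features with pyramidal Lucas-Kanade, a standard technique standing in for the specific estimators of [36], [37]; the feature count, quality level, and window size are illustrative assumptions.

    import cv2
    import numpy as np

    def sparse_flow(prev_gray, curr_gray):
        # Track Shi-Tomasi corners from one frame to the next; the returned
        # point pairs give per-feature motion vectors for HG.
        pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=300,
                                      qualityLevel=0.01, minDistance=7)
        if pts is None:
            return np.empty((0, 2)), np.empty((0, 2))
        nxt, status, _ = cv2.calcOpticalFlowPyrLK(
            prev_gray, curr_gray, pts, None, winSize=(21, 21), maxLevel=3)
        good = status.ravel() == 1                 # keep successfully tracked features
        return pts[good].reshape(-1, 2), nxt[good].reshape(-1, 2)
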
In general, motion-based methods detect objects only from relative motion information. Obviously, this is a major limitation; for example, such methods cannot be used to detect static obstacles, which can represent a big threat.

VI. HYPOTHESIS VERIFICATION

The input to the HV step is the set of hypothesized locations from the HG step. During HV, tests are performed to verify the correctness of a hypothesis. HV approaches can be classified into two main categories: (1) template-based methods and (2) appearance-based methods.

A. Template-based methods

Template-based methods use predefined patterns of the vehicle class and perform correlation between the image and the template. Some of the templates in the literature are very "loose", while others are very strict. Parodi et al. [20] proposed a hypothesis verification scheme based on license plate and rear window detection, using constraints based on vehicle geometry. Handmann et al. [19] proposed a template based on the observation that the rear/frontal view of a vehicle has a "U" shape. During verification, they considered a vehicle to be present in the image if they could find the "U" shape (i.e., one horizontal edge, two vertical edges, and two corners connecting the horizontal and vertical edges). Ito et al. [40] used a very loose template to recognize vehicles. They hypothesized vehicle locations using active sensors and verified those locations by checking whether pronounced vertical/horizontal edges and symmetry existed. Regensburger et al. [41] utilized a template similar to [40]. They argued that the visual appearance of an object depends on its distance from the camera; consequently, they used two slightly different generic object (vehicle) models, one for nearby objects and the other for distant objects. A rather loose template was also used in [42], where the hypotheses were generated on the basis of road position and perspective constraints. The template contained a priori knowledge about vehicles: "a vehicle is generally symmetric, characterized by a rectangular bounding box which satisfies specific aspect ratio constraints".

B. Appearance-based methods

Appearance-based methods learn the characteristics of the vehicle class from a set of training images which capture the variability in vehicle appearance. Usually, the variability of the non-vehicle class is also modelled to improve performance. First, each training image is represented by a set of local or global features. Then, the decision boundary between the vehicle and non-vehicle classes is learned, either by training a classifier (e.g., a Neural Network (NN)) or by modelling the probability distribution of the features in each class (e.g., using the Bayes rule assuming Gaussian distributions).

In [17], Principal Component Analysis (PCA) was used for feature extraction and Neural Networks (NNs) for classification. Each vehicle candidate was scaled to 20x20 pixels and divided into 25 4x4 sub-windows. PCA was applied to every sub-window, and the output of this "local PCA" was provided to a NN to verify the hypothesis.

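The "local PCA" feature pipeline can be made concrete as follows. This is a sketch of the arrangement just described, not the authors' code: the number of retained components per window is an assumption, and in a real system the PCA bases would be fit on training data only and then applied to new candidates.

    import numpy as np
    from sklearn.decomposition import PCA

    def local_pca_features(candidates, n_components=4):
        # candidates: sequence of 20x20 grayscale patches. Split each patch
        # into 25 non-overlapping 4x4 windows and apply PCA per window
        # position; the concatenated projections would feed a NN as in [17].
        X = np.stack(candidates).reshape(-1, 20, 20).astype(np.float32)
        # (N, 5, 4, 5, 4) -> (5, 5, N, 4, 4): gather each window position
        blocks = X.reshape(-1, 5, 4, 5, 4).transpose(1, 3, 0, 2, 4)
        feats = []
        for i in range(5):
            for j in range(5):
                win = blocks[i, j].reshape(len(X), 16)  # one 4x4 window, flattened
                feats.append(PCA(n_components).fit_transform(win))
        return np.hstack(feats)  # shape (N, 25 * n_components)
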
In contrast to [17], Wu et al. [43] used standard (global) PCA for feature extraction, together with a nearest-neighbor classifier. Goerick et al. [18] used Local Orientation Coding (LOC) to extract edge information; the histogram of LOC within the area of interest was then provided to a NN for classification. Kalinke et al. [24] designed two models for vehicle detection, one for sedans and the other for trucks. Hausdorff distances between the hypothesized vehicles and the models, in terms of LOC, were the input to a NN, whose outputs were sedan, truck, or background. Similar to [18], Handmann et al. [19] utilized the histogram of LOC, together with a NN, for vehicle detection; moreover, the Hausdorff distance was used for the classification of trucks and cars, as in [24]. A statistical model for vehicle detection was investigated by Schneiderman et al. [44]. A view-based approach using multiple detectors was employed to cope with viewpoint variations. The statistics of both object and "non-object" appearance were represented using the product of two histograms, with each histogram representing the joint statistics of a subset of Haar wavelet features and their position on the object. A different statistical model was investigated by Weber et al. [45]. They represented each vehicle image as a constellation of local features and used the Expectation-Maximization (EM) algorithm to learn the parameters of the probability distribution of the constellations. An over-complete dictionary of Haar wavelet features was utilized in [46] for vehicle detection. They argued that the over-complete representation provided a richer model with higher spatial resolution and was more suitable for capturing complex patterns. Sun et al. [47][7] went one step further, arguing that the actual values of the wavelet coefficients are not very important for vehicle detection. In fact, coefficient magnitudes indicate local oriented intensity differences, information that could be very different even for the same vehicle under different lighting conditions. Following this observation, they proposed using quantized coefficients to improve detection performance. Feature extraction using Gabor filters was investigated in [48]. Gabor filters provide a mechanism for obtaining orientation- and scale-tunable edge and line detectors. Vehicles contain strong edges and lines at different orientations and scales; thus, this type of feature is very effective for vehicle detection.

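A small Gabor filter bank of the kind motivated above can be built directly with standard tools. In this sketch the kernel size, sigma, wavelengths, and the mean-magnitude pooling are illustrative assumptions rather than the configuration used in [48].

    import cv2
    import numpy as np

    def gabor_features(patch, wavelengths=(4, 8), orientations=4):
        # Filter the patch at a few scales (wavelengths) and orientations
        # and keep the mean response magnitude of each filter.
        feats = []
        for lam in wavelengths:                    # wavelength controls the scale
            for k in range(orientations):
                theta = k * np.pi / orientations   # orientation of the edge/line detector
                kern = cv2.getGaborKernel((21, 21), sigma=4.0, theta=theta,
                                          lambd=lam, gamma=0.5)
                resp = cv2.filter2D(patch.astype(np.float32), cv2.CV_32F, kern)
                feats.append(np.abs(resp).mean())
        return np.array(feats)                     # one descriptor per patch
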
VII. CHALLENGES AHEAD

Although a great deal of effort has gone into vehicle detection research, and many algorithms/systems and prototype vehicles have been reported and demonstrated, a highly robust and reliable system is yet to be built. In general, surrounding vehicles can be classified into three categories according to their position relative to the host vehicle: (a) overtaking vehicles, (b) mid-range/distant vehicles, and (c) close-by vehicles (see Fig. 2).

Fig. 2. Detecting vehicles in different regions requires different methods. A1: close-by regions; A2: overtaking regions; A3: mid-range/distant regions.

In the close-by regions, we may only see part of the vehicle. In this case, there is no free space in the captured images, which makes the shadow/edge based methods inappropriate. In the overtaking regions, only the side view of the vehicle is visible, while its appearance changes fast. Methods detecting vehicles in these regions may do better to employ motion information or dramatic intensity changes [21]. Detecting vehicles in the mid-range/distant regions is relatively easier, since the full view of a vehicle is available and its appearance is more stable.

Real-time on-road vehicle detection is so challenging that none of the HG methods discussed in Section V can solve it alone; different cues/methods are required to handle different cases. We discuss below several research directions for moving this area forward.

A. Vehicle classification

The majority of reported works aim only at detecting/tracking vehicles, without differentiating among vehicle types. Given the many different participants on the road (sedans, trailer trucks, motorbikes, etc.), knowing exactly what kind of participants are around the host vehicle would benefit driver assistance systems.

B. Feature selection

Building accurate and robust vehicle detection algorithms, especially in the framework of supervised learning, requires employing a good set of features. In most cases, a large number of features are extracted to compensate for the fact that the relevant features are unknown a priori. It would be ideal if we could use only those features which have great separability power, while ignoring or paying less attention to the rest. For example, to allow a vehicle detector to generalize nicely, it would be useful to exclude features encoding fine details which might be present in particular vehicles only. Finding out what features to use for classification/recognition is referred to as feature selection. Sun et al. [49][50] have investigated various feature selection schemes in the context of vehicle detection, showing significant performance improvements. However, selecting an optimum feature subset (i.e., one leading to high generalization performance) is still an open problem.

C. Sensor fusion

Information from a single sensor is not enough for a driver assistance system to manage high-level driving tasks in dense traffic environments. Substantial research efforts are required to develop systems that effectively employ information from multiple sensors, both active and passive.

D. Failure detection

An on-board vision sensor will face adverse operating conditions, and it may reach a point where it cannot provide data of sufficient quality to meet minimum system performance requirements. In such cases, the driver assistance system may not be able to fulfil its responsibilities correctly (e.g., it may issue severe false alerts). A reliable driver assistance system should be able to evaluate its own performance and disable its operation when it can no longer provide reliable traffic information.

E. Hardware implementation

Vehicle detection systems should be able to process information very fast, to allow enough time for the driver to react in case of an emergency. Among many options, real-time performance based on hardware implementations stands out for its simplicity and efficiency.

VIII. CONCLUSIONS

We presented a critical survey of vision-based on-road vehicle detection systems, one of the most important components of a driver assistance system. Judging from the research activities underway worldwide, it is certain that this area will continue to be among the hottest research areas in the future. Major motor companies, government agencies, and universities are all expected to work together to make significant progress in this area over the next few years.

Acknowledgements

This work was supported by Ford Motor Company under grant No. 2001332R, the University of Nevada, Reno under an Applied Research Initiative (ARI) grant, and in part by NSF under CRCD grant No. 0088086.

REFERENCES

[1] W. Jones, "Building safer cars," IEEE Spectrum, vol. 39, no. 1, pp. 82-85, 2002.
[2] M. Bertozzi, A. Broggi, M. Cellario, A. Fascioli, P. Lombardi, and M. Porta, "Artificial vision in road vehicles," Proceedings of the IEEE, vol. 90, no. 7, pp. 1258-1271, 2002.
[3] S. Tsugawa, "Vision-based vehicles in Japan: Machine vision systems and driving control systems," IEEE Transactions on Industrial Electronics, vol. 41, no. 4, pp. 398-405, 1994.
[4] Vehicle-highway automation activities in the United States, U.S. Dept. of Transportation, 1997.
[5] C. Thorpe, J.D. Carlson, D. Duggins, J. Gowdy, R. MacLachlan, C. Mertz, A. Suppe, and C. Wan, "Safe robot driving in cluttered environments," 11th International Symposium of Robotics Research, 2003.
[6] M. Walton, "Robots fail to complete grand challenge," CNN news, March 14, 2004.
[7] Z. Sun, R. Miller, G. Bebis, and D. DiMeo, "A real-time precrash vehicle detection system," IEEE International Workshop on Applications of Computer Vision, Dec. 2002.
[8] L. Andreone, P. Antonello, M. Bertozzi, A. Broggi, A. Fascioli, and D. Ranzato, "Vehicle detection and localization in infra-red images," IEEE 5th International Conference on Intelligent Transportation Systems, 2002.
[9] K. Yamada, "A compact integrated vision motion sensor for ITS applications," IEEE Transactions on Intelligent Transportation Systems, vol. 4, no. 1, pp. 35-41, 2003.
[10] A. Kuehnle, "Symmetry-based recognition for vehicle rears," Pattern Recognition Letters, vol. 12, pp. 249-258, 1991.
[11] M. Bertozzi, A. Broggi, A. Fascioli, and S. Nichele, "Stereo vision-based vehicle detection," IEEE Intelligent Vehicles Symposium, pp. 39-44, 2000.
[12] D. Guo, T. Fraichard, M. Xie, and C. Laugier, "Color modeling by spherical influence field in sensing driving environment," IEEE Intelligent Vehicles Symposium, pp. 249-254, 2000.
[13] H. Mori and N. Charkai, "Shadow and rhythm as sign patterns of obstacle detection," International Symposium on Industrial Electronics, pp. 271-