U.S. PATENT NO. 4,970,653, ISSUED TO KENUE ON NOV. 13, 1990 ("KENUE")

TRW Automotive U.S. LLC: EXHIBIT 1305
PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NUMBER 8,599,001
United States Patent [19]

Kenue

[11] Patent Number: 4,970,653
[45] Date of Patent: Nov. 13, 1990
[54] VISION METHOD OF DETECTING LANE BOUNDARIES AND OBSTACLES

[75] Inventor: Surender K. Kenue, Southfield, Mich.

[73] Assignee: General Motors Corporation, Detroit, Mich.

[21] Appl. No.: 334,033

[22] Filed: Apr. 5, 1989

[51] Int. Cl.5 .......................... G06F 15/50
[52] U.S. Cl. .......................... 364/461; 364/424.02; 358/103
[58] Field of Search ................ 364/424.02, 461; 358/103; 180/167, 168, 169; 318/587, 103

[56] References Cited

U.S. PATENT DOCUMENTS

4,566,032  1/1986  Hirooka et al. ............ 180/168
4,573,547  3/1986  Yoshimura et al. ......... 364/424.02
4,819,169  4/1989  Saitoh et al. ............. 364/424.02
4,847,774  7/1989  Tomikawa et al. .......... 364/424.02
4,862,047  8/1989  Suzuki et al. ............. 318/587
4,868,752  9/1989  Fujii et al. .............. 364/424.02

OTHER PUBLICATIONS

Dickmanns, E. D. and Zapp, A., "A Curvature-Based Scheme for Improving Road Vehicle Guidance by Computer Vision," Proc. SPIE on Mobile Robots, Vol. 727, Oct. 1986, pp. 161-168.

Primary Examiner—Gary Chin
Attorney, Agent, or Firm—Howard N. Conkey

[57] ABSTRACT

An image processing method operates on an image from a CCD camera viewing a roadway scene in front of a vehicle to detect lane markers and determine the relationship of the vehicle to the lane. Obstacles in the lane near the vehicle are detected and a warning is given to the driver. The method uses template matching techniques or a Hough algorithm to detect the lane markers or road edges.

10 Claims, 7 Drawing Sheets
[Front page illustration: flow chart of the template matching embodiment of FIGS. 4a and 4b.]
[Sheet 1 of 7: FIG. 1, block diagram of the computer vision system, including the utilization circuit.]
[Sheet 2 of 7: FIG. 4a, first portion of the template matching flow chart: digitization, contrast expansion, Sobel edge image, obstacle check and warning, deletion of distant vertical and horizontal edges, search area definition, and marker correlation.]
[Sheet 3 of 7: FIG. 4b, remainder of the template matching flow chart: centroid calculation, missing marker estimation, least squares boundary fit, consistency checks, lane change indication, boundary prediction, look-ahead path estimation, and boundary overlay.]
[Sheet 4 of 7: FIG. 5, flow chart of the dynamic search area definition: a per-row increment of one fifth of the lane width, enlarged by 20 when a marker was missed, limited to 128, with bounds BL = MP - INCREMENT and BR = MP + INCREMENT.]
[Sheet 5 of 7: FIG. 6a, first portion of the Hough transform flow chart: digitization, contrast expansion, edge detection, noise cleaning and boundary tracing, obstacle check and warning, line Hough transform, and Hough space search area definition.]
[Sheet 6 of 7: FIG. 6b, remainder of the Hough transform flow chart: deletion of pixels below 20% of maximum intensity, non-maximum suppression, deletion of vertical and horizontal lines and of pixels closer than a set amount, selection of the pixels with the most contributing edge elements, parameter validation, lane change detection, look-ahead path estimation, inverse Hough transform, and boundary overlay.]
[Sheet 7 of 7: FIGS. 7a and 7b, a straight line in the image domain and the corresponding representation in the Hough domain.]

`VISION METHOD OF DETECTING LANE
`BOUNDARIES AND OBSTACLES
`
`FIELD OF THE INVENTION
`
`This invention relates to a vision method of detecting
`lane boundaries and obstacles close to a vehicle within
`the lane boundaries, and particularly to such a method
`employing image processing techniques and which is
`operative for moderately marked roads.
BACKGROUND OF THE INVENTION

The use of an on-board video camera and image processing of the roadway scenes allows useful information to be gathered for vehicle control. Detecting lane boundaries is a core capability for advanced automotive functions such as collision warning, collision avoidance and automatic vehicle guidance. If the lane boundaries, and thus the road path, can be detected, several other functions can be implemented. Lane control uses the boundary information and vehicle dynamics knowledge to derive steering and braking commands for keeping the vehicle in the lane. Headway control uses a laser or radar system to track the vehicle ahead and keeps a safe driving distance. The lane boundary information can be used to prevent the detection of a vehicle in an adjacent lane on a curved road; the sensor beam can then be directed to points within the lane. To monitor driving performance, the behavior of the driver is tracked and evaluated using the estimated position of the vehicle with respect to the lane boundaries.

Lane boundary detection for guiding vehicles along roadways has been reported in the paper by Dickmanns, E. D. and Zapp, A., "A Curvature-Based Scheme for Improving Road Vehicle Guidance by Computer Vision," Proc. SPIE on Mobile Robots, Vol. 727, October 1986, which is incorporated herein by reference. Contour correlation and high order world models are the basic elements of that method, realized on a special multi-processor computer system. Perspective projection and dynamical models (Kalman filter) are used in an integrated approach for the design of the visual feedback control system. That system requires good lane markings and thus is limited to only those roads having good lane markings. It is of course desirable to extend the benefits of the computer vision system to roads with poorer markings and to incorporate other features such as obstacle detection.
SUMMARY OF THE INVENTION

It is therefore an object of the invention to provide a computer vision method of lane boundary detection operable on roads having moderate markings and even with some markers missing. It is a further object to provide such a method with obstacle detection capability.

The invention is carried out in an automotive vehicle having a computer vision system and an associated camera for viewing the scene ahead of the vehicle, by a method of detecting lane markers on a roadway comprising the steps of: obtaining an image of the scene and digitizing the image, normalizing the image, defining a search area in the image, searching for lane markers in the search area of the image, estimating the position of any missing lane marker, and defining lane boundaries from the position of the lane markers.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other advantages of the invention will become more apparent from the following description taken in conjunction with the accompanying drawings wherein like references refer to like parts and wherein:

FIG. 1 is a block diagram of a computer vision system for carrying out the method of the invention,

FIG. 2 is an illustration of a roadway scene in a camera image plane,

FIG. 3 is an illustration of a roadway scene in a camera image plane, the scene having an obstacle in the roadway,

FIGS. 4a and 4b together comprise a flow chart of an embodiment of the method of the invention employing template matching,

FIG. 5 is a flow chart of the dynamic search area definition method used in the FIG. 4 method,

FIGS. 6a and 6b together comprise a flow chart of another embodiment of the method of the invention employing a Hough transform, and

FIGS. 7a and 7b are an image plane view of a line and a Hough space representation of the same line, respectively.

DESCRIPTION OF THE PREFERRED EMBODIMENT
As shown in FIG. 1, the hardware used to implement the method of the invention comprises a black and white CCD video camera 10 mounted in a vehicle, say at the upper center of the windshield to capture the driver's view of the road ahead, an analog-to-digital converter 12 for coupling the camera output to a computer 14, and output devices driven by the computer including a display 16, an obstacle warning alarm 18 and a utilization circuit 20 which may be any device using the lane boundary information for vehicle guidance, performance monitoring or headway control, for example.

The computer is programmed with algorithms for processing the images sensed by the camera. Two main algorithms for processing the image are disclosed herein: one uses a Hough transform and the other uses template matching. Both algorithms, however, dynamically define the search area for lane markers based on the lane boundaries of the previous frame, and provide estimates of the position of missing markers on the basis of current frame and previous frame information. Also, in both cases preprocessing procedures detect obstacles in the lane within about 50 feet of the vehicle and give a warning via the alarm 18.

FIG. 2 is an example of a typical image as projected on the camera image plane and includes the vehicle hood 22 and broken or solid stripes or lane markers 24 painted on the road and terminating at the horizon 26. Since the image spaces above the horizon and below the hood line do not contain useful lane boundary information, those regions are ignored by the image processing algorithms. A special calibration procedure enables range estimation, based on known characteristics of the road and the camera. It assumes flat roads with parallel markers at known look-ahead distances. The calibration determines the relationships between the positions of the actual markers and the images of these markers. This process defines a set of "reference markers" which are used as an initial set of lane boundaries seen by the vehicle when it is in the middle of the lane. The reference markers are also used when consistency of width is violated or when the two estimated lane boundaries cross each other.
The broken line boxes 28 around the markers 24 define the area to be searched for detecting markers; this area is determined by the position of the lane boundaries in the previous image frame. In reality many other objects are in the image, such as trees, other vehicles, overpasses and signs. The defined search area helps eliminate many of them from the processed image as well as minimizing the processing time. FIG. 3 illustrates the presence of a vehicle 30 or other obstacle in the roadway of FIG. 2. If the other vehicle is close, say within 50 feet of the trailing vehicle, it tends to obscure the lane markers 24 to such an extent that there is insufficient information to determine the boundaries. In that case an obstacle warning is given and no further image processing is done on that frame. When the obstacle 30 is more than 50 feet away, the image is processed but the obstacle is effectively erased from the image by removing horizontal and vertical lines, thereby making the subsequent processing steps simpler.
The template matching algorithm is widely used in image recognition and computer vision applications. In this algorithm, a template or window of desired intensity and shape is correlated with the image to create a correlation matrix. The elements of this correlation matrix indicate the quality of the match between the template and the image at all locations, according to some metric. Here, the absolute difference metric is used as the correlation measure. Other types of measures such as normalized correlation can also be used. The matching can also be done in the image-edge domain obtained by filtering the raw image with a Sobel filter. The advantage of edge matching is its insensitivity to light variations; however, with this method, potholes, streaks, road texture and shadows may generate false objects. The template matching algorithm described below uses both image data and edge data.
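For illustration only, the following Python sketch builds a correlation matrix under the absolute difference metric described above. The function name and array conventions are assumptions of this sketch, not the patent's; the loops are written for clarity rather than speed.

```python
import numpy as np

def abs_diff_correlation(image, template):
    """Correlation matrix under the absolute difference metric:
    each element is the sum of |image patch - template| with the
    template placed at that location, so 0 means a perfect match."""
    ih, iw = image.shape
    th, tw = template.shape
    scores = np.empty((ih - th + 1, iw - tw + 1), dtype=np.int64)
    t = template.astype(np.int64)
    for r in range(scores.shape[0]):
        for c in range(scores.shape[1]):
            patch = image[r:r + th, c:c + tw].astype(np.int64)
            scores[r, c] = np.abs(patch - t).sum()
    return scores
```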
FIGS. 4a and 4b comprise a flow chart of an image processing method which uses a template matching algorithm. The figures are joined at nodes A and B. Numerals in the text marked by angle brackets <> refer to reference numerals on the flow chart boxes. The raw image is digitized <40> into a 512×512×8 image. The processing area is reduced in the vertical and horizontal directions. In the vertical direction, it is limited by the horizon line 26 and by the hood 22. These limits are obtained from the camera geometry and the calibration image. Then the gray level distribution of the processing area is contrast expanded or normalized <42> to have standard statistics (e.g., a gray level distribution with mean=100 and standard deviation=20; the gray level range is 0 to 255 with 0 being full black and 255 being full white). Next, a 3×3 Sobel filter is used <44> to generate edge data 124×512 pixels in size.
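A minimal sketch of this front end (contrast expansion to mean 100 and standard deviation 20, then a 3×3 Sobel filter) might look as follows; the helper names are assumptions of this sketch, and the nested-loop Sobel is for clarity, not speed.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
SOBEL_Y = SOBEL_X.T

def contrast_expand(gray, target_mean=100.0, target_std=20.0):
    """Normalize the processing area to the standard statistics
    quoted in the text (mean 100, std 20 on a 0..255 scale)."""
    g = gray.astype(np.float64)
    g = (g - g.mean()) / (g.std() + 1e-9) * target_std + target_mean
    return np.clip(g, 0.0, 255.0)

def sobel_edges(gray):
    """3x3 Sobel gradient magnitude over the interior pixels."""
    h, w = gray.shape
    out = np.zeros((h, w))
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            win = gray[r - 1:r + 2, c - 1:c + 2]
            out[r, c] = np.hypot((win * SOBEL_X).sum(),
                                 (win * SOBEL_Y).sum())
    return out
```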
An obstacle in the lane ahead will give rise to strong edges in addition to those presented by the roadway. The presence of obstacles is detected by counting the number of strong edge points within the area defined by the lane boundaries of the previous frame <46> and comparing to a threshold <48> for a given look-ahead distance. For example, the number of pixels within 50 feet of the vehicle having a gray level greater than 150 is counted and compared to a threshold. If an obstacle is detected within 50 feet, a warning to the driver is given <50> and the control returns back to the first step for a new frame. The location of lane boundaries is not updated since the presence of the obstacle obscures them. The obstacle distance is determined by the ground level obstacle image since the image plane calibration does not take into account the vertical height of an object. As seen in FIG. 3, the top of the vehicle 30 appears to be beyond the horizon as seen in the image plane although the vehicle 30 is actually close, as is realistically shown near ground level. Thus for obstacle detection, only the bottom of the vehicle 30 image is scanned since it is in the near portion of the image plane.
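The obstacle check <46>-<48> can be sketched as below. The lane mask, the near-field row range, and the count limit are placeholders, since the patent gives only the gray level (150) and the roughly 50 foot look-ahead.

```python
def obstacle_in_lane(edge_img, lane_mask, near_field_rows, count_limit,
                     strong_edge=150):
    """Count strong edge pixels inside the previous frame's lane
    boundaries over the near-field rows (roughly the bottom of the
    image, i.e. within about 50 feet) and compare to a threshold.
    Only the ground-level rows are scanned, per FIG. 3."""
    strong = edge_img[near_field_rows] > strong_edge
    return (strong & lane_mask[near_field_rows]).sum() > count_limit
```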
Obstacles detected at distances greater than 50 feet (in the far field portion of the image) are effectively deleted from both the image and the edge data <52> to facilitate the execution of the following steps. For this purpose it is assumed that the obstacle has a high content of vertical and horizontal lines whereas the lane markers to be detected comprise diagonal lines. The obstacles are deleted from the image using the following criteria: lines to be deleted must be either vertical or horizontal, their length must exceed 40 pixels, and their intensity must exceed 150.
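One way to realize these deletion criteria is a run-length scan along each row and each column, erasing bright runs longer than 40 pixels. The scanning scheme below is a sketch under that assumption, not the patent's stated implementation.

```python
def erase_axis_aligned_lines(img, min_len=41, min_intensity=150):
    """Zero out horizontal and vertical runs of bright pixels whose
    length is at least `min_len` (i.e. exceeds 40) and whose
    intensity exceeds `min_intensity`."""
    bright = img > min_intensity            # snapshot before erasing
    for axis in (0, 1):                     # 0: vertical, 1: horizontal
        b = bright if axis == 1 else bright.T
        view = img if axis == 1 else img.T  # transposed view shares data
        for r in range(b.shape[0]):
            run = 0
            for c in range(b.shape[1]):
                run = run + 1 if b[r, c] else 0
                if run == min_len:          # long run found: erase it all
                    view[r, c - min_len + 1:c + 1] = 0
                elif run > min_len:         # keep erasing as it extends
                    view[r, c] = 0
    return img
```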
After the obstacle deletion step the search area is defined dynamically <54>. Given the previous lane boundaries, a truncated triangular search region is defined for each marker location such that the search regions for the two sides of the lane do not overlap. Moreover, the search area changes with marker position MP, and increases in size if the marker was not found in the search of the previous frame. This limits the search space in which the markers can move from one frame to another and shortens the processing time. The dynamic search method is shown separately in FIG. 5. The search is conducted one row at a time and the search field is selected for each row. It begins by defining an increment for a given row as one fifth of the distance between the right and left lane coordinates for that row <54a>. It is then decided <54b> whether either a lane marker was not found or an old marker was used in the previous image. If so, the increment is increased by 20 pixels to enlarge the search area <54c>. Then the right bound BR and the left bound BL (for each search area) are found <54d> by adding and subtracting the increment from the marker position MP for that line. Next a correlation procedure <56> is followed for the defined search field in the current line, and if all the rows have not been processed <57> the search continues in another row <58> and the search field is redefined for that row.
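The per-row bound computation of FIG. 5 reduces to a few lines; this sketch follows steps <54a>-<54d>, with the cap of 128 taken from the flow chart of Sheet 4.

```python
def search_bounds(mp, left_x, right_x, marker_missed, max_inc=128):
    """Per-row search bounds around marker position `mp`:
    increment = 1/5 of the lane width on this row <54a>, widened by
    20 pixels when the marker was missed or an old marker was reused
    in the previous frame <54b>-<54c>, and limited to 128."""
    inc = (right_x - left_x) / 5.0
    if marker_missed:
        inc += 20
    inc = min(inc, max_inc)
    return mp - inc, mp + inc               # (BL, BR) per <54d>
```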
When the search area is defined, a template matching operation begins <56>. For the defined search area, a window of size 5×5 pixels with a constant gray level of 240 is correlated with the image. The correlation measure is based on the absolute difference of the window's gray level and the image's gray level. The window is moved horizontally across the image pixel by pixel to obtain a correlation measure at every point in the traverse and is then indexed vertically to make another horizontal traverse. To reduce computations, the window may be indexed vertically by more than one line so that strips of the search area are sampled rather than the complete search area. Using the absolute difference metric, a perfect match will yield a zero element in the correlation matrix. The correlation matrix is therefore searched for its minimum values. Elements with values under some threshold are selected to represent the marker positions.
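Reusing the abs_diff_correlation sketch from above, the 5×5 constant-intensity window search could look like this. The stand-in search area, the vertical stride of 4, and the cutoff value are illustrative, not from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)
search_area = rng.integers(0, 256, size=(60, 200))  # stand-in for the region from <54>

window = np.full((5, 5), 240)                # 5x5 window, constant gray 240
scores = abs_diff_correlation(search_area, window)

# Index vertically several rows at a time so only strips of the
# search area are sampled; low scores (near-zero) mark markers.
MATCH_THRESHOLD = 1000                       # illustrative cutoff
matches = np.argwhere(scores[::4, :] < MATCH_THRESHOLD)
```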
This correlation procedure will not yield any match for yellow or faded white lane markers. If such markers are present or markers are missing altogether, there will be only a few match points. The match points are counted <59> and if there are too few, say 6 points out of a possible 100, the correlation is repeated with the edge data <60>. Then if the marker is found <62> or there were sufficient match points <59>, the centroid of each marker is calculated <64>. If the marker is not found <62> and the marker position on the opposite side of the lane is known <66>, the missing marker position is calculated based on the opposite marker position and the lane width <68> and its centroid is calculated <64>. When the marker position on the opposite side of the lane is not known <66>, the lane boundary from the previous frame is used <70>. After the centroid of each marker is calculated <64>, if the markers are not found where expected <72> based on the previously detected lane geometry, the marker locations from the previous frame are used <74>.
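The fallback of steps <66>-<68> is simple arithmetic; a sketch (names are this sketch's, not the patent's):

```python
def marker_from_opposite(opposite_x, lane_width_px, missing_side):
    """Place a missing marker one lane width from the marker found
    on the opposite side of the lane (steps <66>-<68>)."""
    offset = lane_width_px if missing_side == "right" else -lane_width_px
    return opposite_x + offset
```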
The determination of expected marker position <72> involves comparing the position of each marker centroid with that of the previous frame. If the change is more than nine pixels, a flag is set. Then a check is made for the presence of a second marker in each search area, as may occur, for example, when an exit lane is sensed. The distance between two such markers is computed and if it is above a threshold for some distance (15 rows of the image) the outside marker (i.e., the rightmost marker in the right search area) is rejected and a flag is set. Then for each row, if a marker is not found the previous marker is used, provided no flag has been set. If a flag is set, only new marker locations are used. The result of this procedure is a set of x,y coordinates specifying lane markers. Then a least-squares line fit is performed to generate the lane boundary <76>.
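The fit of step <76> can be sketched with numpy. Fitting x as a function of image row y is a choice made here to keep near-vertical boundaries well conditioned; the patent does not specify the parameterization.

```python
import numpy as np

def fit_lane_boundary(marker_xy):
    """Least-squares line through (x, y) marker coordinates <76>.
    Returns (slope, intercept) of x = slope * y + intercept."""
    xs, ys = marker_xy[:, 0], marker_xy[:, 1]
    slope, intercept = np.polyfit(ys, xs, 1)
    return slope, intercept
```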
To validate the estimated lane boundaries, they are checked for consistency of angles and width <78>. If the angle between the present lane boundary and the previous lane boundary is greater than 35°, the present lane boundary is rejected. It is then estimated from the other lane boundary based on lane width. Similarly, the width of the lane is checked at two different look-ahead distances for upper and lower bounds. The algorithm also checks for lane changes by determining if the lower end of a lane boundary is near the middle of the image <80>. If it is a lane change, a warning is given <82> and, depending on the lane change direction (left or right), the markers are exchanged, i.e., for a left lane change, the left boundary is redefined as the right boundary and a new left boundary is predicted based on the road/camera geometry.
The estimated lane boundaries are used to compute the vanishing point (where the left and right boundaries intersect) and the intersection of the right marker at a fixed distance. These points and the vehicle dynamics information are then used by a Kalman filter for estimating the road curvature, the lateral shift from the center of the lane, and the heading error as disclosed in the Dickmanns et al. paper, supra. These estimates determine a coordinate transformation which determines the look-ahead path required for autonomous vehicle control <86>. For display purposes the lane boundaries are overlaid on the image <88>.
FIGS. 6a and 6b comprise a flow chart showing an image processing algorithm using a Hough transform. The two figures are joined at node A. The process starts with digitization of the camera signals <100>, contrast expansion <102> and edge detection by a Sobel filter <104> as described for the previous method. The image area between the vehicle hood and the horizon is the only area operated upon. A thresholding operation <106> generates a binary image by setting all gray levels above 150 to one and the rest to zero. Noise cleaning and boundary tracing <108> removes isolated, single pixels and traces the boundaries of the lane and road markers. The boundary tracing algorithm saves connected or adjacent pixels and follows the contour of a lane marker or a road edge in all directions. This is done by dividing the image into three equal segments representing the left, center and right portions of the image. For each segment, the left and right edges of the lane or road markers are searched for at least two adjacent edge pixels. After marking the edge pixels, the boundaries of the lane markers are traced until either the top of the image is reached or until no edge pixels are found. The resulting boundary traced image has clean lane markers and road edges. Then obstacle detection is performed by counting edge pixels within the lane boundaries and within 50 feet of the vehicle <110> and comparing the count to a threshold <112>. If the count is greater than the threshold, an obstacle is detected, an obstacle warning is given to the driver <114> and the program returns to START. If no obstacle is detected, a line Hough transform is performed <116>.
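The thresholding <106> and the isolated-pixel part of the noise cleaning <108> are straightforward; boundary tracing proper is omitted from this sketch, and the helper names are assumptions.

```python
import numpy as np

def binarize(gray, level=150):
    """Thresholding step <106>: gray levels above 150 map to one."""
    return (gray > level).astype(np.uint8)

def drop_isolated_pixels(binary):
    """Noise-cleaning part of <108>: clear set pixels that have no
    set neighbour in their 8-neighbourhood."""
    out = binary.copy()
    h, w = binary.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            if binary[r, c] and binary[r - 1:r + 2, c - 1:c + 2].sum() == 1:
                out[r, c] = 0
    return out
```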
The Hough algorithm uses the boundary traced lane markers to estimate the location of the lane boundaries. The Hough transform has been used extensively for image processing applications. By knowing the desired shape a priori, the transform maps complex shapes, whether lines or curves, into simple ones in the Hough domain. The simplest transform in terms of computational burden is a line transform. Accordingly, the lane boundaries are approximated by several straight lines. FIGS. 7a and 7b show a line in the image plane and the Hough transform of the same line. The normal parameterization of a straight line is given by R = x cos A + y sin A, where R is the shortest distance from the origin to the line, and A is the angle between the normal to the line and the x axis. Each pixel in the Hough domain represents a potential line in the image space. After a line Hough transform is performed on the boundary traced image <116>, a search area is defined around the previous boundary position <118> in Hough space, as indicated by the box 32 in FIG. 7b.
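A plain accumulator version of the line transform <116> under R = x cos A + y sin A might look as follows. The 256×256 quantization follows the text below; the axis scaling and the angle range are assumptions of this sketch.

```python
import numpy as np

def line_hough(binary, n_r=256, n_a=256):
    """Accumulate votes for R = x*cos(A) + y*sin(A) over all set
    pixels; each accumulator cell counts the edge elements that
    contribute to the corresponding line."""
    h, w = binary.shape
    r_max = np.hypot(h, w)
    acc = np.zeros((n_r, n_a), dtype=np.int32)
    angles = np.linspace(0.0, np.pi, n_a, endpoint=False)
    cos_a, sin_a = np.cos(angles), np.sin(angles)
    for y, x in np.argwhere(binary):
        r = x * cos_a + y * sin_a                 # signed distances
        r_idx = np.round((r + r_max) * (n_r - 1) / (2 * r_max)).astype(int)
        acc[r_idx, np.arange(n_a)] += 1
    return acc, angles
```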
The intensity of each pixel in the Hough domain corresponds to the strength of a line in the image space, or the number of edge elements contributing to that line. To select the n strongest lines in the image, one needs to identify the n pixels with the highest intensity in the Hough domain. The following criteria are used to select the correct lines: (a) the intensity of the selected pixels should have local maxima, (b) the separation of the selected pixels should exceed a certain threshold, and (c) the intensity of the selected pixels should exceed 20% of the maximum intensity. The selection algorithm is implemented by identifying the maximum pixel intensity and deleting those that are less than 20% of that maximum intensity <119>. Then the Hough domain is filtered with an 11×11 non-maximum suppression filter <120>, i.e., for a given pixel, if any of the other 120 pixels around it is higher in intensity, the intensity of the subject pixel is set to zero. After filtering, only one local maximum pixel for each 11×11 neighborhood can survive in the Hough domain, but some points can still be close together. The number of Hough pixels is further reduced by deleting pixels which would generate near horizontal or vertical lines (A=0°, A=90°, A=180°) <122>. Additional pixels are removed if the distance between two pixels is less than 34 <124>. Finally the pixels are sorted based on intensity, or the number of contributing edge elements, and the top four pixels are chosen <126>.
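Steps <119>-<126> might be combined as below. The 10° band used to decide that a line is "near horizontal or vertical" is an assumed tolerance, since the patent names only the angles themselves.

```python
import numpy as np

def select_lane_pixels(acc, angles, keep=4, min_sep=34.0):
    """Peak selection in the Hough accumulator per <119>-<126>."""
    a = acc.astype(float).copy()
    a[a < 0.2 * a.max()] = 0                      # <119> 20% threshold
    for r, c in np.argwhere(a > 0):               # <120> 11x11 NMS
        win = a[max(r - 5, 0):r + 6, max(c - 5, 0):c + 6]
        if a[r, c] < win.max():
            a[r, c] = 0
    deg = np.degrees(angles)                      # <122> drop near 0/90/180
    a[:, (deg < 10) | (np.abs(deg - 90) < 10) | (deg > 170)] = 0
    picked = []
    for rc in sorted(map(tuple, np.argwhere(a > 0)),
                     key=lambda rc: -a[rc]):      # strongest first <126>
        if all(np.hypot(rc[0] - pr, rc[1] - pc) >= min_sep
               for pr, pc in picked):             # <124> minimum spacing
            picked.append(rc)
        if len(picked) == keep:
            break
    return picked
```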
In a typical application, the roadway may include discernible lane markers defining two lanes, and road edges may also be visible even if not painted. In this case four edges will be searched for to define the road and the vehicle relationship to the road. When the Hough transform is performed, about 300 points may be present in Hough space, representing minor edges, major edges including the ones of interest, and noise. Each operation reduces the number of remaining edges. For example, the deletion of those less than 20% of maximum intensity may eliminate all but 50 points. The non-maximum filter may leave 10 points, and the deletion of pairs that are close together and of those representing horizontal and vertical lines may remove a few more points. Finally the strongest four pixels of the remainder are selected as the lane markers and road edges.
These pixels represent the lane boundaries. The pixels are checked to determine if any lane boundary parameters are missing <128>. If they are, the average of the last two frames' parameters is used <130>. If the parameters are not missing, they are validated by comparison with the parameters of previous frames <132>, and if they are within 10% they are considered to be valid. If not, the average parameters of the last two frames are used <130>. Then lane change is detected <134> and the look-ahead path is estimated <136> using the Kalman filter as described for the previous method, and an inverse Hough transform generates lane boundaries <138> in the image space, which are overlaid on the image for display purposes <140>.
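A sketch of the validation logic <128>-<132>; the exact form of the 10% comparison is one reading of the text, not a quoted rule.

```python
def validate_boundary(current, previous, before_previous):
    """Accept new (R, A) parameters only if each is within 10% of
    the previous frame's value <132>; otherwise use the average of
    the last two frames' parameters <130>.  `current` is None when
    a boundary parameter is missing <128>."""
    if current is not None and all(
            abs(c - p) <= 0.10 * abs(p)
            for c, p in zip(current, previous)):
        return current
    return tuple((p + q) / 2 for p, q in zip(previous, before_previous))
```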
The Hough domain is quantized to a 256×256 space for this implementation. Since pixels are selected based on a distance of 34, it is evident that the quantized space can easily be reduced to 128×256 to decrease the processing time. For real time applications, systolic array chips could be used for fast calculation of the Hough transform. The image could be divided into several sub-images with several systolic chips working in parallel, mapping the results into several Hough domains. Since the Hough transform operation is linear for the continuous case, the final transform can be obtained by summing the partial Hough results. Another advantage of the transform is that a priori knowledge about lane and road boundaries can easily be embedded in the algorithm.
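The linearity relied on here is easy to check with the line_hough sketch from above: splitting the set pixels between two masks of the same size, so coordinates are preserved, and summing the partial accumulators reproduces the full transform. The split below is illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
img = rng.random((128, 128)) > 0.99          # sparse stand-in edge image

top = img.copy(); top[64:] = False           # pixels kept at their
bot = img.copy(); bot[:64] = False           # original coordinates

full, _ = line_hough(img)
partial = line_hough(top)[0] + line_hough(bot)[0]
assert np.array_equal(full, partial)         # partial sums == full transform
```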
The method of the invention performed by either algorithm, template matching or Hough transform, has several advantages:

(a) The method works well with missing and discontinuous markers, both yellow and white.

(b) The search area in both algorithms is defined dynamically and adapts to the location of the lane markers, which saves processing time.

(c) The method estimates road boundaries with a low radius of curvature (e.g., entrance and exit highway ramps).

(d) The method detects lane changes and warns the driver when lanes are crossed.

(e) The method can detect certain obstacles in the lane.
The description of a preferred embodiment of the invention for the purpose of illustrating the invention is not to be considered as limiting the invention, since many modifications may be made by the exercise of skill in the art without departing from the scope of the invention.

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. In an automotive vehicle having a computer vision system and an associated camera for viewing a scene ahead of the vehicle, a method of detecting lane markers on a roadway comprising the steps of:
obtaining an image of the scene and digitizing the image,
normalizing the image,
defining a search area in the image,
searching for lane markers in the search area of the image to locate lane marker positions,
estimating the position of any missing lane marker from said located lane marker positions, and
defining lane boundaries from the said located lane marker positions and the estimated position of the missing lane marker.

2. The invention as defined in claim 1 wherein the step of estimating the position of missing lane markers comprises averaging the corresponding marker positions from previous images.

3. In an automotive vehicle having a computer vision system and an associated camera for viewing a scene ahead of the vehicle and obtaining a series of images of the scene, a method of locating lane boundaries on a roadway comprising for each image the steps of:
digitizing the image and normalizing the image,
dynamically defining a search area in the image based on the lane boundaries located in a previous image,
searching for lane markers in the search area of the image to locate lane marker positions,
estimating the position of any missing lane marker based on information selected from the current image and previous images, and
defining lane boundaries from the said located lane marker positions and the estimated position of the missing lane marker.

4. The invention as defined in claim 3 wherein the step of estimating the position of missing lane markers comprises the steps of:
determining the lane width from said previous image, and
calculating the estimated position of a missing lane marker from the position of a marker on the opposite side of the lane and the lane width.

5. The invention as defined in claim 2 including the s