EXHIBIT 1205

U.S. PATENT NO. 4,970,653, ISSUED TO KENUE ON NOV. 13, 1990 ("KENUE")

TRW Automotive U.S. LLC: EXHIBIT 1205
PETITION FOR INTER PARTES REVIEW OF U.S. PATENT NUMBER 8,599,001

United States Patent [19]
Kenue

[11] Patent Number: 4,970,653
[45] Date of Patent: Nov. 13, 1990

[54] VISION METHOD OF DETECTING LANE BOUNDARIES AND OBSTACLES
[75] Inventor: Surender K. Kenue, Southfield, Mich.
[73] Assignee: General Motors Corporation, Detroit, Mich.
[22] Filed: Apr. 6, 1989
[51] Int. Cl.5 .................... G06F 15/50
[52] U.S. Cl. .................... 364/461; 364/424.02; 358/103
[58] Field of Search .................... 364/424.02, 461; 358/103; 180/167, 168, 169

[56] References Cited

U.S. PATENT DOCUMENTS
4,819,169  4/1989  Saitoh et al. .................... 364/424.02
4,847,774  7/1989  Tomikawa et al. .................... 364/424.02
4,862,047  8/1989  Suzuki et al. .................... 318/587
4,868,752  9/1989  Fujii et al. .................... 364/424.02

OTHER PUBLICATIONS
Dickmanns, E. D. and Zapp, A., "A Curvature-Based Scheme for Improving Road Vehicle Guidance by Computer Vision," Proc. SPIE on Mobile Robots, Vol. 727, October 1986.

Primary Examiner: Gary Chin
Attorney, Agent, or Firm: Howard N. Conkey
[57] ABSTRACT

An image processing method operates on an image from a CCD camera viewing a roadway scene in front of a vehicle to detect lane markers and determine the relationship of the vehicle to the lane. Obstacles in the lane near the vehicle are detected and a warning is given to the driver. The method uses template matching techniques or a Hough algorithm to detect the lane markers or road edges.

10 Claims, 7 Drawing Sheets
[Front-page figure: the template matching flow chart reproduced in FIGS. 4a and 4b, from edge-pixel counting and obstacle warning through marker correlation, centroid calculation, least-squares boundary fitting, consistency checks, lane-change detection, and overlay of the lane boundaries on the image.]
U.S. Patent    Nov. 13, 1990    Sheet 1 of 7    4,970,653

[FIG. 1: block diagram of the computer vision system: camera 10, A/D converter 12, computer 14, display 16, alarm 18, and utilization circuit 20.]
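Read as a dataflow, the FIG. 1 hardware amounts to a capture-process-output loop in which an obstacle warning preempts further processing of a frame, as the specification later explains. The sketch below is a hypothetical rendering of that control flow; all names are illustrative assumptions, not from the patent:

```python
# Hypothetical sketch of the FIG. 1 dataflow: camera 10 -> A/D converter 12
# -> computer 14 -> display 16 / alarm 18 / utilization circuit 20. When an
# obstacle is within about 50 ft, a warning is issued and no further
# processing is done on that frame.

def run_system(frames, display, alarm, utilization):
    for frame in frames:                       # each digitized image frame
        if frame.get("obstacle_near"):         # obstacle within ~50 ft
            alarm.append("obstacle warning")   # alarm 18
            continue                           # skip the rest of this frame
        boundaries = frame.get("boundaries", [])   # lane detection result
        display.append(boundaries)             # overlay on display 16
        utilization.append(boundaries)         # guidance/monitoring circuit 20

display, alarm, utilization = [], [], []
run_system([{"boundaries": ["L", "R"]}, {"obstacle_near": True}],
           display, alarm, utilization)
```

The point of the sketch is only the gating: an obstacle frame reaches the alarm but never the display or utilization outputs.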
[Sheet 2 of 7, FIG. 4a: first half of the template matching flow chart: digitization, contrast expansion, Sobel edge filtering, edge-pixel counting within the lane boundaries for an obstacle warning, deletion of vertical and horizontal edges at distances over 50 ft, search-area definition from the previous frame's marker positions MP, and correlation of a 5×5 window with the image to identify markers.]
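The preprocessing steps named in FIG. 4a, contrast expansion to fixed statistics followed by a 3×3 Sobel filter, can be sketched as below. This is a minimal pure-Python illustration assuming list-of-lists gray images; the function names are not from the patent:

```python
# Sketch of the FIG. 4a preprocessing: contrast expansion to fixed statistics
# (the patent uses mean 100, standard deviation 20) and a 3x3 Sobel gradient.
import math

def normalize(image, target_mean=100.0, target_std=20.0):
    """Linearly rescale gray levels to the target mean/std, clipped to 0..255."""
    pixels = [p for row in image for p in row]
    mean = sum(pixels) / len(pixels)
    var = sum((p - mean) ** 2 for p in pixels) / len(pixels)
    std = math.sqrt(var) or 1.0          # avoid division by zero on flat images
    scale = target_std / std
    return [[min(255, max(0, target_mean + (p - mean) * scale)) for p in row]
            for row in image]

def sobel_magnitude(image):
    """3x3 Sobel gradient magnitude; border pixels are left at zero."""
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = math.hypot(gx, gy)
    return out
```

A light-to-dark transition produces a strong Sobel response along it, which is the kind of edge data the obstacle-detection and marker-correlation steps consume.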
[Sheet 3 of 7, FIG. 4b: second half of the template matching flow chart: centroid calculation for each marker, calculation of a missing marker position from the opposite marker and the lane width, fallback to markers or lane boundaries from the previous frame, least-squares fit to generate a lane boundary, consistency checks of angles and width, lane-change detection when the lower end of a lane boundary is near the middle of the image, look-ahead path estimation, prediction of new boundaries, and overlay of the lane boundaries on the image.]
[Sheet 4 of 7, FIG. 5: flow chart of the dynamic search-area definition: increment = (right lane coordinate - left lane coordinate)/5; if either lane marker was not found or an old marker was used in the previous image, the increment is increased by 20 (limited to 128); bounds BL = MP - increment and BR = MP + increment for each search area; the image is correlated between BL and BR, then the search indexes to another row until all rows are processed.]
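The FIG. 5 arithmetic condenses to a few lines. A sketch, assuming per-row pixel coordinates; the one-fifth factor, the 20-pixel enlargement, and the 128 cap are taken from the chart:

```python
# Sketch of the FIG. 5 dynamic search-area computation for one row.

def search_bounds(left_x, right_x, marker_pos, marker_missed_last_frame):
    """Return (BL, BR) for one row: BL = MP - increment, BR = MP + increment."""
    increment = (right_x - left_x) / 5.0       # one fifth of the lane width
    if marker_missed_last_frame:               # marker missing or stale marker reused
        increment = min(increment + 20, 128)   # enlarge, but cap at 128
    return marker_pos - increment, marker_pos + increment
```

The enlargement lets a marker that drifted (or was occluded) in the previous frame be reacquired, while the cap keeps the per-row correlation cost bounded.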
[Sheet 5 of 7, FIG. 6a: first half of the Hough transform flow chart: digitization, contrast expansion, edge detection, thresholding, noise cleaning and boundary tracing, an obstacle warning when the count of edge pixels within the lane boundaries and under 50 ft exceeds a threshold, a line Hough transform, and definition of a search area in Hough space based on the previous marker position.]
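The thresholding and obstacle-count steps of FIG. 6a can be sketched as below. Representing the lane region as an x-interval per row is an illustrative assumption; the 150 gray-level threshold is the one the specification gives:

```python
# Sketch of the FIG. 6a thresholding and obstacle-detection steps: gray levels
# above 150 become 1, then edge pixels inside the previous lane boundaries in
# the near field (the lower image rows, < 50 ft) are counted against a threshold.

def threshold(image, level=150):
    """Binary image: 1 where the gray level exceeds `level`, else 0."""
    return [[1 if p > level else 0 for p in row] for row in image]

def obstacle_detected(binary, lane_interval_per_row, near_rows, count_threshold):
    """`lane_interval_per_row` maps a row index to its (x_left, x_right) lane span."""
    count = 0
    for y in near_rows:
        x_left, x_right = lane_interval_per_row(y)
        count += sum(binary[y][x_left:x_right + 1])
    return count > count_threshold
```

A bright blob inside the near-field lane region trips the count; an empty road does not.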
[Sheet 6 of 7, FIG. 6b: second half of the Hough transform flow chart: deletion of pixels below 20% of the maximum intensity, non-maximum suppression filtering, deletion of pixels representing vertical and horizontal lines, removal of pixels closer than a set amount, selection of the pixels with the most contributing edge elements, validation of lane boundary parameters against the previous frame (within 10%), use of the average of the last two frames' parameters when needed, lane-change detection, look-ahead path estimation, inverse Hough transform, and overlay of the lane boundaries on the image.]
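The Hough-domain pruning in FIG. 6b, dropping pixels under 20% of the maximum and then suppressing non-maxima over a neighborhood, might look like this sketch. The patent uses an 11×11 neighborhood, which corresponds to `nms_half=5` here:

```python
# Sketch of the FIG. 6b peak selection in the Hough domain: a 20%-of-maximum
# intensity cut followed by non-maximum suppression over a square neighborhood.

def select_peaks(hough, nms_half=5):
    """Keep only pixels that are >= 20% of the global maximum AND are the
    maximum of their (2*nms_half+1)-square neighborhood."""
    h, w = len(hough), len(hough[0])
    peak = max(p for row in hough for p in row)
    cut = 0.2 * peak
    kept = [[p if p >= cut else 0 for p in row] for row in hough]
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if kept[y][x] == 0:
                continue
            neighborhood = [kept[j][i]
                            for j in range(max(0, y - nms_half), min(h, y + nms_half + 1))
                            for i in range(max(0, x - nms_half), min(w, x + nms_half + 1))]
            if kept[y][x] == max(neighborhood):   # survives suppression
                out[y][x] = kept[y][x]
    return out
```

A secondary peak adjacent to a stronger one is zeroed, matching the patent's remark that at most one local maximum per 11×11 neighborhood survives.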
[Sheet 7 of 7, FIGS. 7a and 7b: a straight line in the image domain and the same line in the Hough domain, where it maps to a single point with parameters R and A; box 32 marks the search area in Hough space.]
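The point-line duality of FIGS. 7a and 7b follows from the normal parameterization R = x cosA + y sinA: each edge pixel votes for every (R, A) cell consistent with it, and collinear pixels agree on exactly one cell. A minimal sketch; the quantization choices are illustrative (the patent quantizes its Hough space to 256×256):

```python
# Sketch of the normal-form line Hough transform of FIGS. 7a/7b.
import math
from collections import Counter

def hough_lines(points, angle_steps=180, r_step=1.0):
    """Accumulate votes over quantized (R, A) cells; A is sampled in [0, pi)."""
    votes = Counter()
    for x, y in points:
        for k in range(angle_steps):
            a = math.pi * k / angle_steps
            r = x * math.cos(a) + y * math.sin(a)
            votes[(round(r / r_step), k)] += 1
    return votes

# Pixels on the horizontal line y = 5: each votes for the cell with
# R = 5 and A = 90 degrees (k = 90), so that cell collects all ten votes.
pts = [(x, 5) for x in range(10)]
votes = hough_lines(pts)
```

The strong cell is exactly the image-plane line of FIG. 7a seen as the Hough-domain point of FIG. 7b; the inverse transform just redraws the line from its (R, A) pair.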
VISION METHOD OF DETECTING LANE BOUNDARIES AND OBSTACLES

FIELD OF THE INVENTION

This invention relates to a vision method of detecting lane boundaries and obstacles close to a vehicle within the lane boundaries, and particularly to such a method employing image processing techniques and which is operative for moderately marked roads.

BACKGROUND OF THE INVENTION

The use of an on-board video camera and image processing of the roadway scenes allows useful information to be gathered for vehicle control. Detecting lane boundaries is a core capability for advanced automotive functions such as collision warning, collision avoidance and automatic vehicle guidance. If the lane boundaries and thus the road path can be detected, several other functions can be implemented. Lane control uses the boundary information and vehicle dynamics knowledge to derive steering and braking commands for keeping the vehicle in the lane. Headway control uses a laser or radar system to track the vehicle ahead and keeps a safe driving distance. The lane boundary information can be used to prevent the detection of a vehicle in an adjacent lane on a curved road. Then the sensor beam can be directed to points within the lane. To monitor driving performance, the behavior of the driver is tracked and evaluated using the estimated position of the vehicle with respect to the lane boundaries.

Lane boundary detection for guiding vehicles along roadways has been reported in the paper by Dickmanns, E. D. and Zapp, A., "A Curvature-Based Scheme for Improving Road Vehicle Guidance by Computer Vision," Proc. SPIE on Mobile Robots, Vol. 727, October 1986, which is incorporated herein by reference. Contour correlation and high order world models are the basic elements of that method, realized on a special multi-processor computer system. Perspective projection and dynamical models (Kalman filter) are used in an integrated approach for the design of the visual feedback control system. That system requires good lane markings and thus is limited to only those roads having good lane markings. It is of course desirable to extend the benefits of the computer vision system to roads with less good markings and to incorporate other features such as obstacle detection.

SUMMARY OF THE INVENTION

It is therefore an object of the invention to provide a computer vision method of lane boundary detection operable on roads having moderate markings and even with some markers missing. It is a further object to provide such a method with obstacle detection capability.

The invention is carried out in an automotive vehicle having a computer vision system and an associated camera for viewing the scene ahead of the vehicle, by a method of detecting lane markers on a roadway comprising the steps of: obtaining an image of the scene and digitizing the image, normalizing the image, defining a search area in the image, searching for lane markers in the search area of the image, estimating the position of any missing lane marker, and defining lane boundaries from the position of the lane markers.

BRIEF DESCRIPTION OF THE DRAWINGS

The above and other advantages of the invention will become more apparent from the following description taken in conjunction with the accompanying drawings wherein like references refer to like parts and wherein:

FIG. 1 is a block diagram of a computer vision system for carrying out the method of the invention,

FIG. 2 is an illustration of a roadway scene in a camera image plane,

FIG. 3 is an illustration of a roadway scene in a camera image plane, the scene having an obstacle in the roadway,

FIGS. 4a and 4b together comprise a flow chart of an embodiment of the method of the invention employing template matching,

FIG. 5 is a flow chart of the dynamic search area definition method used in the FIG. 4 method,

FIGS. 6a and 6b together comprise a flow chart of another embodiment of the method of the invention employing a Hough transform, and

FIGS. 7a and 7b are an image plane view of a line and a Hough space representation of the same line, respectively.

DESCRIPTION OF THE PREFERRED EMBODIMENT

As shown in FIG. 1, the hardware used to implement the method of the invention comprises a black and white CCD video camera 10 mounted in a vehicle, say at the upper center of the windshield to capture the driver's view of the road ahead, an analog-to-digital converter 12 for coupling the camera output to a computer 14, and output devices driven by the computer including a display 16, an obstacle warning alarm 18 and a utilization circuit 20 which may be any device using the lane boundary information for vehicle guidance, performance monitoring or headway control, for example.

The computer is programmed with algorithms for processing the images sensed by the camera. Two main algorithms for processing the image are disclosed herein. One uses a Hough transform and the other uses template matching. Both algorithms, however, dynamically define the search area for lane markers based on the lane boundaries of the previous frame, and provide estimates of the position of missing markers on the basis of current frame and previous frame information. Also in both cases preprocessing procedures detect obstacles in the lane within about 50 feet of the vehicle and give a warning via the alarm 18.

FIG. 2 is an example of a typical image as projected on the camera image plane and includes the vehicle hood 22 and broken or solid stripes or lane markers 24 painted on the road and terminating at the horizon 26. Since the image spaces above the horizon and below the hood line do not contain useful lane boundary information, those regions are ignored by the image processing algorithms. A special calibration procedure enables range estimation, based on known characteristics of the road and the camera. It assumes flat roads with parallel markers at known look-ahead distances. The calibration determines the relationships between the positions of the actual markers and the images of these markers. This process defines a set of "reference markers" which are used as an initial set of lane boundaries seen by the vehicle when it is in the middle of the lane. The reference markers are also used when consistency of width is
violated or when the two estimated lane boundaries cross each other.

The broken line boxes 28 around the markers 24 define the area to be searched for detecting markers and is determined by the position of the lane boundaries in the previous image frame. In reality many other objects are in the image such as trees, other vehicles, overpasses and signs. The defined search area helps eliminate much of them from the processed image as well as to minimize the processing time. FIG. 3 illustrates the presence of a vehicle 30 or other obstacle in the roadway of FIG. 2. If the other vehicle is close, say, within 50 feet of the trailing vehicle, it tends to obscure the lane markers 24 to such an extent that there is insufficient information to determine the boundaries. In that case an obstacle warning is given and no further image processing is done on that frame. When the obstacle 30 is more than 50 feet away the image is processed but the obstacle is effectively erased from the image by removing horizontal and vertical lines, thereby making the subsequent processing steps simpler.

The template matching algorithm is widely used in image recognition and computer vision applications. In this algorithm, a template or window of desired intensity and shape is correlated with the image to create a correlation matrix. The elements of this correlation matrix indicate the quality of the match between the template and the image at all locations, according to some metric. Here, the absolute difference metric is used as the correlation measure. Other types of measures such as normalized correlation can also be used. The matching can also be done in the image-edge domain obtained by filtering the raw image with a Sobel filter. The advantage of edge matching is its insensitivity to light variations; however, with this method, potholes, streaks, road texture and shadows may generate false objects. The template matching algorithm described below uses both image data and edge data.

FIGS. 4a and 4b comprise a flow chart of an image processing method which uses a template matching algorithm. The figures are joined at nodes A and B. Numerals in the text marked by angle brackets <> refer to reference numerals on the flow chart boxes. The raw image is digitized <40> into a 512×512×8 image. The processing area is reduced in the vertical and horizontal directions. In the vertical direction, it is limited by the horizon line 26 and by the hood 22. These limits are obtained from the camera geometry and the calibration image. Then the gray level distribution of the processing area is contrast expanded or normalized <42> to have standard statistics (e.g., a gray level distribution with mean = 100 and standard deviation = 20; the gray level range is 0 to 255 with 0 being full black and 255 being full white). Next, a 3×3 Sobel filter is used <44> to generate edge data 124×512 pixels in size.

An obstacle in the lane ahead will give rise to strong edges in addition to those presented by the roadway. The presence of obstacles is detected by counting the number of strong edge points within the area defined by the lane boundaries of the previous frame <46> and comparing to a threshold <48> for a given look-ahead distance. For example, the number of pixels within 50 feet of the vehicle having a gray level greater than 150 are counted and compared to a threshold. If an obstacle is detected within 50 feet, then a warning to the driver is given <50> and the control returns back to the first step for a new frame. The location of lane boundaries is not updated since the presence of the obstacle obscures them. The obstacle distance is determined by the ground level obstacle image since the image plane calibration does not take into account the vertical height of an object. As seen in FIG. 3, the top of the vehicle 30 appears to be beyond the horizon as seen in the image plane although the vehicle 30 is actually close as is realistically shown near ground level. Thus for obstacle detection, only the bottom of the vehicle 30 image is scanned since it is in the near portion of the image plane.

Obstacles detected at distances greater than 50 feet (in the far field portion of the image) are effectively deleted from both the image and the edge data <52> to facilitate the execution of the following steps. For this purpose it is assumed that the obstacle has a high content of vertical and horizontal lines whereas the lane markers to be detected comprise diagonal lines. The obstacles are deleted from the image using the following criteria: lines to be deleted must be either vertical or horizontal, their length must exceed 40 pixels, and their intensity must exceed 150.

After the obstacle deletion step the search area is defined dynamically <54>. Given the previous lane boundaries, a truncated triangular search region is defined for each marker location such that the search regions for the two sides of the lane do not overlap. Moreover, the search area changes with marker position MP, and increases in size if the marker was not found in the search of the previous frame. This limits the search space in which the markers can move from one frame to another and shortens the processing time. The dynamic search method is shown separately in FIG. 5. The search is conducted one row at a time and the search field is selected for each row. It begins by defining an increment for a given row as one fifth of the distance between the right and left lane coordinates for that row <54a>. It is then decided <54b> whether either a lane marker was not found or an old marker was used in the previous image. If so, the increment is increased by 20 pixels to enlarge the search area <54c>. Then the right bound BR and the left bound BL (for each search area) is found <54d> by adding and subtracting the increment from the marker position MP for that line. Next a correlation procedure <56> is followed for the defined search field in the current line and if all the rows have not been processed <57> the search continues in another row <58> and the search field is redefined for that row.

When the search area is defined a template matching operation begins <56>. For the defined search area, a window of size 5×5 pixels with constant gray level of 240 is correlated with the image. The correlation measure is based on the absolute difference of the window's gray level and the image's gray level. The window is moved horizontally across the image pixel by pixel to obtain a correlation measure at every point in the traverse and is then indexed vertically to make another horizontal traverse. To reduce computations, the window may be indexed vertically by more than one line so that strips of the search area are sampled rather than the complete search area. Using the absolute difference metric, a perfect match will yield a zero element in the correlation matrix. The correlation matrix is therefore searched for its minimum values. Elements with values under some threshold are selected to represent the marker positions.

This correlation procedure will not yield any match for yellow or faded white lane markers. If such markers
are present or markers are missing altogether, there will be only a few match points. The match points are counted <59> and if there are too few, say, 6 points out of a possible 100, the correlation is repeated with the edge data <60>. Then if the marker is found <62> or there were sufficient match points <59>, the centroid of each marker is calculated <64>. If the marker is not found <62> and the marker position on the opposite side of the lane is known <66>, the missing marker position is calculated based on the opposite marker position and the lane width <68> and its centroid is calculated <64>. When the marker position on the opposite side of the lane is not known <66> the lane boundary from the previous frame is used <70>. After the centroid of each marker is calculated <64>, if the markers are not found where expected <72> based on the previously detected lane geometry, the marker locations from the previous frame are used <74>.

The determination of expected marker position <72> involves comparing the position of each marker centroid with that of the previous frame. If the change is more than nine pixels a flag is set. Then a check is made for the presence of a second marker in each search area as may occur, for example, when an exit lane is sensed. The distance between two such markers is computed and if it is above a threshold for some distance (15 rows of the image) the outside marker (i.e., the rightmost marker in the right search area) is rejected and a flag is set. Then for each row, if a marker is not found the previous marker is used, provided no flag has been set. If a flag is set, only new marker locations are used. The result of this procedure is a set of x,y coordinates specifying lane markers. Then a least-squares line fit is performed to generate the lane boundary <76>.

To validate the estimated lane boundaries they are checked for consistency of angles and width <78>. If the angle between the present lane boundary and the previous lane boundary is greater than 35°, then the present lane boundary is rejected. It is then estimated from the other lane boundary based on lane width. Similarly, the width of the lane is checked at two different look-ahead distances for upper and lower bounds.

The algorithm also checks for lane changes by determining if the lower end of a lane boundary is near the middle of the image <80>. If it is a lane change, warning is given <82> and depending on the lane change direction (left or right), the markers are exchanged, i.e., for a left lane change, the left boundary is redefined as the right boundary and a new left boundary is predicted based on the road/camera geometry.

The estimated lane boundaries are used to compute the vanishing point (where the left and right boundaries intersect) and the intersection of the right marker at a fixed distance. These points and the vehicle dynamics information are then used by a Kalman filter for estimating the road curvature, the lateral shift from the center of the lane, and the heading error as disclosed in the Dickmanns et al paper, supra. These estimates determine a coordinate transformation which determines the look-ahead path required for autonomous vehicle control <86>. For display purposes the lane boundaries are overlaid on the image <88>.

FIGS. 6a and 6b comprise a flow chart showing an image processing algorithm using a Hough transform. The two figures are joined at node A. The process starts by digitization of the camera signals <100>, contrast expansion <102> and edge detection by a Sobel filter <104> as described for the previous method. The image area between the vehicle hood and the horizon is the only area operated upon. A thresholding operation <106> generates a binary image by setting all gray levels above 150 to one and the rest to zero. Noise cleaning and boundary tracing <108> removes isolated, single pixels and traces the boundaries of the lane and road markers. The boundary tracing algorithm saves connected or adjacent pixels and follows the contour of a lane marker or a road edge in all directions. This is done by dividing the image into three equal segments representing the left, center and right portions of the image. For each segment, the left and right edges of the lane or road markers are searched for at least two adjacent edge pixels. After marking the edge pixels, the boundaries of the lane markers are traced until either the top of the image is reached or until no edge pixels are found. The resulting boundary traced image has clean lane markers and road edges. Then obstacle detection is performed by counting edge pixels within the lane boundaries and within 50 feet of the vehicle <110> and comparing the count to a threshold <112>. If the count is greater than the threshold an obstacle is detected and an obstacle warning is given to the driver <114> and the program returns to START. If no obstacle is detected a line Hough transform is performed <116>.

The Hough algorithm uses the boundary traced lane markers to estimate the location of the lane boundaries. The Hough transform has been used extensively for image processing applications. By knowing the desired shape a priori, the transform maps complex shapes, whether lines or curves, into simple ones in the Hough domain. The simplest transform in terms of computational burden is a line transform. Accordingly the lane boundaries are approximated by several straight lines. FIGS. 7a and 7b show a line in the image plane and the Hough transform of the same line. The normal parameterization of a straight line is given by R = x cosA + y sinA where R is the shortest distance from the origin to the line, and A is the angle between the normal to the line and the x axis. Each pixel in the Hough domain represents a potential line in the image space. After a line Hough transform is performed on the boundary traced image <116> a search area is defined around the previous boundary position <118> in Hough space as indicated by the box 32 in FIG. 7b.

The intensity of each pixel in the Hough domain corresponds to the strength of a line in the image space or the number of edge elements contributing to that line. To select the n strongest lines in the image, one needs to identify the n pixels with the highest intensity in the Hough domain. The following criteria are used to select the correct lines: (a) the intensity of the selected pixels should have local maxima, (b) the separation of the selected pixels should exceed a certain threshold, and (c) the intensity of the selected pixels should exceed 20% of the maximum intensity. The selection algorithm is implemented by identifying the maximum pixel intensity and deleting those that are less than 20% of that maximum intensity <119>. Then the Hough domain is filtered with an 11×11 non-maximum suppression filter <120>, i.e., for a given pixel, if the other 120 pixels around it were higher in intensity, then the intensity of the subject pixel is set to zero. After filtering, only one local maximum pixel for each 11×11 neighborhood could survive in the Hough domain but some points can still be close together. The number of Hough pixels is further reduced by deleting pixels which would generate near horizontal or vertical lines (A = 0°, A = 90°, A = 180°) <122>. Additional pixels are removed if the distance between two pixels is less than 34 <124>. Finally the pixels are sorted based on intensity or the number of contributing edge elements and the top four pixels are chosen <126>.

In a typical application, the roadway may include discernible lane markers defining two lanes, and road edges may also be visible even if not painted. In this case four edges will be searched for to define the road and the vehicle relationship to the road. When the Hough transform is performed about 300 points may be present in Hough space which represent minor edges, major edges including the ones of interest, and noise. Each operation reduces the number of remaining edges. For example, the deletion of those less than 20% of maximum intensity may eliminate all but 50 points. The non-maximum filter may leave 10 points, and the deletion of pairs that are close together and of those representing horizontal and vertical lines may remove a few more points. Finally the strongest four pixels of the remainder are selected as the lane markers and road edges.

These pixels represent the lane boundaries. The pixels are checked to determine if any lane boundary parameters are missing <128>. If they are, the average of the last two frames' parameters are used <130>. If the parameters are not missing they are validated by comparison with parameters of previous frames <132> and if they are within 10% they are considered to be valid. If not, the average parameters of the last two frames is used <130>. Then lane change is detected <134> and the look-ahead path is estimated <136> using the Kalman filter as described for the previous method, and an inverse Hough transform generates lane boundaries <138> in the image space which are overlaid on the image for display purposes <140>.

The Hough domain is quantized to a 256×256 space for this implementation. Since pixels are selected based on a distance of 34, it is evident that the quantized space can easily be reduced to 128×256 for decreasing the processing time. For real time applications, systolic array chips could be used for fast calculations of the Hough transform. The image could be divided into several sub-images with several systolic chips working in parallel, mapping the results into several Hough domains. Since the Hough transform operation is linear for the continuous case, the final transform can be obtained by summing the partial Hough results. Another advantage of the transform is that a priori knowledge about lane and road boundaries can easily be embedded in the algorithm.

The method of the invention performed by either algorithm, template matching or Hough transform, has several advantages:

(a) The method works well with missing and discontinuous markers, both yellow and white.

(b) The search area in both algorithms is defined dynamically and adapts to the location of the lane markers, which saves processing time.

(c) This method estimates road boundaries with low radius of curvature (e.g., entrance and exit highway ramps).

(d) This method detects lane changes and warns the driver when lanes are crossed.

(e) This method can detect certain obstacles in the lane.

The description of a preferred embodiment of the invention for the purpose of illustrating the invention is not to be considered as limiting the invention since many modifications may be made by the exercise of skill in the art without departing from the scope of the invention.

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:

1. In an automotive vehicle having a computer vision system and an associated camera for viewing a scene ahead of the vehicle, a method of detecting lane markers on a roadway comprising the steps of:
obtaining an image of the scene and digitizing the image,
normalizing the image,
defining a search area in the image,
searching for lane markers in the search area of the image to locate lane marker positions,
estimating the position of any missing lane marker from said located lane marker positions, and
defining lane boundaries from the said located lane marker positions and the estimated position of the missing lane marker.

2. The invention as defined in claim 1 wherein the step of estimating the position of missing lane markers comprises averaging the corresponding marker positions from previous images.

3. In an automotive vehicle having a computer vision system and an associated camera for viewing a scene ahead of the vehicle and obtaining a series of images of the scene, a method of locating lane boundaries on a roadway comprising for each image the steps of:
digitizing the image and normalizing the image,
dynamically defining a search area in the image based on the lane boundaries located in a previous image,
searching for lane markers in the search area of the image to locate lane marker positions,
estimating the position of any missing lane marker based on information selected from the current image and previous images, and
defining lane boundaries from the said located lane marker positions and the estimated position of the missing lane marker.

4. The invention as defined in claim 3 wherein the step of estimating the position of missing lane markers comprises the steps of:
determining the lane width from said previous image,
calculating the estimated position of a missing lane marker from the position of a marker on the opposite side of the lane and the lane width.

5. The invention as defined in claim 2 including the steps of:
checking the
