S. Tsugawa, "Vision-Based Vehicles in Japan: Machine Vision Systems and Driving Control Systems," IEEE Transactions on Industrial Electronics, Vol. 41, No. 4 (August 1994)

`TRW Automotive U.S. LLC: EXHIBIT 1067
`PETITION FOR INTER PARTES REVIEW
`OF U.S. PATENT NUMBER 8,599,001
`IPR2015-00436
`
IEEE TRANSACTIONS ON INDUSTRIAL ELECTRONICS, VOL. 41, NO. 4, AUGUST 1994
`
Vision-Based Vehicles in Japan: Machine Vision Systems and Driving Control Systems

Sadayuki Tsugawa
II. THE INTELLIGENT VEHICLE

The Intelligent Vehicle of the Mechanical Engineering Laboratory, developed since the mid-1970s, has a machine vision system for obstacle detection and a dead reckoning system for autonomous navigation, on a compact car.
`
A. Obstacle Detection System

The machine vision system includes a stereo TV camera assembly and a processing unit. It detects obstacles in real time within its field of view, in a range from 5 m to 20 m ahead of the vehicle with a viewing angle of 40 degrees. The cameras are arranged vertically at the front part of the vehicle. The system locates obstacles in the trapezoidal field of view. The scanning of the cameras is synchronized, and the processing unit uses hard-wired logic instead of a programmable device in order to realize high-speed processing of the video signals from the cameras.
The principle of the obstacle detection is parallax. When the images from the two cameras are compared, the two images of an obstacle are identical except for their positions in the frames. On the other hand, the images of figures on the ground differ due to the positions of the cameras. Fig. 1 illustrates the principle of the obstacle detection. The video signals are differentiated with respect to time and shaped to obtain pulses that correspond to edges in the images. The time interval between the pulses from each camera (signal 1 and signal 2 in Fig. 1) discriminates an obstacle from a figure on the road: an obstacle generates the same time intervals, but a figure on the road generates different time intervals. The cameras thus have to be synchronized with each other, and have to employ vertical and progressive scanning techniques. The position of a scanning line corresponds to the direction to the obstacle, and the point where the optical axes of the cameras cross indicates the distance to the obstacle.
Delaying one of the signals from the TV cameras is equivalent to rotating the optical axis of that camera. Thus, varying the delay time enables us to detect obstacles at other locations. To enlarge the field of view and detect obstacles in the two-dimensional field of view during one scanning period, parallel processing with 16 delay times is employed, which yields a field of view of 16 zones arranged longitudinally at intervals of 1 m. The time required to detect obstacles is 35.6 ms, which consists of 33.3 ms for scanning one frame and 2.3 ms of processing to detect and locate obstacles. Fig. 2 shows an example of the obstacle detection. The guardrail is identified as a series of obstacles that are indicated by black elements in the figure at the bottom.
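The delay-and-AND principle above can be sketched in software. This is only an illustrative emulation, not the original hard-wired logic: the pulse trains and the parallax shift of 3 samples are invented for the example, and each delay value stands for one of the 16 distance zones.

```python
# Sketch: emulate the delay-and-AND parallax test on binary edge-pulse
# trains from the lower and upper cameras. Each delay value corresponds
# to one candidate distance zone.

def coincidences(lower, upper, delay):
    """Count positions where a pulse in `lower` lines up with the
    `upper` pulse train shifted by `delay` samples (the ANDed signal)."""
    n = len(lower)
    return sum(1 for i in range(n)
               if lower[i] and 0 <= i - delay < n and upper[i - delay])

def detect_zones(lower, upper, num_zones=16):
    """Return the coincidence count for each of the 16 delay times,
    mimicking the parallel processing with 16 kinds of delay."""
    return [coincidences(lower, upper, d) for d in range(num_zones)]

# Toy example: an "obstacle" produces identical pulse patterns shifted
# by a fixed (hypothetical) parallax of 3 samples in the upper signal.
lower = [0] * 40
upper = [0] * 40
for edge in (10, 14, 22):      # edges of the obstacle
    lower[edge] = 1
    upper[edge - 3] = 1        # assumed parallax shift
counts = detect_zones(lower, upper)
```

The zone whose ANDed signal accumulates the highest count marks the obstacle's distance, which is why equal pulse intervals (an obstacle) stand out against unequal ones (a figure on the road).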
`
Abstract- This paper surveys three intelligent vehicles developed in Japan, and in particular their configurations, machine vision systems, and driving control systems. The first one is the Intelligent Vehicle, developed since the mid-1970s, which has a machine vision system for obstacle detection and a dead reckoning system for autonomous navigation on a compact car. The machine vision system with stereo TV cameras features real time processing using hard-wired logic. The dead reckoning function and a new lateral control algorithm enable the vehicle to drive from a starting point to a goal. It drove autonomously at about 10 km/h while avoiding an obstacle. The second one is the Personal Vehicle System (PVS), developed in the late 1980s, which is a comprehensive test system for a vision-based vehicle. Its machine vision system captures the lane markings at both road edges along which the vehicle is guided. The PVS has another machine vision system for obstacle detection with stereo cameras. The PVS drove at 16-30 km/h along lanes with turnings and crossings. The third one is the Automated Highway Vehicle System (AHVS), with a single TV camera for lane-keeping by PD control. Its machine vision system uses an edge extraction algorithm to detect lane markings. The AHVS drove at 50 km/h along a lane with a large curvature.
`
I. INTRODUCTION

IT IS necessary for an autonomous intelligent vehicle to have functions of obstacle detection and navigation in order to drive safely from a starting point to a goal. Machine vision systems play an important role in both obstacle detection and navigation because of their flexibility and two-dimensional field of view.
The first intelligent vehicle that employed a machine vision system for obstacle detection was the Intelligent Vehicle [1] that we developed in the mid-1970s. It was followed by the Personal Vehicle System (PVS) [2]. However, little work on obstacle detection using machine vision has been done until now; the Intelligent Vehicle and the PVS are typical, but among only a few, examples. The principle of the obstacle detection of these vehicles is parallax with stereo vision.
On the other hand, machine vision for lateral control is employed in many intelligent vehicles. The PVS [2], ALV [3], Navlab [4], VaMoRs [5], and the Automated Highway Vehicle System (AHVS) [6], [7] employed machine vision to detect road edges or lane markings for lateral control. However, their lane detection algorithms differ from each other.
This paper surveys the configurations, the machine vision systems, and the driving control systems of three such vehicles in Japan: the Intelligent Vehicle, the PVS, and the AHVS.
`
`Manuscript received May 31, 1993; revised March 7, 1994.
`The author is with the Mechanical Engineering Laboratory, AIST, MITI,
`Namiki 1-2, Tsukuba-shi, Ibaraki-ken, 305 Japan.
`IEEE Log Number 9403296.
`
0278-0046/94$04.00 © 1994 IEEE
`
`
`
`
Fig. 1. The principle of the real time obstacle detection. [Block diagram: camera, d/dt, shaper, and delay stages; pulse traces for an obstacle and a figure on the ground: (1) signal from lower camera, (2) signal from upper camera, (3) delayed signal of signal 2, (4) ANDed signal, (5) count of signal 4.]
`
`Since the system had no measures against brightness, shad-
`ows, and shades, the operating condition was restricted.
`
B. Lateral Control System

At the early stage, the Intelligent Vehicle was steered based on the locations of obstacles [8]. The control was retrieved from a table using a key word generated from the locations of obstacles. The vehicle drove at a maximal speed of 30 km/h. After the vehicle was equipped with a dead reckoning function using differential odometers, it drove along a designated path with an autonomous navigation function.
The navigation system features a steering control algorithm that assumes dead reckoning. The algorithm is named the target point following algorithm [9] because the vehicle is steered so as to hit designated points representing the path sequentially to the goal. The designated points, called target points, are defined on a map that the vehicle holds in its on-board computer.
1) Target Point Following Algorithm: In the derivation of the algorithm, the dynamics of an automobile-type vehicle is described as follows:

ẋ = v cos θ   (1)
ẏ = v sin θ   (2)
θ̇ = (v/l) tan α   (3)
`
Fig. 2. The obstacle detection: a road scene (top) and obstacles in the scene (bottom).

where (x, y) is the position of the vehicle, θ is the heading of the vehicle, v is the speed of the vehicle, α is the steering angle, and l is the wheelbase of the vehicle. The relations hold when the vehicle drives without slip. As shown in Fig. 3,
`
`
`
`
`
    The vehicle drives with the steering control for one control period.
  until the vehicle approaches the vicinity of the current target point.
  The target point is updated.
until the vehicle arrives at the goal.
end.
This algorithm is applicable to obstacle avoidance by putting a temporary target point beside an obstacle. The speed of the vehicle is controlled independently of the steering, which is one feature of the algorithm. However, the steering control has an open-loop structure.
3) Experiments: The navigation system includes a 16-bit microcomputer system. Pulses generated by rotary encoders attached to both rear wheels for dead reckoning are counted without asynchronous errors to measure precise wheel speeds. The computer integrates the speeds of the wheels to provide the position and the heading of the vehicle. Data regarding obstacles are also fed into the computer. Then, the navigation system finds optimal control of the steering angle and the speed of the vehicle. The control period of the system was 204.8 ms.
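The dead reckoning described above, integrating the two rear-wheel speeds into position and heading, can be sketched as follows. The track width is an assumed value (the paper does not give it); the control period matches the 204.8 ms stated above.

```python
import math

# Sketch of differential-odometry dead reckoning: rear-wheel speeds
# (from encoder pulse counts) are integrated into (x, y, theta).
# The track width of 1.3 m is an assumed value.

def dead_reckon(pose, v_left, v_right, track=1.3, dt=0.2048):
    """Advance (x, y, theta) by one control period from wheel speeds [m/s]."""
    x, y, theta = pose
    v = 0.5 * (v_left + v_right)          # forward speed of the vehicle
    omega = (v_right - v_left) / track    # yaw rate from the speed difference
    x += v * math.cos(theta) * dt
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return (x, y, theta)

# Ten control periods of straight driving at about 10 km/h (2.78 m/s).
pose = (0.0, 0.0, 0.0)
for _ in range(10):
    pose = dead_reckon(pose, 2.78, 2.78)
```

With equal wheel speeds the heading stays constant, so the vehicle advances purely along x; a speed difference between the wheels rotates the estimated heading.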
Driving experiments of the Intelligent Vehicle were conducted under several conditions. Fig. 4 shows the trajectories of the vehicle when it drove along a designated path while avoiding an obstacle, and Table I shows the series of target points for the driving. When the vehicle approached within 3 m of a target point, it aimed at the next target point. For obstacle avoidance, a temporary target point was put beside the obstacle when it came into the field of view. The speed of the vehicle was about 10 km/h.
`
III. THE PERSONAL VEHICLE SYSTEM

The Personal Vehicle System (PVS) was developed in the late 1980s by Fujitsu and Nissan under the support of the Mechanical Social Systems Foundation in Japan. It was a comprehensive test system for a vision-based vehicle. It comprises a TV camera for lane detection, a stereo TV camera assembly for obstacle detection, an image processor, and control computers. At the early stage of the research, three TV cameras were attached on the roof to detect lanes in the left, central, and right directions, but they were replaced by one TV camera on a swivel inside the windshield for experiments under rainy conditions and in the nighttime. The TV camera captures lane markings at both road edges in the field of view from 5 m to 25 m ahead of the vehicle for lateral control. The vehicle drove at a maximal speed of 60 km/h. Several algorithms for lane detection and lateral control were studied, but only the latest ones are described here.
`
A. Lane Detection System

The lateral control of the PVS is based on lane markings of white lines along both road edges. The lane markings are captured by a TV camera, and the scene is processed by the image processor to detect white lines in every control period as follows:
`
Fig. 3. Derivation of the lateral control algorithm.
`
let (X₀, Y₀) and Θ₀ be the current position and heading of the vehicle in the fixed reference frame (the X-Y system), (X₁, Y₁) be the current target point, and Θ₁ be the expected heading of the vehicle at that point. In the new coordinate system (the x-y system), where the position of the vehicle is at the origin and its heading is zero, let (x₁, y₁) be the present target point and θ₁ be the heading (assume that θ₁ ≠ ±π/2). The headings at the origin and the target point are assumed to be the tangential angles, at these points, of a curve going through the origin and the target point. Then, a cubic curve that goes through the two points is uniquely defined as follows:
y = ax³ + bx²   (4)

where

a = (x₁ tan θ₁ − 2y₁) / x₁³,   b = (3y₁ − x₁ tan θ₁) / x₁²   (5)

By use of the cubic curve, the steering control angle at the origin in the x-y system that leads the vehicle to hit the point (x₁, y₁) with the heading θ₁ is given as follows:

α = arctan 2lb.   (6)
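The cubic coefficients of (5) and the steering angle of (6) can be computed directly. This is a sketch; the wheelbase value and the example target point are assumed figures, not taken from the paper.

```python
import math

# Sketch of the target point following computation: fit the cubic
# y = a*x^3 + b*x^2 through the origin (zero heading) and the target
# (x1, y1) with heading theta1, then steer by alpha = arctan(2*l*b).
# The wheelbase of 2.5 m is an assumed value.

def steering_to_target(x1, y1, theta1, wheelbase):
    """Return (a, b, alpha) per eqs. (5) and (6)."""
    a = (x1 * math.tan(theta1) - 2.0 * y1) / x1**3
    b = (3.0 * y1 - x1 * math.tan(theta1)) / x1**2
    alpha = math.atan(2.0 * wheelbase * b)   # eq. (6)
    return a, b, alpha

# Example: target point 10 m ahead, 1 m to the left, arriving parallel
# to the current heading (theta1 = 0).
a, b, alpha = steering_to_target(10.0, 1.0, 0.0, wheelbase=2.5)
```

By construction the curve passes through the target with the requested tangent, and 2b is the curvature at the origin, which is what (6) converts into a steering angle.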
2) Procedure for Autonomous Navigation: When the vehicle autonomously drives from its starting point to its goal, the procedure for autonomous navigation is designed as follows:
begin
A path is planned with an on-board map and the designated goal.
A series of target points is placed along the path.
Let the first target point be the current target point.
repeat
repeat
The x-y system is defined by translation and rotation of the X-Y system so that the current position of the vehicle is the origin of the x-y system and the current heading is along the x axis.
The steering control is found with (6).
`
`
`
`
Fig. 4. Driving experiments: trajectories of the vehicle along a path (top) and obstacle avoidance (bottom).
`
TABLE I
THE SERIES OF TARGET POINTS FOR THE EXPERIMENT
[Table values not legible in the scanned original.]
`
1) The scene is spatially differentiated by two filters:

[-1 0 1]       [-1 -1 -1]
[-1 0 1]  and  [ 0  0  0]
[-1 0 1]       [ 1  1  1]

to extract vertical edges and horizontal edges. Two image frames comprising edges are obtained.
2) White lines are searched for from a point 5 m ahead of the vehicle in one of the two frames, referring to the result of the last period. When they are found, directional vectors of edges are calculated.
3) White lines will generate a pair of edges, and the edge that completes each pair is searched for in the direction
`
Fig. 5. The lane detection: a road scene (top) and a white line detected (bottom).

of the normal vector of the directional vector. White lines are identified by referring to the width of the paired edges.
4) One of the edges generated from a white line is traced using the directional vector to compensate for parts where the white line is not obtained or is missing due to shadows, shades, or something covering the line.
Fig. 5 shows the result of the lane detection where a shadow is covering the white line. The detection is performed at video rate with the image processor.
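Step 1 of the procedure above can be sketched as a small convolution. The kernels are the Prewitt-style reconstruction shown in the step list (the scan is partly illegible, so their exact values are an assumption), and the toy image is invented for illustration.

```python
# Sketch of step 1: spatial differentiation with the two 3x3 filters
# (reconstructed as Prewitt-style kernels) to obtain vertical-edge and
# horizontal-edge frames from a grayscale image.

V_KERNEL = [[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]]   # vertical edges
H_KERNEL = [[-1, -1, -1], [0, 0, 0], [1, 1, 1]]   # horizontal edges

def convolve(img, kernel):
    """3x3 correlation over the image interior; borders stay zero."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(kernel[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# Toy image: a bright vertical stripe (a "white line") on a dark road.
img = [[255 if 3 <= x <= 4 else 0 for x in range(8)] for _ in range(8)]
edges = convolve(img, V_KERNEL)
```

The stripe's left and right borders produce a pair of strong opposite-sign responses, which is exactly the paired-edge structure that steps 2 and 3 search for and check against the expected line width.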
`
B. Lateral Control System

The lateral control of the PVS is based on a driver behavior model which simulates driving by a human driver. The steering control is, therefore, determined with the following three factors: a target point, the weighting at each observed point, and the weighting on the left and right white lines to be followed. The target point is defined as the point that the vehicle is headed for, and an observed point is defined as a point where a human driver is mainly looking while steering. The driving control system of the PVS consists of workstations, to which the image processor is connected.
After sensors, including the vision system and a speedometer of the vehicle, acquire the locations of white lines or lane information and the vehicle speed, the information is delivered to the calculation of steering control and camera swivel control. The lane information includes not only the locations of the white lines but also the tangential angles at every 1 m from 5 m to 25 m ahead of the vehicle.
The driving control system originally holds data for planning of navigation, including the path, the heading, and the speed, and rules of driving. Thus, global driving commands and geographical data of the route generated in the driving
`1067-004
`
`
`
`
`control system are used to define observed points, a target
`point, and weighting at each observed point.
`The steering control is calculated using an angle between
`the vehicle and the white line ahead of it, and the distance
`to the white line. Referring to Fig. 6, an element of steering
`control is defined as follows:
sl[i] = {f(Li)·(Xi − xi) + g(Li, ti)·(Ti − ti) + h(Li, φ)}·k(v)   (7)

where
sl[i]: an element of steering control based on the left white line,
Li: the distance to the target point i,
f(Li): a quadratic function of Li,
Xi: the target lateral distance,
xi: the distance to the left white line,
ti: the tangential angle,
g(Li, ti): a quadratic function of Li and ti,
Ti: the target yaw angle,
h(Li, φ): a cubic function of Li and φ,
φ: the camera swivel angle,
v: the speed of the vehicle, and
k(v): a quadratic function.
Then, the steering control S is defined as the weighted sum of sl[i] and sr[i] as follows:

S = Σi (wl[i]·sl[i] + wr[i]·sr[i])   (8)

where
sr[i]: an element of steering control based on the right white line, and
wl[i], wr[i]: the weightings for the left and right observed points i.
Experiments with the PVS were conducted on a proving ground to confirm the lateral control algorithm in conjunction with the swivel control of the TV camera, not only under fair weather in the daytime but also in the nighttime and under rainy conditions. Fig. 7 shows the result of an experiment under a fair condition. The control period was 200 ms. Eleven target points between 5 m and 15 m from the vehicle were defined, and the largest weight was on the point at 7 m from the vehicle. The rate of successful detection of white lane lines was 100% under fair weather in the daytime, but it became 70% on average in the nighttime. However, the PVS drove as stably in the nighttime as in the daytime, because missed observed points were interpolated by varying the weightings.
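The weighted sum and the weight-shifting behavior described above can be sketched as follows. The element values and weights are illustrative stand-ins: the paper gives only the polynomial orders of f, g, h, and k, not their coefficients.

```python
# Sketch of the weighted-sum steering control of the PVS: steering
# elements from the left and right white lines are combined over the
# observed points with per-point weights.

def steering(sl, sr, wl, wr):
    """Weighted sum of left/right steering elements over observed points."""
    return sum(wl[i] * sl[i] + wr[i] * sr[i] for i in range(len(sl)))

# Eleven observed points with illustrative element values. Suppose the
# left line was missed at point 3 (e.g. at night), so its weight is
# shifted to the right line, the interpolation by varying weightings
# described above.
sl = [0.1] * 11
sr = [0.1] * 11
wl = [0.5] * 11
wr = [0.5] * 11
wl[3], wr[3] = 0.0, 1.0
S = steering(sl, sr, wl, wr)
```

Because the total weight at each point is preserved when one side drops out, a missed detection changes which line is trusted rather than the overall magnitude of the control, which is why the vehicle stayed stable at a 70% detection rate.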
`
`IV. THE AUTOMATED HIGHWAY VEHICLE SYSTEM
`Some automobile manufacturers in Japan have been con-
`ducting research on vision-based vehicles similar to the Intel-
`ligent Vehicle and the PVS, aiming at a possible solution to
`issues caused by automobile traffic. One example is a vision-
`based vehicle developed by Toyota, named the Automated
`Highway Vehicle System (AHVS). It has a function of lane-
`keeping with machine vision, and drove at a speed of 50
`km/h.
`
`Fig. 6. The lateral control algorithm.
`
Fig. 7. A driving experiment: the trajectory on the test site (top) and the steering angle and the speed of the vehicle (bottom).
`
`The AHVS is built on a medium-sized car. The driving
`control system includes a multiprocessor system, which con-
`sists of a host electronic control unit (ECU) as well as an
`image processor and an actuator controller, both of which
`are connected to the ECU. The image processor functions to
`process data from a CCD camera and to detect white lines
`on a road.
`
A. Lane Detection System

The AHVS employs machine vision to detect lane markings, or white lines, along both sides of a lane, as the PVS does. An algorithm for lane detection based on edge extraction [6] has been developed to be robust against changes in the brightness of road scenes, the position of the sun, shadows, and the shades of guardrails, other vehicles, and constructions.
A road scene is input through a monochrome camera and quantized to 256 × 256 pixels, each of which is represented by 8-bit data. A window of 256 × 30 pixels is set corresponding to the field of view from 10 m to 20 m ahead of the vehicle. Special hardware was made for real time processing.
`
`
`
`
`
Fig. 8. Experiments of white line detection under various conditions by the edge extraction system: the field of view (left) and the segmented lines (right). The conditions are: a shadow (top), lens flare (middle), and a shade (bottom).
`
It operates with a period of 100 ms from the input of a road scene to the output of the locations of white lines.
The processing for edge extraction comprises two steps: preprocessing and white line detection. In the preprocessing step, the input scene is differentiated with a 3 × 3 Sobel operator to get the value and direction of each edge, and then the differentiated scene is thresholded, followed by peak extraction and line segmentation processing. The line segmentation processing generates a list of segmented lines based on the continuity of the peaks. In the detecting step, white lines are detected among the list of segmented lines using the following characteristics of white lines:
- White lines are continuous.
- The locations of the white lines vary continuously.
- White lines can be approximated with straight lines.
- The curvature of a route does not change rapidly.
- The edges are much longer than other noisy pieces of white lines.
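The detecting step can be sketched as a filter over the segmented lines using two of the characteristics above, length and straight-line approximation. The length and residual thresholds are assumed values; the paper does not state them.

```python
# Sketch of the white-line detecting step: among segmented lines (lists
# of (x, y) peak coordinates), keep candidates that are long and well
# approximated by a straight line. Thresholds are assumed values.

def line_fit_residual(pts):
    """Least-squares fit x = m*y + c (lane lines are near-vertical in
    the image) and return the mean absolute residual."""
    n = len(pts)
    sy = sum(y for _, y in pts)
    sx = sum(x for x, _ in pts)
    syy = sum(y * y for _, y in pts)
    sxy = sum(x * y for x, y in pts)
    m = (n * sxy - sy * sx) / (n * syy - sy * sy)
    c = (sx - m * sy) / n
    return sum(abs(x - (m * y + c)) for x, y in pts) / n

def white_line_candidates(segments, min_len=10, max_resid=1.0):
    """Drop short segments and segments that are not nearly straight."""
    return [s for s in segments
            if len(s) >= min_len and line_fit_residual(s) <= max_resid]

# A straight candidate and a strongly curved piece of noise.
straight = [(5.0 + 0.1 * y, y) for y in range(15)]
curved = [(float(y * y), y) for y in range(15)]
kept = white_line_candidates([straight, curved])
```

The remaining characteristics (temporal continuity of the line locations and slowly changing curvature) would be applied across control periods rather than within one frame.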
Experiments were conducted with a 2/3-in CCD monochrome TV camera with an auto-iris function. The focal length of the lens is 10 mm. Fig. 8 shows three experiments with the lane detection system under the conditions: a) there is a shadow of a tree and tire marks; b) there is lens flare; and c) there is a shade under a construction. The white lines in each condition were detected. An experiment measuring the valid range of the electronic shutter speed of the camera for a fixed image showed that the speed ranged from 1/250 s to 1/2000 s with the edge extraction system, which demonstrates its robustness.
B. Lateral Control System

The vehicle is steered to follow a target lane by keeping the heading along the lane. Fig. 9 shows the lateral control
`
`
`
`
Fig. 9. The lateral control: definition of lateral deviation (top), the block diagram of the lateral control system (bottom).
`
system. The control period was 300 ms. The lateral control is based on PD control:

θ(t) = k(L, v)·e(t) + g(L, v)·[e(t) − e(t − 1)]   (9)

where
θ: the steering angle,
L: the distance to the point which is observed for control,
v: the speed of the vehicle,
e(t): the lateral deviation at time t at the distance L from the vehicle,
k(L, v): a proportional gain, and
g(L, v): a differential gain.
`The distance L depends on shapes of the lanes. It was
`found that L = 25 m was optimal along a straight lane, and
`L = 20 m along a curved lane under limited experiments
`on a proving ground at 50 km/h. Thus, the distance is fixed
`to L = 20 m in the experiments, and the proportional and
`differential gains are varied with the speed of the vehicle.
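Equation (9) amounts to one PD update per 300 ms control period and can be sketched directly. The gain values here are illustrative stand-ins: the paper states only that the gains depend on L and the speed, not their values.

```python
# Sketch of the PD lateral control of eq. (9): steering is computed from
# the lateral deviation measured L = 20 m ahead of the vehicle. The
# gains k and g are assumed values for illustration.

def pd_steering(e_now, e_prev, k=0.5, g=2.0):
    """theta(t) = k*e(t) + g*(e(t) - e(t-1)), one control period apart."""
    return k * e_now + g * (e_now - e_prev)

# Deviation shrinking from 0.4 m to 0.3 m over one period: the
# derivative term partly cancels the proportional term.
theta = pd_steering(0.3, 0.4)
```

Because e(t) is a preview deviation measured well ahead of the vehicle, the derivative term reacts to the approaching curve before the vehicle itself has deviated, which matches the early steering at curve entry reported below.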
Fig. 10 shows the results of automatic driving with machine vision and of manual driving by a human driver at a speed of 50 km/h. The edge extraction system was used to detect the lane. Compared to the result of the manual driving, the automatic driving yielded early steering at the entrance to the curve and late steering along the curve.
`
`V. DISCUSSION
`Two points in the vision-based vehicles surveyed here are to
`be discussed. One point is the obstacle detection system in the
`Intelligent Vehicle. It can process video signals at a video rate.
`However, it does not have robustness due to the principle of
`the obstacle detection. It does not have measures to avoid an
`optical illusion and to protect against the influence of shadows,
`shades, and brightness. Active machine vision may be one of
`the measures.
`The other point is delay caused by image processing. Even
`if the processing is achieved in real time, it takes some time
`from input of a road scene to output of control. The delay in
`the closed-loop control systems will cause instability, even if
`
Fig. 10. [Plots over time of the line position (20 m ahead), the steering angle, and the lateral acceleration for the automatic and manual driving.]
`
it is not large. Although the influence was not explicit in the experiments of the PVS and the AHVS, a simulation study [10] indicates the existence of unstable motion of a vehicle in a visual navigation system. Compensation for the delay will be necessary in visual navigation.
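The destabilizing effect of processing delay can be illustrated with a toy discrete loop (this is not the simulation of [10]; the scalar model and gain are invented for the illustration): the same feedback gain that contracts the error instantly can make it grow once the correction acts on a measurement that is one period old.

```python
# Sketch: a scalar error loop e[t+1] = e[t] - k*e[t - delay]. With zero
# delay the error contracts (|1 - k| < 1); with a one-period processing
# delay the same gain produces growing oscillation. Model and gain are
# illustrative, not from the paper.

def peak_error(delay_steps, k=1.6, steps=40):
    """Run the loop and return the largest |error| seen."""
    e = [1.0] * (delay_steps + 1)          # initial error history
    for _ in range(steps):
        e.append(e[-1] - k * e[-1 - delay_steps])
    return max(abs(x) for x in e)

no_delay = peak_error(0)    # stays bounded by the initial error
with_delay = peak_error(1)  # oscillates and grows
```

This is the qualitative point of the discussion: even a delay that is small compared to the control period shifts the loop toward instability unless it is compensated, for example by prediction.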
`
VI. CONCLUSION

Three vision-based vehicles developed in Japan have been introduced. Machine vision was used for obstacle detection and lateral control. The obstacle detection system in the Intelligent Vehicle operates in real time to locate obstacles in the field of view from 5 m to 20 m ahead of the vehicle. The lane detection systems in the PVS and the AHVS are robust to some extent against optical noise.
The navigation system of the Intelligent Vehicle depends on dead reckoning and, thus, is an open-loop control system. However, the algorithm has been extended to a closed-loop visual navigation algorithm [10]. The algorithm in the PVS shows a driving performance similar to that of a human driver, though it is complicated. On the other hand, the simple PD lateral control algorithm in the AHVS shows a performance different from that of a human driver.
Research on intelligent vehicles, or vision-based vehicles, will become much more important, because in the future they will provide a possible solution to automobile traffic issues: accidents, congestion, and pollution.
`
ACKNOWLEDGMENT

I would like to thank T. Ozaki of Fujitsu Limited, A. Hosaka of Nissan Motor Co., Ltd., and N. Komoda of Toyota Motor Corporation for their cooperation in the preparation of this paper. I also acknowledge the Society of Automotive Engineers of Japan, Inc. for permission to reprint from the publications.
`
`
`
`
`
REFERENCES

[1] S. Tsugawa, T. Hirose, and T. Yatabe, "Studies on the intelligent vehicle," Rep. Mechanical Eng. Lab., no. 156, Nov. 1991 (in Japanese).
[2] A. Hattori, A. Hosaka, and M. Taniguchi, "Driving control system for an autonomous vehicle using multiple observed point information," in Proc. Intell. Vehicles '92, June-July 1992, pp. 207-212.
[3] M. A. Turk, D. G. Morgenthaler, K. Gremban, and M. Marra, "VITS: A vision system for autonomous land vehicle navigation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 10, no. 3, pp. 342-361, May 1988.
[4] C. Thorpe, Ed., Vision and Navigation: The Carnegie Mellon Navlab. Norwell, MA: Kluwer Academic, 1990.
[5] V. Graefe and K. Kuhnert, "A high speed image processing system utilized in autonomous vehicle guidance," in Proc. IAPR Workshop Comput. Vision, Tokyo, Japan, Oct. 1988, pp. 10-13.
[6] T. Suzuki, K. Aoki, A. Tachibana, H. Moribe, and H. Inoue, "An automated highway vehicle system using computer vision: Recognition of white guidelines," in 1992 JSAE Autumn Convention Proc. 924, vol. 1, Oct. 1992, pp. 161-164 (in Japanese).
[7] A. Tachibana, K. Aoki, and T. Suzuki, "An automated highway vehicle system using computer vision: A vehicle control method using a lane line detection system," in 1992 JSAE Autumn Convention Proc. 924, vol. 1, Oct. 1992, pp. 157-160 (in Japanese).
[8] S. Tsugawa, T. Yatabe, T. Hirose, and S. Matsumoto, "An automobile with artificial intelligence," in Proc. 6th Int. Joint Conf. Artif. Intell., Tokyo, Japan, Aug. 1979, pp. 893-895.
[9] S. Tsugawa and S. Murata, "Steering control algorithm for autonomous vehicle," in Proc. 1990 Japan-U.S.A. Symp. Flexible Automat., Kyoto, Japan, July 1990, pp. 143-146.
[10] K. Tomita, S. Murata, and S. Tsugawa, "Preview lateral control with machine vision for intelligent vehicle," in Proc. IEEE Intell. Vehicles '93 Symp., Tokyo, Japan, July 1993, pp. 467-472.
`
Sadayuki Tsugawa was born on April 24, 1944 in Hiroshima, Japan. He received the bachelor degree, the master degree, and the doctor degree, all from the Department of Applied Mathematics and Physical Instrumentation, Faculty of Engineering, University of Tokyo, in 1968, 1970, and 1973, respectively.
In 1973 he joined the Mechanical Engineering Laboratory of Japan's Ministry of International Trade and Industry. He is now the director of the Machine Intelligence Division of the Applied Physics and Information Science Department. His interests are in informatics for vehicles, which includes the Intelligent Vehicle with machine vision, vehicle-to-vehicle communication systems, and visual navigation of intelligent vehicles.
Dr. Tsugawa is a member of the Society of Instrument and Control Engineers (SICE), the Japan Society of Mechanical Engineers (JSME), and the Institute of Electrical Engineers of Japan (IEEJ). He received the best paper award from SICE in 1992.
`