`
`SAMSUNG EXHIBIT 1008
`Samsung v. Image Processing Techs.
`
`
`
Copyright and Reprint Permission: Abstracting is permitted with credit to the
source. Libraries are permitted to photocopy beyond the limit of U.S. copyright
law for private use of patrons those articles in this volume that carry a code at the
bottom of the first page, provided the per-copy fee indicated in the code is paid
through Copyright Clearance Center, 27 Congress Street, Salem, MA 01970.
Instructors are permitted to photocopy isolated articles for non-commercial
classroom use without fee. For other copying, reprint or republication permission,
write to IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O.
Box 1331, Piscataway, NJ 08855-1331. All rights reserved. Copyright 1992 by
the Institute of Electrical and Electronics Engineers, Inc.
`
IEEE Catalog Number 92TH0468-9
ISBN 0-7803-0747-X (softbound)
ISBN 0-7803-0748-8 (microfiche)
Library of Congress Number 92-54401
`
`
`
`
`Acknowledgments
`
Many people contributed to this symposium by serving as organizing committee members and/or by giving
suggestions. Some of them are listed here in alphabetical order.
`
S. Amari (U of Tokyo)
M. Bell (U of Nottingham)
C. de Benito (Ford)
J. C. Bezdek (U of W Florida)
B. Bosacchi (AT&T Bell Labs)
D. E. Boyse (U of Illinois)
J. M. Brady (Oxford)
D. Brand (CRA)
P. J. Burt (Sarnoff)
D. L. Christiansen (Texas Trans Inst)
L. S. Davis (U of Maryland)
P. Davies (Castle Rock)
E. D. Dickmanns (U der Bundeswehr München)
G. G. Dodd (GM)
C. L. Dudek (Texas A&M)
L. A. Feldkamp (Ford)
T. Fukuda (U of Nagoya)
W. J. Gillan (UK DOT)
V. Graefe (U der Bundeswehr München)
J. A. Greenberg (Ford)
F. Harashima (U of Tokyo)
G. Hartmann (U Paderborn)
H. Hashimoto (U of Tokyo)
K. Hashimoto (Honda)
B. Heydecker (U College London)
A. Hosaka (Nissan)
T. Hryoej (Daimler-Benz)
T. Inoue (Toyota)
R. Jain (U of Michigan)
D. C. Judycki (US DOT)
T. Kanade (CMU)
A. Kanafani (UC Berkeley)
J. L. Kay (JHK)
D. Keirsey (Hughes)
A. Kemeny (Renault)
C. Koch (Cal Tech)
M. Koshi (U of Tokyo)
S. Y. Kung (Princeton)
T. N. Lam (U of Hong Kong)
J-C. Latombe (Stanford)
C. Laugier (LIFIA)
D. T. Lawton (Georgia IT)
R. N. Lee (NASA)
P. A. Ligomenides (U of Maryland)
G. Lindstrom (Honda)
H. S. Mahmassani (U of Texas)
K. A. Marko (Ford)
B. Mathur (Rockwell)
W. Mellis (Daimler-Benz)
P. Michalopoulos (U of Minnesota)
H. Okamoto (Japan Traffic Management)
C. K. Orski (Urban Mobility)
M. J. Patyza (CMU)
T. Poggio (MIT)
W. B. Powell (Princeton)
W. W. Recker (UC Irvine)
W. Remmele (Siemens)
E. Riseman (U of Massachusetts)
S. G. Ritchie (UC Irvine)
E. H. Ruspini (SRI)
M. de Saint Blancard (Peugeot)
I. Sakai (Honda)
P. Schonfeld (U of Maryland)
Y. Shirai (Osaka U)
M. Shulman (Ford)
K. C. Sinha (CUTC)
W. M. Spreitzer (GM)
M. Sugeno (TIT)
A. Takanishi (Waseda U)
C. Thorpe (CMU)
M. Togai (Togai IL)
S. Tsugawa (MITI)
S. Tsuji (Osaka U)
P. Werbos (NSF)
B. S. Widmann (Hughes)
J. L. Wyatt, Jr. (MIT)
M. Yoshida (Fujitsu)
S. Yuta (Tsukuba U)
L. A. Zadeh (UC Berkeley)
`
I would like to express my appreciation to all who contributed to this symposium.
`
Ichiro Masaki, General Chair (GM)
`
`
`Program
`
`Monday, June 29
`
`Session 1
`
(9:10am - 10:30am)
`
`Chairpersons:
`
V. Graefe (U der Bundeswehr München)
`
`Traffic Sign Recognition in Color Image Sequences
`W. Ritter (Daimler Benz)
`
`A Hierarchical Vision System
G. Hartmann, B. Mertsching (U Paderborn)
`
Vision-Based Car-Following: Detection, Tracking, and Identification
M. Schwarzinger, T. Zielke, D. Noll, M. Brauckmann, W. v. Seelen (R-U Bochum)
`
A Structure-from-Motion Algorithm for Robot Vehicle Guidance
H. Wang, M. Brady (Oxford U)
`
`Session 2
`
`(10:50am - 12:10pm)
`
`Chairpersons:
`
J. L. Wyatt, Jr. (MIT)
`
`VITA - An Autonomous Road Vehicle (ARV) for Collision Avoidance in Traffic
`B. Ulmer (Daimler-Benz)
`
Obstacle Detection Using Bi-Spectrum CCD Camera and Image Processing
H. G. Nguyen, J. Y. Laisne (Renault)
`
Disparity Analysis for Real Time Obstacle Detection by Linear Stereovision
J.-L. Bruyelle, J.-G. Postaire (CAL)
`
Automatic Recognition of Vehicles Approaching from Behind
W. Efenberger, Q.-H. Ta, L. Tsinas, V. Graefe (U der Bundeswehr München)
`
`Lunch
`
`(12:10pm - 1:10pm)
`
Special Invited Session
`
(1:10pm - 2:10pm)
`
`Speaker:
`Chairperson:
`
R. A. Frosch (Vice President, GM)
`G. G. Dodd (GM)
`
`Session 3
`
`(2:10pm - 3:30pm)
`
`Chairpersons:
`
`R. Jain (U of Michigan)
`
A High Performance Modular Architecture for Hardware Implementation of Neural
and Digital Applications
Y-S. Chiou, P. A. Ligomenides (U Maryland, Caelum Res. Corp)
`
`
`
`
Small, Fast Analog VLSI Systems for Early Vision Processing
J. L. Wyatt, Jr., C. Keast, M. Seidel, D. Standley, B. Horn, T. Knight, C. Sodini,
H-S. Lee (MIT)
`
`Object-Based Analog VLSI Vision Circuits
C. Koch (CalTech); B. Mathur, S.-C. Liu (Rockwell International Science Center);
J. G. Harris (MIT); J. Luo, M. Sivilotti (Tanner Research)
`
Collision-Avoidance System Based on Optical Flow
N. Hatsopoulos, J. A. Anderson (Artemis Associates, Brown U)
`
`Session 4
`
`(3:50pm — 5:30pm)
`
`Chairpersons:
`
L. A. Feldkamp (Ford)
M. Togai (Togai InfraLogic)
`
Lateral Control of an Autonomous Road Vehicle in a Simulated Highway Environment
Using Adaptive Resonance Neural Networks
J. M. Lubin, E. C. Huber, S. Gilbert, A. L. Kornhauser (Princeton U)
`
`Fuzzy Control for Active Suspension Design
`E. C. Yeh, Y. J. Tsao (National Tsing Hua U)
`
`Adaptive Traffic Signal Control Using Fuzzy Logic
S. Chiu (Rockwell)
`
`Estimating Ignition Timing from Engine Cylinder Pressure with Neural Networks
B. Willson, C. Anderson, J. Whitham (Colorado State U)
`
`Representation and Recovery of Road Geometry in YARF
K. Kluge, C. Thorpe (Carnegie Mellon U)
`
`Tuesday, June 30
`
`Session 5
`
(9:00am - 10:40am)
`
`Chairpersons:
`
K. A. Marko (Ford)
B. Ulmer (Daimler-Benz)
`
Design Method for an Automotive Laser Radar System and Future Prospects for Laser Radar
M. Sekine, T. Senoo, I. Morita (Nissan); H. Endo (Kansei Co.)
`
`Vision for Vehicle Guidance Using Two Road Cues
G. Funka-Lea, R. Bajcsy (U of Pennsylvania)
`
`Super Smart Vehicle System: AVCS Related Systems for the Future
S. Tsugawa (MITI); T. Saito (Toyota); A. Hosaka (Nissan)
`
`A Simulation Based Methodology for Analyzing Network-Based
`Intelligent Vehicle Control Systems
N. Boustany, M. Folkerts, K. Rao, A. Ray, L. Troxel, Z. Zhang (GM)
`
`Impact of Automatic and Semi-Automatic Vehicle Longitudinal Control on Motorway Traffic
F. Broqua (Renault)
`
`
`
`
`Session 6
`
(11:00am - 12:20pm)
`
`Chairpersons:
`
`M. Shulman (Ford)
`
Development of Automatic Driving System on Rough Road
-- Realization of Highly Reliable Automatic Driving System
K. Ohnishi, I. Komura, T. Ishibashi (Toyota)
`
`Development of Automatic Driving System on Rough Road
`-- Automatic Steering Control by Fuzzy Algorithm
T. Shigematu, Y. Hashimoto, T. Watanabe (Toyota)
`
`Development of Automatic Driving System on Rough Road
`-- Fault Tolerant Structure for Electronic Controller
N. Ooka, T. Tsuboi, H. Oka (Nippondenso)
`
Road Tracking, Lane Segmentation, and Obstacle Recognition by Mathematical Morphology
S. Beucher, X. Yu, M. Bilodeau (CMM)
`
`Lunch
`
(12:20pm - 1:20pm)
`
`Session 7
`
`(1:20pm - 2:40pm)
`
`Chairpersons:
`
J. A. Greenberg (Ford)
B. Mathur (Rockwell)
`
`Intelligent Cruise Control with Fuzzy Logic
R. Müller, G. Nöcker (Daimler-Benz)
`
`Neural Network Modeling and Control of an Anti-Lock Brake System
L. I. Davis, Jr., G. V. Puskorius, F. Yuan, L. A. Feldkamp (Ford)
`
`Obstacle Detection for a Vehicle Using Optical Flow
G.-S. Young (NIST, U of Maryland); T-H. Hong (NIST, American U); M. Herman (NIST);
J. C. S. Yang (U of Maryland)
`
Application of Genetic Programming to Control of Vehicle Systems
R. J. Hampo, K. A. Marko (Ford)
`
`Session 8
`
`(3:00pm - 4:20pm)
`
`Chairpersons:
`
J. S. Albus (NIST)
S. E. Shladover (UC Berkeley)
`
Lane Recognition System for Guiding of Autonomous Vehicle
A. Suzuki, N. Yasui, N. Nakano, M. Kaneko (Matsushita)
`
`Computer Architecture and Implementation of Vision-Based Real-Time Lane Sensing
O. D. Altan, H. K. Patnaik, R. P. Roesser (GM)
`
`Driving Control System for Autonomous Vehicle Using Multiple Observed Point Information
A. Hattori, A. Hosaka, M. Taniguchi (Nissan)
`
`
`
`
`An Image-Processing Architecture and a Motion Control Method for an Autonomous Vehicle
K. Hashimoto, S. Nakayama, T. Saito, N. Oono, S. Ishida, K. Unoura, J. Ishii,
Y. Okada (Honda)
`
`Evening Session
`
`(Presentations will be held in parallel from 4:30pm to 6:00pm)
`
`Texture Analysis for Road Detection
C. Fernandez-Maloigne, D. Laugier, A. Bekkhoucha (U de Tech de Compiègne)
`
`
`Development of Route Calculation System
T. Hashimoto (Sumitomo)
`(Paper is not available for proceedings.)
`
`A Real Time Information Processing Algorithm for the Evaluation
`and Implementation of ATMS Strategies
J. D. Leonard II, B. Ramanathan, W. W. Recker (U of Cal, Irvine)
`
`A Real Time Distance Headway Measurement Method Using Stereo and Optical Flow
T. Ito, T. Sakagami, S. Kawakatsu (Daihatsu)
`
`Simulation Solenoid Valve for Controller Design and Evaluation
M. A. Arain, P. G. Scotson (Lucas)
`
Learning Autonomous Navigation Abilities Using Radial Basis Function Networks
M. Aste, B. Caprile (IRST)
`
`A Neural Network Texture Segmentation System for Open Road Vehicle Guidance
A. Catala, A. Grau, B. Morcego, J. M. Fuertes (U Politecnica de Catalunya)
`
3D Scene Modeling for Autonomous Mobile Robot Self-Location Using Passive Landmarks
L. Ottaviano, G. Attolico, T. D'Orazio, E. Stella, A. Distante (IESI)
`
`Reactive Motion Planning for an Intelligent Vehicle
M. Hassoun, C. Laugier (LIFIA-IRIMAG)
`
`Navigation by Combining Reactivity and Planning
A. Cimatti, P. Traverso, S. Dalbosco (IRST); A. Armando (U Genoa)
`
`A Generic Traffic Priority Language for Autonomous Robots Navigation in an Unknown Space
N. G. Bourbakis (U of New York)
`
`How Smart Can a Car Be?
`S. Moite
`
`Design of a Behavior-Based Micro-Rover Robot
W. O. Troxell, S. Cherian, M. M. Ali (Colorado State U)
`
Odysseus Robot: The Wheeled/Legged Version
`N. G. Bourbakis (U of New York)
`
`World Model Representations for Mobile Robots
T-H. Hong, E. Angelopoulou, A. Y. Wu (American U)
`
`Intelligent Manoeuvring and Execution Control
D. Ramamonjisoa, N. Le Fort, D. Meizel (UTC)
`
`
`
`
Road Reconstruction by Image Analysis for an Autonomous Vehicle
for Protection of Mobile Working Sites
C. Lailler, J.-G. Postaire (USTL); J.-P. Deparis (CRESTA INRETS)
`
A Neural Network Estimating the Psychoacoustical Annoyance from Physical Data
J. M. Parot (IMDYS); C. Thirard (METRAVIB RDS); E. Vincent (INRETS)
`
`Fuzzy Logic Object Oriented Classes
B. S. Widmann, G. R. Widmann (Hughes)
`
A New Approach of Fuzzy Control by Using Neural Network
S. Baocheng (Chinese Academy of E.I., Chinese Academy of SINICA); L. Xihui (Chinese
Academy of E.I.); Z-F. Zhang (Chinese Academy of SINICA, U of Science and Tech of China)
`
`Extracting Viewpoint Invariance Relations Using Fuzzy Sets
B. Ton (Arizona State U)
`
`Integrator Decoupling Applied to Power System Load Frequency Control
S. Kawashima (Kokusai Jr College)
`
`An Implementation of Redundancy Resolution and Stability Monitoring
`for a Material Handling Vehicle
A. L. Bangs, F. G. Pin, S. M. Killough (Oak Ridge National Lab)
`
Parallel Parking with Curvature and Nonholonomic Constraints
D. Lyon (RPI)
`
`Wednesday, July 1
`
`Session 9
`
`(9:00am - 11:00am)
`
`Chairpersons:
`
D. E. Boyse (U of Illinois)
G. Siegle (Robert Bosch)
`
`The California PATH Program of IVHS Research
`and Its Approach to Vehicle- Highway Automation
S. E. Shladover (U of Cal, Berkeley)
`
`Travel-Time Data Provision System Using Vehicle License Number Recognition Devices
Y. Tanaka (Osaka Prefectural Police); K. Kanayama, H. Sugimura (OMRON Corp)
`
A Communications Architecture Concept for IVHS
D. J. Chadwick, V. Patel (MITRE Corp)
`
`TravTek Advanced Driver Information System
J. Rillings (GM)
`(Paper is not available for proceedings.)
`
A Dynamic Route Guidance System Based on Historical and Current Traffic Pattern
T. N. Lam, C. O. Tong (U of Hong Kong)
`
`Communication Architectures Supporting Proactive Driver Safety
G. L. Mayhew, K. L. Shirley (Hughes)
`
`
`
`
Special Invited Session on IVHS
`
(11:20am - 12:20pm)
`
`Speaker:
`
T. D. Larson
(Federal Highway Administrator,
United States Department of Transportation)
`
`Chairperson:
`
`W. M. Spreitzer (GM)
`
`Lunch
`
`(12:20pm - 1:20pm)
`
`Session 10
`
(1:20pm - 3:00pm)
`
`Chairpersons:
`
Casilda de Benito (Ford)
B. S. Widmann (Hughes)
`
`A Reference Model Architecture for Intelligent Vehicle and Highway Systems
J. S. Albus, M. Juberts, S. Szabo (NIST)
`
`Road and Relative Ego-State Recognition
R. Behringer, V. v. Holt, D. Dickmanns (U der Bundeswehr München)
`
`Progress in Neural Network-based Vision for Autonomous Robot Driving
D. A. Pomerleau (Carnegie Mellon U)
`
`Visual Processing for Vehicle Control Functions
E. M. Riseman, A. R. Hanson, H. S. Sawhney, J. R. Beveridge,
R. Kumar (U of Massachusetts)
`
`Autonomous Driving on a Road Network
G. Siegle (Robert Bosch); I. Geisler, F. Laubenstein, H.-H. Nagel, G. Struck (IITB)
`
Real Time Task Manager for Communications and Control in Multicar Platoons
K. S. Chang, W. Li, I. R. Potche, P. Varaiya (U of Cal, Berkeley)
`
`
`
`
`COMPUTER ARCHITECTURE AND IMPLEMENTATION OF
`
VISION-BASED REAL-TIME LANE SENSING
`
`O. D. Altan, H. K. Patnaik, R. P. Roesser
`General Motors Research Laboratories
`
`Vehicle Systems Research Department
`30500 Mound Road
`
`Warren, MI 48090-9055
`Phone: (313) 986-9100
`Fax: (313) 986-3003
`
`INTRODUCTION
`
Lane sensing is the detection of lane boundaries from a video image of the
roadway and the determination of the vehicle's position and orientation with
respect to the lane. The detection of lane boundaries involves the digital
processing of a roadway image to bring out points comprising the lane edges and
a search of the processed images to find those points. The determination of the
vehicle-lane relationship consists of estimating vehicle offset, heading and lane
curvature from the detected lanes and vehicle measurements. Lane sensing
supports several other functions, including collision warning, collision
avoidance, driving performance monitoring and lane control.
`
An algorithm that was developed at G.M. Research Laboratories [1] was
specifically tailored for the purpose of demonstration on a vehicle in real-time.
It involves generating two processed images that are separately searched for
lane boundary points. The first processed image is generated by computing a
local 5-by-5 pixel average of the intensity-normalized input image. The second
processed image is generated by applying a Sobel edge operation [2] to the
intensity-normalized image and then computing a local 5-by-5 intensity average.
The first processed image is intended to bring out points from well marked lane
boundaries, while the second image is intended to bring out transitions at the
lane boundaries when they are not well marked.
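The two-image scheme just described can be sketched in software. The following is a non-real-time NumPy illustration; the 3-by-3 Sobel kernels and the edge padding are assumptions, since the paper's system performs these steps on pipelined convolver boards rather than in code:

```python
import numpy as np

def box_average(img, k=5):
    """k-by-k local average computed by summing shifted copies of an
    edge-padded image (the hardware uses a pipelined convolver board)."""
    p = k // 2
    padded = np.pad(img.astype(np.float64), p, mode="edge")
    h, w = img.shape
    acc = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            acc += padded[dy:dy + h, dx:dx + w]
    return acc / (k * k)

def sobel_magnitude(img):
    """Approximate gradient magnitude S = |H| + |V| from two 3x3 Sobel convolutions."""
    kh = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float64)
    kv = kh.T
    h, w = img.shape
    padded = np.pad(img.astype(np.float64), 1, mode="edge")
    gh = np.zeros((h, w))
    gv = np.zeros((h, w))
    for dy in range(3):
        for dx in range(3):
            window = padded[dy:dy + h, dx:dx + w]
            gh += kh[dy, dx] * window
            gv += kv[dy, dx] * window
    return np.abs(gh) + np.abs(gv)

def two_path_processing(normalized):
    """Produce the two processed images searched for lane boundary points."""
    averaged_raw = box_average(normalized, 5)                    # well marked lanes
    averaged_edge = box_average(sobel_magnitude(normalized), 5)  # weak transitions
    return averaged_raw, averaged_edge
```

The two outputs correspond to the "averaged raw" and "averaged edge" framestore images described later in the paper.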
`
`This lane-detection algorithm was developed on a mainframe
`computer operating in non-real-time with images taken from
`a video tape of highway scenes. Numerous heuristic
`techniques have been incorporated to handle particular
`situations such as merging lanes, shadows, missing markers,
`and false targets. Limited attention was given to real-time
`implementation considerations.
`
A second algorithm was developed to estimate lane curvature, vehicle offset with
respect to the center of the lane and vehicle heading. The algorithm uses
information obtained from lane detection and from measurement of vehicle speed
and steering angle. The estimation algorithm is in the form of a Kalman filter
using linear models of both the vehicle dynamics and the road, plus camera and
vehicle geometry. This estimation algorithm was also developed on a mainframe
computer.
`
This paper describes the implementation of these two algorithms to achieve lane
sensing in real-time on board a vehicle. This required developing the computer
architecture, electronics and software to perform the processing steps in the
algorithms. Attention was given to multi-tasking, multi-processing and
interprocess communication. In particular, use of the VME bus [3] provides an
open architecture and the disc-based real-time operating system [4] provides
multitasking and a flexible environment.

The lane sensing system was installed in a vehicle and the real-time
implementation was demonstrated on a test track and limited access highways.
`
COMPUTATIONAL REQUIREMENTS

The computational requirements of the real-time system are determined by the
required processing time and latency. According to initial studies, the
processing time, or the time elapsed between the starting points of two
successive process cycles, should be no more than 0.2 seconds. Also, the
latency, or the time from the point an input image is sampled to the point that
the processed results become available, should be no more than 0.5 seconds.
`
The camera produces an image frame in approximately 33 milliseconds. Each image
frame is digitized such that there are 512 lines with 512 pixels in each line.
This is equivalent to over a quarter of a million pixels in a frame to be
processed, each represented by 8 bits. A histogram measurement and several
filtering operations are performed concurrently, as well as table look-ups and
general arithmetic operations. The filtering operations involve manipulating
pixels in the neighborhood of a given pixel. Most of the computation is in the
form of simple integer arithmetic operations, but the amount of data to be
processed is quite large. The total computation rate required is approximately
900 million arithmetic operations per second. This rate is equal to the
computation rate of several supercomputers, although it involves low-precision
integer arithmetic. The images are processed by special-purpose boards optimized
for specific image processing functions. These boards are capable of performing
at this very high rate only for these given functions, and are in fact very
limited in regard to other functions.
`given functions, and are in fact very limited in regard to
`other functions. That is, it is the application-specific nature
`of these boards, where the design is tailored for a given
`
`202
`
`SAMSUNG EXHIBIT 1008
`
`SAMSUNG EXHIBIT 1008
`Page 10 of 14
`
`
`
`function, that leads to such high performance.
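The quoted figure of roughly 900 million operations per second can be sanity-checked with a back-of-envelope calculation. The per-pixel operation counts below are illustrative assumptions, not numbers from the paper; the pipeline runs at video rate even though results are consumed only once per processing cycle:

```python
pixels_per_frame = 512 * 512                 # from the digitizer resolution
frame_rate = 30.0                            # one frame every ~33 ms
pixel_rate = pixels_per_frame * frame_rate   # ~7.9 million pixels/s through the pipeline

# Assumed per-pixel operation counts (illustrative, not from the paper):
ops_averages = 2 * (25 + 24)   # two 5-by-5 local averages: 25 multiplies + 24 adds each
ops_sobel = 2 * (9 + 8)        # two 3-by-3 Sobel convolutions
ops_misc = 10                  # look-ups, histogram update, absolute values, etc.
ops_per_pixel = ops_averages + ops_sobel + ops_misc

total_ops = pixel_rate * ops_per_pixel
print(f"{total_ops / 1e6:.0f} million ops/s")  # ~1100 million: same order as the quoted 900
```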
`
The computer system currently consists of several special-purpose boards, and is
not suitable for a production vehicle at this time. However, with the continued
rapid advancement in integrated circuit technology we believe that it will soon
be possible to implement all of these functions in a small chip set or chip
package with reasonable cost for production, sufficient performance, and low
power consumption.
`
`SYSTEM ARCHITECTURE
`
`The block diagram of the system is shown in Figure 1. The
`system consists of several boards on a VMEbus, housed in
`a standard full-height VME rack. There are two sections to
`this system: the general-purpose computer and the real-time
`image processor.
`
The main task of the general-purpose computer section is to function as the
master controller for the real-time image processor. It consists of an M68030
processor running at a clock speed of 15 MHz, 4 Mbytes of dynamic random access
memory (DRAM), a disk controller, a 20 Mbyte hard disk, and a floppy diskette
drive. The M68030 processor communicates with all the peripherals through the
VMEbus. The real-time image processing boards are interfaced to the VMEbus and
to the video bus. The processor is capable of accessing any of these boards to
send commands, to read status, or to access data. The processor periodically
sends commands to the image processing boards at specific times so that they
perform their intended functions in synchronization.
`
The general-purpose computer runs under a real-time operating system [4]. This
is a multi-tasking operating system with fast interrupt handling and system
utilities. There is a screen oriented text editor for developing and modifying
programs. An assembler enables the programs developed on the system to be
assembled and loaded into the memory for execution. A debugger on the system is
an aid for developing the software while it is running in real-time.
`
`The real-time image processor consists of a camera, a set of
`processing boards and a display. Each of the steps in
`processing the image is performed at video rate. After an
`image is acquired by the camera, pixels are passed from
`board to board in serial form. The boards are synchronized
`to the pixel times and operate concurrently with each other.
`Each board performs a particular process on the image by
`operating on the pixels sequentially as they arrive. Shortly
`after the last pixel from the input image to a process is
`encountered, the process reaches completion.
`
Information flow among the boards is accomplished through high-speed video data
paths, as shown in Figure 1. Each data path conveys 8- or 16-bit data and shares
common timing and control information with the other data paths. The separate
data paths provide for concurrent data transfers among the processing modules.
This increases the overall data transfer rate significantly compared to a
conventional bus, in which only one transfer at a time can occur. The network of
boards and data paths is said to act as a pipeline since information appears at
various process stages throughout the network and the pieces of information flow
through the network synchronously.
`
The detailed block diagram of the real-time image processor is shown in
Figure 2. Each of the blocks in this diagram performs a specific function, and
most of them reside on a single board. A charge-coupled device (CCD) camera is
used to acquire the image. This is a monochrome image in RS-170 format [5]. The
digitizer is an 8-bit flash analog-to-digital (A/D) converter with a sampling
rate of 10 MHz. The output of the digitizer is a 512x512 image. The digitized
image is simultaneously fed into a framestore and the masking board. The masking
board passes only a portion of the image whose location is chosen to be
representative of the road intensity. The output of the masking board is the
input to a histogram board, which generates the intensity histogram of the
unmasked image portion. The M68030 processor uses this histogram to compute the
mean intensity over this portion.
`
Once the mean from the histogram is computed, the M68030 processor generates a
linear function for intensity normalization and updates a look-up table with
this function. As the original image is passed through this look-up table, its
intensity is modified according to this linear function so that the average
intensity over the road region is nearly constant. This helps to remove
undesired intensity variation in an image caused by, for example, an underpass,
a bright or cloudy day, shadows from other vehicles, etc.
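A sketch of this histogram-driven normalization follows. The target mean, the pure-gain form of the linear function, and the clipping to 8 bits are assumptions; the paper does not give the constants it uses:

```python
import numpy as np

TARGET_MEAN = 128.0  # assumed target road intensity (not specified in the paper)

def build_normalization_lut(histogram, target=TARGET_MEAN):
    """Build a 256-entry look-up table that scales intensities so the
    road-region mean becomes roughly constant.  `histogram` is the 256-bin
    count produced from the unmasked (road) portion of the image."""
    histogram = np.asarray(histogram, dtype=np.float64)
    levels = np.arange(256, dtype=np.float64)
    mean = (histogram * levels).sum() / max(histogram.sum(), 1.0)
    gain = target / max(mean, 1.0)
    return np.clip(levels * gain, 0, 255).astype(np.uint8)

def normalize(image, lut):
    """Pass the raw 8-bit image through the look-up table."""
    return lut[image]
```

In the hardware the table is written as 256 memory locations, one per grey level, as the timing section describes.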
`
The output of the look-up table is presented to two paths in which the image is
processed concurrently by different boards. In the first path, the normalized
image is passed through a convolver that computes a 5-by-5 local average of the
pixels. The output of the convolver is stored in a framestore as a processed
image for later analysis.
`
The normalized image is concurrently fed into a Sobel edge detector. The edge
detector consists of two convolvers and a look-up table. The convolvers operate
on a 3x3 window. The look-up table combines the edges in the two perpendicular
directions by adding their absolute values, as described by the formula:

S(x,y) = |H| + |V|

where the first absolute-value term corresponds to a vertical edge and the
second to a horizontal edge.
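Because the hardware combines the two convolver outputs through a look-up table rather than arithmetic, the combination can be fully precomputed. A sketch, assuming signed 8-bit convolver outputs and saturation at 255 (both assumptions, not stated in the paper):

```python
import numpy as np

# Precompute S = |H| + |V| for every pair of signed 8-bit convolver outputs,
# saturating at 255, so the per-pixel combination is a single table access.
signed_range = np.arange(-128, 128)
edge_table = np.minimum(
    np.abs(signed_range)[:, None] + np.abs(signed_range)[None, :], 255
).astype(np.uint8)

def combine(h, v):
    """Look up the combined edge strength for signed convolver outputs h and v."""
    return edge_table[h + 128, v + 128]
```

A table look-up takes the same time for any combining function, which is why such boards favor tables over dedicated arithmetic.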
`
The output of the edge detector is the input to another convolver, which also
computes a 5-by-5 local average. The output of this convolver is stored in a
framestore as a second processed image for later analysis.
`
There are six framestores in the system. Each framestore is capable of holding
one full image, and is accessible from the VMEbus and the video bus. The
real-time image processor accesses the framestores through the video bus at
video rates. The CPU accesses them from the VMEbus at a slower rate because the
framestore is implemented as a single-port memory, and the VMEbus has a lower
priority.
`
One framestore is allocated for the original image, shown as "raw" in Figure 2.
The two processed images generated from the special processing boards are stored
in two separate framestores. These are labeled "averaged raw" image and
"averaged edge" image. The rest of the lane sensing algorithm is executed
directly by the M68030 processor in software. The processor uses the two
processed images as input and estimates the lane boundaries. Finally, the
processor executes an algorithm which uses the lane boundaries and vehicle
dynamics information as inputs to estimate the curvature of the road, offset of
the vehicle from the center of the lane, and the heading of the vehicle. Once
all of this information is computed, it is placed in a framestore called
"graphics overlay". As the image for the next process cycle is being acquired
and stored, the contents of the "raw" framestore and the "graphics overlay"
framestore are transferred to two other framestores which function as display
buffers. These two images are displayed on the monitor as processing begins on
the newly acquired image.
`
SYSTEM TIMING

The timing diagram of the lane sensing function is shown in Figure 3. The timing
signals are generated by the video digitizer. All of the operations are
synchronized to the beginning of image frames, which are indicated by markers in
the figure. Each frame consists of two fields and the digitizer generates a
program interrupt to the processor at the beginning of each field. The frame
markers in the figure correspond to the program interrupts for just the even
field. The lane sensing function digitizes and processes the road images once
every six frames, which yields a total processing time of 0.2 seconds.
`
The timing diagram indicates the processing performed on each acquired image
during different frame periods. The image which is acquired at frame number 0 is
shown as image #n in the timing diagram. As soon as the even field interrupt is
received, the processor instructs a framestore to accept the raw digitized image
from the camera. Simultaneously, it instructs the other framestore to get the
raw image of the previous processing cycle for display. The processor activates
the histogram board so that the histogram of the pixel intensity of the image to
be processed is generated. This process continues until the next even field
interrupt arrives. At this time the processor freezes the raw image in a
framestore and stops the histogram operation. During the second frame period the
processor computes a linear transformation of the raw image for intensity
normalization. The transformation function for this specific image is loaded
into the look-up table by writing the computed values into 256 memory locations
where each corresponds to a grey level of the image. The processor waits for the
next even field interrupt which arrives at the beginning of the third frame
period. At this time the raw image is passed through the look-up table, then
through two independent process chains, and finally the results are placed into
two separate framestores which were instructed by the processor to accept this
data at the beginning of this frame period. At the beginning of the next even
field interrupt, the processor freezes the two processed images already stored
in the framestores. No further processing is performed by the image processing
boards until the beginning of the next process cycle, which is shown as frame #6
in the timing diagram.
`
During the remaining three frame periods the M68030 processor uses these two
partially processed images that reside in the framestores. The program in the
processor determines the lane boundaries, executes the Kalman filter algorithm,
and places the graphics information related to this computation into the
designated framestore. During this processing, program interrupts still arrive
from the digitizer, but the processor uses this information only to keep track
of time relative to the processing cycle. In some cases, it is possible that the
total processing is completed earlier than six frame periods. In such a case the
program will wait before starting the next processing cycle in order to maintain
the overall system timing. If processing is not completed in six frame periods,
the current computation cycle is aborted, so that the next cycle can be started
on time. The results of the previous cycle are used to substitute for those of
the aborted cycle. In any case, the original image and the computed graphics
information are transferred to the display buffers during the first frame period
for the next computation cycle.
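The wait-or-abort policy above can be sketched as a scheduling loop. This is a simplified software analogue with timing constants taken from the text; the real system is driven by field interrupts, and because `process` runs synchronously here, an overrun is handled by discarding its late result rather than truly aborting it:

```python
import time

FRAME_PERIOD = 1 / 30.0   # ~33 ms per video frame
CYCLE_FRAMES = 6          # one processing cycle every six frames (0.2 s)

def run_cycles(process, n_cycles):
    """Run `process` once per six-frame cycle: sleep out early finishes to
    hold the overall system timing, and substitute the previous cycle's
    result when a cycle overruns its budget."""
    budget = CYCLE_FRAMES * FRAME_PERIOD
    last_result = None
    results = []
    for _ in range(n_cycles):
        start = time.monotonic()
        result = process()
        elapsed = time.monotonic() - start
        if elapsed > budget:
            result = last_result             # overrun: reuse previous results
        else:
            time.sleep(budget - elapsed)     # early finish: wait to stay on cycle
        last_result = result
        results.append(result)
    return results
```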
`
REAL-TIME SOFTWARE

The image processing section produces two images to be further processed by the
real-time software. The real-time software is an assembly language program
executing on the M68030 processor. This task is divided into five main tasks:
Search, Line Fit, Decision Making, Estimation, and Graphics. A search is
performed within the two processed images for points corresponding to lane
markers or lane edges for each of the two lane boundaries. A list of such points
is formed and is presented to the line fit procedure. The line fit procedure
determines the slope and offset of a line that best represents the points in the
list. Decisions are made throughout the task, such as to search the edge
processed image when insufficient points have been found in the non-edge
processed image and to reject either or both lane boundaries if certain angle
and width criteria are not met. Estimation uses a Kalman filter to compute
values for road curvature, vehicle heading and vehicle offset based on sensed
lane position, measured vehicle speed and measured steering angle. Finally,
graphics procedures display the results in the form of lane boundaries and
estimation values superimposed on the road scene. Various parts of the real-time
software are described below.
`
`
`
`
Main Loop:
This task initializes the image processing section, and various data and
conditions. The software enters a continuous loop that is repeated every six
frames. Depending on lane boundary candidates provided by the processed image,
one path of a four-way branch is taken. If there is no candidate for ten
successive cycles, then the default lane boundary is used with appropriate
indication.
`
Search:
The search process produces a list of points from a processed image that
correspond to a lane marker or lane edge. It establishes an area within the
image over which the search is to be performed. The area is trapezoidal in shape
and is centered around the previously found lane boundary. The search is carried
out by scanning each horizontal line in the search area, looking for at most one
point that corresponds to a lane edge. This step is repeated for all horizontal
lines in the search area.
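A sketch of this trapezoidal search follows. The window widths, the threshold test, and the strongest-response selection rule are assumptions; the text specifies only the trapezoidal area around the previous boundary and the at-most-one-point-per-line scan:

```python
def search_boundary(image, prev_boundary, half_width_top=4, half_width_bottom=12,
                    threshold=100):
    """Scan each horizontal line of a trapezoidal window centered on the
    previously found lane boundary, keeping at most one candidate per line.
    `image` is a processed image (rows of values); `prev_boundary(y)` gives
    the predicted boundary column on line y."""
    height = len(image)
    points = []
    for y in range(height):
        # The window widens toward the bottom of the image (nearer the vehicle).
        t = y / max(height - 1, 1)
        half = int(half_width_top + t * (half_width_bottom - half_width_top))
        center = int(prev_boundary(y))
        lo = max(center - half, 0)
        hi = min(center + half, len(image[y]) - 1)
        best_x, best_val = None, threshold
        for x in range(lo, hi + 1):
            if image[y][x] > best_val:      # strongest response above threshold
                best_x, best_val = x, image[y][x]
        if best_x is not None:
            points.append((best_x, y))
    return points
```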
`
Line Fit:
The search process supplies a list of points which are fitted to a line using
the least mean square deviation process. This process is performed in a
combination of fixed- and floating-point formats to compromise between speed,
accuracy, and dynamic range.
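The fit can be sketched as an ordinary least-squares solution in plain floating point; the paper's mixed fixed-/floating-point staging is omitted here, and fitting column as a function of row is an assumption that suits near-vertical lane boundaries:

```python
def fit_line(points):
    """Least-squares fit of x = slope * y + offset to boundary points (x, y)."""
    n = len(points)
    sum_x = sum(x for x, _ in points)
    sum_y = sum(y for _, y in points)
    sum_yy = sum(y * y for _, y in points)
    sum_xy = sum(x * y for x, y in points)
    denom = n * sum_yy - sum_y * sum_y
    if denom == 0:                   # degenerate: all points on one scan line
        return 0.0, sum_x / n
    slope = (n * sum_xy - sum_x * sum_y) / denom
    offset = (sum_x - slope * sum_y) / n
    return slope, offset
```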
`
Lane Change:
Each time new boundaries are determined, the condition for a possible lane
change is checked. If one is detected, the boundaries are replaced with those
projected for the new lane. This process is repeated for both left and right
lane changes.
`
Estimation:
Three road/vehicle variables are estimated: road curvature, vehicle heading
error and vehicle lateral offset from lane center. A vehicle/road interaction
model has been developed for the estimation of the above variables. This model
is divided into the cascade of two sub-models, each represented in linear
state-space form. The first submodel involves the dynamics of the vehicle,
including the interaction of steering angle, yaw rate, lateral acceleration,
roll angle, vehicle speed, etc. The second submodel involves the interaction of
the vehicle motion and the geometry of the roadway. A Kalman filter is used with
the second submodel to produce the three output variables.
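A generic linear Kalman predict/update step of the kind this estimator performs is sketched below; the actual state vector and model matrices come from the vehicle/road submodels in [1] and are not reproduced here, so the matrices passed in are placeholders:

```python
import numpy as np

def kalman_step(x, P, A, B, u, C, z, Q, R):
    """One predict/update step of a linear Kalman filter.  In the lane sensing
    system the state would hold road curvature, heading error and lateral
    offset; here A, B, C, Q, R are caller-supplied placeholders."""
    # Predict with the state-space model x' = A x + B u.
    x_pred = A @ x + B @ u
    P_pred = A @ P @ A.T + Q
    # Update with the measurement z = C x + noise.
    S = C @ P_pred @ C.T + R
    K = P_pred @ C.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - C @ x_pred)
    P_new = (np.eye(len(x_new)) - K @ C) @ P_pred
    return x_new, P_new
```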
`
`Computational Burden:
`The time allotted for the real-