Proceedings

1993 IEEE/Tsukuba International Workshop on Advanced Robotics
- Can robots contribute to preventing environmental deterioration? -

November 8-9, 1993
AIST Tsukuba Research Center
Tsukuba, Japan
Cosponsored by:
Mechanical Engineering Laboratory (MEL)
IEEE Industrial Electronics Society
Robotics Society of Japan (RSJ)
Society of Instrument and Control Engineers (SICE)

Technically cosponsored by:
IEEE Robotics and Automation Society
IEEE Neural Network Council
Japan Society of Mechanical Engineers (JSME)
Japan Society of Precision Engineering (JSPE)

IPR2013-00419 - Ex. 1010
Toyota Motor Corp., Petitioner
1993 IEEE/Tsukuba International Workshop on Advanced Robotics
- Can robots contribute to preventing environmental deterioration? -

Copyright and Reprint Permission: Abstracting is permitted with credit to the source. Libraries are permitted to photocopy beyond the limits of U.S. copyright law for private use of patrons those articles in this volume that carry a code at the bottom of the first page, provided the per-copy fee indicated in the code is paid through Copyright Clearance Center, 27 Congress Street, Salem, MA 01970. Instructors are permitted to photocopy isolated articles for non-commercial classroom use without fee. For other copying, reprint or republication permission, write to IEEE Copyrights Manager, IEEE Service Center, 445 Hoes Lane, P.O. Box 1331, Piscataway, NJ 08855-1331. All rights reserved. Copyright © 1993 by the Institute of Electrical and Electronics Engineers, Inc.

Robotics Society of Japan (RSJ) and The Society of Instrument and Control Engineers (SICE) reserve the right to make limited distribution of the proceedings during and after the meeting.
IEEE Catalog Number: 93TH0589-2
ISBN: 0-7803-1441-7 (Softbound Edition)
ISBN: 0-7803-1442-5 (Microfiche Edition)
Library of Congress: 93-80011
Preface

Welcome to the 1993 IEEE/Tsukuba International Workshop on Advanced Robotics. The workshop is subtitled "Can robots contribute to preventing environmental deterioration?" The word 'environment' has been used by robotics researchers for many years. It usually means a model space in which robots are supposed to work. In this workshop the word 'environment' has a different meaning. It no longer means a model space but the real space in which we are bound to live: the earth.

We are now seriously aware that the resources of the earth are limited and we have to use them sparingly. We have to reduce pollution by promoting recycling and by reducing refuse and waste. We have to maintain natural environments such as forests and seashores. We hope robots can help us to achieve this, just as they help us in manufacturing products.

If we, the robotics researchers, want robots to be more useful, why don't we try to use robots to solve some of the environmental problems? This is the motivation of this workshop. Perhaps this is the first meeting in the world about robotics for the environment. We will call this 'environmental robotics.'

Robots are romantic machines and robot researchers are romantic people. Some of them may not like environmental robotics because environmental problems are not romantic, but others may like it because the solution of environmental problems is romantic, for it helps to create a utopia.

In order to create environmental robots, we have to clear two steps. The first step is to make robots perform certain necessary tasks. If they cannot perform these tasks, there will be no such robots. The other step is to justify the existence of such robots. They must save more than they consume. Should they be justified in terms of costs? Maybe, but costs depend on policies. Should they be justified in terms of natural resources? Absolutely. We will be better off without environmental robots if they consume more resources than they save. This is a severe demand on the robotics researchers.

We do not have to be serious all the time talking about this problem. We can just enjoy the presentations, discussions, and our friendships in this workshop.

Finally, I would like to thank all the participants of the workshop for coming all the way to Tsukuba from around the world. I am also grateful to all the workshop executives for their efforts to organize the workshop. Special thanks are due to the Foundation for Promotion of Advanced Automation Technology and the Electro-Mechanic Technology Advancing Foundation for their financial support.

Kazuo Tani
General Chair
Workshop Executives

Honorary Chair:
Dr. Hisayoshi Sato, Ex-Director-General, MEL
General Chair:
Dr. Kazuo Tani, Mechanism Division, MEL
Program Committee:
Dr. Kazuo Tanie, Biorobotics Division, MEL
Dr. Kiyoshi Komoriya, Cybernetics Division, MEL
Dr. Tatsuo Arai, Autonomous Machinery Division, MEL
Dr. Kunikatsu Takase, Intelligence Systems Division, Electrotechnical Laboratory
Dr. Mitsuo Wada, Human Informatics Department, National Institute of Bioscience and Human Technology
Prof. Tamio Arai, University of Tokyo
Prof. Shin'ichi Yuta, University of Tsukuba
Mr. Yukiyoshi Hatori, Environmental & Safety Engineering Department, Nissan Motor Co. Ltd.
Prof. P. Dario, Scuola Superiore S. Anna, Pisa
Prof. A. Halme, Helsinki University of Technology
Prof. R. Schraft, Head, Robotics Group, IPA
Dr. G. Giralt, Robotics and AI Group Head, LAAS
Advisory Committee Chair:
Prof. Fumio Harashima, Director-General, Institute of Industrial Science, University of Tokyo
Advisory Committee:
Dr. Ken-ichi Matsudo, Director-General, MEL
Dr. Taketoshi Nozaki, Robotics Department, MEL
Dr. Hideo Inoue, Manufacturing Systems Department, MEL
Dr. Masao Kubota, Foundation for Promotion of Advanced Automation Technology
Dr. Masakazu Ejiri, Mechanical Engineering Research Laboratory, Hitachi Ltd.
Dr. Tsuneji Yada, Tsukuba R&D Laboratory, OMRON Corporation
Mr. Hirotaka Miura, Tsukuba Research Laboratory, Yaskawa Electric Corporation
Prof. Toshio Fukuda, Nagoya University
Prof. T. J. Tarn, Washington University
Prof. Robert Marks, University of Washington

This workshop is supported by:

Foundation for Promotion of Advanced Automation Technology (PAAT)
Electro-Mechanic Technology Advancing Foundation
Table of Contents

Global Environmental Problem and its Implication to Technology .... 1
S. Nishioka (National Institute for Environmental Studies, Japan)
Concept of Ecofactory .... 3
M. Hattori, H. Inoue (MEL, Japan)
Recycling on Network: an information-control architecture for ecologically-conscious industry .... 9
K. Kamejima, M. Ejiri (Hitachi, Ltd., Japan)
New Robot Applications in Production and Service .... 15
R.D. Schraft, E. Degenhart, M. Hagele, M. Kahmeyer (IPA, Germany)
Conceptual Design of Disassembly Automation System for Automated Manufacturing with Ecological Recycling .... 25
T. Shibata, K. Tanie (MEL, Japan)
The Concept of Robot Society and Its Utilization .... 29
A. Halme, P. Jakubik, T. Schönberg, M. Vainio (Helsinki University of Technology, Finland)
An Experimental Robot System for Investigating Disassembly Problems .... 37
P. Dario, M. Rucci, C. Guadagnini, C. Laschi (Scuola Superiore S. Anna, Pisa, Italy)
Intelligent Robotic Technology for Environment Conscious Reusable Manufacturing .... 43
T. Fukuda, K. Shimojima (Nagoya University, Japan)
Motion Planning for Robotic Spray Cleaning with Environmentally Safe Solvents .... 49
Y.K. Hwang, L. Meirans, W.D. Drotning (Sandia National Laboratories, USA)
Recycling of Printed Wiring Board Waste .... 55
S. Yokoyama, M. Iji (NEC Corporation, Japan)
Vision System for Part Disassembly Using a High-Speed Range Sensor .... 59
T. Arai, K. Umeda (University of Tokyo, Japan)
An Image Recognition Method for Rusty and Damaged Car Parts After Road Traffic Accident .... 65
K.H.L. Ho (University of Bristol, UK), K. Yamaba (MEL, Japan)
Seashore Robot for Environmental Protection and Inspection .... 69
T. Nakamura (Mie University, Japan), T. Tomioka (Suzuka College of Technology, Japan)
Synthesis of Parallel Manipulators Using Lie-Groups: Y-STAR and H-Robot .... 75
F. Sparacino (Politecnico di Milano and Ecole Centrale Paris), J.M. Hervé (Ecole Centrale Paris, France)
Integrated Limb Mechanism of Manipulation and Locomotion for Dismantling Robot - Basic concept for control and mechanism - .... 81
N. Koyachi, T. Arai, H. Adachi (MEL, Japan), Y. Itoh (Nisshimbo Co. Ltd., Japan)
The Concept of Model Free Robotics for Robots to Act in Uncertain Environments .... 85
K. Tani, K. Ikeda, T. Yano, S. Kajita, O. Matsumoto (MEL, Japan)
Dynamics and Control of Aerial Mobile Legs .... 91
T. Tsujimura, T. Manabe, T. Yabuta (NTT, Japan)
ProLab 2: a driving assistance system .... 97
M. Rombaut (Université de Technologie de Compiègne, France)
Future Use of Robotics in Forestry .... 103
C. Asplund (Swedish University of Agricultural Science, Sweden), A. Fukuda (Forestry and Forest Products Research Institute, Japan)
Leg-Wheel Robot: A Futuristic Mobile Robot Platform for Forestry Industry .... 109
E. Nakano, S. Nagasaki (Tohoku University, Japan)
Some Considerations on Robotics for Environmental Friendliness .... Late
F.G. Pin (Oak Ridge National Laboratory, USA)
Proceedings of the 1993 IEEE/Tsukuba International Workshop on Advanced Robotics
- Can robots contribute to preventing environmental deterioration? -
Tsukuba, Japan, November 8-9, 1993

ProLab 2: a driving assistance system

M. Rombaut*
Heudiasyc URA CNRS 817 - UTC
Centre de recherches de Royallieu, BP 649, 60206 Compiègne Cedex, France
Abstract — This paper deals with a driving assistance system developed by the French ProArt group of Prometheus. The system gives advice about risks and possibilities when the driver executes a maneuver. It is embedded in a normal car (a Peugeot 605 from PSA). It is composed of a perception system and a decision system. This system diagnoses the situation and determines the potential or real risks. The resulting messages are sent to the driver via a "driver/vehicle" interface. The perception system determines the state of our vehicle, the state of the static environment (road, lines, ...), and the state of the dynamic environment (moving obstacles, ...). The global system operates in standard driving situations on motorways, on dual carriageways, and at crossroads in town.
1 Introduction

Security on the road is a very important problem in the industrialised countries because of the deaths and injuries, but also because of the financial costs of accidents. In Europe, the "Prometheus" program has been created to find solutions to this problem. These solutions are studied for different types of systems. Some improve security inside the vehicle, like the ABS system, but also systems to avoid slipping on ice. Others create a communication system between vehicles or between the vehicle and the road infrastructure. The subgroup "ProArt France" is currently developing a driving assistance system to help the driver in his maneuvers. The research group is composed of nine research teams in France and two French car companies (Renault and PSA). A first demonstrator, ProLab 1, was developed on a Renault R21 and presented at the Board Members' Meeting (BMM'91) in Torino in 1991. ProLab 2 is the continuation of ProLab 1, with more perception systems and more situations, dealt with by more teams. It is developed on a Peugeot 605 from PSA. This demonstrator will be presented at BMM'94 at Morte Fontaine (France).

1.1 The global functionalities

ProLab 2 is a driving assistance system. Its goal is to assist the driver during a given maneuver, and to inform him about the potential or real risks involved. We assume that the other vehicles are standard ones, with no specific equipment. The infrastructure is also standard, except for active beacons that can inform the system about the static infrastructure of the road, like the type of crossroad (the priority, the topology, ...). No information is given about the moving obstacles or vehicles. The embedded system must understand the situation, foresee the behavior of the driver and of the other vehicles, and determine the real risks (over the speed limit) or the potential ones (dangerous overtake). Then, it informs the driver accordingly.

1.2 The global structure

The system is decomposed into two main parts, which perform the evaluation of the situation (perception part) and the interpretation of the situation (copilot part). To be able to interpret the situation, it is important to get a "good image" of it. Thus, perception systems in the demonstrator evaluate the situation of the vehicle (speed, acceleration, ...), the static environment situation (road, lines, ...), and the dynamic environment situation (position of the other vehicles, ...).

The provided information is fused (temporal multisensor fusion) and studied to maintain the coherence of the situation, as, for instance, in the case of a vehicle's disappearance into a blind space. The goal is to give the copilot the best image of the real situation. A dynamical image of our vehicle (potential acceleration, ...) is also built.

The copilot analyses this situation and, when a risk is detected, a message is transmitted to the driver. A representation of the architecture is given in Figure 1.

*e-mail: rombaut@hds.univ-compiegne.fr
2 The copilot

The copilot is based on the analysis of the current situation and its possible evolution. The risks can be deduced, and the messages are sent to inform the driver. It also makes it possible to foresee future situations and to calculate
0-7803-1441-7/93/$3.00 © 1993 IEEE
Figure 1: The global architecture (perception systems, dynamic data manager, copilot, driver/vehicle interface)

the realisability or the risks of a maneuver. The global architecture of the copilot is presented in Figure 2.

Figure 2: The copilot architecture
2.1 The situation diagnosis

The goal of this module is to determine the global situation according to the information provided by the perception modules. It is constituted by several real-time expert systems, named SUPER [1]. The rules are hierarchically decomposed into several packages, activated according to the situation. This method yields faster conclusions. The system accepts interruptions and maintains the coherence of the reasoning. This mechanism makes it possible to treat with priority the important events which disturb the evolution.

According to the situation, we deduce the actions to carry out, for example to send a message, or to change the sensors' working mode.

All along, the system evaluates the immediate risks of the situation. For any situation, the potential risks are continuously tested. For instance, during the approach to a pedestrian crossing, the presence of a pedestrian is tested. In such a case, the speed should of course be reduced.
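The package-activation idea can be sketched as follows. This is an illustrative toy, not the SUPER expert system itself; the rule names, packages, and thresholds are all invented for the example:

```python
# Illustrative sketch: hierarchical rule packages, where only the
# packages matching the current situation are evaluated, so irrelevant
# rules never slow the diagnosis down.

def over_speed(s):
    # Real risk: the vehicle exceeds the legal speed limit.
    return s["speed"] > s["speed_limit"]

def pedestrian_ahead(s):
    # Potential risk near a pedestrian crossing.
    return s["near_crossing"] and s["pedestrian_detected"]

RULE_PACKAGES = {
    "any":      [("over speed limit", over_speed)],
    "crossing": [("pedestrian on crossing", pedestrian_ahead)],
}

def diagnose(situation):
    """Return the risk messages produced by the active packages only."""
    active = ["any"] + (["crossing"] if situation["near_crossing"] else [])
    risks = []
    for package in active:
        for label, rule in RULE_PACKAGES[package]:
            if rule(situation):
                risks.append(label)
    return risks

situation = {"speed": 62.0, "speed_limit": 50.0,
             "near_crossing": True, "pedestrian_detected": True}
print(diagnose(situation))  # both risks fire
```

Activating only the relevant packages is one way to obtain the faster conclusions mentioned above, since the rule set examined per cycle stays small.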
2.2 Perception control

Environment perception, especially artificial vision, is highly time-consuming; hence, it is not possible to continuously activate all the perception systems. According to the situation, some data sets are more important. Then, the perception system must be controlled to focus on them for a moment. The different data acquisitions are as follows:

• continuous observation: the data evolution is observed until the observation is stopped.

• one-off observation: the data is observed only one time.

• event detection: an event can change the reasoning, so it is important to detect it (e.g. exceeding the speed limit). The perception system interrupts the copilot.
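The three acquisition modes above can be sketched as a small request object; the class and its interface are assumptions for illustration, not the paper's implementation:

```python
# Hypothetical sketch of the three acquisition modes: continuous
# observation, one-off observation, and event detection that
# "interrupts" the copilot via a callback.

class SensorRequest:
    CONTINUOUS, ONE_OFF, EVENT = range(3)

    def __init__(self, mode, read, event=None, on_event=None):
        self.mode, self.read = mode, read
        self.event, self.on_event = event, on_event
        self.active = True

    def poll(self):
        if not self.active:
            return None
        value = self.read()
        if self.mode == self.ONE_OFF:
            self.active = False            # observed only one time
        elif self.mode == self.EVENT and self.event(value):
            self.on_event(value)           # interrupt the copilot
            self.active = False
        return value

alerts = []
speed = iter([45.0, 48.0, 53.0])
req = SensorRequest(
    SensorRequest.EVENT,
    read=lambda: next(speed),
    event=lambda v: v > 50.0,              # e.g. exceeding the speed limit
    on_event=lambda v: alerts.append(v),
)
while req.active:
    req.poll()
print(alerts)  # only the event sample is reported
```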
2.3 Help for maneuver

This system can help the driver during or before the maneuver execution. It estimates whether the maneuver is possible or dangerous. This system is decomposed into three parts: a planning system which calculates the optimal realisation of the maneuver, an execution monitoring which checks the actual execution against the planned one [2], and a danger analysis module which verifies the security distances.

If no possible solution is found, the planning system informs the copilot of the risks. Otherwise, if the actual and planned realisations differ too much, the execution monitoring system asks the planner for a new calculation with new actual data.

These analyses are done at the situation diagnosis module's demand for the planning step, and periodically with a 1 second sampling for the execution monitoring. In addition, the danger analysis module checks almost continuously (sampling period of 200 ms) the safety distances with respect to the surrounding obstacles, independently of the plan currently executed.
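The two periodic checks can be sketched with a simple fixed-tick scheduler (the 1 s and 200 ms periods come from the text; the tick size and function names are assumptions):

```python
# Minimal scheduling sketch: execution monitoring every 1 s, danger
# analysis every 200 ms, both driven by a common 100 ms tick.

def run_copilot_cycle(duration_s, tick_s=0.1):
    log = []
    steps = int(round(duration_s / tick_s))
    for i in range(steps):
        t = round(i * tick_s, 3)
        if i % int(round(0.2 / tick_s)) == 0:
            log.append((t, "danger analysis"))        # ~200 ms period
        if i % int(round(1.0 / tick_s)) == 0:
            log.append((t, "execution monitoring"))   # ~1 s period
    return log

log = run_copilot_cycle(1.0)
print(sum(1 for _, m in log if m == "danger analysis"))       # 5
print(sum(1 for _, m in log if m == "execution monitoring"))  # 1
```

In one second of simulated time the danger analysis runs five times for every execution-monitoring pass, mirroring the sampling rates stated above.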
2.4 Pilot message sender

From all the modules, the pilot message sender receives different messages, which can be redundant or even inconsistent. For instance, one module can ask for an acceleration because of an overtake, and another one asks for a deceleration because of the speed limit.

The system must choose between these messages according to the situation of the vehicle and send them to the driver/vehicle interface.
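One way such an arbitration could look is a simple safety-priority rule, where the safest of the conflicting messages wins. The priority ordering below is an assumption for illustration, not the published arbitration policy:

```python
# Hedged sketch of message arbitration: when module advice conflicts
# (e.g. "accelerate" for an overtake vs "decelerate" for the speed
# limit), keep the safest message.

PRIORITY = {"decelerate": 2, "keep speed": 1, "accelerate": 0}

def arbitrate(messages):
    """Pick the message with the highest (safest) priority."""
    return max(messages, key=lambda m: PRIORITY[m])

print(arbitrate(["accelerate", "decelerate"]))  # decelerate wins
```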
3 The perception system

The copilot needs information to understand and foresee the situation. This information must be as complete and accurate as possible. This specification induces the use of a large number of sensors of various types and technologies. Nevertheless, the quantity of devices involved in the perception system is limited by the space, energy, computing power, and financial constraints.

3.1 The sensors

The environment of the vehicle is the same as that of our personal cars. There is no special equipment in the infrastructure, nor on the other vehicles or obstacles on the road. So, the system must be able to perceive as well as the actual driver.

The first type of data perceived by the driver concerns the sensations when his vehicle is moving, like acceleration, braking, turning, and the rate of engine rotation. Some proprioceptive sensors are used in the vehicle, like lateral and longitudinal acceleration sensors or a brake state sensor. These sensors give information about the vehicle's internal state.

The driver moves his vehicle on the road according to the driving rules. He must then know what type of road he is driving on, which line mark is the border of the lane, and he should always be able to localise his vehicle in the lane. A CCD camera has been placed beside the rear view mirror, as in Figure 3, to get this information.

Figure 3: The front camera

The driver must also take into account the surrounding environment composed of the other vehicles and the different obstacles like pedestrians, bicycles, and animals. To estimate the situation (position and velocity) of the obstacles, we use several sensors placed in our vehicle:

• a 3D sensor composed of a laser telemeter associated with a CCD camera, placed in the bumper to detect front vehicles

• 2 stereo linear cameras placed beside the lights to detect front and lateral obstacles like pedestrians and bicycles

• 2 lateral CCD cameras to detect lateral vehicles in a crossroad

• a rear CCD camera to detect rear vehicles

These different sensors are presented in Figure 4.

Figure 4: The obstacles sensors

All the information is referenced in the vehicle's reference frame.
3.2 Static environment

The static environment information is given by the front and rear CCD cameras. The software modules of the front one give:

• the number and the location of the lanes and the type of lines

• the vehicle's position and orientation with respect to the lane

• the special horizontal signs like stop lines, arrows, and pedestrian crossings

The method consists of an outline extraction and a linear approximation to detect the lines.
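The "linear approximation" step can be sketched as a least-squares line fit through edge points; the synthetic point list below stands in for the outline-extraction output, and is not real data from the system:

```python
# Fit y = a*x + b to (x, y) edge points by ordinary least squares,
# the kind of linear approximation used to turn extracted outline
# points into lane-line parameters.

def fit_line(points):
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Synthetic edge points along an ideal lane marking y = 0.5 x + 2
edge_points = [(0, 2.0), (2, 3.0), (4, 4.0), (6, 5.0)]
a, b = fit_line(edge_points)
print(round(a, 3), round(b, 3))  # recovers slope 0.5, intercept 2.0
```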
The software modules of the rear camera give:

• the structure of the road

• the lane positioning

• the type of the lines

The image is segmented into several interesting areas. Then special marks are searched for. The software has been implemented on a special morphological real-time computer based on an ASIC chip (see [3]).

3.3 Dynamic environment

These perception systems give information about moving obstacles independently of our vehicle's motion. Several systems are used.
3.3.1 Telemeter/CCD camera sensor

This sensor is composed of a laser telemeter associated with a CCD camera. It has two ways of working:

• detection mode: all the front space is scanned and the relative position of the obstacles is given

• focused mode: the laser beam is focused on a specific obstacle and its relative position and velocity are given

The method consists of partitioning the image into different depth planes corresponding to the different obstacles. The software algorithms are implemented in a transputer-based system named Transvision [4].
3.3.2 Stereo linear cameras

This perception system can detect all sorts of obstacles in front of our vehicle. It is composed of two linear cameras. The algorithms give the relative position and velocity of the front obstacles. The method is based on an edge detection in the two images to determine the interest points, and on matching them.
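Once interest points from the two linear cameras are matched, the distance to the obstacle follows from their disparity under the standard pinhole stereo model. The focal length and baseline values below are made up for the example, not the demonstrator's calibration:

```python
# Triangulate depth for one matched interest-point pair from a
# rectified stereo rig: depth = focal * baseline / disparity.

def depth_from_disparity(x_left, x_right, focal_px, baseline_m):
    disparity = x_left - x_right          # pixels
    if disparity <= 0:
        raise ValueError("point at infinity or bad match")
    return focal_px * baseline_m / disparity

# Matched point with 8 px disparity, f = 400 px, baseline = 0.3 m
print(depth_from_disparity(210, 202, 400, 0.3))  # 15.0 (metres)
```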
3.3.3 Lateral CCD cameras

The lateral cameras make it possible to detect the vehicles on the transversal roads at a crossroad. The algorithms are based on a spatio-temporal segmentation, followed by an interpretation of the dynamical areas. The method is presented in [5].

3.3.4 Rear camera

This camera is used to detect the rear obstacles. The algorithms give the position of these vehicles. They are the same as those used in §3.2. These algorithms are also used for images from other cameras.
4 The dynamic data manager

The sensors work asynchronously, with different frequencies. The given information is sometimes redundant, and some data can be absent because of the limits of the perception system. So, it is important to manage the data as well as possible, to give the copilot the best image of the actual situation. These modules are developed [8] by Heudiasyc (CNRS-UTC) and INRIA Sophia Antipolis.
4.1 The temporal multisensor fusion

The system updates the image of the environment with an estimator/predictor filter whenever a new data item is given by the perception system. According to the situation, a reliability number is associated with each vehicle, corresponding to its real presence. For instance, if a vehicle comes into the blind zone, its number stays at the same value, because we cannot see it, but it really exists.

With this filter, we can predict the future situation over a time horizon sufficient for the copilot planning module presented in §2.3.
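A toy version of this estimator/predictor, with the reliability number simply held when the vehicle enters a blind zone, might look as follows. The constant-velocity model, gain, and reliability increments are assumptions; the paper does not specify the actual filter:

```python
# Toy estimator/predictor in the spirit of the text: one constant-
# velocity track per vehicle, plus a reliability number that is held
# unchanged in a blind zone (we cannot see the vehicle, but it exists).

class Track:
    def __init__(self, position, velocity, reliability=1.0):
        self.position, self.velocity = position, velocity
        self.reliability = reliability

    def predict(self, dt):
        self.position += self.velocity * dt

    def update(self, measured_position, gain=0.5, dt=0.1):
        self.predict(dt)
        if measured_position is None:      # blind zone: hold reliability
            return
        self.position += gain * (measured_position - self.position)
        self.reliability = min(1.0, self.reliability + 0.1)

track = Track(position=0.0, velocity=10.0, reliability=0.8)
track.update(1.2)          # measurement available
track.update(None)         # vehicle hidden in a blind zone
print(round(track.position, 2), round(track.reliability, 2))
```

Because `predict` still runs on every cycle, the hidden vehicle keeps moving in the internal image, which is exactly what lets the copilot plan ahead over the horizon mentioned above.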
4.2 The dynamic module

The calculated parameters represent the risk state of the vehicle, like the security distance between vehicles, the maximal speed in a curve, or the wheels' slip. They depend on the dynamic state but also on the situation of the other vehicles. A three-degree-of-freedom dynamical model of the demonstrator has been elaborated for this purpose [6].
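Two of the parameters named above can be illustrated with textbook formulas; the friction coefficient, deceleration, and reaction-time values are assumptions for the example, not the model of [6]:

```python
import math

def security_distance(speed_mps, reaction_s=1.0, decel_mps2=6.0):
    """Reaction distance plus braking distance."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def max_curve_speed(radius_m, friction=0.7, g=9.81):
    """Maximum speed before lateral slip in a flat curve."""
    return math.sqrt(friction * g * radius_m)

print(round(security_distance(20.0), 1))   # distance needed at 72 km/h
print(round(max_curve_speed(100.0), 1))    # speed limit in a 100 m curve
```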
4.2.1 Data recognition and transmission

To understand the situation, it can be useful to know the type of the obstacles (vehicle, truck, bicycle, pedestrian). According to their geometrical characteristics and behavior, it is possible to estimate their type.

After that, the data must be sent to the copilot in a form that is easy to use. For instance, the obstacles are placed in interest zones, as in Figure 5. Depending on its zone, a vehicle is more or less important.

Figure 5: The interest zones on the highway
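A possible sketch of such zoning bins obstacles by their position relative to our vehicle; the zone names and lane width are assumptions for illustration, not the actual zones of Figure 5:

```python
# Classify an obstacle into a coarse interest zone from its offsets
# in the vehicle reference frame.

def interest_zone(dx, dy, lane_half_width=1.8):
    """dx: longitudinal offset (m, + ahead), dy: lateral offset (m)."""
    lateral = "same lane" if abs(dy) <= lane_half_width else "adjacent lane"
    if dx > 0:
        return "front, " + lateral
    return "rear, " + lateral

print(interest_zone(30.0, 0.5))    # vehicle ahead in our lane
print(interest_zone(-15.0, 3.0))   # vehicle behind in the next lane
```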
4.2.2 The perception system controller

According to the copilot's needs, this module manages the set of sensory devices, the hardware means as well as the software ones. Getting a high-level piece of information often needs the activity of several perception modules. Sometimes it is possible to choose the modules to use, sometimes the needed modules are not available, and sometimes a module must be shared between different activities. The goal of the perception system controller is to choose the available perception modules to give the copilot the best possible information.

This module also controls the activity or working modes of the different modules. That means it gives working

orders to the different modules. The working modes are continuous mode, one-time mode, and event-waiting mode. In the event-waiting mode, the module works until the searched-for data appears. For instance, the request can be "give me the position of the next pedestrian crossing". The module searches for it until it appears, then calculates its position and stops.

5 The copilot/vehicle interface

This module transforms the information so that it is understandable by the driver. The system is designed to be quickly understood by the driver. It should be felt as a smart help rather than a constraint. A study [7] has distinguished three help levels:

• the alarm mode: the system informs the driver when the situation is or becomes dangerous,

• the advice mode: the information is given when the driver asks for it. For instance, the question can be "is the overtake possible?",

• the assistance mode: a type of driving can be proposed to the driver, like economical driving for example.

The interface is realised with visual systems, but also with audio ones.

6 Conclusion

This paper briefly presents the basic principles of the ProLab 2 demonstrator. It is constituted by a large number of modules that work together and collaborate to give efficient help to the driver during various driving maneuvers. The system can be used in several real traffic conditions. The driver keeps control of his vehicle, and can choose the type of help that he needs.

The different modules are being implemented in the target system to be embedded in the real vehicle, a Peugeot 605 of the PSA corporation. The demonstrator will be shown at the Board Members' Meeting at Morte Fontaine (France) in October 1994. ProLab 2 is, together with ProLab 1 (Renault R21), the only demonstrator realised by a group of university laboratories in Europe. The work of nine different laboratories in France is implemented in this demonstrator.

References

[1] N. LeFort, E. Piat and D. Ramamonjisoa. Toward a Copilot Architecture Based on Embedded Real Time Expert Systems. In Proc. 1st Intelligent Autonomous Vehicles '93 Symposium, Southampton, Great Britain, 1993.

[2] Th. Fraichard and C. Laugier. Path-Velocity Decomposition Revisited and Applied to Dynamic Trajectory Planning. In Proc. IEEE Int. Conf. on Robotics and Automation (ICRA), Atlanta, GA (USA), May 1993.

[3] X. Yu, S. Beucher and M. Bilodeau. Road Tracking, Lane Segmentation and Obstacle Recognition by Mathematical Morphology. In Proc. Intelligent Vehicles '92 Symposium, Detroit, July 1992.

[4] F. Collange, J. Alizon, J. Gallice and L. Trassoudaine. A Camera-Telemeter Multisensory System for Obstacle Detection and Tracking. In Intelligent Vehicle Highway Systems, 25th ISATA Silver Jubilee, Florence (Italy), 1-5 June 1992.

[5] R. Canals, J.P. Derutin and F. Heitz. Segmentation spatio-temporelle sur machine parallèle de vision TRANSVISION. In 14e Colloque GRETSI, Juan-les-Pins (France), Sept. 1993.

[6] A. Alloum and M. Rombaut. A Safety Indicator System for Driving Assistance. In Road Vehicle Automation, Bolton (England), May 1993.

[7] P. Pleczon and A. Kessaci. A Human-Machine Interface for Driving Assistance. In Intelligent Vehicle Highway Systems, 25th ISATA Silver Jubilee, Florence (Italy), 1-5 June 1992.

[8] B. Eleter and M. Rombaut. On-board Real Time Driving System Architecture. In Road Vehicle Automation ROVA'93, Bolton (England), 24-25 May 1993.
