The Autonomous Land Vehicle (ALV) Preliminary Road-Following Demonstration

James W. Lowrie, Mark Thomas, Keith Gremban, Matthew Turk, "The Autonomous Land Vehicle (ALV) Preliminary Road-Following Demonstration," Proc. SPIE 0579, Intelligent Robots and Computer Vision IV (11 December 1985); doi: 10.1117/12.950819

Event: 1985 Cambridge Symposium, Cambridge, United States

The autonomous land vehicle (ALV) preliminary road-following demonstration

James W. Lowrie, Mark Thomas, Keith Gremban, Matthew Turk

Martin Marietta Denver Aerospace, Advanced Automation Technology Section
PO Box 179, Denver, CO 80201

The autonomous land vehicle program overview

The ALV project is sponsored by the Defense Advanced Research Projects Agency (DARPA) as part of its Strategic Computing Program and contracted through the Army Engineer Topographic Laboratories (ETL) under contract DACA76-84-C-0005. The purpose of the Strategic Computing Program is to advance the state of the art in artificial intelligence, image understanding, and advanced computer architectures and to demonstrate the applicability of these technologies to advanced military systems.1

The strategic computing (SC) program is separated into three primary areas: technology base, applications, and infrastructure. The technology base contractors are tasked with pursuing generic long-range, high-payoff research in numerous disciplines including image understanding, expert systems, planning and reasoning, symbolic processing architectures, high-speed signal processing systems, and others. The application areas are being funded to transition the technology from the research domain to the military application domain with the intent of demonstrating a series of progressively more complex operational capabilities. Finally, the infrastructure of the SC program provides the framework for both the research community and the application programs. This framework includes information networks, research machines, and system development tools.

The ALV project is one of the SC program's application areas aimed at advancing and demonstrating the state of the art in autonomous navigation and tactical decisionmaking. The project is driven by the series of progressively more difficult demonstrations identified in Table 1. These successive demonstrations were selected because they drive the development of technology in artificial intelligence, image understanding, and advanced computer architectures.

Table 1. ALV demonstrations.

May 1985 (Preliminary Road-Following Demonstration). Distance: 1 km; speed: 5 km/h.
The vehicle will traverse a uniform road with smooth curves at a constant speed. During conditions where the vision subsystem is unable to locate the road, the vehicle may follow a prestored map of the track. The vehicle must navigate from visual data over 75% of the distance.

November 1985 (Road-Following Demonstration). Distance: 5 km; speed: 10 km/h.
The vehicle will traverse a nonuniform road with sharp corners. The vehicle speed will vary as a function of vision confidence and road geometry. The vehicle must navigate from visual data 100% of the time. The vehicle must demonstrate an autonomous counter-rotate capability.

1986 (Obstacle Avoidance Demonstration). Distance: 20 km; speed: 20 km/h.
The vehicle will traverse a nonuniform road with numerous intersections. The vehicle must sense and model obstacles placed on the road surface and plan a path to avoid them.

1987 (Crosscountry Demonstration). Distance: 10 km; speed: 5 km/h.
The vehicle must be capable of planning an a priori route through the terrain using a prestored terrain database. The system must then use sensory data to model the local terrain and avoid natural obstacles placed along the route. The position of the vehicle with respect to the route must be monitored and updated as necessary. The vehicle must navigate through rough roadways.

Success of the ALV project depends on careful coordination with the technology base contractors to transfer technology from the research domain to the application domain as rapidly as possible. To simplify the technology transition process, the ALV was designed as a flexible testbed that will enable rapid transition from hypothesis to testing. The intent of the testbed is to encourage the technology base contractors to conduct experiments with the vehicle in a realistic environment. The results of these experiments would then lead naturally into design of the demonstration system.

This paper describes the long-range ALV system concept that the project is building toward and the system requirements for road-following, gives an overview of the ALV system as it was configured for the May 1985 demonstration, and contains detailed descriptions of the vision, navigator, pilot, electronic, and vehicle subsystems.

ALV long-range system concept

The progressive system demonstration schedule, along with the requirement to transition capabilities from the strategic computing technology base contractors, makes it essential to define a long-range generic system architecture. It is more beneficial to build each demonstration system within the framework of the long-range system architecture than to discard each demonstration system following its completion. By defining a long-range system architecture and analyzing the long-term requirements, we can project the technology voids that will become the topic for research by the technology base contractors.

Definition of the long-range system architecture has been the topic of a series of working group meetings between various technology base contractors and the ALV project team.2 The following technology base contractors participated in this definition: University of Maryland, Carnegie Mellon University, SRI International, Advanced Information and Decision Systems, Hughes AI Center, and Honeywell.

Autonomous mobility in a dynamic unstructured environment requires that a system sense its environment, model critical features using the sensed data, reason about the model to determine a mobility path, and control the vehicle along that path. Evaluation of these basic mobility requirements resulted in the definition of the system concept shown in Figure 1. Two additional requirements associated with the objective of the strategic computing program were factored into this configuration. First, the primary emphasis of the program is on perception and reasoning, with minimal research being pursued in the areas of control and physical vehicles. Because it is also desirable to rapidly integrate and test numerous concepts on the testbed vehicle, we have defined a "virtual vehicle" consisting of the physical vehicle, the sensors, and the control subsystems. The hardware and software interfaces at this level are well known, and experiments that conform to these interfaces can be rapidly integrated and tested.

Figure 1. Long-range ALV system architecture. (Block diagram connecting sensors, perception, reasoning, control, and the human interface around a knowledge base holding the digital terrain database, long-term scene model, and vehicle state estimate; labeled data flows include queries, a priori model, scene model, model parameters, acquisition commands, sensor data, task request and task status, reflexive scene model, reference trajectory, vehicle state updates, mission goals, and mission plan and status.)

The human operator will specify the mission goals and constraints that should be factored into decisionmaking through the man/machine interface (MMI). The complexity of these mission goals will increase with each successive demonstration. For May 1985 the goal specification was simply to follow the road for 1 km. In 1987 the goal becomes more complex: travel to point A, perform task B, proceed to point C, ... Later demonstrations will also include complex tactical situations that must be dealt with.

The reasoning system will interpret these mission goals and decompose them into the operations to be performed by the vision subsystem. As part of this decomposition process, the reasoning subsystem will access a digital terrain database being developed by the Engineer Topographic Laboratories and plan an a priori route through the environment to achieve the mission goals. The perception system is considered to be a resource of the reasoning subsystem. In this capacity the reasoning subsystem will specify goals for the perception system to perform. These goals will include specification of the features of interest, a time allocation for the process, and a focus of attention defining the geometric area to be modeled.
The perception subsystem will then decompose this goal into specific perception tasks. The perception subsystem will have sole control over all sensors and will produce only a high-level symbolic model of the environment for reasoning. Figure 2 illustrates the sensor/perception interface. Following completion of model generation, the perception subsystem will pass the model to the reasoning subsystem along with a status description. The status will indicate if the perception system was able to achieve the goal and, if not, will describe the potential reasons for failure.

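As a concrete illustration of this goal/status interface, the sketch below models a perception goal carrying the three elements named above (features of interest, a time allocation, and a focus of attention) together with the model-plus-status reply. It is a minimal sketch in modern Python; every type and field name is an assumption for illustration, not the project's actual message format.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class TaskStatus(Enum):
    ACHIEVED = "achieved"   # perception met the goal within its time allocation
    FAILED = "failed"       # no usable model; reasons are reported back

@dataclass
class FocusOfAttention:
    """Geometric area to be modeled, in vehicle coordinates (meters)."""
    near_m: float    # closest range of interest ahead of the vehicle
    far_m: float     # farthest range of interest
    width_m: float   # lateral extent about the vehicle centerline

@dataclass
class PerceptionGoal:
    """Goal issued by the reasoning subsystem to perception."""
    features: List[str]        # features of interest, e.g. ["road_edges"]
    time_allocation_s: float   # processing time budgeted for this goal
    focus: FocusOfAttention    # focus of attention for modeling

@dataclass
class PerceptionReply:
    """Symbolic scene model returned to reasoning with a status description."""
    status: TaskStatus
    scene_model: Optional[dict] = None          # high-level symbolic model only
    failure_reasons: List[str] = field(default_factory=list)
```
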
Figure 2. Sensor/perception interface. Two high-quality color TV cameras are provided; the format size is 512x480, and vision issues an acquisition command via a software interface. The TV cameras are mounted on independent computer-controlled pan/tilt drives; vision issues pan- and tilt-angle commands via a software interface and can read the actual position. The two TVs are mounted on a single sensor rail and can be independently positioned; vision issues position commands for each sensor via a software interface and can read the actual position. The multispectral scanner provides range data (256x128) and radiometric data in the 0.5-µ, 0.65-µ, 0.85-µ, 1.5-µ, and 10.0-µ bands (preliminary); vision issues an acquisition command via a software interface. (Other sensors are processed outside the vision subsystem.)

For the 1987 time frame we anticipate there will be two forward-looking high-resolution color TVs mounted on independent pan/tilt mounts with a controllable interocular distance ranging from 1 to 5 ft. Each camera will have an independent 3-channel 8-bit digitizer. A 5-channel multispectral laser scanner being developed by the Environmental Research Institute of Michigan (ERIM) will also be incorporated.

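The "software interface" behavior summarized in Figure 2 amounts to a small command-and-readback API: vision points a mount, triggers acquisition, and reads actual positions. The stub below sketches that shape under assumed names; the classes and methods are hypothetical, not the testbed's real interfaces.

```python
from typing import List, Tuple

class PanTiltDrive:
    """Computer-controlled pan/tilt mount; commanded and actual angles can differ."""
    def __init__(self) -> None:
        self._pan_deg = 0.0
        self._tilt_deg = 0.0

    def command(self, pan_deg: float, tilt_deg: float) -> None:
        # A real drive slews toward the command; this stub settles immediately.
        self._pan_deg, self._tilt_deg = pan_deg, tilt_deg

    def read_actual(self) -> Tuple[float, float]:
        return (self._pan_deg, self._tilt_deg)

class ColorTVCamera:
    """512x480 RGB camera; acquisition is triggered by a software command."""
    WIDTH, HEIGHT = 512, 480

    def acquire(self) -> List[List[Tuple[int, int, int]]]:
        # Placeholder frame at the documented format size (all-black RGB pixels).
        return [[(0, 0, 0)] * self.WIDTH for _ in range(self.HEIGHT)]

# Vision drives the sensors only through these calls: point, then acquire.
camera, mount = ColorTVCamera(), PanTiltDrive()
mount.command(pan_deg=5.0, tilt_deg=-10.0)
frame = camera.acquire()
assert len(frame) == 480 and len(frame[0]) == 512
```
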
The reasoning subsystem will interpret the perception model and will plan a path for the vehicle to avoid nontraversable regions and localized obstacles. Because of the significant amount of time involved in processing the sensory data to produce a symbolic model, it is not possible in the near future for the vehicle control system to close the high-speed servoloop from visual data. Therefore we have introduced the concept of a reference trajectory, whereby the vehicle control system follows a selected path from one model until the next model is generated. Figure 3 illustrates the reasoning control interface portion of the virtual vehicle. The control subsystem will be responsible for three activities. First, it will control the motion of the vehicle along the specified trajectory. Second, the control subsystem will evaluate the specified trajectory and detect unsafe conditions such as sudden high-speed turns. Third, the vehicle state estimate, consisting of vehicle position, velocity, heading, pitch, and roll, will be maintained within the control subsystem.

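A minimal sketch of the second and third control responsibilities follows, assuming the reference trajectory arrives as time-tagged points and that "sudden high-speed turns" are screened with a simple heading-rate threshold. The threshold, field names, and screening rule are illustrative assumptions, not the ALV's documented design.

```python
import math
from dataclasses import dataclass
from typing import List

@dataclass
class VehicleState:
    """State estimate maintained within the control subsystem."""
    x_m: float
    y_m: float
    velocity_mps: float
    heading_rad: float
    pitch_rad: float
    roll_rad: float

@dataclass
class ReferencePoint:
    """One time-tagged point of the reference trajectory."""
    t_s: float
    x_m: float
    y_m: float

def trajectory_is_safe(points: List[ReferencePoint],
                       max_heading_rate_rps: float = 0.5) -> bool:
    """Flag trajectories that would demand a sudden high-speed turn."""
    for p0, p1, p2 in zip(points, points[1:], points[2:]):
        h01 = math.atan2(p1.y_m - p0.y_m, p1.x_m - p0.x_m)
        h12 = math.atan2(p2.y_m - p1.y_m, p2.x_m - p1.x_m)
        turn = math.atan2(math.sin(h12 - h01), math.cos(h12 - h01))  # wrapped
        dt = p2.t_s - p0.t_s
        if dt > 0 and abs(turn) / dt > max_heading_rate_rps:
            return False
    return True
```
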
The physical vehicle consists of a drive chassis supplied by Standard Manufacturing, a 122-hour auxiliary power unit, and a 60,000-Btu air conditioner. This physical platform is considered to be sufficient to support the electronics for all projected demonstrations and experiments. Figure 4 illustrates the physical vehicle.

System requirements for the May 1985 demonstration

The May 1985 demonstration required the vehicle to autonomously travel on a paved road over a distance of 1 km at a speed of 5 km/hour. The 1-km distance requirement introduced the need for a robust vision subsystem capable of operating on hundreds of successive scenes. The 5-km/hour speed requirement introduced the need for special-purpose computers that could rapidly process imagery. This section summarizes the analyses conducted to define the system requirements for the May 1985 demonstration.

Figure 3. Reasoning control interface. (Block diagram: planning supplies a reference trajectory and control parameter modifiers to the control law; the land navigation system (LNS) provides updates of latitude, longitude, azimuth, elevation, and roll, from which position and heading errors are formed; the control law output passes through reasonableness checking to the ALV as velocity commands (VL, VR); a vehicle health and status interface monitors the vehicle.)
- Control law behavior can be modified by adjusting control parameters.
- Planning generates a time-tagged sequence of reference points in the vehicle coordinate system at the time of the last LNS update.
- LNS data are available at a 40-ms interval.
- The planner is responsible for issuing updates to the LNS to meet system performance requirements.

Figure 4. Physical vehicle chassis. (Cutaway drawing showing the vehicle control interface; Rack 1, navigation planner; Rack 2, vision; Rack 3, communications equipment; the laser scanner; air-conditioning ducts; evaporator/cooler; cable trays; environmental control unit; and access panel.)

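These notes imply a 40-ms loop in which the control law turns the difference between the active time-tagged reference point and the latest LNS pose into position and heading errors. The sketch below shows one plausible proportional form of that loop; the gains, names, and the control law itself are assumptions for illustration, not the ALV's actual control law.

```python
import math
from dataclasses import dataclass

DT_S = 0.040  # LNS data arrive at a 40-ms interval

@dataclass
class Pose:
    x_m: float
    y_m: float
    heading_rad: float

def control_step(ref: Pose, lns: Pose,
                 k_lat: float = 0.8, k_head: float = 1.2) -> float:
    """One 40-ms servoloop step: steering correction from position and heading error."""
    # Cross-track component of the position error, expressed in the vehicle frame.
    dx, dy = ref.x_m - lns.x_m, ref.y_m - lns.y_m
    cross_track_m = -dx * math.sin(lns.heading_rad) + dy * math.cos(lns.heading_rad)

    # Heading error wrapped to [-pi, pi].
    dh = ref.heading_rad - lns.heading_rad
    heading_err_rad = math.atan2(math.sin(dh), math.cos(dh))

    # Proportional control law; behavior is modified by adjusting k_lat and k_head.
    return k_lat * cross_track_m + k_head * heading_err_rad
```
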
The test track is 6 m wide on the average and the vehicle is 3 m wide. To remain on the road, the vehicle must maintain a lateral error no greater than ±1.5 m (half of the 3-m clearance between the road and vehicle widths). To provide a margin of safety, it was decided that the vehicle must travel within ±0.5 m of the road centerline, as shown in Figure 5.

As an additional safety factor it was decided that the vehicle could navigate off a prestored map of the road whenever the vision subsystem produced low-confidence models, as long as visual information was used more than 75% of the time. To implement this vision override capability, the entire test track was surveyed to an accuracy of 0.15 m.

Figure 5. May demonstration performance requirements. (Diagram: a ±0.5-m error band about the road centerline.)
- Speed: 5 km/h
- Distance: 1 km
- Accuracy: ±0.5 m from centerline
- Competency: vision scene model used for navigation more than 75% of the time

Vision override:
- The test track centerline has been surveyed to an accuracy of 0.15 m.
- If the vehicle wanders outside a ±0.5-m error band, the map data will be used to bring the vehicle back to the centerline.
- Vision override is expected to be used in conditions where the vision subsystem cannot locate the road edges.

When conditions did not allow the vision subsystem to segment the road in the image or derive the 3-D geometry of the road edges, the map data were used to control the vehicle. Vision override is an artifact of the May 1985 demonstration only and will not be incorporated in future demonstration systems.

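Per vision cycle, the override rule reduces to a small decision: steer from the scene model when it is trustworthy and the vehicle is inside the error band, otherwise fall back to the surveyed map, while accounting distance so the 75% competency requirement can be verified. The sketch below is an assumed formulation; the function names and the exact confidence test are illustrative, not the demonstration software.

```python
def choose_guidance_source(vision_confident: bool,
                           lateral_offset_m: float,
                           band_m: float = 0.5) -> str:
    """Select the guidance source for this cycle under the May 1985 rule."""
    if vision_confident and abs(lateral_offset_m) <= band_m:
        return "vision"
    # Map data bring the vehicle back toward the surveyed centerline.
    return "map"

class CompetencyTracker:
    """Checks the requirement that visual data drive navigation >75% of the distance."""
    def __init__(self) -> None:
        self.vision_m = 0.0
        self.total_m = 0.0

    def record(self, source: str, distance_m: float) -> None:
        self.total_m += distance_m
        if source == "vision":
            self.vision_m += distance_m

    def meets_requirement(self) -> bool:
        return self.total_m > 0 and self.vision_m / self.total_m > 0.75
```
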
To control the vehicle on a continuous basis at a fixed velocity, we believed that the vision-based scene model could not be generated at the servoloop update rates. Analysis indicated that the servoloop needed to operate at 40-ms update intervals and that the state-of-the-art vision algorithms were three orders of magnitude slower than that rate. Therefore the concept of a reference trajectory was used (Fig. 6). This concept allows the servoloop to operate at 40 ms while the visual information is updated at a much slower rate. At time T0 the vehicle acquires the ith image while the control system steers from the (i-1)th trajectory. The ith image is processed up to time T1 to generate a scene model and corresponding trajectory.

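This timing scheme, a 40-ms servoloop steering from trajectory i-1 while image i is processed into the next scene model, can be sketched as an asynchronous handoff between two loops. The code below is a schematic of that pipelining only; the timings, data values, and threading structure are illustrative assumptions.

```python
import threading
import time

# Trajectory (i-1): the 40-ms servoloop steers from this while image i is processed.
current_trajectory = [(0.0, 0.0), (1.0, 0.0)]
lock = threading.Lock()

def vision_pipeline(cycles: int = 3) -> None:
    """At T0, acquire image i; by T1, publish the resulting trajectory i."""
    global current_trajectory
    for i in range(1, cycles + 1):
        time.sleep(1.0)  # stand-in for vision processing, far slower than 40 ms
        new_trajectory = [(float(i), 0.0), (float(i) + 1.0, 0.0)]  # illustrative
        with lock:
            current_trajectory = new_trajectory  # handoff: servo now follows i

def servoloop(duration_s: float = 3.5) -> None:
    """Runs every 40 ms, always steering from the newest completed trajectory."""
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        with lock:
            trajectory = current_trajectory
        # ... form errors against `trajectory` and steer (see earlier sketch) ...
        time.sleep(0.040)

vision = threading.Thread(target=vision_pipeline)
vision.start()
servoloop()
vision.join()
```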
