invariant. However, this multiplication may be detectable in our finite implementation: some copies on the edge of the tessellation may appear or disappear. In the ideal implementation (requiring more computer power) these copies are barely visible, either being too small or too foggy.

The alternative, to translate the tessellation to follow the observer, quickly leads in the hyperbolic case to severe numerical problems in the action of the group elements. The result is that the fourth, “homogeneous” coordinate of the transformed vertices grows exponentially large and the dehomogenization operation loses precision. This is avoided by the method outlined above.
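To make the failure mode concrete, here is a small sketch (ours, not GeomCAVE's code; it assumes the hyperboloid model of H3, with numpy standing in for OOGL's matrix machinery):

import numpy as np

def x_boost(t):
    # Hyperbolic translation by distance t along x in the hyperboloid
    # model: a Lorentz boost acting on homogeneous (x, y, z, w).
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, 0.0, 0.0, s],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [s, 0.0, 0.0, c]])

p = np.array([0.0, 0.0, 0.0, 1.0])      # base point of the hyperboloid
for n in (10, 30, 50):
    q = np.linalg.matrix_power(x_boost(1.0), n) @ p
    # w grows like cosh(n) ~ e^n / 2, so in single precision the
    # dehomogenized x/w saturates near 1: translated points collapse
    # toward the sphere at infinity and relative precision is lost.
    print(n, q[3], np.float32(q[0]) / np.float32(q[3]))

By a translation distance near 90 the w-coordinate overflows single precision outright; well before that, all dehomogenized points differ only below the precision floor.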

4.4 Stereo

Modeling stereo vision presented challenges in the non-euclidean case. We first describe the more familiar solution available in the euclidean setting. The observer and the CAVE have a fixed physical reality which should be mirrored in the models we apply to them. That is, the model coordinates for the navigator are the same as the model coordinates of the CAVE. In particular, the interocular separation of the observer stays at a fixed ratio to the CAVE size. We found empirically that an interocular distance of about 1/100 that of the diagonal of the CAVE is small enough to assure fusion. This translates to a distance of about 2 inches, roughly corresponding to human anatomy. In euclidean space, making the CAVE larger is equivalent to shrinking the scene while keeping the CAVE a constant size. However, in non-euclidean settings, this equivalence no longer holds! In these spaces, there is no change of size without also changing shape. Consequently, it is the CAVE and observer that change size (and shape!) while the scene remains the same. Of course there is no guarantee of fusion; it may become difficult if the observer in H3 becomes too large while standing near the fixed geometry; but the danger is no different from the physically observed difficulty of fusing stereo when you move your hand closer to your eyes in everyday life.

The pair of images for the stereo effect is produced by rendering each eye separately as described above, by hyperbolically translating the scene to locate the given eye at the origin.
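A minimal sketch of that per-eye step, under our reading (render_scene and scene_to_navigator are hypothetical placeholders; the boost assumes the hyperboloid model and an interocular offset along the x-axis):

import numpy as np

def x_boost(t):
    # Hyperbolic translation by distance t along x (hyperboloid model).
    c, s = np.cosh(t), np.sinh(t)
    return np.array([[c, 0.0, 0.0, s],
                     [0.0, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.0],
                     [s, 0.0, 0.0, c]])

def stereo_pair(scene_to_navigator, interocular, render_scene):
    images = []
    for sign in (-1.0, +1.0):                  # left eye, then right eye
        # Translate the scene so the chosen eye sits at the origin:
        # boost by half the (hyperbolic) interocular distance.
        eye_to_origin = x_boost(-sign * interocular / 2.0)
        images.append(render_scene(eye_to_origin @ scene_to_navigator))
    return images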

4.5 Efficiency measures

To maintain the frame rate required in VR, we needed to disable the software lighting and shading for non-euclidean scenes (OOGL does lighting in software because of the different metrics of the non-euclidean geometries). We kept the model of the tessellation simple: a wireframe, with simple, solid tiles inside. The discrete group software in OOGL automatically culled the copies of the tessellation which lay outside the viewing frustum of a given wall of the CAVE. Also, we kept the number of layers of the tessellation great enough to produce a sense of depth, but small enough to maintain an adequate frame rate.
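The paper does not list OOGL's culling code; the sketch below is only a hedged illustration of the idea, testing a transformed bounding sphere of the fundamental domain against a wall's frustum planes. Treating the transformed sphere as a euclidean sphere of fixed radius is our simplification; hyperbolic isometries do not actually preserve euclidean spheres.

import numpy as np

def visible_copies(group_elements, center, radius, frustum_planes):
    # Keep a copy of the fundamental domain only if its (approximate)
    # bounding sphere reaches into the wall's frustum. Planes are
    # (a, b, c, d) with inward-pointing normals, so a point is inside
    # a plane when a*x + b*y + c*z + d >= 0.
    kept = []
    c_h = np.append(center, 1.0)
    for g in group_elements:                 # 4x4 homogeneous matrices
        c = g @ c_h
        c = c[:3] / c[3]                     # dehomogenize the copy's center
        if all(p[:3] @ c + p[3] >= -radius for p in frustum_planes):
            kept.append(g)                   # not entirely outside any plane
    return kept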

5 Evaluation

We have combined the discrete group capabilities of OOGL with VR, the only visualization paradigm for an immersive, direct experience of mathematical spaces, to extend the power of interactive 3-d visualization of such spaces. Access to 3-manifolds via a virtual environment is a significant addition to the tools available for mathematical research and education. For example, as pointed out in section 3, GeomCAVE allows direct observation of interesting properties of non-euclidean spaces, such as the right angles of dodecahedra in hyperbolic space. GeomCAVE immediately makes features of OOGL available in VR, such as a collection of geometric models and discrete group operations. Thus, a mathematician who has built a manifold for viewing in maniview would be able to also explore it in GeomCAVE.

6 Further Work

• Implement mixed mode navigation in H3 (see conclusion of Section 4.2).

• Add more features of maniview:

  - Control over the size and shape of the Dirichlet domain.

  - Control over the depth of the tessellation.

  - As hardware improves, re-activate the software shading and fog effects.

• More sophisticated tools for mathematicians:

  - Connections with existing manifold software (such as snappea).

  - Finer interactive control of the discrete group: selecting subgroups, use of color, deformation of the group.

  - Simulation of dynamical systems in non-euclidean spaces.

  - Extend the coverage to the other five Thurston geometries.

• Experiment with audio tessellation along with the geometric data. The resulting echo patterns could distinguish differently-shaped manifolds.

7 Acknowledgements

We would like to extend special thanks to Stuart Levy of the Geometry Center for his help. Thanks are also due to Mark Phillips and Tamara Munzner, also of the Geometry Center, as well as Louis Kauffman, of the University of Illinois at Chicago.

REFERENCES

[1] Callahan, M.J., Hoffman, D. and Hoffman, J.T. Computer Graphics Tools for the Study of Minimal Surfaces. Communications of the Association for Computing Machinery 31, 6 (1988), 648-661.

[2] Cruz-Neira, Carolina, Sandin, Daniel J., DeFanti, Thomas A., Kenyon, Robert V. and Hart, John C. The CAVE: Audio Visual Experience Automatic Virtual Environment. Communications of the Association for Computing Machinery 35, 6 (June, 1992), 65-72.

[3] Gunn, Charlie. Discrete Groups and Visualization of Three Dimensional Manifolds. Computer Graphics 27 (July, 1993), 255-262. Proceedings of SIGGRAPH 1993.

[4] Thurston, William. Three Dimensional Manifolds, Kleinian Groups and Hyperbolic Geometry. BAMS 19 (1982), 417-431.

[5] Weeks, Jeff. snappea: a Macintosh application for computing 3-manifolds. (Available from ftp@geom.umn.edu.)
Tracking a Turbulent Spot in an Immersive Environment

David C. Banks*, Institute for Computer Applications in Science and Engineering
Michael Kelley†, Information Sciences Institute

ABSTRACT

We describe an interactive, immersive 3D system called Tracktur, which allows a viewer to track the development of a turbulent flow. Tracktur displays time-varying vortex structures extracted from a numerical flow simulation. The user navigates the space and probes the data within a windy 3D landscape. In order to sustain a constant frame rate, we enforce a fixed polygon budget on the geometry. In actual use by a fluid dynamicist, Tracktur has yielded new insights into the transition to turbulence of a laminar flow.

1 Introduction

Simulating the evolution of a turbulent spot has consumed thousands of CPU hours (on a Cray 2, Cray YMP, and YMP C-90 over the course of 2.5 calendar years) [1]. We wish to animate 230 time steps produced by the simulation, which are archived as hundreds of gigabytes of data. How does one visualize this large amount of time-varying data at interactive speeds?

A new technique for locating vortices in an unsteady flow [2] compresses the volumetric flow data by a factor of more than a thousand. This amount of compression seemed to promise interactive visualization of a massive time-varying dataset. We therefore developed a visualization system, Tracktur, that uses the compressed vortex representation to help track the development of a turbulent flow [3]. Tracktur uses a graphics workstation, 3D tracking, and a stereoscopic display to create a virtual 3D environment populated by time-varying vortex tubes.

2 The Interactive Environment

Our target user is the theoretical flow physicist who produced the time-varying dataset. From his perspective, the significant features of the simulation include the flat plate, the fluid flowing over it, the vortex structures, and the units of the computational domain (both spatial and temporal). The combination of a plane with a continual flow over it suggested to us a windy landscape.

*ICASE, Mail Stop 132C, NASA Langley Research Center, Hampton, Virginia 23681. 804/864-2194 (banks@icase.edu).
†Information Sciences Institute, 4350 N. Fairfax Drive, Suite 400, Arlington, VA 22203. 703/243-9422 (kelleym@arpa.mil).
One of our early design decisions was to make generous use of texture maps to enrich the virtual world. A grid-texture was an obvious choice for the ground plane, with stenciled textures added to denote streamwise units of the domain. To indicate the free-stream velocity, we animate a cloud-texture on two distant walls. Textures denote the upstream and downstream directions. Surrounded by a textured landscape, a viewer is given persistent reminders of the spatial context he is operating within. The 3D widgets in the environment are also textured to eliminate the cartoon quality that constant-colored polygons convey.
In an actual wind-tunnel experiment, the vortex structures would be only millimeters in size and the free-stream velocity would be about 30 meters per second. The lifetime of the turbulent spot would be less than a second. Tracktur displays the 3D animation at more human scales: the geometry is larger and the simulation lasts longer, each by about three orders of magnitude.
We want to help the scientist comprehend the spatial evolution of a turbulent spot; since the spot convects downstream, we let the viewer be convected along with it to keep it in the field of view. Widgets are convected downstream with the viewer to remain within reach. A time-slider advances to mirror the current time step in the animation; alternatively, the viewer can set the current time step by adjusting the slider. Shadows on the ground plane provide a depth cue at only a small penalty in performance [4]. The viewer can select surface, wire-frame, or fat-line representations of the geometry. The fat-line segments (through the core of the vortices) are given widths to match the thickness of the tube and are illuminated as one-dimensional fibers [5] in order to convey shape from shading.
We also want to permit routine measurements of flow quantities. The viewer is given a data probe: a ray emanating from the pointing device in the virtual environment. Tracktur locates the nearest point on a vortex core to the probe ray, then displays attributes (such as the spatial position of the point) in a pop-up panel.
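A hedged sketch of such a probe query (ours, not Tracktur's code; it samples each core segment finely rather than solving the segment-to-ray distance in closed form):

import numpy as np

def closest_point_on_segment_to_ray(a, b, o, d):
    """Return (point on segment ab, distance) nearest the ray o + t*d, t >= 0."""
    ts = np.linspace(0.0, 1.0, 32)[:, None]
    pts = a + ts * (b - a)                   # dense samples along the segment
    rel = pts - o
    t_ray = np.clip(rel @ d / (d @ d), 0.0, None)[:, None]
    dists = np.linalg.norm(rel - t_ray * d, axis=1)
    i = int(np.argmin(dists))
    return pts[i], dists[i]

def probe(core_points, ray_origin, ray_dir):
    # Scan every segment of the vortex-core polyline and keep the best.
    best_point, best_dist = None, np.inf
    for a, b in zip(core_points[:-1], core_points[1:]):
        p, dist = closest_point_on_segment_to_ray(a, b, ray_origin, ray_dir)
        if dist < best_dist:
            best_point, best_dist = p, dist
    return best_point, best_dist             # nearest core point to the ray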

3 3D Toolkits

Tracktur is constructed from several component libraries, including public-domain toolkits. The Minimal Reality toolkit [6] provides the basis of a through-the-window interface that uses stereoscopic display and 3D tracking for the head and hand. The CAVE version of the application [7] uses code developed by the Electronic Visualization Laboratory [8].

We developed a custom toolkit to implement 3D menus (using Hershey fonts), buttons, and sliders. We also developed a calibration tool for the 3D trackers to determine the proper matrix transforms. The user interactively aligns coordinate axes (displayed on the screen) to establish the correct rotation matrix. The various transformations are written to a file and need not be recomputed unless the equipment is moved.
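The alignment step amounts to recovering a rotation from the axes the user lines up; a small sketch of that idea (ours, with hypothetical inputs, re-orthonormalized via Gram-Schmidt so the result is a proper rotation):

import numpy as np

def rotation_from_axes(x_axis, y_axis):
    # x_axis, y_axis: the tracker-frame directions the user aligned
    # with the displayed world axes (roughly orthogonal unit vectors).
    x = x_axis / np.linalg.norm(x_axis)
    y = y_axis - (y_axis @ x) * x            # remove any x-component
    y /= np.linalg.norm(y)
    z = np.cross(x, y)                       # right-handed third axis
    return np.column_stack([x, y, z])        # maps tracker frame to world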

[Figure: A backward-tilted S-shaped vortex head that develops in the late stages of transition from a laminar flow to a turbulent spot.]

4 The Fixed Polygon Budget

A difficult aspect of developing an interactive system is preserving a fixed frame rate. Our scene updates are typically dominated by the time spent drawing the vortex tubes, so we budget a fixed number of polygons with which to model them. The turbulent spot increases in geometric complexity as the simulation progresses: a single vortex tube at time 28 develops into about 150 tubes at time 221. An SGI Onyx with RealityEngine2 graphics sustains about 15 frames per second with a fixed count of 9000 polygons.

In the early stages of the simulation, the polygon budget allows a finer resolution than we have computed. We therefore re-sample the vortex skeleton at a higher spatial resolution in order to exhaust the supply of polygons. But in the late stages of the simulation it is imperative to dole out the polygons in a miserly fashion. The vortex skeletons are down-sampled according to a set of heuristics designed to preserve significant geometric features. The re-sampling works as a filter on the original skeletal representation of the vortex core. The first sample point is always retained. After a point is retained, subsequent points along the skeleton are rejected unless any of the following hold:

• the arclength exceeds a threshold;
• the integrated curvature exceeds a threshold;
• the radius of the cross-section changes quickly.

Sometimes a vortex skeleton enters a small spiral from which it never exits. To guard against wasted samples, we reject points on the skeleton where the ratio of the skeleton's radius to its radius of curvature exceeds a threshold (we use the constant 0.7). These heuristics maintain a reasonable amount of geometric detail at the late stages of the simulation.
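Read as pseudocode, the filter might look like the following sketch (the Sample container and every threshold except the paper's 0.7 spiral constant are illustrative placeholders):

from dataclasses import dataclass
import numpy as np

@dataclass
class Sample:            # one skeleton sample (hypothetical container)
    p: np.ndarray        # position of the core point
    r: float             # radius of the cross-section
    kappa: float         # curvature of the skeleton here (1 / radius of curvature)

def downsample(skeleton,
               max_arclen=1.0,   # illustrative thresholds, not the
               max_turn=0.5,     # paper's calibrated values
               max_dr=0.3,
               spiral_cap=0.7):  # the paper's spiral-rejection constant
    kept = [skeleton[0]]          # the first sample is always retained
    arclen = turn = 0.0
    prev = skeleton[0]
    for s in skeleton[1:]:
        step = float(np.linalg.norm(s.p - prev.p))
        arclen += step
        turn += s.kappa * step    # integrated curvature along the arc
        prev = s
        if s.r * s.kappa > spiral_cap:
            continue              # r / radius-of-curvature too big: tight spiral
        if (arclen > max_arclen or turn > max_turn
                or abs(s.r - kept[-1].r) > max_dr * kept[-1].r):
            kept.append(s)        # retain and restart the accumulators
            arclen = turn = 0.0
    return kept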

5 What Has Been Learned

The scientist who generated the dataset (Dr. Bart Singer) agreed to use the system to study how a turbulent spot develops. He has learned two new things about the evolution of the turbulent spot. In order to place them in their context, we give a brief descriptive summary of the spot's development.

First, Singer discovered a backwards-tilted S-shaped vortex head in the late stages of transition (see figure). The vortex is similar in shape to a structure seen in experimental data for a similar flow. Singer had not observed this feature in his dataset until he used our system. Evidently, the interactivity permitted him to select the right combination of a particular viewpoint and a particular time step. This could, in principle, have been discovered with the visualization system he was accustomed to using, but its more limited interactivity made the feature much harder to find.

Secondly, the visualization system gave Singer his first view of the dynamic behavior of “necklace” vortices, which define the outer extent of the turbulent spot. They eventually shred into pieces, curling into horseshoe and hairpin vortices. Without Tracktur, Singer had been unable to track the necklace vortices through their entire history. These findings are initial evidence that the system can assist in the research task.

6 Conclusions

Visualization tools can certainly communicate research results, but it is not yet clear how well they help produce research results. We have created an interactive 3D visualization system, called Tracktur, and put it into the hands of the scientist. Tracktur provides a textured environment for examining the onset of turbulence. The viewer can navigate through the landscape and interact with a turbulent spot through 3D menus, buttons, sliders, and a data probe. In the hands of a fluid scientist, the system has yielded new insights into the development of a turbulent spot.

Acknowledgments

This work was supported under NASA contract No. NAS1-19480. We thank Bill von Ofenheim and the Data Visualization Lab at NASA Langley Research Center for use of their stereo glasses. We thank Jonathan Shade at the San Diego Supercomputer Center for help in creating transparent texture maps.

Bibliography

[1] Singer, Bart A. and Ron Joslin, “Metamorphosis of a hairpin vortex into a young turbulent spot.” Physics of Fluids A, Vol. 6, No. 11 (Nov. 94).

[2] Banks, David C. and Bart A. Singer, “Vortex Tubes in Turbulent Flows: Identification, Representation, Reconstruction.” Proceedings of Visualization ’94.

[3] “The Tracktur Home Page,” World Wide Web URL http://www.icase.edu/~banks/tracktur/vortex/doc/tracktur.html.

[4] Blinn, Jim, “Me and My (Fake) Shadow.” IEEE Computer Graphics & Applications (Jim Blinn’s Corner), January 1988, pp. 82-86.

[5] Banks, David C., “Illumination in Diverse Codimensions.” Proceedings of SIGGRAPH ’94 (Orlando, Florida, July 24-29, 1994). In Computer Graphics Proceedings, Annual Conference Series, 1994, ACM SIGGRAPH, New York, pp. 327-334.

[6] “MR Toolkit,” World Wide Web URL http://web.cs.ualberta.ca/~graphics/MRToolkit.html.

[7] Banks, David C., “The Onset of Turbulence in a Shear Flow Over a Flat Plate.” [Demonstration] SIGGRAPH ’94 VROOM Exhibit. In Visual Proceedings: The Art and Interdisciplinary Programs of SIGGRAPH 94, Computer Graphics Annual Conference Series, 1994, ACM SIGGRAPH, New York, p. 235. Also in “Fluid Mechanics,” World Wide Web URL http://www.ncsa.uiuc.edu/EVL/docs/VROOM/HTML/PROJECTS/23Banks.html.

[8] “CAVE User’s Guide,” World Wide Web URL http://www.ncsa.uiuc.edu/EVL/docs/html/CAVEGuide.html.

Behavioral Control for Real-Time Simulated Human Agents

John P. Granieri, Welton Becket, Barry D. Reich, Jonathan Crabtree, Norman I. Badler

Center for Human Modeling and Simulation
University of Pennsylvania
Philadelphia, Pennsylvania 19104-6389
granieri/becket/reich/crabtree/badler@graphics.cis.upenn.edu

Abstract

A system for controlling the behaviors of an interactive human-like agent, and executing them in real-time, is presented. It relies on an underlying model of continuous behavior, as well as a discrete scheduling mechanism for changing behavior over time. A multiprocessing framework executes the behaviors and renders the motion of the agents in real-time. Finally we discuss the current state of our implementation and some areas of future work.

1 Introduction

As rich and complex interactive 3D virtual environments become practical for a variety of applications, from engineering design evaluation to hazard simulation, there is a need to represent their inhabitants as purposeful, interactive, human-like agents.

It is not a great leap of the imagination to think of a product designer creating a virtual prototype of a piece of equipment, placing that equipment in a virtual workspace, then populating the workspace with virtual human operators who will perform their assigned tasks (operating or maintaining) on the equipment. The designer will need to instruct and guide the agents in the execution of their tasks, as well as evaluate their performance within his design. He may then change the design based on the agents’ interactions with it.

Although this scenario is possible today, using only one or two simulated humans and scripted task animations [3], the techniques employed do not scale well to tens or hundreds of humans. Scripts also limit any ability to have the human agents react to user input as well as each other during the execution of a task simulation. We wish to build a system capable of simulating many agents, performing moderately complex tasks, and able to react to external stimuli and events (either user-generated or from distributed simulation), which will operate in near real-time. To that end, we have put together a system which has the beginnings of these attributes,
and are in the process of investigating the limits of our approach. We describe below our architecture, which employs a variety of known and previously published techniques, combined together in a new way to achieve near real-time behavior on current workstations.

We first describe the machinery employed for behavioral control. This portion includes perceptual, control, and motor components. We then describe the multiprocessing framework built to run the behavioral system in near real-time. We conclude with some internal details of the execution environment. For illustrative purposes, our example scenario is a pedestrian agent, with the ability to locomote, walk down a sidewalk, and cross the street at an intersection while obeying stop lights and pedestrian crossing lights.

2 Behavioral Control

The behavioral controller, previously developed in [4] and [5], is designed to allow the operation of parallel, continuous behaviors, each attempting to accomplish some function relevant to the agent and each connecting sensors to effectors. Our behavioral controller is based on both potential-field reactive control from robotics [1, 10] and behavioral simulation from graphics, such as Wilhelms and Skinner’s implementation [20] of Braitenberg’s Vehicles. Our system is structured in order to allow the application of optimization learning [6], however, as one of the primary difficulties with behavioral and reactive techniques is the complexity of assigning weights or arbitration schemes to the various behaviors in order to achieve a desired observed behavior [5, 6].

Behaviors are embedded in a network of behavioral nodes, with fixed connectivity by links across which only floating-point messages can travel. On each simulation step the network is updated synchronously and without order dependence by using separate load and emit phases, using a simulation technique adapted from [14]. Because there is no order dependence, each node in the network could be on a separate processor, so the network could be easily parallelized.
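Our reading of the load/emit scheme, as a minimal sketch (not the authors' code): every node first reads its inputs as they stood at the end of the previous step, and only then does the whole network publish new outputs.

class Node:
    def __init__(self):
        self.inputs = []        # upstream Node references
        self.output = 0.0       # currently published floating-point message
        self._next = 0.0        # value computed during this step

    def compute(self, in_values):
        raise NotImplementedError   # subclasses define the node's function

    def load(self):
        # Read only previously published outputs, so the result cannot
        # depend on the order in which nodes are visited.
        self._next = self.compute([n.output for n in self.inputs])

    def emit(self):
        self.output = self._next

def step(network):
    for node in network:        # phase 1: everyone loads old outputs
        node.load()
    for node in network:        # phase 2: everyone emits new outputs
        node.emit()

Because a node never reads a value emitted during the current step, the two loops in step() could each be run across separate processors without changing the result.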
Each functional behavior is implemented as a sub-network of behavioral nodes defining a path from the geometry database of the system to calls for changes in the database. Because behaviors are implemented as networks of simpler processing units, the representation is more explicit than in behavioral controllers where entire behaviors are implemented procedurally. Wherever possible, values that could be used to parameterize the behavior nodes are made accessible, making the entire controller accessible to machine learning techniques which can tune components of a behavior that may be too complex for a designer to manage. The entire network comprising the various sub-behaviors acts as the controller for the agent and is referred to here as the behavior net.

There are three conceptual categories of behavioral nodes employed by behavioral paths in a behavior net:

perceptual nodes that output more abstract results of perception than what raw sensors would emit. Note that in a simulation that has access to a complete database of the simulated world, the job of the perceptual nodes will be to realistically limit perception, which is perhaps opposite to the function of perception in real robots.

motor nodes that communicate with some form of motor control for the simulated agent. Some motor nodes enact changes directly on the environment. More complex motor behaviors, however, such as the walk motor node described below, schedule a motion (a step) that is managed by a separate, asynchronous execution module.

control nodes which map perceptual nodes to motor nodes, usually using some form of negative feedback.

This partitioning is similar to Firby’s partitioning of continuous behavior into active sensing and behavior control routines [10], except that motor control is considered separate from negative feedback control.

2.1 Perceptual Nodes

The perceptual nodes rely on simulated sensors to perform the perceptual part of a behavior. The sensors access the environment database, evaluate and output the distance and angle to the target or targets. A sampling of different sensors currently used in our system is described below. The sensors differ only in the types of things they are capable of detecting.

Object: An object sensor detects a single object. This detection is global; there are no restrictions such as visibility limitations. As a result, care must be taken when using this sensor: for example, the pedestrian may walk through walls or other objects without the proper avoidances, and apparent realism may be compromised by an attraction to an object which is not visible. It should be noted that an object sensor always senses the object’s current location, even if the object moves. Therefore, following or pursuing behaviors are possible.

Location: A location sensor is almost identical to an object sensor. The difference is that the location is an unchangeable point in space which need not correspond to any object.

Proximity: A proximity sensor detects objects of a specific type. This detection is local: the sensor can detect only objects which intersect a sector-shaped region roughly corresponding to the field-of-view of the pedestrian.

Line: A line sensor detects a specific line segment.

Terrain: A terrain sensor, described in [17], senses the navigability of the local terrain. For example, the pedestrian can distinguish undesirable terrain such as street or puddles from terrain easier or more desirable to negotiate such as sidewalk.

Field-of-View: A field-of-view sensor, described in [17], determines whether a human agent is visible to any of a set of agents. The sensor output is proportional to the number of agents’ fields-of-view it is in, and inversely proportional to the distances to these agents.
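As a hedged illustration of the proximity sensor's sector test (our own sketch, on the ground plane, with illustrative range and field-of-view values):

import numpy as np

def proximity_sense(agent_pos, agent_heading, objects,
                    max_range=5.0, half_fov=np.radians(60)):
    """Return (distance, bearing) to the nearest detected object, or None."""
    best = None
    for obj in objects:                         # 2D positions of one type
        rel = obj - agent_pos
        d = float(np.linalg.norm(rel))
        if d == 0.0 or d > max_range:
            continue                            # out of sensing range
        bearing = np.arctan2(rel[1], rel[0]) - agent_heading
        bearing = (bearing + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi]
        if abs(bearing) > half_fov:
            continue                            # outside the sector
        if best is None or d < best[0]:
            best = (d, bearing)
    return best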

2.2 Control Nodes

Control nodes typically implement some form of negative feedback, generating outputs that will reduce perceived error in input relative to some desired value or limit. This is the center of the reactivity of the behavioral controller, and as suggested in [9], the use of negative feedback will effectively handle noise and uncertainty.
Two control nodes have been implemented as described in [4] and [5], attract and avoid. These loosely model various forms of taxes found in real animals [7, 11] and are analogous to proportional servos from control theory. Their output is in the form of a recommended new velocity in polar coordinates:
Attract: An attract control node is linked to $\theta$ and $d$ values, typically derived from perceptual nodes, and has angular and distance thresholds, $t_\theta$ and $t_d$. The attract behavior emits $\Delta\theta$ and $\Delta d$ values, scaled by linear weights, that suggest an update that would bring $d$ and $\theta$ closer to the threshold values. Given weights $k_\theta$ and $k_d$:

$$\Delta\theta = \begin{cases} 0 & \text{if } -t_\theta \le \theta \le t_\theta \\ k_\theta(\theta - t_\theta) & \text{if } \theta > t_\theta \\ k_\theta(\theta + t_\theta) & \text{otherwise} \end{cases} \qquad \Delta d = \begin{cases} k_d(d - t_d) & \text{if } d > t_d \\ 0 & \text{otherwise} \end{cases}$$

Avoid: The avoid node is not just the opposite of attract. Typically in attract, both $\theta$ and $d$ should be within the thresholds. With avoid, however, the intended behavior is usually to have $d$ outside the threshold distance, using $\theta$ only for steering away. The resulting avoid formulation has no angular threshold:

$$\Delta\theta = \begin{cases} 0 & \text{if } d > t_d \\ k_\theta(\pi - \theta) & \text{if } d \le t_d \text{ and } \theta \ge 0 \\ k_\theta(-\pi - \theta) & \text{otherwise} \end{cases} \qquad \Delta d = \begin{cases} k_d(t_d - d) & \text{if } d \le t_d \\ 0 & \text{otherwise} \end{cases}$$
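Transcribed directly from the formulas above (as we have reconstructed them from the garbled scan), the two nodes reduce to a few lines:

import math

def attract(theta, d, t_theta, t_d, k_theta, k_d):
    # Steer and advance toward the target until both inputs are
    # within their thresholds.
    if theta > t_theta:
        d_theta = k_theta * (theta - t_theta)
    elif theta < -t_theta:
        d_theta = k_theta * (theta + t_theta)
    else:
        d_theta = 0.0
    d_d = k_d * (d - t_d) if d > t_d else 0.0
    return d_theta, d_d

def avoid(theta, d, t_d, k_theta, k_d):
    # No angular threshold: theta is used only to steer away, toward
    # the direction opposite the obstacle (+/- pi).
    if d > t_d:
        d_theta = 0.0
    elif theta >= 0.0:
        d_theta = k_theta * (math.pi - theta)
    else:
        d_theta = k_theta * (-math.pi - theta)
    d_d = k_d * (t_d - d) if d <= t_d else 0.0
    return d_theta, d_d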

2.3 Motor Nodes

Motor nodes for controlling non-linked agents are implemented by interpreting the $\Delta d$ and $\Delta\theta$ values emitted from control behaviors as linear and angular adjustments, where the magnitude of the implied velocity vector gives some notion of the urgency of traveling in that direction. If this velocity vector is attached directly to a figure so that requested velocity is mapped directly to a change in the object’s position, the resulting agent appears jet-powered and slides around with infinite damping as in Wilhelms and Skinner’s environment [20].

[Figure 1: Sawtooth path due to potential field discontinuities]

2.3.1 Walking by sampling potential fields

When controlling agents that walk, however, the motor node mapping the velocity vector implied by the outputs of the control behaviors to actual motion in the agent needs to be more sophisticated. In a walking agent the motor node of the behavior net schedules a step for an agent by indicating the position and orientation of the next footstep, where this decision about where to step next happens at the end of every step rather than continuously along with motion of the agent. The velocity vector resulting from the blended output of all control nodes could be used to determine the next footstep; however, doing so results in severe instability around threshold boundaries. This occurs because we allow thresholds in our sensor and control nodes, and as a result the potential field space is not continuous. Taking a discrete step based on instantaneous information may step across a discontinuity in field space. Consider the situation in Fig. 1 where the agent is attracted to a goal on the opposite side of a wall and avoids the wall up to some threshold distance. If the first step is scheduled at position p1, the agent will choose to step directly toward the goal and will end up at p2. The agent is then well within the threshold distance for walls and will step away from the wall and end up at p3, which is outside the threshold. This process then repeats until the wall is cleared, producing an extremely unrealistic sawtooth path about the true gradient in the potential field.

To eliminate the sawtooth path effect, we sample the value of the potential field implied by the sensor and control nodes in the space in front of the agent and step on the location yielding the minimum sampled ‘energy’ value. We sample points that would be the agent’s new location if the agent were to step on points in a number of arcs within a fan in front of the agent’s forward foot. This fan, shown in Fig. 2, represents the geometrically valid foot locations for the next step position under our walking model. This sampled step space could be extended to allow side-stepping or turning around, which the agent can do [3], though this is not currently accessed from the behavior system described in this paper. For each sampled step location, the potential field value is computed at the agent’s new location, defined as the average location and orientation of the two feet.

[Figure 2: The fan of potential foot locations and orientations]
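A hedged sketch of this step selection (the fan parameters and the scalar energy callback are illustrative placeholders, not the paper's values; for simplicity it evaluates the field at the candidate foot position rather than at the average location of the two feet):

import numpy as np

def choose_step(foot_pos, heading, energy,
                n_arcs=4, n_angles=9,
                min_len=0.2, max_len=0.8, half_spread=np.radians(45)):
    """Sample the fan of valid next footsteps; return the lowest-energy one."""
    best_pos, best_e = None, np.inf
    for L in np.linspace(min_len, max_len, n_arcs):       # arcs of the fan
        for a in np.linspace(-half_spread, half_spread, n_angles):
            cand = foot_pos + L * np.array([np.cos(heading + a),
                                            np.sin(heading + a)])
            e = energy(cand)       # potential-field value at the new location
            if e < best_e:
                best_pos, best_e = cand, e
    return best_pos

Because the step is chosen as a minimum over the whole fan rather than from the instantaneous blended velocity, a discontinuity in the field no longer flips the agent back and forth across a threshold boundary.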

[Figure 3: An example behavior net for walking]

2.4 An example behavior net

The example behavior net in Fig. 3 specifies an overall behavior for walking agents that head toward a particular goal object while avoiding obstacles (cylinders in this case) and each other. The entire graph is the behavior net, and each path from perception to motor output is considered a behavior. In this example there are three behaviors: one connecting a goal sensor to an attraction controller and then to the walk node (a goal-attraction behavior), another connecting a sensor detecting proximity of other walking agents to an avoidance controller and then to the walk node (a walker-avoidance behavior), and a third connecting a cylinder sensor to an avoidance controller and then to the walk node (an obstacle-avoidance behavior).
[Figure 4: North-net: A sample ped-net shown graphically]

[Figure 5: A pedestrian crossing the street]

We use PaT-Nets in several different ways. Light-nets control traffic lights and ped-nets control pedestrians. Light-nets cycle through the states of the traffic light and the walk and don’t walk signs.

Fig. 4 is a simple ped-net, a north-net, which moves a pedestrian north along the eastern sidewalk through the intersection. Initially, avoidances are bound to the pedestrian so that it will not walk into walls, the street, poles, or other pedestrians. The avoidances are always active even as other behaviors are bound and unbound. In State 1 an attraction to the southeast corner of the intersection is bound to the pedestrian. The pedestrian immediately begins to walk toward the corner, avoiding obstacles along the way. When it arrives the attraction is unbound, and the action for State 1 is complete. Nothing further happens until the appropriate walk light is lit. When it is lit, the transition to State 2 is made and the action for State 2 begins.
This document is available on Docket Alarm but you must sign up to view it.

