posters.
Examples of view dependent impostors are:

• A texture map that is pasted onto the appropriate face of an object's bounding box. This texture map is called a textured cluster when it corresponds to an image of a group of objects.

• Another view dependent texture map is also known as a billboard in [6] and is obtained in the same way as texture maps. A billboard is centered at an object's center and made to rotate in such a way that it always faces the observer. Since one billboard is computed for each face of the object's bounding box, as the observer moves around the object a different billboard is selected for display according to the viewpoint. This impostor is useful to represent objects that are approximately rotationally symmetric, such as pine trees.

• Another variant of the texture map described above is a pseudo-texture map¹. A pseudo-texture map is a triangular mesh (or a quadrilateral strip) onto which a texture map is pasted in such a way that each pixel in the image is associated with a pair of triangles (or a quadrilateral) in the mesh.
`
Examples of view independent impostors are:

• The conventional levels-of-detail, i.e., geometrically coarse versions of a given object².

• Boxes whose faces have the average areas and colors of the corresponding sides of the object's bounding box.

• Texture mapped boxes. This representation uses texture maps that are pasted onto each face of the object's bounding box and is useful to represent box-like objects such as the Standard Oil Building in Chicago.
`
4.2 Impostor Selection
`
There are certain cases where specific impostors are more suitable than others, and we can usually "suggest" to the walkthrough program which representation to display at a given point in the simulation.

For example, if the image-space size N of an object is less than a few pixels, then the representation that should be used is the average box above. If N is greater than a prefixed maximum size, then the full detail object might be required. If different LODs are present in the model, then different image-space size thresholds may be used to select the appropriate LOD to be displayed.

Box-like and symmetric objects can be displayed using a texture mapped box and a billboard, respectively. Texture maps can be selected according to the observer's viewpoint. For example, if four texture maps are used for each face of an object's bounding box, then the appropriate texture map for a given viewpoint can be selected as follows:
`
1. In a pre-processing phase, associate to each texture map a number corresponding to the region it belongs to, as in Figure 3.

¹It can be used in machines that do not have texture mapping hardware.
²Some toolkits such as Performer[6] provide routines to automatically generate coarse versions of a given full-detail object.
`
`
`
Figure 3: Possible viewpoint regions in object coordinates.
`
2. During the walkthrough we determine the viewpoint with respect to the object's coordinate system, and therefore the region it is in.
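The region lookup in step 2 can be sketched as follows. This is a minimal illustration, not the paper's code: it assumes the four regions are numbered counter-clockwise from the +x axis of the object's coordinate system, and works with the viewpoint already transformed into object coordinates.

```python
import math

def viewpoint_region(eye_x, eye_z, num_regions=4):
    """Map a viewpoint (in object coordinates) to one of `num_regions`
    angular regions around the object, analogous to Figure 3.

    The numbering convention (counter-clockwise from +x) is an
    assumption for illustration; the paper only requires that each
    texture map be tagged with the region it was rendered from.
    """
    angle = math.atan2(eye_z, eye_x) % (2.0 * math.pi)
    step = 2.0 * math.pi / num_regions
    return int(angle / step) % num_regions
```

The texture map tagged with the returned region number in the pre-processing phase is the one selected for display.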
`
In some situations, both a view dependent and a view independent representation are suitable. When this is the case, we can decide between these two representations by obtaining the accuracy of each representation for the particular observer view angle using the table described in Section 3.2, and then select the representation with the highest accuracy/cost ratio. This heuristic is particularly useful in cases where the observer's line of sight is approaching a 45 degree angle with the line perpendicular to the texture map. In such a case, although the texture map may have a low rendering cost, its accuracy will also have a low value, which will favor the selection of a possibly more costly view dependent representation.
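The accuracy/cost selection rule above can be stated in a few lines. The candidate names and numbers below are invented for illustration; accuracy would come from the view-angle table of Section 3.2 and cost from the measured rendering time.

```python
def select_representation(candidates):
    """Pick the representation with the highest accuracy/cost ratio.

    `candidates` is a list of (name, accuracy, rendering_cost) tuples.
    All values here are illustrative placeholders.
    """
    return max(candidates, key=lambda c: c[1] / c[2])

# Near a 45-degree view angle the texture map's accuracy drops,
# so a costlier view independent LOD can win the ratio test:
candidates = [("texture map", 0.2, 1.0), ("coarse LOD", 0.6, 2.0)]
chosen = select_representation(candidates)
```

Here the texture map's ratio is 0.2 while the coarse LOD's is 0.3, so the costlier representation is chosen, matching the behavior described in the paragraph above.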
`
4.3 Formalization of the Problem
`
We begin by defining a meta-object abstraction to be an entity with one or more hardware drawable representations, as in the framework described in Section 4.1. It is an abstraction for both conceptual objects and groups of objects. As before, a hardware drawable representation is an entity that can be rendered by the graphics hardware to represent objects and has associated with it a rendering cost and a measure of its "contribution" to the simulation.

The model is then defined as a collection of conceptual objects at specific positions and orientations in space that forms the environment in which the user navigates.

The model hierarchy is defined to be a tree structure whose nodes are meta-objects that provide multiple representations of the model, each representing it at a given rendering time and providing the user with a given perception of it. In this hierarchy each node contains drawable representations of its children. The root contains the coarsest representation for the entire model with the lowest possible rendering cost, while the leaves form the perceptually best representation of the model with the highest rendering cost.

More formally, the model hierarchy M is a tree structure that can recursively be defined by the following rules:
`
1. A meta-object that has no children is a model hierarchy with just one node, the root node.
`
`
2. Let M1, M2, ..., Mn be model hierarchies whose root nodes are the meta-objects m1, m2, ..., mn, respectively, that represent sets of conceptual objects and have associated with each of them the sets r1, r2, ..., rn of drawable representations. Let m be a meta-object that represents the union of the mi and has associated with it a set r of drawable representations such that Cost(Max(r)) < Σ(i=1..n) Cost(Min(ri)), where Max(r) is the representation that has the highest cost among those in r, Min(ri) is the representation that has the lowest cost among those in ri, and Cost(x) is the rendering cost of representation x. M is then defined to be a model hierarchy if m is the parent of mi for i = 1...n.
`
Figure A shows how the model of a city would be organized to form a hierarchy in which each node has a set of impostors to represent the objects it subsumes.

Given these definitions, we state the walkthrough problem as a tree traversal problem:

"Select a set of nodes in the model hierarchy that provides the user with a perceptually good representation of the model", according to the following constraints:

1. The sum of the rendering costs of all selected nodes is less than the user-specified frame time.

2. Only one node can be selected for each path from the root node to a leaf node, since each node already contains drawable representations that represent all its descendant nodes.
`
The problem as described here is an NP-complete tree traversal problem and is a variant of the "Knapsack problem", which is not surprising since we are generalizing the system that Funkhouser and Sequin showed to be a knapsack problem. The candidate sets, from which only one element will be selected to be put in the knapsack, are the sets of representations associated with each meta-object. The knapsack size is the time per frame that the selected representations must not exceed. The cost of each element is the rendering cost associated with a representation. The profit of an element is the accuracy of the representation plus the benefit of the object with which it is associated.

To solve this problem we use the framework described in Section 4.1 and develop a model hierarchy building algorithm and a heuristic representation selection algorithm.
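The knapsack structure described above can be made concrete on a toy instance. The sketch below is exhaustive search, which is exponential in the number of meta-objects; it is only meant to show what an optimal selection looks like, whereas the paper solves the problem with the heuristic of Section 4.5. The data values are invented.

```python
from itertools import product

def best_selection(candidate_sets, frame_time):
    """Exhaustively solve a tiny instance of the selection problem:
    pick exactly one representation per meta-object so that total
    rendering cost stays within `frame_time` and total profit
    (accuracy + object benefit) is maximized.

    Each representation is a (cost, profit) pair; one candidate set
    per meta-object, as in the knapsack formulation above.
    """
    best, best_profit = None, float("-inf")
    for choice in product(*candidate_sets):
        cost = sum(c for c, _ in choice)
        profit = sum(p for _, p in choice)
        if cost <= frame_time and profit > best_profit:
            best, best_profit = choice, profit
    return best, best_profit

# Two meta-objects, each with a cheap and an expensive representation:
sets = [[(1, 1), (3, 5)], [(2, 2), (4, 9)]]
choice, profit = best_selection(sets, frame_time=5)
```

With a budget of 5, the optimum takes the cheap representation of the first meta-object and the expensive one of the second (total cost 5, profit 10).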
`
4.4 Design of the Model Hierarchy
`
We partition the entire model according to our formalization of the problem, and form a tree structure in which each node contains low-cost representations for the nodes it subsumes. The structure that we use is a variation of an octree that is a bounding volume hierarchy; it can be used to cull objects against the viewing frustum and also serves as a rendering aid, since we can draw its nodes.

This tree is constructed in a bottom-up fashion instead of the traditional top-down recursive way, so that we can see which objects are being "clustered"³ together, as described in Section 5. The criteria used to group objects take into account only the proximity of objects, and our model hierarchy building program is designed to cluster together nearby objects first, in the way illustrated in the 2D example of Figure 4.

According to a user-supplied number of divisions in the x, y, and z axes of the bounding box of the entire model, an initial octree cell size, and therefore tree depth, is specified. We start by creating a "child list" that contains all the conceptual objects in the model with their bounding boxes. This initial list will correspond to the leaves of the tree. The child list is used to generate the next level up of the tree. For each

³What is meant by clustering is basically the generation of impostors for the group of objects.
`
Figure 4: Generating the model hierarchy octree. Representations are generated for cells with more than one object.
`
Figure 5: Subtree A as depicted in Figure 4.
`
level of the tree, and for each cell in that level, we get the set of objects that are completely inside the cell. If this set is empty we move on to the next cell. Otherwise we compute the bounding box of the objects in the cell and discard it if the bounding box is already in the child list, since impostor representations for that set of objects have already been created. If it is not in the list, we create impostor representations for the cluster inside the cell.
In our implementation, clusters are generated by creating texture maps⁴ of the objects from given view angles; their generation is described in Section 5. After the impostor representations have been created, we make the cell point to its children and remove them from the child list. We then add the new cell to the end of the child list and repeat the process until we obtain a single cell with impostor representations for the entire model.
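The bottom-up loop above can be sketched as follows. This is a deliberately simplified model, assuming points on a line instead of 3D bounding boxes, a cell size that doubles per level in place of climbing octree levels, and a node label in place of real impostor generation; all names are illustrative.

```python
def build_hierarchy(objects, initial_cell, levels):
    """Bottom-up construction sketch of the model hierarchy
    (Section 4.4). `objects` maps names to 1D positions; impostor
    creation is stubbed out as a cluster node label.
    """
    child_list = [{"name": n, "pos": p, "children": []}
                  for n, p in objects.items()]
    cell = initial_cell
    for _ in range(levels):
        # Bucket current child list nodes by the cell they fall in.
        buckets = {}
        for node in child_list:
            buckets.setdefault(int(node["pos"] // cell), []).append(node)
        next_level = []
        for idx, members in sorted(buckets.items()):
            if len(members) == 1:
                # Single objects wait for a coarser level up the tree.
                next_level.append(members[0])
            else:
                # "Cluster": create an impostor node pointing at its
                # children and put it back on the child list.
                pos = sum(m["pos"] for m in members) / len(members)
                next_level.append({"name": "cluster@%d" % idx,
                                   "pos": pos, "children": members})
        child_list = next_level
        cell *= 2
    return child_list

# Two nearby objects cluster early; the far one joins at a coarser level.
root = build_hierarchy({"a": 0.1, "b": 0.2, "c": 5.0},
                       initial_cell=1.0, levels=4)
```

After four levels the child list collapses to a single root cell whose two children are the early cluster of a and b, and the lone object c.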
It is important to note that each time we cluster objects we always take into account the actual objects that the cell subtends instead of previously computed clusters. Note that cluster representations are generated only if there is more than one object totally inside each cell. Single objects inside a cell, as well as objects on cell boundaries, will be grouped in the next levels up in the hierarchy. Figure 5 shows the structure of subtree A depicted in Figure 4.
`
4.5 Traversal of the Model Hierarchy
`
Due to the NP-complete nature of selecting representations to render from the model hierarchy, we have devised a heuristic algorithm that quickly (in less than the frame time) traverses the model hierarchy. This algorithm selects representations to be rendered, accumulating rendering cost until the user-specified frame time is reached. When this occurs,

⁴Actually, representations only need to obey the cost requirement stated in Section 4.3.
`
`99
`
`102
`
`102
`
`

`

the algorithm stops and sends a list of representations to the graphics pipeline.

The tree traversal is top-down from the root node and first traverses the branches that contain the most "beneficial" nodes, according to the benefit heuristic presented in Section 3.1.
`
The problem is that our per-object benefit heuristic associates benefit not with cluster representations but with representations for conceptual objects that are at the very bottom of the tree. High up in the hierarchy we do not know to which branches of the tree the most beneficial objects belong. Because of this, we have decided to break the selection of nodes to render into two phases, as described below.
`
`
`
4.5.1 First Pass: Assign Initial Representation, Benefit, Visibility, and Cost

Figure 6: Tree representing the model hierarchy and the set of nodes to be rendered as a linked list.
`
In this first phase of the selection process, we recursively descend the model hierarchy in a depth-first manner and associate a benefit value, a visibility value, and an initial drawable representation with each node in the tree. Since the leaves represent single objects, their benefits are computed as a weighted average of the factors intrinsic to objects, as described in Section 3.1. The benefit value associated with a tree node is taken to be the maximum value of all the benefits of its children.
`
The visibility of a node is computed by checking if the bounding box in eye-coordinates of the bounding box of the object intersects the viewing frustum. A node is said to be visible if at least one of its children is visible.

At a given point in the simulation, a view dependent and a view independent representation for an object are selected using the criteria specified in Section 4.2. The rendering cost and accuracy of drawable representations that are stored with each representation in the model are used to select which of these two representations will be assigned as the initial representation of the node. The representation that has the highest accuracy/cost ratio is selected to be the initial representation. In the next phase (described below), if there is still frame time left we try to improve on this initial choice.

After initial representations are selected for each of a node's children, the children's cost is stored with the node to be used in the next phase.
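The two propagation rules of this pass (benefit is the maximum over the children, visibility is true if any child is visible) can be sketched as a single recursion. The node dictionary and the two callbacks are illustrative stand-ins for the paper's data structures and for the Section 3.1 benefit heuristic and the bounding-box test.

```python
def first_pass(node, frustum, benefit_of, visible_in):
    """First traversal pass (Section 4.5.1): recursively assign to
    each node a benefit (max over children) and a visibility flag
    (true if any child is visible). Leaf benefit and visibility come
    from the supplied callbacks; all names here are illustrative.
    """
    if not node.get("children"):
        node["benefit"] = benefit_of(node)
        node["visible"] = visible_in(node, frustum)
    else:
        for child in node["children"]:
            first_pass(child, frustum, benefit_of, visible_in)
        node["benefit"] = max(c["benefit"] for c in node["children"])
        node["visible"] = any(c["visible"] for c in node["children"])
    return node

# Toy tree: one visible high-benefit leaf, one invisible low-benefit leaf.
leaf1 = {"id": 1, "children": []}
leaf2 = {"id": 2, "children": []}
tree = first_pass({"children": [leaf1, leaf2]}, None,
                  benefit_of=lambda n: n["id"] * 5,
                  visible_in=lambda n, f: n["id"] == 2)
```

The parent ends up visible (one child is) with the maximum child benefit, which is exactly what lets the second pass steer toward beneficial branches.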
`
4.5.2 Second Pass: Best-First Tree Traversal
`
In this phase, we use the information obtained in the previous phase for each node of the model hierarchy to implement an efficient "best-first" tree traversal. The result of this traversal is a rendering list of drawable representations that is sent to the graphics hardware for rendering, as shown in Figure 6.

To implement this strategy, we make use of a list of meta-objects organized in decreasing order of benefit (computed in the previous phase). We keep accumulating frame time as we select representations to render, and whenever the time required to render the children of a node plus the total accumulated time so far exceeds the frame time, we insert the representation for the node in the rendering list and move on to the next node.
The algorithm first explores the branches of the tree connected to the most beneficial nodes as follows. Start by inserting the root node in the list and setting the total rendering cost to be the cost of rendering the initial representation associated with the root node. We then process this list until it is empty. We remove the element at the front of the list and discard it if it is not visible.

If the node is a leaf node (containing a conceptual object), we check if there is still rendering time left to select a better representation for the object. In the positive case we select to render (insert in the rendering list) the next higher accuracy representation for the node and add its rendering time to the total accumulated rendering time.

In the case where the node contains representations for a cluster of objects, we check if, instead of rendering the cluster representation, we still have time to render all of its children, i.e., the total accumulated time plus the cost of rendering the node's children does not exceed the frame time. In the positive case, we insert each of its visible children in the list ordered by each one's benefit and add their cost to the total accumulated rendering time. Otherwise we insert the cluster's representation into the rendering list.

Note that at each point in this traversal a complete representation of the scene is stored in the list of meta-objects, and whenever there is frame time left to render the children of a node, before adding the cost of the children to the total accumulated cost we subtract the cost of the initial representation for the node.
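The second pass can be condensed into the following sketch. It is a simplification under stated assumptions: nodes carry illustrative 'benefit', 'visible', 'cost', and 'children' fields from the first pass; leaf refinement to a higher-accuracy representation is omitted; and the subtraction of a node's own cost before adding its children's cost (the note above) appears explicitly in the budget test.

```python
def second_pass(root, frame_time):
    """Best-first traversal sketch (Section 4.5.2). Returns the
    rendering list and the accumulated cost. Field names are
    illustrative, not the paper's data structures.
    """
    open_list = [root]          # kept sorted by decreasing benefit
    total = root["cost"]
    rendering_list = []
    while open_list:
        node = open_list.pop(0)
        if not node["visible"]:
            continue            # discard invisible nodes
        kids = [c for c in node["children"] if c["visible"]]
        kid_cost = sum(c["cost"] for c in kids)
        # Refine only if swapping this node for its children fits:
        # subtract the node's own cost before adding the children's.
        if kids and total - node["cost"] + kid_cost <= frame_time:
            total += kid_cost - node["cost"]
            open_list.extend(kids)
            open_list.sort(key=lambda n: -n["benefit"])
        else:
            rendering_list.append(node)
    return rendering_list, total

leafA = {"benefit": 5, "visible": True, "cost": 3, "children": []}
leafB = {"benefit": 1, "visible": True, "cost": 3, "children": []}
top = {"benefit": 5, "visible": True, "cost": 2,
       "children": [leafA, leafB]}
```

With a generous budget the cluster is replaced by its children; with a tight one the cluster's own representation is rendered instead.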
`
4.6 Temporal Coherence
`
While navigating through the model, the viewpoint can randomly get close to or far away from "important" objects that require most of the frame time. This sometimes causes a seemingly random switch from a cluster representation to the representations of the actual objects (or vice-versa). The idea of frame-to-frame temporal coherence, as used by Funkhouser and Sequin, is used here to minimize this effect by discouraging switching from representations for parent nodes to representations for children nodes. We keep a counter of the number of times the walkthrough program decided to switch from parent to children. The actual switching is only allowed if this counter exceeds a pre-fixed threshold. The delayed switching from children representations to cluster representations is not implemented, since it would occur in a situation where most of the frame time has already been allocated, and this delay would greatly reduce the frame rate.
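The counter mechanism above is a simple hysteresis, which can be sketched as follows. The class and method names are invented for illustration; the paper only specifies the counter and the threshold, and the reset-when-demand-vanishes behavior is an assumption about how consecutive requests would be tracked.

```python
class SwitchDamper:
    """Temporal-coherence hysteresis (Section 4.6): a parent-to-children
    representation switch is honored only after the traversal has asked
    for it on `threshold` consecutive frames.
    """
    def __init__(self, threshold):
        self.threshold = threshold
        self.count = 0

    def request_switch(self, wants_children):
        if not wants_children:
            self.count = 0      # demand vanished: reset the counter
            return False
        self.count += 1
        return self.count >= self.threshold
```

A transient close approach to an "important" cluster therefore no longer triggers an immediate, visually jarring switch.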
`
`100
`
`103
`
`103
`
`

`

`"4/
`
`5
`
`Implementation
`
This research has resulted in the implementation of three programs on a four-processor SGI Onyx workstation with a RealityEngine board: the model hierarchy building and representation generation program, the cost and accuracy of representations measurement program, and the walkthrough program. These programs are implemented in C++, use GL[8] for rendering, and have an X-Motif GUI to facilitate parameter changes for system evaluation.
`
5.1 Model Hierarchy Building and Representation Generation
`
The program that builds the model hierarchy implements the hierarchy building algorithm described in Section 4.4 and opens two windows, as shown in Figure B. The right window displays the objects/clusters and computes texture maps for each of the sides of their bounding boxes, while the left illustrates the process of building the hierarchy. In this image, the dots represent objects that were not "clustered" yet. The purple square with green dots is the bounding box of the objects (in green) that completely fit inside it, and the red band shows groups of objects already "clustered".
View dependent impostors such as texture maps are automatically obtained in the following way with the help of the graphics hardware:

1. Set up a viewpoint, a viewing direction, and an orthographic projection matrix.

2. Draw the object(s) on a completely black background and adjust the texture resolution⁵ by scaling the object(s) inside the orthographic viewing volume.

3. Grab the resulting image from the window (right window in Figure B) and set the alpha component of black pixels to zero, so that if the objects have holes we can see through them when they are rendered.
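Step 3 above amounts to keying the alpha channel off the black background. A minimal sketch, operating on a list-of-rows image rather than the framebuffer a real system would grab from:

```python
def make_impostor_texture(rgb_rows):
    """Turn a grabbed RGB image into an RGBA texture whose black
    (background) pixels are fully transparent, so holes in the
    object stay see-through when the impostor is rendered.
    Input is a list of rows of (r, g, b) tuples; the function name
    and representation are illustrative.
    """
    return [[(r, g, b, 0 if (r, g, b) == (0, 0, 0) else 255)
             for (r, g, b) in row]
            for row in rgb_rows]

texture = make_impostor_texture([[(0, 0, 0), (10, 20, 30)]])
```

One consequence of this keying is that genuinely black pixels on the object itself would also become transparent, which is why the background must be reserved for pure black.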
`
Average color boxes are also obtained in a similar fashion. The average color for each face is just the average of the RGB colors of all non-black pixels, and the average area is the number of non-black pixels in the face's image, converted to an area in object coordinates.
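The per-face statistics just described can be sketched directly. The `area_per_pixel` conversion factor stands in for the object-coordinate conversion the paper mentions; names and representation are illustrative.

```python
def face_stats(rgb_rows, area_per_pixel):
    """Average-color-box statistics for one face (Section 5.1):
    mean RGB over non-black pixels, and the non-black pixel count
    converted to an area in object coordinates via `area_per_pixel`,
    the object-space area covered by one image pixel.
    """
    pixels = [p for row in rgb_rows for p in row if p != (0, 0, 0)]
    if not pixels:
        return (0, 0, 0), 0.0
    n = len(pixels)
    avg = tuple(sum(c[i] for c in pixels) / n for i in range(3))
    return avg, n * area_per_pixel

avg, area = face_stats([[(0, 0, 0), (100, 0, 0)],
                        [(200, 0, 0), (0, 0, 0)]],
                       area_per_pixel=0.5)
```

Two of the four pixels are non-black, so the face averages their color and covers one unit of object-space area.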
The generation of a pseudo-texture map involves a pre-processing of the original image, because if there are too many pixels in the image the rendering of the texture would require too many meshed triangles. Therefore, we successively shrink the original image by convolving it with a Gaussian filter that averages the RGB components of the pixels.
`
5.2 Cost and Accuracy of Representations Measurement
`
The cost of each representation is measured by selecting a specific representation and drawing it a number of times in order to get an average rendering time, as shown in Figure C.
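The measurement loop is a plain repeated-draw average. The sketch below uses wall-clock timing and a generic `draw` callable as stand-ins; on the real hardware one would also flush the graphics pipeline before reading the clock, which this simplification omits.

```python
import time

def measure_cost(draw, trials=100):
    """Estimate a representation's rendering cost by drawing it
    `trials` times and averaging the elapsed wall-clock time
    (Section 5.2). `draw` is any callable that renders the
    representation once; the name is illustrative.
    """
    start = time.perf_counter()
    for _ in range(trials):
        draw()
    return (time.perf_counter() - start) / trials

calls = []
cost = measure_cost(lambda: calls.append(1), trials=10)
```

Averaging over many draws smooths out scheduler noise, which matters because these per-representation costs feed directly into the frame-time budget of the traversal algorithm.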
`
The accuracy of an impostor is measured using the procedure described in Section 3.2, and a table is generated that describes how similar each of the representations is to the original image of the object for five directions around the object. One of the most immediate improvements we need to make is the use of more directions in this table.

⁵What ultimately determines the resolution of the texture map is the complexity (or granularity of details) that the object(s) exhibit(s) from a particular direction.

Figure 7: Checking the visibility of a set of objects against the viewing frustum.
`
5.3 Visual Navigation
`
The walkthrough program implements the framework described in Section 4.1 and the traversal algorithms described in Section 4.5. The computation of the representations to be rendered in the next frame is done on one processor while another holds the graphics pipeline to render the current frame. Semaphores are used to synchronize the two processes.

The traversal algorithm assumes that the visibility of bounding boxes can be determined quickly. This can be done by first computing the bounding box in eye-coordinates of the object's bounding box. We then compute its intersection with a box formed by extending the slice of the viewing frustum corresponding to the farthest z-value of this box to its nearest z-value. This visibility test can return true even though no object inside the cluster is actually inside the viewing frustum, as shown in Figure 7. This problem is solved by the first phase of the traversal algorithm, since it marks a cluster as visible if and only if at least one of the objects that it represents is inside the viewing frustum. If computing the visibility of individual objects is taking too much time, we can use a faster test and check if spheres enclosing groups of objects intersect the viewing frustum.
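The faster sphere test mentioned above is the standard sphere-versus-frustum-planes check, which can be sketched as follows. The plane representation (inward-pointing normals with points inside satisfying dot(n, p) + d >= 0) is an assumption for illustration, and like the box test in the text, the result is conservative: it may report "visible" for a sphere that merely comes close.

```python
def sphere_intersects_frustum(center, radius, planes):
    """Fast conservative visibility test (Section 5.3): a bounding
    sphere is culled as soon as it lies entirely on the outside of
    any one frustum plane. `planes` is a list of ((nx, ny, nz), d)
    pairs with inward-pointing unit normals.
    """
    for (nx, ny, nz), d in planes:
        dist = nx * center[0] + ny * center[1] + nz * center[2] + d
        if dist < -radius:   # completely outside this plane
            return False
    return True              # possibly intersecting (conservative)

# Two parallel planes bounding the slab 0 <= x <= 10:
slab = [((1.0, 0.0, 0.0), 0.0), ((-1.0, 0.0, 0.0), 10.0)]
```

A sphere at x = 5 passes, one well outside at x = -3 is culled, and one straddling the boundary at x = -0.5 is conservatively kept.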
`
5.4 Performance
`
Our test model has around 1.6 million polygons, and during our tests we have constrained the number and size of texture maps generated by the hierarchy building program to the available texture memory of one megatexel (one million texture pixels) by selecting appropriate octree cell sizes and adjusting the resolution of the texture representation for objects and clusters.

For this model we were able to keep a frame rate of around 16 frames per second (fps) for a target frame rate of 30 fps throughout the simulation without too much degradation in image quality. Figure D shows the image seen by the observer (left) and a top view of the same scene showing where clusters are being displayed (right).

Figure 8 shows the user mode (right) and real time (left) throughout a simulation path of the model. The user time graph shows that our estimation of cost and rendering algorithm is achieving the goal of keeping a uniform and high frame rate. The real time graph shows spikes due to random interrupts, and a gap with respect to the 1/30 line due to smooth LOD switching using transparency blending.
`
`101
`
`104
`
`104
`
`

`

Figure 8: Plot of frame versus frame time with (left) and without (right) smooth LOD switching.
`
These interrupts can be minimized by mechanisms such as processor isolation, interrupt redirection, processor locking, and so on, as described in [9]. The same model, without the model hierarchy, takes around 1 frame per second for certain viewpoints in our test path.
`
6 Limitations
`
One limitation of this system is the number of texture maps that can be used to represent objects and clusters. As the system uses more texture maps to represent clusters and individual objects, the chance of a texture-cache miss increases. A cache miss results in an unpredictable interrupt that will invariably defeat the purpose of a predictive system. Future methods of intelligent prefetch of textures that are likely to be needed could make texture cache misses much less likely, and thus allow the use of many more textured impostors.
We have not addressed the illumination of the environment. Although the illumination of a complex environment can be computed using the radiosity method in a view independent fashion, the shading attributes of objects (adding specular highlights) and clusters would need to be incorporated into their representations. Instancing of objects would not be practical, since two identical objects in different locations in the model would have different shading attributes. As the size of texture memory increases these problems will become less serious, but they will not go away.

The most serious limitation in our current implementation is that our tree traversal requires that a cluster know something about the benefits of its children, so all primitives are visited once per frame in the first pass of the algorithm, and therefore it is O(n), where n is the number of objects. Our traversal algorithm is top-down, so there is no reason it could not be O(log n) if a more intelligent traversal algorithm is used.
`
7 Conclusion
`
We have presented a way of using clusters of objects to improve the performance of an LOD-based visual navigation system. When there are too many visible LODs to render in real-time, we render single texture-mapped cluster primitives in place of groups of individual LODs. The techniques used to generate clusters can also be used to generate a particular type of textured LODs for single primitives. We have also discussed some limitations of the object-based benefit heuristic, and extended it to account for the variability of an LOD's quality as the view angle changes.

The main lessons to be drawn from this work are that the predictive framework of Funkhouser and Sequin extends well to a hierarchical version of the LOD concept, and that precomputed visibility information is not essential for efficient visual navigation programs.
`
8 Acknowledgments
`
Thanks to Ken Chin and Aaron Yonas for their suggestions on the draft of this paper. Thanks to Ken Chin and Paul Bourke for the models of a tree and a town house, respectively, used in the color plates. Thanks to the Brazilian government agency CAPES for providing the first author the financial support to conduct this research. Thanks to the Research and University Graduate Schools (RUGS) Research Facility Fund (RFF) and the NSF CDA-92-23008 grants that provided the graphics workstations that were used in this research. The second author was also supported by NSF BIA grant OCR-92439457.
`
References
`
[1] John M. Airey, John H. Rohlf, and Frederick P. Brooks, Jr. Towards image realism with interactive update rates in complex virtual building environments. Computer Graphics, pages 41-50, 1990.

[2] Kurt Akeley. RealityEngine graphics. Proceedings of SIGGRAPH '93 (Anaheim, California, August 1-6, 1993). In Computer Graphics Proceedings, Annual Conference Series, 1993, ACM SIGGRAPH, pages 109-116.

[3] Thomas A. Funkhouser and Carlo H. Sequin. Adaptive display algorithm for interactive frame rates during visualization of complex virtual environments. Computer Graphics, pages 247-254.

[4] Thomas A. Funkhouser, Carlo H. Sequin, and Seth Teller. Management of large amounts of data in interactive building walkthroughs. Proceedings of the 1992 Symposium on Interactive 3D Graphics (Cambridge, Massachusetts, March 29 - April 1, 1992), special issue of Computer Graphics, ACM SIGGRAPH, pages 11-20, 1992.

[5] Paulo Maciel. Visual navigation of largely unoccluded environments using textured clusters. Ph.D. Thesis, January 1995. Indiana University, Bloomington.

[6] John Rohlf and James Helman. IRIS Performer: A high performance multiprocessing toolkit for real-time 3D graphics. Proceedings of SIGGRAPH '94 (Orlando, Florida, July 24-29, 1994). In Computer Graphics Proceedings, Annual Conference Series, 1994, ACM SIGGRAPH, pages 381-394.

[7] Harvey R. Schiffman. Sensation and Perception: An Integrated Approach. John Wiley & Sons, New York, 1990.

[8] Silicon Graphics, Inc. Graphics Library Programming Guide, Volumes I and II, 1992.

[9] Silicon Graphics, Inc. REACT in IRIX: A description of real-time capabilities of IRIX 5.3 on Onyx/Challenge multiprocessor systems, 1994.
`
`
`
`
`

`

Guided Navigation of Virtual Environments

Tinsley A. Galyean

MIT Media Lab
Cambridge, MA 02139
tag@media.mit.edu
`
`
`
ABSTRACT

This paper presents a new method for navigating virtual environments called "The River Analogy." This analogy provides a new way of thinking about the user's relationship to the virtual environment, guiding the user's continuous and direct input within both space and time and allowing a more narrative presentation. The paper then presents the details of how this analogy was applied to a VR experience that is now part of the permanent collection at the Chicago Museum of Science and Industry.
1. INTRODUCTION

Today's Virtual Reality (VR) technology provides us with an opportunity to have experiences that would otherwise be impossible. We can smoothly and continuously interact while immersed in environments that would be inaccessible or impossible to experience. In these environments, we are free to roam and explore. Architectural walkthroughs, scientific visualization, and even games like DOOM, for example, place us in alternative worlds while giving us methods for navigating these virtual spaces. These methods allow smooth and continuous interaction that can immediately influence the constantly changing presentation, but they rely on the user's actions and thoughts to bring structure to the experience. If any narrative structure (or story) emerges, it is a product of our interactions and goals as we navigate the experience. I call this emergent narrative. In some applications this complete freedom to explore is appropriate. However, there is an alternative. This is the process of empowering the author to bring structure to the experience, which makes this medium more appropriate for applications such as teaching, storytelling, advertising and information presentation. To do this, we will need to balance the interaction (exploration) with an ability to guide the user, while at the same time maintaining a sense of pacing or flow through the experience. This type of guidance is the process of providing narrative structure. Like a narrative presentation, any solution must guide the user both temporally and spatially.
2. PREVIOUS WORK

Virtual environment navigation has mainly consisted of building new methods and technologies that allow the users to control the position and orientation of the virtual camera, through which they see the virtual world. Early work in camera control (even before the advent of VR technology) focused on specifying camera movements over a path [1, 4] in an effort to address the needs of animation production, and does not address the issue of interactive camera control.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association of Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
1995 Symposium on Interactive 3D Graphics, Monterey CA USA
© 1995 ACM 0-89791-736-7/95/0004...$3.50
A number of different researchers have addressed the issues behind camera control for manipulation and/or exploration applications [2, 5]. All of these methods focus on providing better ways for the user to roam free, exploring within the virtual world. It is this ability for the user to directly control his/her place in the virtual world that is so often synonymous with the words "virtual reality." While these methods couple smooth, continuous interaction with the smooth and continuous presentation available in real-time computer graphics environments, they do nothing to guide the user. There is no room for an author's intentions to influence the experience. Therefore there is no narrative structure.

Researchers in interactive narrative working with linear material like digital video have worked to unfold it in order to provide a non-linear environment for the user [3]. Shots are interactively laced together into sequences, and these sequences tell a story. Because shots are the smallest building blocks available, the interaction intermittently guides how these shots are laced together. The traditional analogy for how these types of interactive experiences are structured is often referred to as the branching analogy. Each branch represents a linear segment traversing part of the narrative space. A linear segment is played until the next node is reached. It is at these nodes where options are provided. The advantage of branching is that the experience does have a narrative structure; the interaction is guided. The disadvantage is that one can interact only at the nodes, thereby chopping up both the interaction and the presentation.
The goal set forth is to find a way to marry the advantages of immersive VR experience with the advantages of narrative structure. How do we allow the smooth, continuous interaction and presentation to coexist-ex
