with additional C routines; the C language is more flexible and powerful than any higher level geometric scripting language we could design ourselves.

10 Results

We have constructed a placement editor for real-time interactive walkthrough of large building databases. One of our primary goals was to work with off-the-shelf input and display hardware, a goal which required a software framework that lets the user perform unambiguous 3D manipulation with 2D devices.

Our solution is based on object associations, a framework that provides the flexibility to combine pseudo-physical properties with convenient teleological behavior in a mixture tailor-made for a particular application domain or a special set of tasks. We have found that such a mixture of the “magical” capabilities of geometric editing systems with partial simulations of real, physical behavior makes a very attractive and easy-to-use editing system for 3D virtual environments. The combination of goal-oriented alignments, such as snap-dragging, with application-specific physical behavior, such as gravity and solidity, reduces the degrees of freedom the user has to deal with explicitly, while maintaining most of the convenience of a good geometrical drafting program.

We found it practical to separate the mapping of 2D pointing to 3D motion and the enforcement of the desired object placement behavior into two distinct types of procedures. These procedures are clearly defined and easy to implement as small add-on functions in C. Geometric and database toolkits allow high-level coding and ease of modification. Our object associations normally add little computational overhead to the WALKTHRU system. This is an important concern, since keeping the response time of the system fast and interactive is a crucial aspect of its usability and user-friendliness.

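To make this separation concrete, the sketch below shows one plausible shape for the two procedure types as small C add-on functions. All type and function names here are ours for illustration; the paper does not give the actual WALKTHRU interfaces.

    /* Hypothetical sketch; the real WALKTHRU/WALKEDIT interfaces
     * are not shown in the paper. */
    typedef struct { float x, y, z; } Vec3;

    /* Type 1: map a 2D cursor position to a tentative 3D location,
     * e.g. by casting a ray into the scene and finding a support
     * surface such as a floor or a wall. */
    typedef Vec3 (*MapCursorTo3D)(int cursor_x, int cursor_y);

    /* Type 2: enforce the desired placement behavior on that
     * tentative location, e.g. snap a chair to the floor plane and
     * keep it from penetrating walls. */
    typedef Vec3 (*EnforcePlacement)(const void *object, Vec3 tentative);

    /* A drag update would simply chain the two procedures: */
    Vec3 place_object(const void *object, int cx, int cy,
                      MapCursorTo3D map, EnforcePlacement behave)
    {
        Vec3 p = map(cx, cy);      /* 2D pointing -> 3D motion  */
        return behave(object, p);  /* desired placement behavior */
    }
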
The result is a technique that makes object placement quick and accurate, works with “drag-and-drop” as well as “cut-and-paste” interaction techniques, can provide desirable local object behavior and an automated grouping facility, and greatly reduces the need for multiple editing modes in the user interface. The resulting environment is devoid of fancy widgets, sophisticated measuring bars, or multiple view windows. To the novice user it seems that not much is happening: objects simply follow the mouse to reasonable, realistic locations. And that is how it ideally should be; any additional gimmick is an indication that the paradigm has not yet been pushed to its full potential. Some issues remain to be fully resolved, such as dealing with association loops, but our prototype demonstrates that object associations offer a simple, flexible, and practical approach to constructing easy-to-use 3D manipulation interfaces.

A prototype implementation in the context of a model of a building with more than 100 rooms has proven to be attractive and has reduced by a large factor the tedium of placing furniture and wall decorations. One of the authors has constructed scenes of rather cluttered offices, with many pieces of furniture fully loaded with books, pencils, coffee cups, etc., in five to ten minutes (see Figure C2). The implementation in our specific WALKEDIT application domain required only 5 programmer-defined procedures to fully characterize most of the desired object behavior.

CamDroid: A System for Implementing Intelligent Camera Control

Steven M. Drucker (MIT Media Lab, smd@media.mit.edu)
David Zeltzer (MIT Research Laboratory for Electronics, dz@vetrec.mit.edu)

Massachusetts Institute of Technology
Cambridge, MA 02139, USA

Abstract

In this paper, a method of encapsulating camera tasks into well-defined units called “camera modules” is described. Through this encapsulation, camera modules can be programmed and sequenced, and thus can be used as the underlying framework for controlling the virtual camera in widely disparate types of graphical environments. Two examples of the camera framework are shown: an agent which can film a conversation between two virtual actors, and a visual programming language for filming a virtual football game.

Keywords: Virtual Environments, Camera Control, Task Level Interfaces.

1. Introduction

Manipulating the viewpoint, or a synthetic camera, is fundamental to any interface which must deal with a three-dimensional graphical environment, and a number of articles have discussed various aspects of the camera control problem in detail [3, 4, 5, 19]. Much of this work, however, has focused on techniques for directly manipulating the camera.

In our view, this is the source of much of the difficulty. Direct control of the six degrees of freedom (DOFs) of the camera (or more, if field of view is included) is often problematic and forces the human VE participant to attend to the interface and its “control knobs” in addition to, or instead of, the goals and constraints of the task at hand. In order to achieve task-level interaction with a computer-mediated graphical environment, these low-level direct controls must be abstracted into higher-level camera primitives, and in turn combined into even higher-level interfaces. By clearly specifying what specific tasks need to be accomplished at a particular unit of time, a wide variety of interfaces can be easily constructed. This technique has already been successfully applied to interactions within a Virtual Museum [8].

2. Related Work

Ware and Osborne [19] described several different metaphors for exploring 3D environments, including the “scene in hand,” “eyeball in hand,” and “flying vehicle control” metaphors. All of these use a 6 DOF input device to control the camera position in the virtual environment. They discovered that flying vehicle control was more useful when dealing with enclosed spaces, and that the “scene in hand” metaphor was useful in looking at a single object. Any of these metaphors can be easily implemented in our system.

Mackinlay et al. [16] describe techniques for scaling camera motion when moving through virtual spaces, so that, for example, users can always maintain precise control of the camera when approaching objects of interest. Again, it is possible to implement these techniques using our camera modules.

Brooks [3, 4] discusses several methods for using instrumented mechanical devices such as stationary bicycles and treadmills to enable human VE participants to move through virtual worlds using natural body motions and gestures. Work at Chapel Hill has, of course, focused for some time on the architectural “walkthrough,” and one can argue that such direct manipulation devices make good sense for this application. While the same may be said for the virtual museum, it is easy to think of circumstances, such as reviewing a list of paintings, in which it is not appropriate to require the human participant to physically walk or ride a bicycle. At times, one may wish to interact with topological or temporal abstractions rather than spatial ones. Nevertheless, our camera modules will accept data from arbitrary input devices as appropriate.

Blinn [2] suggested several modes of camera specification based on a description of what should be placed in the frame, rather than just describing where the camera should be and where it should be aimed.

Phillips et al. [18] suggest some methods for automatic viewing control. They primarily use the “camera in hand” metaphor for viewing human figures in the Jack system, and provide automatic features for maintaining smooth visual transitions and avoiding viewing obstructions. They do not deal with the problems of navigation, exploration, or presentation.

Karp and Feiner [12] describe a system for generating automatic presentations, but they do not consider interactive control of the camera.

Gleicher and Witkin [10] describe a system for controlling the movement of a camera based on the screen-space projection of an object, but their system works primarily for manipulation tasks.

Our own prior work attempted to establish a procedural framework for controlling cameras [7]. Problems in constructing generalizable procedures led to the current, constraint-based framework described here. Although this paper does not concentrate on methods for satisfying multiple constraints on the camera position, this is an important part of the overall camera framework we outline here; for a more complete reference, see [9]. An earlier form of the current system was applied to the domain of a Virtual Museum [8].

3. CamDroid System Design

This framework is a formal specification for many different types of camera control. The central notion of the framework is that camera placement and movement are usually done for particular reasons, and that those reasons can be expressed formally as a number of primitives or constraints on the camera parameters. We can identify these constraints based on analyses of the tasks required in the specific job at hand. By analyzing a wide enough variety of tasks, a large base of primitives can be easily drawn upon to be incorporated into a particular task-specific interface.

3.1 Camera Modules

A camera module represents an encapsulation of the constraints and a transformation of specific user controls over the duration that a specific module is active. A complete network of camera modules with branching conditions between modules incorporates user control, constraints, and response to changing conditions in the environment over time.

Our concept of a camera module is similar to the concept of a shot in cinematography. A shot represents the portion of time between the starting and stopping of filming a particular scene, and therefore represents continuity of all the camera parameters over that period of time. The unit of a single camera module requires an additional level of continuity: continuity of control of the camera. This requirement is added because of the ability in computer graphics to identically match the camera parameters on either side of a cut, blurring the distinction of what makes up two separate shots. Imagine that the camera is initially pointing at character A and following him as he moves around the environment. The camera then pans to character B and follows her for a period of time. Finally the camera pans back to character A. In cinematic terms, this would be a single shot, since there was continuity in the camera parameters over the entire period. In our terms, it would be broken down into four separate modules. The first module's task is to follow character A. The second module's task is to pan from A to B in a specified amount of time. The third module's task is to follow B. And the last module's task is to pan back from B to A. Breaking this cinematic shot into four modules does not specify an implementation, but rather a formal description of the goals or constraints on the camera for each period of time.

As shown in Figure 1, the generic module contains the following components:

Figure 1: Generic camera module containing a controller, an initializer, a constraint list, and local state.

- The local state vector. This must always contain the camera position, camera view normal, camera “up” vector, and field of view. The state can also contain values for the camera parameter derivatives, a value for time, or other local information specific to the operation of that module. While the module is active, the state's camera parameters are output to the renderer.
- The initializer. This is a routine that is run upon activation of a module. A typical initialization sets up the camera state based on a previous module's state.
- The controller. This component translates user inputs either directly into the camera state or into constraints. There can be at most one controller per module.
- Constraints to be satisfied during the time period that the module is active. Some examples of constraints are as follows:
  - maintain the camera's up vector aligned with the world up;
  - maintain height relative to the ground;
  - maintain the camera's gaze (i.e., view normal) toward a specified object;
  - make sure a certain object appears on the screen;
  - make sure that several objects appear on the screen;
  - zoom in as much as possible.

In this system, the constraint list can be viewed simply as a black box that produces values for some DOFs of the camera. The constraint solver combines these constraints using a constrained optimizing solver to come up with the final camera parameters for a particular module. The camera optimizer is discussed extensively in [9]. Some constraints directly produce values for a degree of freedom, for example, specifying the up vector of the camera or the height of the camera. Some involve calculations that might produce multiple DOFs, such as adjusting the view normal of the camera to look at a particular object. Some, like the path planning constraint discussed in [8], are quite complicated, and generate a series of DOFs over time through the environment based on an initial and final position.

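For concreteness, the sketch below renders the module description above as a C data structure. All names are ours, not CamDroid's actual code, and the constraint entry is deliberately the "black box" the text describes.

    #include <stddef.h>

    typedef struct { double x, y, z; } Vec3;

    /* Local state: the four required camera parameters plus any
     * module-specific extras (derivatives, a time value, ...). */
    typedef struct {
        Vec3   position;
        Vec3   view_normal;
        Vec3   up;
        double field_of_view;
        double time;                /* optional module-local value */
    } CameraState;

    struct CameraModule;            /* forward declaration */

    /* A constraint contributes values (here, a residual to be driven
     * toward zero by the solver) for some DOFs of the camera. */
    typedef struct {
        double (*residual)(const CameraState *state, void *data);
        void  *data;
    } Constraint;

    typedef struct CameraModule {
        CameraState state;          /* local state vector */
        /* run on activation, e.g. copy the previous module's state */
        void (*initialize)(struct CameraModule *self,
                           const struct CameraModule *prev);
        /* at most one controller: user input -> state or constraints */
        void (*controller)(struct CameraModule *self,
                           const double *inputs, size_t n_inputs);
        Constraint *constraints;    /* constraint list for the solver */
        size_t      n_constraints;
    } CameraModule;
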
Figure 2: Overall CamDroid system.

3.2 The CamDroid System

The overall system for the examples given in this paper is shown in Figure 2. The CamDroid system is an extension to the 3D virtual environment software testbed developed at MIT [6]. The system is structured this way to emphasize the division between the virtual environment database, the camera framework, and the interface that provides access to both. The CamDroid system contains the following elements:

- A general interpreter that can run pre-specified scripts or manage user input. The interpreter is an important part of developing the entire runtime system. Currently the interpreter used is Tcl, with the interface widgets created with Tk [17]. Many commands have been embedded in the system, including the ability to do dynamic simulation, visibility calculations, finite element simulation, matrix computations, and various database inquiries (a sketch of such an embedded command follows this list). By using an embedded interpreter we can do rapid prototyping of a virtual environment without sacrificing too much performance, since a great deal of the system can still be written in a low-level language like C. The addition of Tk provides convenient creation of interface widgets and interprocess communication. This is especially important because some processes might need to perform computation-intensive parts of the algorithms; they can be offloaded onto separate machines.
- A built-in renderer. This subsystem can use either the hardware of a graphics workstation (currently SGIs and HPs are supported) or software to create a high-quality antialiased image.
- An object database for a particular environment.
- Camera modules, described in the previous section. Essentially, they encapsulate the behavior of the camera for different styles of interaction. They are prespecified by the user and associated with various interface widgets; several widgets can be connected to several camera modules. The currently active camera module handles all user inputs and attempts to satisfy all the constraints contained within the module, in order to compute the camera parameters which will be passed to the renderer when creating the final image. Currently, only one camera module is active at any one time, though if there were multiple viewports, each of them could be assigned a unique camera.

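The paper names Tcl as the embedded command language [17]. Purely as an illustration of this kind of embedding, the sketch below registers a single invented command, dbquery, using the classic (Tcl 7.x-era) C API; the real CamDroid command set is not listed in the paper.

    #include <tcl.h>
    #include <stdio.h>

    /* Hypothetical embedded command: "dbquery <objectName>". */
    static int DbQueryCmd(ClientData clientData, Tcl_Interp *interp,
                          int argc, char *argv[])
    {
        if (argc != 2) {
            Tcl_SetResult(interp, "usage: dbquery objectName", TCL_STATIC);
            return TCL_ERROR;
        }
        /* ... look up argv[1] in the object database here ... */
        Tcl_SetResult(interp, "found", TCL_STATIC);
        return TCL_OK;
    }

    int main(void)
    {
        Tcl_Interp *interp = Tcl_CreateInterp();
        Tcl_CreateCommand(interp, "dbquery", DbQueryCmd, NULL, NULL);
        /* Scripts and interface widgets can now call the new command. */
        if (Tcl_Eval(interp, "puts [dbquery camera1]") != TCL_OK)
            fprintf(stderr, "dbquery failed\n");
        Tcl_DeleteInterp(interp);
        return 0;
    }
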
4. Example: Filming a conversation

The interface for the conversation filming example is based on the construction of a software agent which perceives changes in limited aspects of the environment and uses a number of primitives to implement agent behaviors. The sensors detect movements of objects within the environment and can perceive which character is designated to be talking at any moment.

In general, the position of the camera should be based on conventional techniques that have been established in filming a conversation. Several books have dissected conversations and come up with simplified rules for an effective presentation [1, 14]. The conversation filmer encapsulates these rules into camera modules which the software agent calls upon to construct (or assist a director in the construction of) a film sequence.

4.1 Implementation

The placement of the camera is based on the position of the two people having the conversation (see Figure 3). However, more important than placing the camera in the approximate geometric relationship shown in Figure 3 is positioning the camera based on what is being framed within the image.

Figure 3: Filming a conversation [14].

Constraints for an over-the-shoulder shot:

- The height of the character facing the view should be approximately 1/2 the size of the frame.
- The person facing the view should be at about the 2/3 line on the screen.
- The person facing away should be at about the 1/3 line on the screen.
- The camera should be aligned with the world up.
- The field of view should be between 20 and 60 degrees.
- The camera view should be as close to facing directly on to the character facing the viewer as possible.

Constraints for a corresponding over-the-shoulder shot:

- The same constraints as described above, except that the people should not switch sides of the screen; therefore the person facing towards the screen should be placed at the 1/3 line, and the person facing away should be placed at the 2/3 line.

Figure 3 can be used to find the initial positions of the cameras if necessary, but the constraint solver contained within each camera module makes sure that the composition of the screen is as desired. Figure 4 shows how two camera modules can be connected to automatically film a conversation.

Figure 4: Two interconnected camera modules for filming a conversation.

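Framing rules like these map naturally onto the residual-style constraints of Section 3.1. The sketch below shows one plausible C encoding of the “person on the 2/3 line” rule; the projection helper and every name here are assumptions for illustration, not CamDroid code.

    typedef struct { double x, y, z; } Vec3;
    typedef struct { Vec3 position, view_normal, up; double fov; } Camera;

    /* Assumed helper: perspective-project a world-space point into
     * normalized screen coordinates, x in [0, 1]. Not shown here. */
    extern double project_to_screen_x(const Camera *cam, Vec3 world);

    /* Residual for "the person facing the view sits on the 2/3 line":
     * zero when satisfied, growing quadratically as the framing drifts.
     * The module's optimizer drives this toward zero. */
    double two_thirds_line_residual(const Camera *cam, Vec3 head_position)
    {
        double sx = project_to_screen_x(cam, head_position);
        double err = sx - 2.0 / 3.0;
        return err * err;
    }
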
A more complicated combination of camera modules can be incorporated as the behaviors of a simple software agent. The agent contains a rudimentary reactive planner which selects camera behaviors (combinations of camera primitives) in response to sensed data. The agent has two primary sets of camera behaviors: one for when character 1 is speaking, and one for when character 2 is speaking. The agent needs sensors which can “detect” who is speaking and direct a camera module from the desired set of behaviors to become active. Since the modules necessarily keep track of the positions of the characters in the environment, the simulated actors can move about while the proper screen composition is maintained.

Figure 5: Conversation filming agent and its behaviors.

Figure 6 shows an over-the-shoulder shot automatically generated by the conversation filming agent.

5. Example: the Virtual Football Game

The virtual football game was chosen as an example because there already exists a methodology for filming football games that can be called upon as a reference for comparing the controls and the resultant output of the virtual football game. Also, the temporal flow of the football game is convenient, since it contains starting and stopping points, specific kinds of choreographed movements, and easily identifiable participants. A visual programming language for combining camera primitives into camera behaviors was explored. Finally, an interface built on top of the visual programming language, based directly on the way that a conventional football game is filmed, was developed.

It is important to note that there are significant differences between the virtual football game and filming a real football game. Although attempts were made to make the virtual football game realistic (three-dimensional video images of players were incorporated, and football plays were based on real plays [15]), this virtual football game is intended to be a testbed for intelligent camera control rather than a portrayal of a real football game.

5.1 Implementation

Figure 7 shows the visual programming environment for the camera modules. Similar in spirit to Haeberli's ConMan [11] or Kass's GO [13], the system allows the user to connect camera modules
and drag and drop initial conditions and constraints, in order to control the output of the CamDroid system. The currently active camera module's camera state is used to render the view of the graphical environment. Modules can be connected together by drawing a line from one module to the next. A boolean expression can then be added to the connector to indicate when control should be shifted from one module to the connected module. It is possible to set up multiple branches from a single module. At each frame, the branching conditions are evaluated, and control is passed to the first module whose branching condition evaluates to TRUE.

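A minimal C sketch of this per-frame branching rule follows; the types are invented for illustration, and everything except the selection logic is omitted.

    #include <stddef.h>

    typedef struct CameraModule CameraModule;

    /* A connector: shift control to `target` when `condition` is TRUE. */
    typedef struct {
        int (*condition)(void *env);  /* boolean expression on the environment */
        void *env;
        CameraModule *target;
    } Branch;

    /* Per frame: pass control to the first connected module whose
     * branching condition evaluates to TRUE; otherwise keep the
     * currently active module. */
    CameraModule *evaluate_branches(CameraModule *active,
                                    const Branch *branches, size_t n)
    {
        for (size_t i = 0; i < n; i++)
            if (branches[i].condition(branches[i].env))
                return branches[i].target;
        return active;
    }
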

Constraints can be instanced from existing constraints, or new ones can be created and their constraint functions entered via a text editor. Information for individual constraints can be entered via the keyboard or mouse clicks on the screen. When constraints are dragged into a module, all the constraints in the module are included during optimization. Constraints may also be grouped, so that slightly higher-level behaviors composed of a group of low-level primitives may be dragged directly into a camera module.

Initial conditions can be dragged into the modules to force the minimization to start from those conditions. Initial conditions can be retrieved at any time from the current state of the camera. Camera modules can also be directed to use the current state to begin optimization when control is passed to them from other modules.

Controllers can also be instanced from a palette of existing controllers, or new ones created and their functions entered via a text editor. If a controller is dragged into a module, it will translate the actions of the user subject to the constraints within the module. For example, a controller that orbits about an object may be added to a module which constrains the camera's up vector to align with the world up vector.

Figure 7: Visual programming environment for camera modules.

The end user does not necessarily wish to be concerned with the visual programming language for camera control. An interface that can be connected to the representation used for the visual programming language is shown in Figure 8. The interface provides a mechanism for setting the positions and movements of the players within the environment, as well as a way to control the virtual cameras. Players can be selected and new paths drawn for them at any time. The players will move along their paths in response to clicking on the appropriate buttons of the football play controller. Passes can be indicated by selecting the appropriate players at the appropriate time step and pressing the pass button on the play controller.

Figure 8: The virtual football game interface.

The user can also select or move any of the camera icons, and the viewpoint is immediately shifted to that of the camera. Essentially, pressing one of the camera icons activates a camera module that has already been set up with initial conditions and constraints for that camera. Cameras can be made to track individual characters or the ball by selecting the players with the middle mouse button. This automatically adds a tracking constraint to the currently active module. If multiple players are selected, the camera attempts to keep all of them within the frame at the same time by adding multiple tracking constraints. The image can currently be fine-tuned by adjusting the constraints within the visual programming environment. A more complete interface would provide more bridges between the actions of the user on the end-user interface and the visual programming language.

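As an illustration of this interaction, the following C sketch adds one tracking constraint per selected player to the active module; the data layout is assumed for illustration and is not taken from CamDroid.

    #include <stdlib.h>

    typedef struct { double x, y, z; } Vec3;

    typedef struct {
        const Vec3 *target;          /* player (or ball) to keep framed */
    } TrackingConstraint;

    typedef struct {
        TrackingConstraint *constraints;
        size_t count, capacity;
    } ActiveModule;

    /* Middle-mouse selection would call this once per selected player;
     * selecting several players adds several constraints, and the
     * module's solver then tries to keep all of them in frame. */
    int add_tracking_constraint(ActiveModule *m, const Vec3 *player)
    {
        if (m->count == m->capacity) {
            size_t cap = m->capacity ? 2 * m->capacity : 4;
            TrackingConstraint *c = realloc(m->constraints, cap * sizeof *c);
            if (c == NULL)
                return -1;           /* out of memory; leave module unchanged */
            m->constraints = c;
            m->capacity = cap;
        }
        m->constraints[m->count++].target = player;
        return 0;
    }
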
Figure 9: View from the “game camera” of the virtual football game.

6. Results

We have implemented a variety of applications from a disparate set of visual domains, including the virtual museum [8], a mission planner [21], and the conversation and football game described in this paper. While formal evaluations are notoriously difficult, we did enlist the help of domain experts who could each observe and comment on the applications we have implemented. For the conversation agent, our domain expert was MIT Professor Glorianna Davenport, in her capacity as an accomplished documentary filmmaker. For the virtual football game, we consulted with Eric Eisendratt, a sports director for WBZ-TV, Boston. In addition, MIT Professor Tom Sheridan was an invaluable source of expertise on teleoperation and supervisory control. A thorough discussion of the applications, including comments of the domain experts, can be found in [9].

7. Summary

A method of encapsulating camera tasks into well-defined units called “camera modules” has been described. Through this encapsulation, camera modules can be designed which can aid a user in a wide range of interactions with 3D graphical environments. The CamDroid system uses this encapsulation, along with constrained optimization techniques and visual programming, to greatly ease the development of 3D interfaces. Two interfaces to distinctly different environments have been demonstrated in this paper.

8. Acknowledgements

This work was supported in part by ARPA/Rome Laboratories, NHK (Japan Broadcasting Co.), the Office of Naval Research, and equipment gifts from Apple Computer, Hewlett-Packard, and Silicon Graphics.

9. References

1. Arijon, D. Grammar of the Film Language. Silman-James Press, Los Angeles, 1976.

2. Blinn, J. Where am I? What am I looking at? IEEE Computer Graphics and Applications, July 1988.

3. Brooks, F.P., Jr. Grasping Reality Through Illusion -- Interactive Graphics Serving Science. Proc. CHI '88, May 15-19, 1988.

4. Brooks, F.P., Jr. Walkthrough -- A Dynamic Graphics System for Simulating Virtual Buildings. Proc. 1986 ACM Workshop on Interactive 3D Graphics, October 23-24, 1986.

5. Chapman, D. and C. Ware. Manipulating the Future: Predictor Based Feedback for Velocity Control in Virtual Environment Navigation. Proc. 1992 Symposium on Interactive 3D Graphics, Cambridge, MA: ACM Press, 1992.

6. Chen, D.T. and D. Zeltzer. The 3D Virtual Environment and Dynamic Simulation System. Technical Memo, MIT Media Lab, Cambridge, MA, August 1992.

7. Drucker, S., T. Galyean, and D. Zeltzer. CINEMA: A System for Procedural Camera Movements. Proc. 1992 Symposium on Interactive 3D Graphics, Cambridge, MA: ACM Press, 1992.

8. Drucker, S.M. and D. Zeltzer. Intelligent Camera Control for Virtual Environments. Graphics Interface '94, 1994.

9. Drucker, S.M. Intelligent Camera Control for Graphical Environments. Ph.D. Thesis, MIT Media Lab, 1994.

10. Gleicher, M. and A. Witkin. Through-the-Lens Camera Control. Computer Graphics 26(2): pp. 331-340, 1992.

11. Haeberli, P.E. ConMan: A Visual Programming Language for Interactive Graphics. Computer Graphics 22(4): pp. 103-111, 1988.

12. Karp, P. and S.K. Feiner. Issues in the Automated Generation of Animated Presentations. Graphics Interface '90, 1990.

13. Kass, M. GO: A Graphical Optimizer. ACM SIGGRAPH 91 Course Notes, Introduction to Physically Based Modeling, Las Vegas, NV, July 28-August 2, 1991.

14. Katz, S.D. Film Directing Shot by Shot: Visualizing from Concept to Screen. Michael Wiese Productions, Studio City, CA, 1991.

15. Korch, R. The Official Pro Football Hall of Fame. Simon & Schuster, New York, 1990.

16. Mackinlay, J., S. Card, et al. Rapid Controlled Movement Through a Virtual 3D Workspace. Computer Graphics 24(4): pp. 171-176, 1990.

17. Ousterhout, J.K. Tcl: An Embeddable Command Language. Proc. 1990 Winter USENIX Conference, 1990.

18. Phillips, C.B., N.I. Badler, and J. Granieri. Automatic Viewing Control for 3D Direct Manipulation. Proc. 1992 Symposium on Interactive 3D Graphics, Cambridge, MA: ACM Press, 1992.

19. Ware, C. and S. Osborne. Exploration and Virtual Camera Control in Virtual Three Dimensional Environments. Proc. 1990 Symposium on Interactive 3D Graphics, Snowbird, UT: ACM Press, 1990.

20. Zeltzer, D. Autonomy, Interaction and Presence. Presence: Teleoperators and Virtual Environments 1(1): 127-132, March 1992.

21. Zeltzer, D. and S. Drucker. A Virtual Environment System for Mission Planning. Proc. 1992 IMAGE VI Conference.
