INFORMATION VISUALIZATION USING 3D INTERACTIVE ANIMATION
George G. Robertson, Stuart K. Card, and Jock D. Mackinlay

Innovations are often driven by a combination of technology advances and application demands. On the technology side, advances in interactive computer graphics hardware, coupled with low-cost mass storage, have created new possibilities for information retrieval systems in which UIs could play a more central role. On the application side, increasing masses of information confronting a business or an individual have created a demand for information management applications. In the 1980s, text editing forced the shaping of the desktop metaphor and the now-standard GUI paradigm. In the 1990s, it is likely that information access will be a primary force in shaping the successor to the desktop metaphor. This article presents an experimental system, the Information Visualizer (see Figure 1), which explores a UI paradigm that goes beyond the desktop metaphor to exploit the emerging generation of graphical personal computers and to support the emerging application demand to retrieve, store, manipulate, and understand large amounts of information. The basic problem is how to utilize advancing graphics technology to lower the cost of finding information and accessing it once found (the information's "cost structure").
Figure 1. Information Visualizer overview

We take four broad strategies: making the user's immediate workspace larger, enabling user interaction with multiple agents, increasing the real-time interaction rate between user and system, and using visual abstraction to shift information to the perceptual system to speed information assimilation and retrieval.

Technology Advances
Since the early development of the standard GUI, hardware technology has continued to advance rapidly. Processor and memory technology have far greater performance at far lower cost. Specialized 3D graphics hardware has made it progressively faster and cheaper to do 3D transformations, hidden-surface removal, double-buffered animation, antialiasing, and lighting and surface models. At the same time, software support for real-time operating systems and emerging industry-standard open graphics libraries (e.g., OpenGL and PEX) are simplifying the 3D programming task. The trend will bring these technologies to the mass market in the near future.

These technology advances have created many possibilities for user interface innovation. Yet the basic Windows-Icons-Menus-Pointing (WIMP) desktop metaphor has not changed much since its emergence in the Alto/Smalltalk work. Nonetheless, there is a great desire to explore new UI paradigms. Experiments with pen-based notebook metaphors, virtual reality, and ubiquitous computing are proceeding and may eventually influence the mass market. Brown University's Andy van Dam, in several recent conferences, has exhorted us to break out of the desktop metaphor and escape flatland, and a recent workshop focused on Software Architectures for Non-WIMP User Interfaces [9]. It is this kind of technology change that is driving our research in the Information Visualizer.

Information Access vs. Document Retrieval
Computer-aided access to information is often thought of in the context of methods for library automation. In particular, document retrieval [19] is usually defined more or less as follows: There exists a set of documents and a person who has an interest in the information in some of them. Those documents that contain information of interest are relevant, others not. The problem is to find all and only the relevant documents. There are two standard figures of merit for comparing and evaluating retrieval systems: Recall is the percentage of all the relevant documents found, and precision is the percentage of the documents found that are relevant.
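To make these two figures of merit concrete, here is a minimal sketch with a hypothetical five-document query result; the document names and numbers are invented for illustration and do not come from the article:

```python
def recall_and_precision(relevant, retrieved):
    """Compute recall and precision for one query.

    relevant:  set of documents that actually satisfy the information need
    retrieved: set of documents the retrieval system returned
    """
    hits = relevant & retrieved                # relevant documents that were found
    recall = len(hits) / len(relevant)         # fraction of all relevant documents found
    precision = len(hits) / len(retrieved)     # fraction of returned documents that are relevant
    return recall, precision

# Hypothetical example: 3 relevant documents exist, the system returns 4 documents,
# 2 of which are relevant.
r, p = recall_and_precision({"d1", "d2", "d3"}, {"d2", "d3", "d7", "d9"})
print(r, p)   # 0.666..., 0.5
```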
While this formulation has been useful for comparing different approaches, we propose extending the document retrieval formulation to take the larger context into account. From a user's point of view, document retrieval and other forms of information retrieval are almost always part of some larger process of information use [2]. Examples are sensemaking (building an interpretation or understanding of information), design (building an artifact), decision making (building a decision and its rationale), and response tasks (finding information to respond to a query). In each of these cases:

1. Information is used to produce more information, or to act directly.
2. The new information is usually at a higher level of organization relative to some purpose.

If we represent the usual view of information retrieval as Figure 2(a), we can represent this extended view as Figure 2(b). Framing the problem in this way is suggestive: what the user needs is not so much information retrieval itself, but rather the amplification of information-based work processes. That is, in addition to concern with recall and precision, we also need to be concerned with reducing the time cost of information access and increasing the scale of information that a user can handle at one time.

Information Workspaces
From our observations about the problem of information access [2], we were led to develop UI paradigms oriented toward managing the cost structure of information-based work. This, in turn, led us to be concerned not just with the retrieval of information from a distant source, but also with the accessing of that information once it is retrieved and in use. The need for a low-cost, immediate storage for accessing objects in use is common to most kinds of work. The common solution is a workspace, whether it be a woodworking shop, a laboratory, or an office. A workspace is a special environment in which the cost structure of the needed materials is tuned to the requirements of the work process using them.

Computer screens provide a workspace for tasks done with the computer. However, typical computer displays provide limited working space. For real work, one often wants to use a much larger space, such as a dining room table. The Rooms system [10] was developed to extend the WIMP desktop to multiple workspaces that users could switch among, allowing more information to reside in the immediate work area. The added cost of switching and finding the right workspace was reduced by adding the ability to share the same information objects in different workspaces. Rooms also had an overview and other navigational aids, as well as the ability to store and retrieve workspaces, all to remove the major disadvantages of multiple desktops.

The essence of our proposal is to evolve the Rooms multiple-desktop metaphor into a workspace for information access--an Information Workspace [2]. Unlike the conventional information retrieval notion of simple access of information from some distal storage, an information workspace (1) treats the complete cost structure of information, integrating information access from distant, secondary, or tertiary storage with information access from Immediate Storage for information in use, and (2) considers information access part of a larger work process. That is, instead of concentrating narrowly on the control of a search engine, the goal is to improve the cost structure of information access for user work.
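The Rooms idea of sharing the same information objects across several workspaces, described above, can be made concrete with a minimal data-structure sketch. The class and attribute names here are ours, invented for illustration; they are not taken from the Rooms implementation:

```python
class InfoObject:
    """A piece of information (document, tool, visualization) that can live in several rooms."""
    def __init__(self, name):
        self.name = name

class Placement:
    """Where and how a shared object appears inside one particular room."""
    def __init__(self, obj, position):
        self.obj = obj            # the shared InfoObject itself, not a copy
        self.position = position  # room-specific presentation state

class Room:
    """One workspace: a set of placements of (possibly shared) objects."""
    def __init__(self, name):
        self.name = name
        self.placements = []

    def include(self, obj, position):
        self.placements.append(Placement(obj, position))

class Workspace:
    """All rooms together; an overview would iterate over self.rooms."""
    def __init__(self):
        self.rooms = {}

    def room(self, name):
        return self.rooms.setdefault(name, Room(name))

# The same object included in two rooms stays one object, so updating it
# updates it everywhere -- which is what lowers the cost of switching workspaces.
ws = Workspace()
report = InfoObject("quarterly-report")
ws.room("writing").include(report, position=(0, 0))
ws.room("analysis").include(report, position=(5, 2))
```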
With this system, we use four methods for improving the cost structure of information access:

1. Large Workspace. Make the Immediate Workspace virtually larger, so that the information can be held in low-cost storage.
2. Agents. Delegate part of the workload to semiautonomous agents.
3. Real-Time Interaction. Maximize interaction rates with the human user by tuning the displays and responses to real-time human action constants.
4. Visual Abstractions. Use visual abstractions of the information to speed assimilation and pattern detection.

Figure 2. (a) Traditional information retrieval formulation; (b) reformulation with the context of use (amplification of information-intensive work)
Table 1. Techniques used in the Information Visualizer to increase information access per unit cost

Table 2. Information Visualizer solutions to basic UI problems

These define the goals for our UI paradigm. Each of these is intended to decrease the costs for performing information-intensive tasks, or, alternatively, to increase the scope of information that can be utilized for the same cost. Figure 3 shows how these goals are applied to the reformulated information access problem shown in Figure 2(b). The Information Visualizer system is our experimental embodiment of the Information Workspace concept, with mechanisms for addressing each of these system goals (see Table 1):

1) [Large Workspace]. We use two methods to make the workspace larger: We add more (virtual) screen space to the Immediate Workspace by using a version of the Rooms system. We increase the density of information that can be held in the same screen space by using animation and 3D perspective.

2) [Agents]. To delegate part of the workload, we use agents to conduct searches, to organize information into clusters, or to design presentations of information. We manage this by means of a scheduling architecture, called the Cognitive Coprocessor [17], that allows multiple display and application processes to run together. A kind of user interface agent, called Interactive Objects, is used to control and communicate with the system.

3) [Real-Time Interaction]. To maximize human interaction rates, we use the properties of the scheduler to provide highly interactive animation and communication with the Interactive Objects. To tune the system to human action times, we require certain classes of actions to occur at set rates. To enforce these rates under varying computational load, we use a Governor mechanism in our scheduler loop.

4) [Visual Abstractions]. To speed the user's ability to assimilate information and find patterns in it, we use visualization of different abstract information structures, including linear structures, hierarchical structures, continuous data, and spatial data.

Figure 3. Improving the information cost structure in the information access model
There have been many systems that have supported interactive animation-oriented UIs, starting with Ivan Sutherland's thesis [23] at the dawn of computer graphics. As with Sutherland's thesis, early examples required specialized and/or expensive computing machinery and were oriented toward specialized tasks. Cockpit simulation systems are a good example. The architectures for such systems share the animation-loop core with our system. The drop in cost for 3D animated systems and the increase in capability have accelerated experiments in using this technology as the basis of a new mass-market user interface paradigm. One strategy has been to work up from building blocks. A. van Dam's group at Brown has been working on an object-oriented framework for interactive animation, 3D widgets [3], and modeling time in 3D interactive animation systems. Silicon Graphics has recently introduced a high-level 3D toolkit, called Inventor. Another tack has been to drive the development by focusing on applications, for example, continuously running physical simulations. M. Green's group at the University of Alberta has developed a Decoupled Simulation Model for virtual reality systems [20]. Their architectural approach is similar to ours, but focuses more on continuously running simulations. D. Zeltzer and colleagues at MIT [25] have built a constraint-based system for interactive physical simulation. Our system, by contrast, is oriented toward the access and visualization of abstract, nonphysical information of the form that knowledge workers would encounter.

UI Architecture
In order to achieve the goals set forth in Table 1, we have been led to a UI paradigm involving highly interactive animation, 3D, agents, and visualizations. This is one of the UI regimes now being made practical by current and predicted advances in hardware and software technology. There are several problems, however, which need to be addressed in order to realize such a UI paradigm:

1. The Multiple Agent Problem. How can the architecture provide a systematic way to manage the interactions of multiple asynchronous agents?
2. The Animation Problem. How can the architecture provide smooth interactive animation and solve the Multiple Agent problem?
3. The Interaction Problem. How can 3D widgets be designed and coupled to appropriate application behavior?
4. The Viewpoint Movement Problem. How can the user rapidly and simply move the point of view around in a 3D space?
5. The Object Movement Problem. How can objects be easily moved about in a 3D space?
6. The Small Screen Space Problem. How can the dynamic properties of the system be utilized to provide the user with an adequately large workspace?

Many of these problems are well known. The Multiple Agent and Animation problems are less obvious, and since they define the basic organization of the Information Visualizer, we describe them in more detail.

The Multiple Agent Problem. We want our architecture to support multiple agents to which the user can delegate tasks. In fact, we have previously argued [17] that T.B. Sheridan's analysis of the supervisory control of semiautonomous embedded systems [21] can be adapted to describe the behavior of an interactive system as the product of the interactions of (at least) three agents: a user, a user discourse machine (the UI), and a task machine or application. These agents operate with very different time constants.
For example, a search process in an application and the graphical display of its results may be slow, while the user's perception of displayed results may be quite fast. The UI must provide a form of "impedance matching" (dealing with different time constants) between the various agents, as well as translate between different languages of interaction. The application itself may be broken into various agents that supply services, some of which may run on distributed machines (e.g., an agent to filter and sort your mail). Even the UI may itself contain agents (e.g., presentation agents). These additional agents have their own time constants and languages of interaction that must be accommodated by the UI.

Impedance matching can be difficult to accomplish architecturally because all agents want rapid interaction with no forced waiting on other agents, and the user wants to be able to change his or her focus of attention rapidly as new information becomes available. For example, if a user initiates a long search that provides intermediate results as they become available, the user should be able to abort or redirect the search at any point (e.g., based on perception of the intermediate results), without waiting for a display or search process to complete. The UI architecture must provide a systematic way to manage the interactions of multiple asynchronous agents that can interrupt and redirect one another's work.
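To make the interrupt-and-redirect requirement concrete, here is a minimal sketch of an asynchronous search agent that streams intermediate results and can be aborted or redirected at any time. It is our own illustration using Python threads and queues; the Information Visualizer itself uses the Cognitive Coprocessor scheduler described below rather than threads:

```python
import queue, threading

class SearchAgent:
    """A semiautonomous agent: runs in its own thread and posts intermediate
    results to a queue that the UI drains once per animation cycle."""

    def __init__(self, corpus):
        self.corpus = corpus
        self.results = queue.Queue()       # intermediate results for the UI
        self._query = None
        self._restart = threading.Event()
        self._stop = threading.Event()
        threading.Thread(target=self._run, daemon=True).start()

    def search(self, query):               # redirect: replaces the current query
        self._query = query
        self._restart.set()

    def abort(self):
        self._stop.set()
        self._restart.set()

    def _run(self):
        while True:
            self._restart.wait()
            self._restart.clear()
            if self._stop.is_set():
                return
            query = self._query
            for doc in self.corpus:
                if self._restart.is_set():  # user redirected or aborted:
                    break                   # drop this scan immediately
                if query in doc:
                    self.results.put(doc)   # intermediate result, visible right away

# The UI's animation loop polls agent.results each cycle, so the user sees
# hits as they arrive and can call search() or abort() without waiting.
```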
The Animation Problem. Over the last 65 years, animation has grown from a primitive art form to a very complex and effective discipline for communication. Interactive animation is particularly demanding architecturally because of its extreme computational requirements. Smooth interactive animation is particularly important because it can shift a user's task from cognitive to perceptual activity, freeing cognitive processing capacity for application tasks. For example, interactive animation supports object constancy. Consider an animation of a complex object that represents some complex relationships. When the user rotates this object, or moves around the object, animation of that motion makes it possible (even easy, since it is at the level of perception) for the user to retain the relationships of what is displayed. Without animation, the display would jump from one configuration to another, and the user would have to spend time (and cognitive effort) reassimilating the new display. By providing object constancy, animation significantly reduces the cognitive load on the user.

The Animation Problem arises when building a system that attempts to provide smooth interactive animation and solve the Multiple Agent problem. The difficulty is that smooth animation requires a fixed rate of guaranteed computational resource, while the highly interactive and redirectable support of multiple asynchronous agents with different time constants has widely varying computational requirements. The UI architecture must balance and protect these very different computational requirements.

In fact, the animation problem is one aspect of a broader Real-Time Interaction problem. Services need to be delivered under real-time deadline, under varying load, while simultaneously handling the Multiple Agent problem.

The Cognitive Coprocessor
Table 2 summarizes the Information Visualizer's solutions to each of the problems described earlier. The next few sections describe these solutions.

The heart of the Information Visualizer architecture is a controlled-resource scheduler, the Cognitive Coprocessor architecture, which serves as an animation loop and a scheduler for Sheridan's three agents and additional application and interface agents. It manages multiple asynchronous agents that operate with different time constants and need to interrupt and redirect one another's work. These agents range from trivial agents that update display state to continuously running simulations and search agents. This architecture provides the basic solution to the Multiple Agent and Animation problems.

The Cognitive Coprocessor is an impedance matcher between the cognitive and perceptual information processing requirements of the user and the properties of these agents. In general, these agents operate on time constants different from those of the user. There are three sorts of time constants for the human that we want to tune the system to meet: perceptual processing (0.1 second) [1], immediate response (1 second) [15], and unit task (10 seconds) [15].

The perceptual processing time constant. The Cognitive Coprocessor is based on a continuously running scheduler loop and double-buffered graphics. In order to maintain the illusion of animation in the world, the screen must be repainted at least every 0.1 second [1]. The Cognitive Coprocessor therefore has a Governor mechanism that monitors the basic cycle time. When the cycle time becomes too high, cooperating rendering processes reduce the quality of rendering (e.g., leaving off most of the text during motion) so that the cycle speed is increased.

The immediate response time constant. A person can make an unprepared response to some stimulus within about a second [15]. If there is more than a second, then either the listening party makes a back-channel response to indicate that he is listening (e.g., "uh-huh") or the speaking party makes a response (e.g., "uh...") to indicate he is still thinking of the next speech. These serve to keep the parties of the interaction informed that they are still engaged in an interaction. In the Cognitive Coprocessor, we attempt to have agents provide status feedback at intervals no longer than this constant. Immediate response animations (e.g., swinging the branches of a 3D tree into view) are designed to take about a second. If the time were much shorter, the user would lose object constancy and would have to reorient himself. If it were much longer, the user would get bored waiting for the response.

The unit task time constant. Finally, a user should be able to complete some elementary task act within about 10 seconds (say, 5 to 30 seconds) [1, 15]. This is about the pacing of a point-and-click editor. Information agents may require considerable time to complete some complicated request, but the user, in this paradigm, always stays active. A user can begin the next request as soon as sufficient information has developed from the last request, or even in parallel with it.
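A minimal sketch of how a Governor-style mechanism might enforce the perceptual pacing constant, assuming a cooperating renderer that exposes a quality setting; the class and method names are ours, for illustration only, not the Cognitive Coprocessor's actual interfaces:

```python
import time

PERCEPTUAL_CYCLE = 0.1    # seconds: repaint at least this often to sustain animation

class Renderer:
    """Stand-in for a cooperating rendering process with adjustable quality."""
    def __init__(self):
        self.quality = "full"
    def reduce_quality(self):
        self.quality = "reduced"      # e.g., omit most text while in motion
    def restore_quality(self):
        self.quality = "full"
    def redraw(self, scene):
        pass                          # double-buffered repaint would happen here

class Governor:
    """Monitors the basic cycle time and asks renderers to degrade or restore
    rendering quality so the animation loop keeps its 0.1-second pace."""
    def __init__(self, renderer):
        self.renderer = renderer
        self.last = time.monotonic()
    def tick(self):
        now = time.monotonic()
        cycle, self.last = now - self.last, now
        if cycle > PERCEPTUAL_CYCLE:
            self.renderer.reduce_quality()
        else:
            self.renderer.restore_quality()

def animation_cycle(tasks, renderer, governor, scene):
    """One simplified cycle: give each pending task a small slice of work,
    repaint, then let the Governor adapt quality to the measured cycle time."""
    for task in tasks:
        task()                        # e.g., advance a search agent or an animation
    renderer.redraw(scene)
    governor.tick()
```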
The basic control mechanism (inner loop) of the Cognitive Coprocessor is called the Animation Loop (see Figure 4). It maintains a Task Queue, a Display Queue, and a Governor. Built on top of the Animation Loop are an information workspace manager (and support for 3D simulated environments), called 3D/Rooms; support for navigating around 3D environments; and support for Interactive Objects, which provide basic input/output mechanisms for the UI. The task machine (which, for the Information Visualization application, is a collection of visualizers) couples with the Cognitive Coprocessor in various ways. More details of this architecture can be found in [17].

Figure 4. Cognitive Coprocessor interaction architecture

Interactive Objects
The basic building block in the Information Visualizer, called Interactive Objects, forms the basis for coupling user interaction with application behavior and for offloading work to an agent that handles user interaction. Interactive Objects are a generalization of Rooms Buttons [10]. They are used to build complex 3D widgets that represent information or information structure.

Rooms Buttons are used for a variety of purposes, such as movement, new interface building blocks, and task assistance. A Button has an appearance (typically, a bitmap) and a selection action (a procedure to execute when the Button is 'pressed'). The most typical Button in Rooms is a door--when selected, the user passes from one Room to another. Buttons are abstractions that can be passed from one Room to another, and from one user to another via email. Interactive Objects are similar to Buttons, but are extended to deal with gestures, animation, 2D or 3D appearance, manipulation, object-relative navigation, and an extensible set of types.

An Interactive Object can have any 2D or 3D appearance defined by a draw method. The notion of selection is generalized to allow mouse-based gestural input in addition to simple 'pressing'. Whenever a user gestures at an Interactive Object, a gesture parser is invoked that interprets mouse movement and classifies it as one of a small set of easily differentiated gestures (e.g., press, rubout, check, and flick). Once a gesture has been identified, a gesture-specific method is called. These gesture methods are specified when the Interactive Object is created. The gesture parser can be easily extended to allow additional gestures and gesture methods, as long as the new gestures are easily differentiated from other gestures.
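A minimal sketch of the Interactive Object idea: an object bundles a draw method with per-gesture methods supplied at creation time, and the parser dispatches to whichever method matches. The gesture-classification rule and all names below are ours, invented for illustration, not the system's actual parser:

```python
GESTURES = {"press", "rubout", "check", "flick"}   # the small, easily differentiated set

class InteractiveObject:
    """Couples an appearance (draw) with gesture-specific behavior."""

    def __init__(self, draw, **gesture_methods):
        self.draw = draw
        # e.g., press=open_door, flick=dismiss; unknown gestures are ignored
        self.gesture_methods = {g: m for g, m in gesture_methods.items() if g in GESTURES}

    def on_gesture(self, gesture):
        method = self.gesture_methods.get(gesture)
        if method:
            method(self)

def classify_gesture(mouse_path):
    """Toy stand-in for the gesture parser: classify a mouse trajectory.
    A real parser would look at the shape and direction of the stroke."""
    if len(mouse_path) <= 2:
        return "press"
    (x0, y0), (x1, y1) = mouse_path[0], mouse_path[-1]
    return "flick" if abs(x1 - x0) > abs(y1 - y0) else "check"

# Usage: a door-like object whose 'press' gesture moves the user to another room.
door = InteractiveObject(draw=lambda obj: None,
                         press=lambda obj: print("walking through the door"))
door.on_gesture(classify_gesture([(0, 0)]))        # prints the message
```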
There are a number of types of Interactive Objects. In the current implementation, these include static text, editable text, date entry, number entry, set selection, checkmark, simple button, doors, sliders, and thermometers (for feedback and progress indicators). The basic set of 3D widgets supported for Interactive Objects can be easily extended.

Interactive Objects are generalized to the point that every visible entity in the simulated scene can be an Interactive Object (and should be, so that object-relative navigation is consistent across the scene). Thus, the surfaces of the 3D Room (the walls, floor, and ceiling) are Interactive Objects. All the controls (e.g., buttons, sliders, thermometers, text, and editable text) are Interactive Objects. And finally, the application-specific artifacts placed in the room are Interactive Objects.

Search Agents
Search agents are also used to offload user work. The Information Visualizer uses an indexing and search subsystem [5], which allows search for documents by keyword or by iterative "relevance feedback" (e.g., find the documents most like this document). Associative retrieval based on such linguistic searches can be used to highlight portions of an information visualization. Thus we can combine traditional associative searches with structural browsing.

In addition, clustering agents are used to organize information. Using a near-linear clustering algorithm [4], which allows interactive use of clustering, a structure can be induced on an unstructured (or partially structured) body of information. There are several ways this can be of use. For unstructured information, a user can induce a subject hierarchy, which can then be browsed with our hierarchy visualization tools. For information that already has a structure, the clustering results sometimes reveal problems with the existing structure. In general, if a user is unsure about the content of a corpus, and therefore unsure of what kinds of queries to make, clustering can provide an overview of the content of that corpus.
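To illustrate how a clustering agent can impose a browsable structure on an unstructured corpus, here is a minimal single-pass ("leader") clustering sketch. It is our own simplification for illustration; it is not the near-linear algorithm cited as [4]:

```python
def leader_cluster(documents, threshold=0.3):
    """Assign each document to the first cluster whose leader it resembles,
    or start a new cluster; one pass, so roughly linear in corpus size."""
    def words(doc):
        return set(doc.lower().split())

    def similarity(a, b):                  # Jaccard overlap of word sets
        return len(a & b) / len(a | b) if a or b else 0.0

    clusters = []                          # each cluster: {"leader": word set, "docs": [...]}
    for doc in documents:
        w = words(doc)
        for cluster in clusters:
            if similarity(w, cluster["leader"]) >= threshold:
                cluster["docs"].append(doc)
                break
        else:
            clusters.append({"leader": w, "docs": [doc]})
    return clusters

# A user unsure what queries to make can skim the cluster leaders as an
# overview of the corpus, then drill into individual clusters.
corpus = ["3d animation and rendering", "rendering and animation pipelines",
          "document retrieval and relevance feedback"]
for c in leader_cluster(corpus):
    print(len(c["docs"]), sorted(c["leader"]))
```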
3D Navigation and Manipulation
In virtual 3D workspaces, techniques are required for moving the user (the viewpoint) and objects around the space. The Information Visualizer currently has five of these as building blocks, with others under development:

1. The Walking Metaphor
2. Point of Interest Logarithmic Flight
3. Object of Interest Logarithmic Manipulation
4. Doors
5. Overview

Walking Metaphor. The 'Walking Metaphor' [13] has virtual joystick controls superimposed as heads-up displays on the screen and controlled by the mouse. The controls are operators related to the way a human body might be moved (one control for body motion: forward, backward, turn-left, or turn-right; a second for motion in the plane of the body: left, right, up, or down; and a third for rotating the head left, right, up, or down). This scheme is fairly general and works well for exploratory movement, which has no particular object as its target.

Large information spaces, however, involve numerous objects and/or highly detailed objects that require the user to move back and forth from global, orienting views to manipulate detailed information. Therefore, an important requirement for such systems is a movement technique that allows the user to move the viewpoint (1) rapidly through large distances, and (2) with such control that the viewpoint can approach very close to a target without collision. We call this the problem of rapid and controlled, targeted 3D movement [12].

Point of Interest (POI) Logarithmic Flight. Our second navigation technique uses a point-of-interest logarithmic movement algorithm for very rapid, but precise, movement relative to objects of interest [12]. Current techniques for moving the viewpoint [13] are not very satisfactory for targeted movement. They typically exhibit one or more of the following three problems: (1) inefficient interactions and movement trajectories, typically caused by 2D input devices; (2) difficulties controlling high velocities when the technique is based on flying or steering the viewpoint through the workspace; and (3) limits on human reach and precision when the technique is based on directly positioning the viewpoint.

Most viewpoint movement techniques focus on schemes for directly controlling the six degrees of freedom of viewpoint movement (3 position and 3 orientation) or their rate derivatives--a complex control task. Our solution is to have the user select a point of interest (POI) on the surface of an object and use the distance to this POI to calculate a logarithmic motion function. Two keys on the keyboard are used to indicate logarithmic motion along the ray toward and away from the POI. The viewpoint is automatically oriented during the flight to face the surface being approached by using the surface normal at the POI. Another control allows movement perpendicular to the surface normal. This allows for scrolling over extended objects (for example, a virtual blackboard) or circumnavigation around spherical objects (for example, a virtual globe).
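A minimal sketch of a logarithmic approach toward a point of interest: each tick moves the viewpoint a fixed fraction of its remaining distance to the POI, so motion is fast when far away and automatically slows near the surface. The fraction and the function names are ours, chosen for illustration; they are not the parameters used in [12]:

```python
import math

APPROACH_FRACTION = 0.15   # fraction of the remaining distance covered per animation tick

def poi_flight_step(viewpoint, poi, toward=True):
    """Move the viewpoint along the ray to the point of interest.

    Because the step is proportional to the remaining distance, repeated steps
    give a logarithmic approach: large jumps across a big workspace, fine control
    close to the target, and no overshoot into the surface.
    """
    vx, vy, vz = viewpoint
    px, py, pz = poi
    dx, dy, dz = px - vx, py - vy, pz - vz
    f = APPROACH_FRACTION if toward else -APPROACH_FRACTION
    return (vx + f * dx, vy + f * dy, vz + f * dz)

# Holding the "approach" key: the viewpoint closes most of the gap quickly,
# then creeps toward the surface without ever colliding with it.
view, target = (0.0, 0.0, 100.0), (0.0, 0.0, 0.0)
for _ in range(30):
    view = poi_flight_step(view, target, toward=True)
print(round(math.dist(view, target), 3))   # small but still positive
```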
Object of Interest Logarithmic Manipulation. Logarithmic motion can also be used to manipulate objects with the same UI as POI viewpoint movement. The mouse cursor is used to control a ray that determines the lateral position of the object of interest (given the viewpoint coordinates), and the same keyboard keys are used to control the position of the object on the ray. However, the user must be able to control object position at a distance, where logarithmic motion is not effective. The solution is to use an acceleration motion clipped by a logarithmic motion. The object moves slowly at first (allowing control at a distance), then accelerates toward the viewpoint, and finally moves logarithmically slower for control near the viewpoint.

POI logarithmic flight and object of interest logarithmic manipulation both allow simple, rapid movement of the viewpoint and of objects in a 3D space over multiple degrees of freedom and scales of magnitude with only a mouse and two keyboard keys. We believe these techniques provide a mouse-based solution to the viewpoint movement and object movement problems that is as good as or better than those requiring special 3D devices. The chief advantage of a mouse-based solution is that mice are ubiquitous. Also, many users of information visualization (office workers, for example) are not likely to be willing to wear special equipment (such as gloves and helmets). Even so, the techniques could be adjusted to work with 3D devices such as the glove.
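A sketch of the clipped motion profile described above for pulling an object along the ray toward the viewer: an accelerating term governs motion while the object is far away, and the logarithmic (fraction-of-remaining-distance) term takes over and clips it near the viewpoint. The constants are ours, for illustration only:

```python
ACCEL_STEP = 0.5        # world units per tick squared: slow start, then faster, while far away
LOG_FRACTION = 0.15     # fraction of remaining distance per tick, dominant near the viewpoint

def object_pull_step(distance_to_viewpoint, ticks_held):
    """Distance the object moves toward the viewpoint in one tick.

    Far away, the accelerating term is the smaller one, so the object starts
    slowly (fine control at a distance) and then speeds up.  Near the viewpoint
    the logarithmic term becomes smaller, clipping the motion so fine control
    returns and the object never passes the viewpoint.
    """
    accelerating = ACCEL_STEP * ticks_held
    logarithmic = LOG_FRACTION * distance_to_viewpoint
    return min(accelerating, logarithmic)

# Holding the "pull" key on an object 80 units away:
d = 80.0
for tick in range(1, 40):
    d -= object_pull_step(d, tick)
print(round(d, 3))   # the object ends up close to the viewpoint, never passing it
```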
Doors. The 3D/Rooms system supports Doors that allow a user to move from one room (or workspace) through to a home position in another room. The Door is an Interactive Object that supports either manual control or scripted animation of opening and walking through to the other room.

Overview. As with Rooms, 3D/Rooms contains an Overview (see Figure 1) allowing the user to view all the 3D workspaces simultaneously. This is a navigation technique that lets the user view all the rooms and go to any room directly. In 3D/Rooms the user can also reach into the Rooms from the Overview, move about in them, and manipulate their objects.

3D/Rooms
3D/Rooms extends the logic of our Rooms system to three dimensions. In the classical desktop metaphor and the original Rooms system, the view of a Room is fixed. In 3D/Rooms, the user is given a position and orientation in the Room, and can move about the Room, zoom in to examine objects closely, look around, or even walk through doors into other Rooms. Thus 3D/Rooms is the same as Rooms, except that visualization artifacts (implemented as Interactive Objects) replace a collection of windows, and users can have arbitrary positions and orientations in the Rooms.

The effect of 3D/Rooms is to make the screen space for immediate storage of information effectively larger (in the sense that the user can get to a larger amount of ready-to-use information in a short time). The effect of rapid zooming, animation, and 3D is to make the screen space effectively denser (in the sense that the same amount of screen can hold more objects, which the user can zoom into or animate into view in a short time). By manipulating objects or moving in space, the user can disambiguate images, reveal hidden information, or zoom in for detail--rapidly accessing more information. Both the techniques for making the Immediate Storage space virtually larger and the techniques for making the space virtually denser should make its capacity larger, hence the average cost of accessing information lower, hence the cost of working on large information-intensive tasks lower.

Information Visualization
Recent work in scientific visualization shows how the computer can serve
