generally perceptible. Any "chattering" induced by sharp surface features may be dealt with by either increasing the sponginess of the surface, or increasing the size of the virtual surface in the user's hand space, thereby effectively increasing the spatial sampling rate for the same hand movement velocity.

The assumptions made about reasonable and expected hand movements could easily be enforced by the addition of velocity-dependent forces to restrain motion to a reasonable speed. Users have been surprisingly adept, however, at tuning their hand gestures to give the maximum sensitivity to surface features, and so a need for such restrictions has not yet been seen.
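Such a restraint could be as simple as a viscous drag that grows with hand speed beyond a threshold. The sketch below is purely illustrative of the idea (the NM does not implement it, and the function name, threshold, and damping constant are invented):

```python
def restraining_force(velocity_mm_s, v_max_mm_s=100.0, damping=0.5):
    """Velocity-dependent restraining force: zero below a speed
    threshold, then a viscous drag opposing the excess speed.
    Returns a signed 1-D force; a real system would act per axis."""
    excess = abs(velocity_mm_s) - v_max_mm_s
    if excess <= 0.0:
        return 0.0
    direction = 1.0 if velocity_mm_s > 0 else -1.0
    return -direction * damping * excess
```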

Figure 1 - Measuring a sample of TMV. Height in 3D space is exaggerated by a factor of 5.

4. TOOLS

4.1 DISPLAY TOOLS

The NM has inherited the standard set of virtual-reality (VR) tools from the UNC vlib, such as grabbing, scaling, and flying [3]. In addition, tools are added as their desirability becomes apparent during use of the system. When used immersively, fixed lighting sources have proved sufficient, as the user's head position relative to the surface and light determines specular highlighting. By moving about in the scene, the optimal angle for viewing features of interest can be found. While working in groups, however, it frequently proves advantageous to fix a single hypothetical user's position in space, and display to a projection screen a single view which all users share. Transition from fixed view to head-tracked may be performed on the fly for investigation of specific features, the subtleties of which are often more easily discerned in the immersive mode despite the lower resolution display. While in fixed view, it is helpful to adjust the lighting source to bring out specific details. A virtual pointer is supplied to allow the user to point to the directional light source. The lighting of the scene is updated as the user moves the pointer until illumination becomes optimal.

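The pointer-to-light mapping can be sketched as below; this is a minimal illustration under assumed conventions, not the NM's actual code, and the function and argument names are invented:

```python
import math

def light_direction_from_pointer(hand_pos, pointer_tip):
    """Map a virtual pointer to a directional-light vector: the light
    shines along the pointer's axis, from hand toward tip.  The names
    hand_pos and pointer_tip are illustrative, not the NM's API."""
    d = [t - h for h, t in zip(hand_pos, pointer_tip)]
    n = math.sqrt(sum(c * c for c in d))
    if n == 0.0:
        raise ValueError("pointer has zero length")
    return [c / n for c in d]  # unit vector used as the light direction
```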
4.2 MEASURING TOOLS

Quantitative tools are essential for full understanding of the data. Often, features are distinguishable only by their absolute size. The user may create a measuring rectangle perpendicular to the horizontal plane by selecting two points, such as at the base and peak of a feature. The rectangle is displayed with height and width in nanometers, as well as a profile of the surface intersecting the rectangle. This display may be independently positioned by the user, and persists until being explicitly dismissed, giving a reference scale for the rest of the image. Since the horizontal shape of features displayed is the convolution of features on the surface with the probe tip, which has a typical radius of curvature of 30 to 50 nm, features tend to appear flattened. This appearance may be corrected by vertically stretching the measurement rectangle until the profile of a reference feature takes the correct shape (e.g. a colloidal ball has the same height as width). The height of the rest of the scene is then scaled accordingly.
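The calibration step amounts to computing one scale factor from a reference feature and applying it to the whole height field. A minimal sketch, assuming heights are held in a simple 2-D list (the function names are invented):

```python
def vertical_scale_factor(displayed_height_nm, expected_height_nm):
    """Correction for tip-convolution flattening: the factor by which
    heights must be stretched so a reference feature (e.g. a colloidal
    ball whose height should equal its known diameter) takes the
    correct shape on screen."""
    return expected_height_nm / displayed_height_nm

def rescale_heights(height_grid_nm, factor):
    """Apply the calibration factor to the whole height field."""
    return [[z * factor for z in row] for row in height_grid_nm]
```

For example, a 15 nm ball whose profile reads only 5 nm tall yields a factor of 3, which is then applied to every sample in the scene.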

4.3 VCR TOOLS

While the NM is primarily a real-time data visualization system, it is also valuable for off-line analysis. Snapshots of the scene may be saved to disk at any time. Additionally, the stream of data returning from the microscope is saved and may be replayed interactively, with all tools available except those involving modification, which naturally require an actual surface and microscope. Standard VCR functions are supported, such as control over replay rate, fast forward, rewind, and absolute positioning in the stream. These afford quick review of selected segments within a stream which may be quite large, having been acquired over the span of up to an hour. The viewpoint and vertical scale can be different in the replay than in the original experiment, as they do not depend on the surface data.

The NM is not limited to data collected within the interface. Simple file format conversion routines have been written to allow the importation of data collected elsewhere and by microscopes other than the Digital Instruments Nanoscope III currently used in the system. Data received from the UCLA materials science group is investigated using the interface as a 3-D viewer, and video tapes returned to the group of "walk-throughs" of the surface under study. As a visualization tool alone the interface has proven worthwhile in the understanding of complex surface features.

4.4 MODIFICATION TOOLS

A set of physical knobs on the ARM control the sponginess of the surface and the forces applied by the tip to the sample. That the perceived hardness of the surface determines sensitivity to smaller details is straightforward. The implications of tip force on haptic response are more subtle. If the force applied by the microscope is too great, a feature will be displaced immediately, and will never be felt. If the force is too small, the feature will be felt, but no modification made. The force necessary to modify the surface is determined by factors such as the exact tip shape, the direction of the force, humidity, and surface contaminants, and may vary widely across a given sample and over periods of time as short as tens of minutes. Without knowing a priori the force required, the user must have immediate control over the forces applied. The interface allows the user to control the position of the tip and feel the surface with one hand, while the other hand adjusts physical knobs controlling the force level. The microscope may be toggled between non-damaging and modification modes with a thumb switch, to allow exact positioning of the tip by feel before the application of forces to features.

To supplement the modification mode's immediate haptic display, after a modification event a small area around the event is scanned and updated. This area is generally of greatest interest and most likely to be out of date, and is refreshed in about a hundredth of the time needed to rescan the entire surface. After quickly updating that subset of the grid, imaging of the full selected region resumes.
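The local-update scheme can be sketched as a bounded window rescan around the event; a minimal illustration, with `scan_height` standing in for a height query to the microscope (all names are assumptions, not the NM's code):

```python
def rescan_window(scan_height, grid, cx, cy, half_width=5):
    """Re-image only a small window centred on a modification event at
    grid cell (cx, cy), rather than the whole grid; for a 300x300 scan
    an 11x11 patch is roughly 1/100 of the full rescan time.
    scan_height(x, y) represents asking the microscope for a fresh
    sample at that cell."""
    rows, cols = len(grid), len(grid[0])
    for y in range(max(0, cy - half_width), min(rows, cy + half_width + 1)):
        for x in range(max(0, cx - half_width), min(cols, cx + half_width + 1)):
            grid[y][x] = scan_height(x, y)
    return grid
```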
BUNGIE - EXHIBIT 1006 - PART 2 OF 14

Area Sweep:  Since the forces applied by the tip are always under immediate user control, the entire area being scanned may be swept out simply by increasing the force until all materials are removed as desired. This is the interactive method most commonly supported by commercial AFMs. While an efficient way to clear a region, it has several disadvantages. This method is inappropriate for selective removal of material within a rectangle. The force required to move material in one area may be enough to damage the substrate or other desired features nearby. Moreover, the clearing may be incomplete, with ragged edges around the border or debris left in the region which must then be cleaned out.

Line Tool:  The etching of circuits from a conducting film on non-conducting substrate frequently requires straight lines connecting cleared regions or isolating conducting regions. The user may select any two endpoints of a line segment, and have the tip scratch between the two points at a preset modification force.
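The tip path for such a scratch is simple linear interpolation between the two selected endpoints. A hedged sketch (the sample spacing `step_nm` is an assumed parameter, not taken from the paper):

```python
import math

def line_tool_path(p0, p1, step_nm=1.0):
    """Points the tip visits while scratching a straight line from p0
    to p1 at the preset modification force.  step_nm is the sample
    spacing along the line; endpoints are (x, y) pairs in nm."""
    (x0, y0), (x1, y1) = p0, p1
    n = max(1, int(math.hypot(x1 - x0, y1 - y0) / step_nm))
    return [(x0 + (x1 - x0) * i / n, y0 + (y1 - y0) * i / n)
            for i in range(n + 1)]
```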
Engrave Tool:  Many commercial microscopes support lithography techniques, allowing the user to preset a path to be traced by the tip at a specific force. These afford efficient etching of an exact known outline into a surface, but leave the same jagged edges and debris as the area sweep. Cleaning these edges is easily performed using the engraving tool, in which the tip tracks the hand exactly over the surface. The effect is like the user having an ice pick with which to feel the surface, scratch it, and push about materials on it. (Depending on the tip radius relative to surface features, it may be a very blunt ice pick.) This gives the finest degree of control available with the microscope.

Figure 2 - A segment of TMV is separated using the sweep tool. The two black lines extending upward toward the hand (not shown) define the flat edge of the virtual broom. The two parallel lines of white markers indicate the path having been swept out. The image has not yet been updated to show the removal of the segment.

Sweep Tool:  As can easily be imagined, pushing materials about with an ice pick might sometimes be less than convenient. Often, a different instrument is more appropriate. While there is only one physical tip, control of its motion can simulate other, virtual tools. A virtual whisk broom is provided for selective clearing of regions and manipulation of larger objects, or even small objects which are to be swept in a general direction, and then positioned precisely using the engrave tool. In sweep mode, the tip oscillates between the tracked position of the user's hand and a point determined by the orientation of the hand (figure 2). The magnitude and direction of the oscillation are therefore immediately and intuitively determined by the user, giving the illusion of an extended tip, the flat edge of which may be used to scrape out selected areas or push objects. This complements the area sweep mode in that, while it lacks the precise rectangular boundary of area sweep, it is also not limited to any rectangle. The "edge" may become wide or narrow, and change orientation relative to the surface plane as required to navigate through features which must be left undisturbed.

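The oscillation described above can be sketched as a triangle wave between the tracked hand position and a second point offset along a vector set by the hand's orientation. This is an illustrative reconstruction under assumed names, not the NM's actual interface:

```python
def sweep_tip_position(hand_pos, edge_vec, t, freq_hz=1.0):
    """Sweep mode: the tip oscillates (triangle wave) between the
    tracked hand position and hand_pos + edge_vec, where edge_vec is
    derived from the hand's orientation and sets the width and
    direction of the virtual broom edge.  t is time in seconds."""
    phase = (t * freq_hz) % 1.0
    s = 2.0 * phase if phase < 0.5 else 2.0 * (1.0 - phase)  # 0 -> 1 -> 0
    return tuple(h + s * e for h, e in zip(hand_pos, edge_vec))
```

Widening the broom is then just lengthening `edge_vec`; reorienting it changes the sweep direction, matching the behavior the text describes.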
5. RESULTS

5.1 BALL PUSHING

The manipulation of colloidal gold particles has proved an excellent test-bed for the interface, in addition to being a worthwhile pursuit in its own right. Controlled movement of the balls would enable the performance of experiments determining physical properties and materials characteristics which are currently only predicted by theory [1]. Balls are typically deposited randomly about a surface. Isolation and precise positioning of individual balls, either into patterns or within other structures, is difficult, if not impossible, using means available with commercial microscopes. The interaction between balls and the microscope tip is unpredictable. At the same time, the balls are rigid enough to easily be felt with the NM's haptic display, and image clearly.

Figure 3 - Colloidal gold balls arranged in a ring. The hand icon is front right.

In one experiment, a thin gold wire (~50 nm wide) was etched into a 15 nm thick gold film on mica substrate using standard AFM lithography techniques. A gap approximately 100 nm wide was then cut into the wire, and colloidal gold balls of 15 nm diameter distributed over the surface. The user was then able to select a ball and maneuver it through the other particles in the area and position it in the gap, without disturbing other material in the region, or damaging the wire. By using a light force in engrave mode, the user could feel the ball on the edge of the tip, and so could follow the ball closely and detect when the ball took an erratic jump, quickly compensating in the direction of pushing, or waiting for the image to be updated to relocate the ball. Fig. 4 shows the trace of the pushing events and the final pushes of the ball into the gap. The entire sequence was performed in a matter of minutes. It is not clear how or even if the same result could be accomplished using methods currently available with commercial AFMs. The experiment provides a convincing proof of concept for the manipulation of a colloid into a gap in a wire which is connected to macroscopic leads for the electrical characterization of the particle. Such a circuit is currently being fabricated at UNC-CH, and characterization experiments are expected in the coming weeks.

Colloidal gold balls have also been arranged into structures such as a ring and a matrix. The ability to arrange the balls into specific patterns is useful both in the fabrication of circuits from the balls, and in comparison with theoretic refraction patterns in near-field spectroscopy studies. Work is also currently underway to position balls in arrangements for which theoretic predictions of refraction patterns exist.

5.2 VIRUS MANIPULATIONS

The positioning of a virus in a circuit as described above would offer a unique ability to characterize the electrical properties of the virus. Manipulation of the virus is even more challenging than the gold balls, however. Samples of Tobacco Mosaic Virus (TMV) were distributed over a mica substrate. In pushing it with the engrave tool, the TMV was found to be very easy to bend and break. The tip could also be positioned on the TMV and the force turned up slowly until the tip ruptured the virus, an event which could be easily felt by the user. But while the dissection of TMV particles was interesting in its own right, moving a particle as a whole unit was also desirable.

User frustrations with trying to push an extended flexible object with a sharp instrument led to the introduction of the sweep tool. Intuitively, it would have been the tool of choice in an analogous real-world task. Building the illusion of the broad edge from the reality of the microscope's single sharp tip proved easier than coming up with the initial insight that a blunter instrument would sometimes be preferable. In the natural and intuitive environment in which users had been interacting with the TMV, however, that insight and the request for its implementation were natural and forthcoming.

In positioning a virus particle, the sweep tool proved ideal. The broad edge of the tip oscillations along the length of the TMV applied a more uniform force, moving and rotating it as a unit. Again as proof of concept, a letter T was formed of TMV segments as shown in Fig. 5. As can be seen in the figure, the TMV particles obtain a slightly rumpled appearance after they have been moved. This indicates that, while we are moving the particles as units, we are not doing so without damage. We are investigating possible virtual tools that might be even gentler still, in the hopes of moving the particles while leaving them intact.

6. CONCLUSIONS

The NanoManipulator provides an intuitive interface hiding the details of performing complex tasks using an SPM. Surface features are more easily recognized with the combination of 3-D topography and haptic feedback in real time. Feeding the user's senses more fully allows faster development of manipulation skills. The collaborative nature of the project allows new tools to be developed as the needs of the users become more sophisticated. Many tasks performed using the NM are not well enough understood to be automated, so they require real-time feedback to and response from the user. It is hoped that the NM will provide the insight into the manipulation process necessary to automate the fabrication of mesoscopic and nanometer-scale circuits. The NM is valuable now in the building of one-of-a-kind structures which will contribute significantly to the areas of materials science and solid state physics.

ACKNOWLEDGEMENTS

We are grateful to the National Institutes of Health (grant #RR02170), the Defense Advanced Research Projects Agency (contract #DABT63-92-C-0048), the National Science Foundation (grant #IRI-9202424), and the Office of Naval Research for funding and support.

REFERENCES

1. Devoret, M. H., D. Esteve and C. Urbina, "Single-electron Transfer in Metallic Nanostructures", Nature 360, 547 (1992).

2. Kastner, M. A., Reviews of Modern Physics, 64, 849 (1992).

3. Robinett, Warren, and Richard Holloway, Implementation of Flying, Scaling, and Grabbing in Virtual Worlds. Proceedings of the ACM Symposium on Interactive 3D Graphics (Cambridge, MA, 1992), special issue of Computer Graphics, ACM SIGGRAPH, New York, 1992.

4. Taylor, Russell, Warren Robinett, Vernon L. Chi, Frederick P. Brooks, Jr., William V. Wright, R. Stanley Williams, and Erik J. Snyder, The Nanomanipulator: A Virtual-Reality Interface for a Scanning Tunneling Microscope. Proceedings of SIGGRAPH '93 (Anaheim, California, August 1-6, 1993). In Computer Graphics Proceedings, Annual Conference Series, 1993, ACM SIGGRAPH, New York, 1993, pp. 127-134.

5. Mark, William R. and Scott C. Randolph, "UNC-CH Force Feedback Library", UNC-CH Computer Science Dept. Technical Report #TR94-056, 1994.

6. Ouh-young, Ming, Force Display in Molecular Docking, Ph.D. Thesis, University of North Carolina at Chapel Hill, 1990.

7. Fuchs, Henry, John Poulton, John Eyles, Trey Greer, Jack Goldfeather, David Ellsworth, Steve Molnar, Greg Turk, Brice Tebbs, and Laura Israel, Pixel-Planes 5: A Heterogeneous Multiprocessor Graphics System Using Processor-Enhanced Memories. Proceedings of SIGGRAPH '89. In Computer Graphics, 23, 3 (1989), 79-88.

8. Stroscio, Joseph A. and D. M. Eigler, Atomic and Molecular Manipulation with the Scanning Tunneling Microscope. Science, 254 (1991), 1319-1326.

9. Sarid, Dror, Scanning Force Microscopy, Oxford Series in Optical and Imaging Sciences, Oxford University Press, NY, 1991.

10. Chen, C. Julian, Introduction to Scanning Tunneling Microscopy, Oxford Series in Optical and Imaging Sciences, Oxford University Press, NY, 1993.

Combatting Rendering Latency

Marc Olano, Jon Cohen, Mark Mine, Gary Bishop
Department of Computer Science, UNC Chapel Hill
{olano,cohenj,mine,gb}@cs.unc.edu

ABSTRACT

Latency or lag in an interactive graphics system is the delay between user input and displayed output. We have found latency and the apparent bobbing and swimming of objects that it produces to be a serious problem for head-mounted display (HMD) and augmented reality applications. At UNC, we have been investigating a number of ways to reduce latency; we present two of these. Slats is an experimental rendering system for our Pixel-Planes 5 graphics machine guaranteeing a constant single NTSC field of latency. This guaranteed response is especially important for predictive tracking. Just-in-time pixels is an attempt to compensate for rendering latency by rendering the pixels in a scanned display based on their position in the scan.

1 INTRODUCTION

1.1 What is latency?

Performance of current graphics systems is commonly measured in terms of the number of triangles rendered per second or in terms of the number of complete frames rendered per second. While these measures are useful, they don't tell the whole story.

Latency, which measures the start to finish time of an operation such as drawing a single image, is an often neglected measure of graphics performance. For some current modes of interaction, like manipulating a 3D object with a joystick, this measure of responsiveness may not be important. But for emerging modes of "natural" interaction, latency is a critical measure.

1.2 Why is it there?

All graphics systems must have some latency simply because it takes some time to compute an image. In addition, a system that can produce a new image every frame may (and often will) have more than one frame of latency. This is caused by the pipelining used to increase graphics performance. The classic problem with pipelining is that it provides increased throughput at a cost in latency. The computations required for a single frame are divided into stages and their execution is overlapped. This can expand the effective time available to work on that single frame since several frames are being computed at once. However, the latency is as long as the full time spent computing the frame in all of its stages.

1.3 Why is it bad?

Latency is a problem for head-mounted display (HMD) applications. The higher the total latency, the more the world seems to lag behind the user's head motions. The effect of this lag is a high viscosity world.

The effect of latency is even more noticeable with see-through HMDs. Such displays superimpose computer generated objects on the user's view of the physical world. The lag becomes obvious in this situation because the real world moves without lag, while the virtual objects shift in position during the lag time, catching up to their proper positions when the user stops moving. This "swimming" of the virtual objects not only detracts from the desired illusion of the objects' physical presence, but also hinders any effort to use this technology for real applications.

Most see-through HMD applications require a world without these "swimming" effects. If we hope to have applications present 3D instructions to guide the performance of "complex 3D tasks" [9], such as repairs to a photocopy machine or even a jet engine, the instructions must stay fixed to the machine in question. Current research into the use of see-through HMDs by obstetricians to visualize 3D ultrasound data indicates the need for lower latency visualization systems [3]. The use of see-through HMDs for assisting surgical procedures is unthinkable until we make significant advances in the area of low latency graphics systems.

Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the ACM copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Association of Computing Machinery. To copy otherwise, or to republish, requires a fee and/or specific permission.
1995 Symposium on Interactive 3D Graphics, Monterey CA USA
© 1995 ACM 0-89791-736-7/95/0004...$3.50

2 COMBATTING LATENCY

2.1 Matching

A possible solution to this lag problem is to use video techniques to cause the user's view of the real world to lag in synchronization with the virtual world. However, this only works while the latency is relatively small.

2.2 Prediction

Another solution to the latency problem is to predict where the user's head will be when the image is finally displayed [10, 1, 2]. This technique, called predictive tracking, involves using both recent tracking data and accurate knowledge of the system's total latency to make a best guess at the position and orientation of the user's head when the image is displayed inside the HMD. Azuma states that for prediction to work effectively, the lag must be small and consistent. In fact he uses the single field-time latency rendering system (Slats), which we will discuss shortly, to achieve accurate prediction.

`
`

`
2.3 Rendering latency: compensation and reduction

2.3.1 Range of solutions

There is a wide spectrum of approaches that can be used to reduce lag in image generation or compensate for it. One way to compensate for image generation latency is to offset the display of the computed image based upon the latest available tracking data.

This technique is used, for example, by the Visual Display Research Tool (VDRT), a flight simulator developed at the Naval Training Systems Center in Orlando, Florida [5, 6]. VDRT is a helmet-mounted laser projection system which projects images onto a retro-reflective dome (instead of using the conventional mosaic of high resolution displays found in most flight simulators). In the VDRT system, images are first computed based upon the predicted position of the user's head at the time of image display. Immediately prior to image readout, the most recently available tracking data is used to compute the errors in the predicted head position used to generate the image. These errors are then used to offset the raster of the laser projector in pitch and yaw so that the image is projected at the angle for which it was computed. Rate signals are also calculated and are used to develop a time dependent correction signal which helps keep the projected image at the correct spatial orientation as the projector moves during the display field period.

Similarly, Regan and Pose are building the prototype for a hardware architecture called the address recalculation pipeline [15]. This system achieves a very small latency for head rotations by rendering a scene on the six faces of a cube. As a pixel is needed for display, appropriate memory locations from the rendered cube faces are read. A head rotation simply alters which memory is accessed, and thus contributes nothing to the latency. Head translation is handled by object-space subdivision and image composition. Objects are prioritized and re-rendered as necessary to accommodate translations of the user's head. The image may not always be correct if the rendering hardware cannot keep up, but the most important objects, which include the closest ones, should be rendered in time to keep their positions accurate.

Since pipelining can be a huge source of lag, latency can be reduced by reducing pipelining or basing it on smaller units of time like polygons or pixels instead of frames. Most commercial graphics systems are at least polygon pipelined. Whatever level the pipelining, a system that computes images frame by frame is by necessity saddled with at least a frame time of latency. Other methods overcome this by divorcing the image generation from the display update rate.

Frameless rendering [4] can be used to reduce latency in this way. In this technique pixels are updated continuously in a random pattern. This removes the dependence on frames and fields. Pixels may be transformed at whatever rate is most convenient. This reduces latency at the cost of image clarity since only a portion of the pixels are updated. The transform rate can remain locked to the tracker update rate or separated on a pixel-by-pixel basis as with the just-in-time pixels method, discussed next.

2.3.2 Just-in-time pixels (JITP)

We will present a technique called just-in-time pixels, which deals with the placement of pixels on a scan-line display as a problem of temporal aliasing [14]. Although the display may take many milliseconds to refresh, the image we see on the display typically represents only a single instant in time. When we see an object in motion on the display, it appears distorted because we see the higher scan lines before we see the lower ones, making it seem as if the lower part of the object lags behind the upper part. Avoidance of this distortion entails generating every pixel the way it should appear at the exact time of its display. This can lead to a reduction in latency since neither the head position data, nor the output pixels are limited to increments of an entire frame time. This idea is of limited usefulness on current LCD HMDs with their sluggish response. However, it works quite well on the miniature CRT HMDs currently available and is also applicable to non-interactive video applications.

2.3.3 Slats

As a more conventional attack on latency, we have designed a rendering pipeline called Slats as a testbed for exploring fixed and low latency rendering [7]. Unlike just-in-time pixels, Slats still uses the single transform per frame paradigm. The rendering latency of Slats is exactly one field time (16.7 ms). This is perfect for predictive tracking which requires low and predictable latency. We measure this rendering latency from the time Slats begins transforming the data set into screen coordinates to the time the display devices begin to scan the pixel colors from the frame buffers onto the screens.

3 MEASURING LATENCY

We have made both external and internal measurements of the latency of the Pixel-Planes 5 PPHIGS graphics library [13, 7]. These have shown the image generation latency to be between 54 and 57 ms for minimal data sets. The internal measurement methods are quite specific to the PPHIGS library. However, the external measurements can be taken for any graphics system.

Figure 1: Apparatus for external measurement of tracking and display latency.

Figure 2: Pixel-Planes 5 system architecture.

The external latency measurement apparatus records three timing signals on a digital oscilloscope (see figure 1). A pendulum and LED/photodiode pair provide the reference time for a real-world event - the low point of the pendulum's arc. A tracker on the pendulum is fed into the graphics system. The graphics system

`
starts a new frame when it detects the pendulum's low point from the tracking data. A D/A converter is used to tell the oscilloscope when the new frame has started. Frames alternate dark and light and a photodiode attached to the screen is used to tell when the image changes. The tracking latency was the time between the signal from the pendulum's photodiode and the rendering start signal out of the D/A converter. The rendering latency was the time between the signal out of the D/A converter and the signal from the photodiode attached to the screen. These time stamps were averaged over a number of frames.

The internal measurements found the same range of rendering latencies. The test was set up to be as fair as possible given the Pixel-Planes 5 architecture (figure 2, explained in more detail later). The test involved one full screen triangle for each graphics processor. This ensured that every graphics processor would have work to do and would have rendering instructions to send to every renderer. The first several frames were discarded to make sure the pipeline was full. Finally, latency determined from time stamps on the graphics processors was averaged over a number of frames.

4 JUST-IN-TIME PIXELS

4.1 The idea

When using a raster display device, the pixels that make up an image are not displayed all at once but are spread out over time. In a conventional graphics system generating NTSC video, for example, the pixels at the bottom of the screen are displayed almost 17 ms after those at the top. Matters are further aggravated when using NTSC video by the fact that not all of the lines of an NTSC image are displayed in one raster scan but are in fact interlaced across two fields. In the first field only the odd lines in an image are displayed, and in the second field only the even. Thus, unless animation is performed on fields (i.e. generating a separate image for each field), the last pixel in an image is displayed more than 33 ms after the first. The problem with this sequential readout of image data is that it is not reflected in the manner in which the image is computed.

Typically, in conventional computer graphics animation, only a single viewing transform is used in generating the image data for an entire frame. Each frame represents a point sample in time which is inconsistent with the way in which it is displayed. As a result, as shown in figure 3 and plate 1, the image does not truly reflect the position of objects (relative to the view point of the camera) at the time of display of each pixel.

Figure 3: Image generation in conventional computer graphics animation. Scanline x is displayed at time tx, scanline y is displayed at time ty.

Figure 4: Image generation using just-in-time pixels.

A quick "back of the envelope" calculation can demonstrate the magnitude of the errors that result if this display system delay is ignored. Assuming, for example, a camera rotation of 200 degrees/second (a reasonable value when compared with peak velocities of 370 degrees/second during typical head motion - see [12]) we find:

Assume:
1) 200 degrees/sec camera rotation
2) camera generating a 60 degree Field of View (FOV) image
3) NTSC video: 60 fields/sec, ~600 pixels/FOV horizontal resolution

We obtain:

  200 degrees/sec x (1 sec / 60 fields) = 3.3 degrees/field camera rotation

Thus in a 60 degree FOV image when using NTSC video:

  3.3 degrees x (1 FOV / 60 degrees) x (600 pixels / FOV) = 33 pixels error
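The field-time registration arithmetic, and the just-in-time pixels idea of giving each scanline the camera pose at its own display time, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation; the function names and the assumption of a uniform top-to-bottom scan are ours:

```python
FIELDS_PER_SEC = 60.0           # NTSC field rate
FIELD_TIME = 1.0 / FIELDS_PER_SEC

def registration_error_pixels(rot_deg_per_sec, fov_deg=60.0, pixels_per_fov=600.0):
    """Back-of-envelope error: degrees the camera sweeps during one
    field, converted to pixels of horizontal field of view."""
    deg_per_field = rot_deg_per_sec * FIELD_TIME
    return deg_per_field * pixels_per_fov / fov_deg

def scanline_camera_angle(start_deg, rot_deg_per_sec, scanline, total_scanlines):
    """Just-in-time pixels: render each scanline with the camera angle
    at that scanline's own display time within the field, assuming the
    scan progresses uniformly from the top line to the bottom."""
    t = FIELD_TIME * scanline / total_scanlines
    return start_deg + rot_deg_per_sec * t
```

With the paper's numbers (200 degrees/sec, 60 degree FOV, 600 pixels), `registration_error_pixels` reproduces the roughly 33-pixel error derived above.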

`
Thus with camera rotation of approximately 200 degrees/second, registration errors of more than 30 pixels (for NTSC video) can occur in one field time. The term registration is being used here to describe the correspondence between the displayed image and the placement of objects in the computer generated world.

Note that even though the above discussion concentrates on camera rotation, the argument is valid for any relative motion between the camera and virtual objects. Thus, even if the camera's view point is unchanging, objects moving relative to the camera will exhibit the same registration errors as above. The amount of error is dependent upon the velocity of the object relative to the camera's view direction. If obje
