United States Patent [19]
Cooper

[11] Patent Number: 6,118,456
[45] Date of Patent: Sep. 12, 2000
[54] METHOD AND APPARATUS CAPABLE OF PRIORITIZING AND STREAMING OBJECTS WITHIN A 3-D VIRTUAL ENVIRONMENT

[75] Inventor: David G. Cooper, Los Gatos, Calif.

[73] Assignee: Adaptive Media Technologies, Sunnyvale, Calif.

[21] Appl. No.: 09/054,338

[22] Filed: Apr. 2, 1998

[51] Int. Cl.7 .................................... G06F 15/00
[52] U.S. Cl. ........................................ 345/433
[58] Field of Search ............. 345/433, 419, 421, 422, 425
[56] References Cited

U.S. PATENT DOCUMENTS

5,659,691   8/1997  Durward et al. ................ 345/419
5,675,721  10/1997  Freedman et al. ............... 345/419
OTHER PUBLICATIONS

Arikawa, M. et al., "Dynamic LoD for QoS Management in the Next Generation VRML," Proceedings of the Intl. Conf. on Multimedia Computing and Systems, Jun. 17, 1996.

Funkhouser, T.A. and Sequin, C.H., "Adaptive Display Algorithm for Interactive Frame Rates during Visualization of Complex Virtual Environments," Computer Graphics Proceedings, Annual Conf. Series 1993, pp. 247-254.

Airey, J.M.; Rohlf, J.H.; Brooks, F.P. Jr.; "Towards Image Realism with Interactive Update Rates in Complex Virtual Building Environments," Computer Graphics, ACM SIGGRAPH Special Issue on 1990 Symposium on Interactive 3D Graphics, vol. 24, no. 2, Mar. 1990, pp. 41-50.

Blake, E.H.; "A Metric for Computing Adaptive Detail in Animated Scenes using Object-oriented Programming," Eurographics '87, Elsevier Science Publishers B.V., Proc. of European Computer Graphics Conf. and Exhibition, Amsterdam, Aug. 24-28, 1987, pp. 295-307.

Brooks, F.P. Jr.; "Walkthrough-A Dynamic Graphics System for Simulating Virtual Buildings," Abstracts from the 1986 Workshop on Interactive 3D Graphics, Computer Graphics, vol. 21, no. 1, Jan. 1987, p. 3.

Funkhouser, T.A.; Sequin, C.H.; Teller, S.J.; "Management of Large Amounts of Data in Interactive Building Walkthroughs," ACM SIGGRAPH Special Issue on 1992 Symposium on Interactive 3D Graphics, pp. 11-20.

Primary Examiner: Phu K. Nguyen
Attorney, Agent, or Firm: Pillsbury Madison & Sutro LLP
[57] ABSTRACT

A method of assessing objects in a 3-D graphical scene provides discovery of the most important objects in that scene from the viewer's perspective at any instant in time. These objects are then queued in priority order, and requests for each object's data are sent to the server at a rate determined by the available network bandwidth. The importance of each object is recalculated per scene update, and new requests are sent out based on these values. Only data requests that can be responded to within the next update cycle are sent, while the remaining request messages are retained. This allows for immediate response to a changing view position, and reduces visual latency, defined as the time that elapses until an object having a data deficit gets that data. Latency is reduced for important objects at the cost of lesser objects. Because important objects are those which contribute most to the visual scene, the overall richness of the scene appears to grow faster than the number of objects received.

18 Claims, 5 Drawing Sheets
[Representative drawing: a 3-D environment containing objects 46-1 ... 46-N viewed from viewpoint 44]

Microsoft Corp. Exhibit 1006
[Sheet 1 of 5, FIG. 1: block diagram of an interactive 3-D graphical system: server 10 with data storage device 16, data pipe 14, client 12, and interactive 3-D graphical display system 18]
[Sheet 2 of 5, FIG. 2: functional block diagram of client 12: object list 20, object assessment function 22, priority queue 24, streaming function 26, object data table 28, and amortization function 30, with inputs for maximum available bandwidth and frame update rate]
[Sheet 3 of 5, FIGS. 3A and 3B: a 3-D environment 42 containing objects 46-1 ... 46-N, a scene captured from viewpoint 44, and an object's axially aligned bounding box]
[Sheet 4 of 5, FIG. 4: flowchart of the object assessment function: S2 determine objects in scene; S6 remove from priority queue; S10 determine importance; S12 queue object in priority queue; end]
[Sheet 5 of 5, FIG. 5: flowchart of the streaming function: S20 determine bandwidth available; S26 allocate 100% for back channel; S28 reserve 10% for back channel; get next object from queue; S32 estimate size of requested data; S36 request data for object; end]
METHOD AND APPARATUS CAPABLE OF PRIORITIZING AND STREAMING OBJECTS WITHIN A 3-D VIRTUAL ENVIRONMENT

BACKGROUND OF THE INVENTION

1. Field of the Invention

The present invention relates to computer graphics, and more particularly, to a method and apparatus that adaptively prioritizes objects in a 3-D virtual environment for low-latency streaming.

2. Description of the Related Art

Interactive computer graphics systems produce realistic-looking, three-dimensional models and are useful for applications such as architectural and mechanical CAD, flight simulation, and virtual reality. Such graphics systems typically include a computer workstation that displays images of a three-dimensional model on a video screen as seen from a simulated observer's viewpoint that can be interactively controlled by a user. It is generally desired that such graphics systems maintain an interactive frame rate that is substantially constant (e.g., ten frames per second).

Complex three-dimensional models consist of hundreds or thousands of objects; for example, a model of a house with dozens of rooms, each containing different furnishings. As the simulated observer's interactive viewpoint changes (e.g., moving from room to room and moving closer to certain furnishings), objects can enter or leave the simulated field of view, or can be occluded from view by other objects. Objects are typically modeled offline by a modeling process, and consist of tessellated polygonal approximations or meshes. For realistic representation, modeled objects can contain thousands of polygons, each of which must be rendered independently on a display. Accordingly, the realistic representation of all objects in a scene at desired constant frame rates, and with changing interactive viewpoint, is beyond the capabilities of conventional interactive computer graphics systems.

Techniques have been developed to reduce the processing required to render a virtual environment within the desired frame rates. For example, it is not necessary to render objects that are not visible to a hypothetical viewer in the environment, so such objects can be culled from the scene to be represented. This reduces processing requirements without degrading accuracy. However, for complex virtual environments, the processing savings from culling non-visible objects are not sufficient to maintain the desired interactive frame rates.

For further efficiency, detail elision techniques establish a hierarchy of objects within a field of view, each with a number of levels of detail (corresponding to the number of polygons that have to be rendered, for example). Some objects are rendered at higher levels of detail than other objects, thus further reducing processing requirements. Heuristics are employed to determine which object should be rendered at what level of detail; for example, rendering apparently closer or larger objects at higher levels of detail than objects that are far away or which occupy less screen area. Such heuristics are designed to reduce the number of polygons associated with the object that need to be rendered without substantially reducing image quality. The heuristics can be static, that is, having a constant set of thresholds from frame to frame, or they can be adaptive, where the thresholds are adjusted dynamically in accordance with a target parameter such as frame rate.

In a paper entitled "Adaptive Display Algorithm for Interactive Frame Rates During Visualization of Complex Virtual Environments," Computer Graphics Proceedings (SIGGRAPH '93), pp. 247-254 (1993), Funkhouser and Sequin teach a predictive algorithm for prioritizing objects in a scene using adaptive heuristics so as to produce the "best" image possible within a target frame rate time. For each frame, the available levels of detail of each potentially visible object in a scene are tagged with a benefit (e.g., contribution to overall image quality) and a cost (e.g., contribution to overall rendering time). The total cost of rendering all objects cannot exceed the target frame time, and the chosen level of detail for each object is that which results in the highest benefit/cost ratio. This algorithm purportedly achieves near uniform frame times for all observer viewpoints while optimizing the qualitative human visual perception of the scene.

However, many challenges remain that are not addressed by conventional techniques.

Like many conventional assessment algorithms, Funkhouser requires all object data and level of detail information to be available to it in order to fully assess the benefits gained by rendering certain objects at levels of detail greater than other objects. In a server-client configuration, where object data is stored on the server and where the assessment algorithm executes on a client, this means that all object data and level of detail information must be previously transmitted from the server to the client before assessment processing begins. For complex graphical scenes, with large numbers of objects, this requires substantial transmissions of data and requests between server and client. Moreover, since the algorithm operates on all data associated with all of the objects, substantial processing power is required.

Conventional assessment algorithms also fail to account for many factors that affect the visual richness of a scene. For example, visual latency is an important factor that is not addressed. Visual latency is a measure of the time between when an object first becomes visible in a scene to when it is rendered in full detail. Simply put, the visual latency of important objects in a scene affects the overall visual richness of the scene more than the visual latency of less important objects. Put another way, reducing latency for important objects causes the overall visual richness of the scene to appear to improve more quickly than just elevating their level of detail over less important objects.

The importance of latency is further pronounced in a network environment, for example, where object data must be transmitted from a server to a client, and where objects move in and out of scenes and become less or more important within scenes in response to changing viewpoints within the virtual environment. Since the channel between the server and client typically has a limited bandwidth, simply prioritizing objects within a scene based on the time required to render each object, as performed by Funkhouser, does not consider the costs associated with requesting and transmitting object data between the server and client. In particular, if the available bandwidth is consumed by requests and transmissions of data for less important objects at the expense of data for more important objects, the visual richness of the scene will suffer even if the less important objects are rendered at lesser detail. Moreover, requests and transmissions of object data should keep in step with the changing viewpoint, as different objects become more important to the scene, so that the latency of important objects is kept to a minimum.

Another factor that is not accounted for by conventional assessment algorithms is viewer turn rate. Some algorithms attach more importance to objects appearing at the center of
the screen than those appearing outside the center. Meanwhile, however, when a simulated observer's viewpoint is turning within the scene, the user who is commanding such movement will tend to be more interested in objects that are in the direction of the turn. Such an interest will be further in proportion to the rate at which the viewpoint is turning. Accordingly, higher priority should be given to those objects in the direction of the turn in proportion to the rate of the turn.

Therefore, there remains a need for an implementation for prioritizing objects within 3-D virtual environments and for managing the streaming of the objects in response to changing viewpoints that effectively addresses these challenges. The present invention fulfills this need.

SUMMARY OF THE INVENTION

An object of the present invention is to provide a method and apparatus that effectively manages the display of a complex 3-D graphical scene.

Another object of the present invention is to provide a method and apparatus for managing the display of a complex 3-D graphical scene that is capable of working with a large number of objects.

Another object of the present invention is to provide a method and apparatus for managing the display of a complex 3-D graphical scene that is capable of reducing the visual latency of the display in response to a changing viewpoint.

Another object of the present invention is to provide a method and apparatus for managing the display of a complex 3-D graphical scene that effectively utilizes the total bandwidth of the server/client stream.

Another object of the present invention is to provide a method and apparatus for managing the display of a complex 3-D graphical scene that is capable of fairly assessing the visual importance of partially transmitted objects as well as fully transmitted objects.

Another object of the present invention is to provide a method and apparatus for managing the display of a complex 3-D graphical scene that requires only bounding box information to assess an object's visual importance.

These and other objects and advantages are accomplished by the present invention by providing, among other features, an object assessment function and a streaming function. The object assessment function determines the visual importance of objects in a 3-D graphical scene while taking into consideration the instantaneous hypothetical viewer's viewpoint of the scene, and determines the order in which those objects should receive data. The streaming function requests data for those objects from the server in a manner that maximizes the available bandwidth for requesting and transmitting such data. Reduced visual latency and increased richness of the scene to the perception of the viewer are thereby achieved.

BRIEF DESCRIPTION OF THE DRAWINGS

These and other objects and advantages of the present invention will become apparent to those skilled in the art after considering the following detailed specification, together with the accompanying drawings wherein:

FIG. 1 is a block diagram illustrating an interactive 3-D graphical system according to the present invention;

FIG. 2 is a functional block diagram illustrating a client in an interactive 3-D graphical system according to the present invention;

FIGS. 3A and 3B illustrate scenes and objects in a 3-D virtual environment that are captured by an instantaneous hypothetical viewpoint according to the present invention;

FIG. 4 is a flowchart illustrating processing performed by an object assessment function according to the present invention; and

FIG. 5 is a flowchart illustrating processing performed by a streaming function according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

As shown in FIG. 1, an interactive 3-D graphical system according to the invention includes a server 10 that communicates with a client 12 via a data pipe 14. The server side includes a data storage device 16 that stores three-dimensional graphical representations of a modeled scene. The client side includes an interactive 3-D graphical display system 18 that receives commands from an interactive user and displays three-dimensional graphical images in response to user commands and image data received from server 10 via client 12.

In one example of the invention, server 10 is a host process executing on a file server computer, data storage device 16 is a hard disk drive, data pipe 14 is a local or wide area network, client 12 is a client process executing on a computer workstation and communicates with the host process via the network using streams protocols, and 3-D graphical display system 18 is a video display system with input/output devices such as a keyboard and a mouse. It should be apparent that the computer workstation associated with the client process can include the video display system that supports 3-D graphical display system 18. It should be further apparent to those skilled in the art that each of the elements described above can be embodied in various combinations of other hardware and software elements that are different from this example.

FIG. 2 further illustrates a client 12 in accordance with the invention. It includes an object list 20, an object assessment function 22, a priority queue 24, a streaming function 26, an object data table 28, and an amortization function 30.

Object list 20 contains a list of objects within a computer-generated three-dimensional environment that are stored in representative form in data storage device 16. The contents of object list 20 are initialized by data received from server 10, which data includes the bounding box geometry for each object (as described in more detail below), the position of the object within the 3-D virtual environment, and the amount of data associated with the object that is needed to fully render the object.

Object assessment function 22 maintains a list of visible objects in priority queue 24 in accordance with an instantaneous viewpoint of a hypothetical viewer within the three-dimensional environment. The instantaneous viewpoint is updated in accordance with user commands as processed by interactive 3-D graphical display system 18. Object assessment function 22 determines a priority value for each visible object among the objects in object list 20, in accordance with an algorithm that will be described in more detail below, and orders the list of objects in priority queue 24 correspondingly. Object assessment function 22 also monitors the contents of object data table 28 to determine which objects need more data to be rendered accurately.

Streaming function 26 manages the request and receipt of object data from server 10 via data pipe 14. Data pipe 14 has a maximum available bandwidth, which can be received as a parameter by streaming function 26. In accordance with an algorithm that will be described in more detail below, streaming function 26 determines the amount of object data that can be received during each update cycle
(corresponding to the frame update rate, which can also be specified by a parameter input by streaming function 26), given the maximum available bandwidth of the data pipe, and fills the available bandwidth by making requests of object data in accordance with the contents of priority queue 24. The received object data is stored in object data table 28, where it is made available to a rendering program in interactive 3-D graphical display system 18.
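The client-side structures described in FIG. 2 (object list entries and an importance-ordered priority queue) can be sketched as follows; all class names and field choices here are illustrative assumptions, not from the patent:

```python
import heapq
from dataclasses import dataclass

@dataclass
class ObjectEntry:
    """One entry in the client's object list: bounding box geometry,
    total data needed to fully render the object, and data received."""
    obj_id: int
    bbox_min: tuple
    bbox_max: tuple
    total_bytes: int
    received_bytes: int = 0

    @property
    def data_deficit(self) -> int:
        # Data still needed before the object renders accurately.
        return self.total_bytes - self.received_bytes

class PriorityQueue:
    """Max-priority queue keyed on importance. heapq is a min-heap,
    so the importance value is negated on push."""
    def __init__(self):
        self._heap = []

    def push(self, importance: int, entry: ObjectEntry):
        heapq.heappush(self._heap, (-importance, entry.obj_id, entry))

    def pop(self) -> ObjectEntry:
        return heapq.heappop(self._heap)[2]

    def __len__(self):
        return len(self._heap)
```

The tuple `(-importance, obj_id, entry)` keeps comparisons well-defined even when two objects share an importance value.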
Amortization function 30 assigns to object assessment function 22 and streaming function 26 respective amounts of time within the update cycle in which to operate. When the assigned time period is exceeded, processing related to the respective function is not permitted to continue, and control is given to the other function. On the next frame update cycle, processing for each function continues from the previous termination point. Accordingly, object assessment function 22 and streaming function 26 need not process a predetermined number of objects each update cycle. Rather, amortization function 30 ensures that the processing times for each function do not exceed the amount of time elapsed during an update cycle. Assuming frame coherence (i.e., a high degree of similarity between scenes in successive frames), any error introduced by cutting off processing before all objects are updated each frame should be minimal and quickly remedied.
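One way to realize this time-slicing is to run each function as a resumable task with a per-cycle time budget. This is a minimal sketch under that assumption (the patent does not prescribe a mechanism); generator tasks yield after processing one object so they can resume from the previous termination point on the next cycle:

```python
import time

def run_amortized(tasks, budgets):
    """Give each task a time budget (in seconds) within one update cycle.

    tasks: generators that yield after processing one object.
    budgets: per-task time allotments for this cycle.
    """
    for task, budget in zip(tasks, budgets):
        deadline = time.monotonic() + budget
        while time.monotonic() < deadline:
            try:
                next(task)      # process one object, then re-check the clock
            except StopIteration:
                break           # task finished early; hand control over
```

Calling `run_amortized` once per frame update cycle with the assessment and streaming tasks approximates the behavior described above.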
As shown in FIG. 3A, a three-dimensional scene 40 according to the invention is a partial view of a computer-generated 3-D environment 42 that is capable of being displayed on a computer display such as that included in interactive 3-D graphical display system 18. Scene 40 continuously changes based on the movement of hypothetical viewer V through the environment, as commanded by the interactive user through interactive 3-D graphical display system 18. The hypothetical viewer's instantaneous position and perspective within the environment that captures scene 40 is referred to as viewpoint 44.

Environment 42 includes a number of objects 46-1 ... 46-N. Each object is actually a mesh comprising a collection of polygons that result from an offline tessellation of a modeled physical object, the techniques for performing such offline tessellation and mesh generation being well known in the art. The collection of polygons for each object is stored on data storage device 16. It should be noted that at any given time, certain ones of objects 46-1 ... 46-N may be within scene 40 visible through viewpoint 44, while certain others may be outside the scene.
As shown in FIG. 3B, each object has a position P within environment 42 with respect to a Cartesian coordinate system. Further associated with each object is an axially aligned bounding box 48 that describes the object's position and volume. This box is sized so as to exactly contain all of the object's geometry with respect to each axis. The exact center of the bounding box corresponds to the position P of the object with which the bounding box is associated. Bounding box information is stored on the client side in object list 20.
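Computing such an axially aligned bounding box from an object's vertex list is straightforward; a minimal sketch (the vertex format is an assumption):

```python
def bounding_box(vertices):
    """Smallest axially aligned box that exactly contains all vertices.

    vertices: iterable of (x, y, z) tuples.
    Returns (min_corner, max_corner, center); the center corresponds
    to the object's position P.
    """
    xs, ys, zs = zip(*vertices)
    mn = (min(xs), min(ys), min(zs))
    mx = (max(xs), max(ys), max(zs))
    center = tuple((a + b) / 2 for a, b in zip(mn, mx))
    return mn, mx, center
```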
Scene 40 is rendered on a computer display, such as on interactive 3-D graphical display system 18, by rendering each object, one at a time. Rendering of scene 40 is repeated every frame update cycle. A frame is a single update of the system. The frame update cycle is the number of times per second the system updates. To render the scene each frame, display system 18 accesses data from object data table 28 for each object within the scene. To be accurately rendered, the object's entire collection of polygons (i.e., object data) must have been transmitted from server 10 to client 12. When an object needs more data than it currently has stored in object data table 28 to be rendered accurately, that object has a data deficit, and additional data must be requested from, and transmitted by, server 10.

Requests for object data by client 12 (i.e., back channel data) and transmissions of object data from server 10 to client 12 (i.e., forward channel data) both consume bandwidth. The bandwidth of data pipe 14 refers to the total amount of data in bits per second that can be communicated between client 12 and server 10.
To utilize the available bandwidth efficiently, therefore, the present invention includes object assessment function 22 and streaming function 26. As will be explained in more detail below, for all visible objects in a scene that have a data deficit, object assessment function 22 determines an importance value associated therewith. Streaming function 26 then sends requests to server 10 for data for those objects, starting with the most important objects, as queued in priority queue 24 by object assessment function 22, and progressing to the least important as resources permit. The amount of data requested is carefully monitored, and transmissions of requests stop when it has been determined that the data requested exceeds the amount of data that can be returned before the next update cycle.
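Following the flow of FIG. 5, the per-cycle request budgeting can be sketched as below. The 10% back-channel reservation comes from the flowchart; the size estimate (here simply the object's remaining data deficit) and the function names are assumptions:

```python
def issue_requests(queue, bandwidth_bps, update_rate_hz, send_request):
    """Request data for queued objects, most important first, stopping
    once the requested data could not be returned before the next update.

    queue: (object_id, deficit_bytes) pairs, already ordered by importance.
    bandwidth_bps: total pipe bandwidth in bits per second.
    """
    # Reserve 10% of the pipe for back-channel (request) traffic,
    # leaving 90% of the per-cycle capacity for object data, in bytes.
    budget = int(bandwidth_bps * 0.9) // 8 // update_rate_hz
    sent = []
    for obj_id, deficit in queue:
        estimated = deficit                 # placeholder size estimate
        if estimated > budget:
            break                           # would exceed next-cycle capacity
        send_request(obj_id, estimated)     # issue the back-channel request
        budget -= estimated
        sent.append(obj_id)
    return sent
```

For example, an 8 Mbit/s pipe at 10 updates per second yields a 90,000-byte forward budget per cycle under this sketch.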
FIG. 4 illustrates processing performed by object assessment function 22.

When the scene updates, in response to a user-commanded change in viewpoint for example, all objects currently queued for receiving data are reprioritized, with the goal that the most important objects in the scene for the current viewpoint are most accurately rendered. First, in step S2, object assessment function 22 determines what objects are within the current scene that is captured with the viewpoint for this update cycle. This can be performed, for example, by comparing the current viewpoint with the position and geometry of each object's bounding box stored in object list 20.
Next, in step S4, object assessment function 22 determines whether the object within scene 40 that is currently being analyzed is visible. Operating on only visible objects reduces the number of objects that need to be considered at any one time and makes the algorithm more amenable to large data sets. If a visible object has been previously queued as needing data, but then falls outside the viewpoint of the hypothetical viewer or otherwise becomes not visible, it is removed from priority queue 24 (step S6). Object visibility can be determined using techniques that are known in the art, including, for example, spatial culling, view frustum culling, and occlusion culling.
If it is determined in step S4 that the object is visible, object assessment function 22 then determines whether the object has a data deficit. That is, it is determined in step S8 whether all of the data needed to render the object accurately is stored in object data table 28. For example, if the amount of data in object data table 28 equals the amount of data required to fully render the object, as listed in object list 20 for the object, that object can be rendered accurately. If so, no data for the object needs to be requested, so the object is removed from priority queue 24 if it was previously queued therein (step S6).

Otherwise, if the object is visible and has a data deficit, processing advances to step S10, where object assessment function 22 determines the relative priority to be assigned to the object.
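The FIG. 4 assessment pass can be sketched as follows, with `is_visible` standing in for the culling tests and `importance` for the Value/Weight computation described below; the dictionary-based queue and object records are illustrative:

```python
def assess_objects(scene_objects, is_visible, importance, queue):
    """One assessment pass per FIG. 4.

    queue maps object id -> importance. Invisible or fully loaded
    objects are dropped (step S6); visible objects with a data
    deficit are (re)prioritized (steps S10 and S12).
    """
    for obj in scene_objects:                      # step S2: objects in scene
        if not is_visible(obj) or obj["deficit"] == 0:
            queue.pop(obj["id"], None)             # step S6: remove from queue
        else:
            queue[obj["id"]] = importance(obj)     # steps S10/S12: (re)queue
    return queue
```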
Object priority is determined by calculating an Importance value for each object in the scene based on a set of
common evaluation factors. Importance is thus a measure of an object's contribution to the visual richness of the overall scene. It takes into account the characteristics of the object with regard to the current viewing position. As the hypothetical viewer's viewpoint changes, an object's importance may change. Prioritizing requests for data so as to more completely render those objects that contribute most to the scene, and updating those priorities in accordance with the changing viewpoint, results in a visual perception that the richness of the scene is improving rapidly in response to such changes in viewpoint.

It should be noted that objects within the scene can have different data deficits. That is, some objects may have sufficient data already transmitted to the client side so as to be rendered with high resolution, while others may have only their primitive representations already transmitted. In order to process all objects in the scene fairly and consistently, the lowest common denominator of the data states, the primitive representation, of all objects should be used. In this example of the invention, object assessment function 22 therefore utilizes only the geometry of the bounding box associated with each object to make priority assessments. This makes it possible to compare objects that may have no data against objects that have received several data updates. Moreover, because calculations using the bounding box representation are typically less intensive, advantages in processing speed are obtained.

Object assessment function 22 determines the Importance of each visible object by calculating and summing together a series of n Value/Weight pairs for the object. These pairs consist of a component that is instantaneously calculated for the object (the Value) and a corresponding scaling factor (the Weight) that determines how important the corresponding Value is to the overall Importance. Accordingly, for each object O,

Importance(O) = Σ (Value(n, O) * Weight(n))

Preferably, the calculations of all Values are scaled so that each calculated Value falls within a range of 0 to 1000, and all Weights are assigned so that they sum to a value of 1000. This results in a calculated Importance that is always between 0 and 1,000,000, thus allowing simple and fast comparison between objects in a scene.

The Screen Area Value indicates the area occupied by the object within the entire scene, and thus its visual impact on the scene. There are two steps to calculating this Value. First, the number of pixels that the bounding box of the object occupies is calculated; then this number is converted to a percentage of the total screen area that the object occupies.

In the first step, the coordinates of the corners of the bounding box are multiplied through the projection matrix for the scene, and the resulting screen space numbers give the pixel coordinates of these points on the screen. The largest and smallest X and Y coordinates are selected to determine the overall screen area.

In the second step, the resulting value is scaled relative to the total screen area, giving a value of 1000 for a bounding box which occupies the full screen and a value of 0 for an object which is smaller than one pixel. The following integer calculation is used:

Screen Area = (Pixel Area * 1000) / Total Screen Pixel Area

The Focal Point Value considers the area of the screen where the viewer is most likely to be looking and assigns higher importance to objects in this area than objects outside this area.

The Focal Point is nominally the center of the screen and is modified by a factor related to the amount of the instantaneous turn rate and the maximum turn rate of the viewer. The Focal Point Value is calculated to fit the following scale:

0 = furthest point from the focal point
1000 = exactly on the focal point

The focal point is offset from the center of the screen by the following algorithm:

Offset = (Screen Size / 2) * (turn rate / MAX_RATE)
Pixel position of the focal point = (Center of the screen) + Offset

Turn rate is a number that grows positive for left turns and negative for right turns and increases to a maximum value determined by MAX_RATE. This maximum value corresponds to the maximum angular change in heading the viewer can achieve in one scene update. This maximum can be set as a parameter for a particular application. For example, if the application uses a car as a hypothetical viewer moving through a virtual environment rather than a person walking, the maximum turn rate will be different. The maximum right and left turns are denoted by a polarity of the MAX_RATE parameter.
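The Importance sum, the integer Screen Area calculation, and the focal point offset above can be sketched together as follows; function names are illustrative, and the integer arithmetic mirrors the text (Values in 0..1000, Weights summing to 1000):

```python
def importance(values, weights):
    """Importance(O) = sum(Value(n, O) * Weight(n)). With Values in
    0..1000 and Weights summing to 1000, the result lies in
    0..1,000,000, allowing simple and fast comparison."""
    assert sum(weights) == 1000
    return sum(v * w for v, w in zip(values, weights))

def screen_area_value(pixel_area, total_screen_pixels):
    """Integer Screen Area Value: 1000 for a full-screen bounding
    box, 0 for a sub-pixel object."""
    return (pixel_area * 1000) // total_screen_pixels

def focal_point(screen_size, turn_rate, max_rate):
    """Offset the focal point from screen center in proportion to the
    instantaneous turn rate (positive = left, negative = right)."""
    offset = (screen_size // 2) * turn_rate // max_rate
    return screen_size // 2 + offset
```

With a zero turn rate the focal point stays at screen center; at the maximum turn rate it moves to the screen edge in the direction of the turn.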
