Interactive 3D Graphics
Providence, R.I., April 27-30, 1997

Symposium Chair

Andy van Dam, Brown University

Program Co-Chairs

Michael Cohen, Microsoft
David Zeltzer, David Sarnoff Research Center
Program Committee

Kurt Akeley, Silicon Graphics
Fred Brooks, Jr., University of North Carolina
Ingrid Carlbom, Lucent Technologies
Ed Catmull, Pixar
Frank Crow, Interval Research
Jessica Hodgins, Georgia Institute of Technology
Fred Kitson, Hewlett-Packard
Marc Levoy, Stanford University
Dan Ling, Microsoft
Peter Schröder, California Institute of Technology
Susumu Tachi, University of Tokyo
Michael Zyda, Naval Postgraduate School

Supporting Organizations

Autodesk, Inc.
Cyberware
David Sarnoff Research Center
Fraunhofer CRCG
Hewlett-Packard
Interval Research
Microsoft
Mitsubishi Electric Research Labs
The National Science Foundation
The NSF Science & Technology Center for Computer Graphics & Scientific Visualization
Office of Naval Research
Pixar Animation Studios
Sense8 Corporation
Silicon Graphics, Inc.
Sun Microsystems
U.S. Army Research Laboratory

Proceedings Production Editor

Stephen N. Spencer, The Ohio State University

Sponsored by the Association for Computing Machinery's Special Interest Group on Computer Graphics (SIGGRAPH)

MEDIATEK, Ex. 1014, Page 1
IPR2018-00101
Copyright © 1997 by the Association for Computing Machinery, Inc. (ACM). Copying without fee is permitted provided that copies are not made or distributed for profit or commercial advantage and credit to the source is given. Abstracting with credit is permitted. To copy otherwise, to republish, to post on servers or to distribute to lists, requires prior specific permission and/or a fee. For other copying of articles that carry a code at the bottom of the first or last page, copying is permitted provided that the per-copy fee indicated in the code is paid through the Copyright Clearance Center, 222 Rosewood Drive, Danvers, MA 01923. Request permission to republish from: Publications Dept., ACM, Inc., Fax +1-212-869-0481 or e-mail permissions@acm.org

Orders from ACM Members:

A limited number of copies are available from ACM Member Services. Send all inquiries, or order with payment in U.S. dollars to:

U.S.A. and Canada:
ACM Order Department
P.O. Box 12114
Church Street Station
New York, NY 10257
Telephone: +1-800-342-6626
Telephone: +1-212-626-0500
Fax: +1-212-944-1318
E-mail: orders@acm.org
URL: http://www.acm.org/

All other countries:
ACM European Service Center
108 Cowley Road
Oxford OX4 1JF
United Kingdom
Telephone: +44-1-865-382338
Fax: +44-1-865-381338
E-mail: acm_europe@acm.org

Please include the ACM order number in all inquiries.

ACM Order Number: 429973
ACM ISBN: 0-89791-884-3

M.I.T. LIBRARIES
RECEIVED
JUN 18 1997
Contents

Keynote Address
Bill Buxton

Image Based Rendering
Chair: Kurt Akeley

Post-Rendering 3D Warping ... 7
William R. Mark, Leonard McMillan, Gary Bishop
Color Plate ... 180

Time Critical Lumigraph Rendering ... 17
Peter-Pike Sloan, Michael F. Cohen, Steven J. Gortler
Color Plate ... 181

Navigating Static Environments Using Image-Space Simplification and Morphing ... 25
Lucia Darsa, Bruno Costa Silva, Amitabh Varshney
Color Plate ... 182

Manipulation and Motion in VR
Chair: Frederick Brooks, Jr.

An Evaluation of Techniques for Grabbing and Manipulating Remote Objects in Immersive Virtual Environments ... 35
Doug A. Bowman, Larry F. Hodges
Color Plate ... 182

Image Plane Interaction Techniques In 3D Immersive Environments ... 39
Jeffrey S. Pierce, Andrew S. Forsberg, Matthew J. Conway, Seung Hong, Robert C. Zeleznik, Mark R. Mine
Color Plate ... 183

Providing a Low Latency User Experience In A High Latency Application ... 45
Brook Conner, Loring Holden
Color Plate ... 184

Managing Latency in Complex Augmented Reality Systems ... 49
Marco C. Jacobs, Mark A. Livingston, Andrei State
Color Plate ... 185

Rendering Complex Models I
Chair: Peter Schröder

View-Dependent Culling of Dynamic Systems in Virtual Environments ... 55
Stephen Chenney, David Forsyth

Multi-Pass Pipeline Rendering: Realism For Dynamic Environments ... 59
Paul J. Diefenbach, Norman I. Badler
Color Plates ... 186, 187

Efficient Radiosity Rendering using Textures and Bicubic Reconstruction ... 71
Rui Bastos, Michael Goslin, Hansong Zhang
Color Plate ... 184

Model Simplification Using Vertex-Clustering ... 75
Kok-Lim Low, Tiow-Seng Tan
Color Plate ... 188
Providing a Low Latency User Experience In A High Latency Application

Brook Conner
Sony Corporation of America
550 Madison Avenue, 5th floor
New York, New York, 10028
brook_conner@sonyusa.com

Loring Holden
Brown University site of the
NSF Science and Technology Center for
Computer Graphics and Scientific Visualization
Providence, RI 02912
lsh@cs.brown.edu

ABSTRACT
Through the use of particular visual effects, we provide a low latency user experience, even when extremely large latencies occur in an application. We demonstrate these effects in a wide-area distributed virtual reality application. These effects include the use of motion blur, transparency, and defocusing. While the effects incur a performance penalty, the penalty is predictable, unlike the lag induced by network delays. Thus, we provide immediate feedback to each participant, even when the network prevents information more useful than the fact that delays are occurring. When updates are finally received, we use the same effects to provide coherent updates to the user's information, without the jarring discontinuities that otherwise would confuse a participant's understanding of the environment.

CR Categories and Subject Descriptors: I.3.6 [Computer Graphics]: Methodology and Techniques - Interaction Techniques; I.3.2 [Computer Graphics]: Graphics Systems - Distributed/network graphics

Additional Keywords: motion blur, temporal aliasing, virtual environments

Introduction
Wide-area networks such as the Internet provide no guarantees about the time it takes a packet of information to reach its goal. A packet might be delayed for any number of reasons, including high network traffic, slow hardware along the particular route chosen, load problems at either the source or the destination machine, or simply an unnecessarily long route. While providing dedicated resources can mitigate these effects, they cannot be eliminated without the use of a completely closed network.

In contrast, virtual reality applications have very stringent latency requirements. High latency can induce a number of unpleasant effects, such as simulator sickness or a loss of feeling of control (as when an environment responds to a user's actions sluggishly).

Permission to make digital/hard copies of all or part of this material for personal or classroom use is granted without fee provided that the copies are not made or distributed for profit or commercial advantage, the copyright notice, the title of the publication and its date appear, and notice is given that copyright is by permission of the ACM, Inc. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires specific permission and/or fee.
1997 Symposium on Interactive 3D Graphics, Providence RI USA
Copyright 1997 ACM 0-89791-884-3/97/04 $3.50

Increasingly, virtual reality applications are becoming networked over general-purpose LANs [5, 7], making the conflicting constraints of networks and virtual reality a growing problem.

Prior Work
Some researchers have attempted to address latency in distributed VR at a system level [5, 7]. Others have attempted to increase a user's understanding of a complex scene through the use of visual effects, such as motion blur and cartoon idioms such as squash and stretch [2, 3]. In contrast, our work uses visual effects to increase the user's understanding of latency as it occurs, thereby mitigating the effects of that latency. Before examining our own techniques, let us consider both the system-based techniques addressing latency and the visual effects addressing complexity.

Addressing Latency at a System Level
In the past, distributed VR applications have used two complementary techniques to address problems of latency: dead reckoning and decoupling.

Dead reckoning addresses late arrival of motion information. By using derivatives of earlier motion or derivative information provided by an object itself, the dead reckoning system calculates the position of an object locally, without needing to wait for the arrival of the actual information [5]. This can clearly introduce errors when the dead reckoning system uses a derivative whose own derivative is non-zero (such as the derivatives of objects under user control).

More advanced dead reckoning systems use a Kalman filter [1] to attempt to predict where an object will be in the future. This technique also introduces errors, especially when the model of the object's motion used by the Kalman filter is a poor one.
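To make the prediction step concrete, here is a minimal 1-D constant-velocity Kalman filter sketch. Note that [1] is a frequency-domain analysis of head-motion prediction rather than this exact filter; the constant-velocity model and all noise values below are assumptions chosen only to show the predict/update structure such a predictor relies on.

```python
class ConstantVelocityKalman:
    """Minimal 1-D Kalman filter with state [position, velocity].

    predict() advances the state dt seconds into the future (this is
    the step a dead-reckoning system would use); update() folds in a
    (possibly late) position measurement.  If the real motion does not
    fit the constant-velocity model, predictions are biased -- the
    error mode the text describes.
    """

    def __init__(self, q: float = 1e-3, r: float = 1e-2):
        self.x = [0.0, 0.0]                # state estimate [pos, vel]
        self.P = [[1.0, 0.0], [0.0, 1.0]]  # state covariance
        self.q = q                         # process noise (model trust)
        self.r = r                         # measurement noise

    def predict(self, dt: float) -> float:
        pos, vel = self.x
        self.x = [pos + vel * dt, vel]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = F P F^T + Q  for F = [[1, dt], [0, 1]], Q = diag(q, q)
        self.P = [
            [p00 + dt * (p10 + p01) + dt * dt * p11 + self.q, p01 + dt * p11],
            [p10 + dt * p11, p11 + self.q],
        ]
        return self.x[0]

    def update(self, measured_pos: float) -> None:
        innovation = measured_pos - self.x[0]
        s = self.P[0][0] + self.r      # innovation covariance (H = [1, 0])
        k0 = self.P[0][0] / s          # Kalman gain, position row
        k1 = self.P[1][0] / s          # Kalman gain, velocity row
        self.x = [self.x[0] + k0 * innovation, self.x[1] + k1 * innovation]
        p00, p01 = self.P[0]
        p10, p11 = self.P[1]
        # P = (I - K H) P
        self.P = [
            [(1 - k0) * p00, (1 - k0) * p01],
            [p10 - k1 * p00, p11 - k1 * p01],
        ]
```

Fed a stream of on-time position reports, the filter's velocity estimate converges and predict() can then be called repeatedly to fill in frames while the network is silent.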

Decoupling in VR systems is the process of making the system multi-threaded, with certain threads assigned to tasks most in need of low latency [7]. Typically, these threads are Unix processes provided with dedicated hardware resources to guarantee a minimum level of performance. Non-critical tasks are serviced as resources become available. This approach can guarantee, for example, that a new frame will be generated to track user head motion, but the contents of the environment (for example, balls moving under a kinematics simulation) may be changed at a slower rate. This technique
We are using particular visual effects that have been used in the past to make complex scenes more understandable. By demonstrating visually that something complex is happening, a user can have a greater understanding of the scene.

The work of Brenda Laurel [4] focuses on this approach. Other work, such as John Lasseter [3], demonstrates the use of traditional animation techniques in order to explain a scene as clearly as possible. This has been applied in a programming environment by the Self Project [2]. Objects in the Self environment draw attention to themselves with animated wiggles and expressive paths. When they move fast, they motion-blur to indicate a continuous motion, not a discrete change of location.

Other work has developed techniques for producing the effects of motion blur [11] and defocusing [6] in a fully three-dimensional, real-time environment, such as virtual reality. This work showed that rich visual effects could be achieved with relatively little cost in computation time or rendering performance.

We use the effects and the concepts developed by these researchers to address a new problem, high latency. As explained above, high latency can produce many effects that a user does not expect. With these techniques, we can show the user that latency-induced effects are occurring, and that as a result object behavior might be somewhat different from what was anticipated.

Visual Effects to Reduce the Perception of Latency
Let us now look at our particular effects and how they mitigate the user's perception of lag. First, we discuss the actions that include large sources of lag. Second, we discuss the particular effects we use and in what situations we use them. Third, we briefly discuss how a range of possible effects might be applied in a consistent visual language of lag and its effects.

Some of the User Actions Affected by Latency
We focus on two aspects of interacting in a shared virtual world: actions initiated by a participant and actions observed by a participant. In both cases, we work with solely those cases where unbounded network lag can cause extremely undesirable effects, such as a frozen world. For example, navigating through a scene, while initiated by a participant, is generally not a source of unbounded network lag, as navigation is typically handled locally. Thus, we make no attempt to mitigate the effects of lag on navigation, though there is lag present and our effects could perhaps be reformulated to address that lag.

Typical actions initiated by a participant include grabbing an object and moving it through a scene. Grabbing a shared object is a source of network-based lag, as a shared object must be controlled by a mutually exclusive lock. Acquiring such a lock requires a network round trip (unless the lock coincidentally resides on the same machine as the participant who wishes to take the lock - a situation which is generally not true). A user interface that does not allow the user to manip-
object.

Some have suggested that social conventions can suffice in place of a lock, such as a convention that users not grab objects that others are already holding. However, in a collaborative environment these social conventions often break down. For example, in the midst of exciting collaboration, participants may lose track of social graces (as when two people in a physical room get excited about a project or get into a heated argument).

The actions initiated by other participants are also a source of lag, since a remote participant's actions necessarily require the potentially high-lag network transmission of information. An avatar might appear to move slowly or not at all, simply due to poor network performance. Objects being controlled by other participants suffer from similar problems.

Transparency, Motion Blur, and Defocusing
We address the problems of the users' experience with real-time transparency, real-time motion blur, and real-time defocusing (e.g., per-object depth-of-field effects). Although all of these effects take more time to render, usually requiring multiple rendering passes, they can still be done by standard 3D graphics libraries and accelerators quickly enough to maintain the frame rate required by a VR application.

In the Macintosh Finder [10] the user clicks on an icon, and as she drags the icon across the screen, an outline of the icon (or a transparent copy in more recent versions) follows the pointer. When the user "drops" the icon, the transparent version becomes opaque and the old opaque version disappears.

Similarly, transparency can be used when manipulating a shared object whose lock has not yet been acquired. Two copies of the object are used, one that follows the user's manipulations and one that remains where the object will be if the lock fails (see color plate, left). Both objects are drawn transparent, indicating that both are tentative. If the lock is acquired, the copy the user is manipulating becomes solid and the other copy fades out (see color plate, bottom right). If the lock is not acquired, the copy the user is manipulating fades out while the other copy becomes solid (see color plate, top right). We call this technique "ghost locking".

Ghost locking makes the user aware of many aspects of manipulating a shared object in a networked environment. The user has an immediate response to her actions, affording a feeling of control over the environment. She also sees immediately that lag is occurring, because of the appearance of a "partial" or transparent object. Finally, fading out the copy that lost the lock gives the user time to understand the changed state of the manipulated object.
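The ghost-locking states described above can be sketched as a small state machine. This is an illustrative sketch, not the paper's implementation: the opacity values are assumptions, and where the text describes copies fading in or out gradually, this sketch snaps directly to the final opacity.

```python
from enum import Enum

class LockState(Enum):
    PENDING = "pending"   # lock request sent, round trip still in flight
    GRANTED = "granted"
    DENIED = "denied"

class GhostLock:
    """State of the two transparent copies used during a grab.

    While the lock round trip is pending, both the ghost (the copy
    following the user's hand) and the anchor (the copy at the original
    location) are drawn transparent to mark them as tentative.  The
    lock reply makes the winner solid and fades the loser out.
    """

    TENTATIVE_ALPHA = 0.5  # opacity while pending (assumed value)

    def __init__(self):
        self.state = LockState.PENDING
        self.ghost_alpha = self.TENTATIVE_ALPHA
        self.anchor_alpha = self.TENTATIVE_ALPHA

    def on_lock_reply(self, granted: bool) -> None:
        if granted:
            self.state = LockState.GRANTED
            self.ghost_alpha = 1.0   # user's copy becomes solid
            self.anchor_alpha = 0.0  # original fades out
        else:
            self.state = LockState.DENIED
            self.ghost_alpha = 0.0   # user's copy fades out
            self.anchor_alpha = 1.0  # original becomes solid
```

The key property is that the user gets a manipulable, visibly tentative copy immediately, before the network round trip completes.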

This effect can be customized by varying levels of opacity in either the grabbed object or the original object, depending on outside knowledge about the state of the lock or the likelihood that the lock will be taken successfully. For example, a lock is more likely to fail for a highly contended object. In this
case, omitting the original copy may make sense when the expected lag is short and the user probably will not move the ghost far from the original location (or acquiring the lock is very likely). Ghost locking with no original copy lessens clutter, but removes the reference to the original location. Without this reference object, lock denial may be more disruptive.

When an object is manipulated, motion blur can be used to show that the motion is a continuous path, even when network updates are late in arriving (in which situation many VR applications discard all but the latest update). Motion blur can similarly be used when correcting an object that has moved to an incorrect position because of dead reckoning.

A blur effect using the real-time motion blur effect can be integrated with ghost locking. The grabbed object can be blurred back to its original (producing a taffy-like look), until the lock is acquired, when the original blurs to the manipulated position. If the lock is not acquired, the "taffy" snaps back into its original position.

Defocusing can be applied in all situations when unknown delays are occurring. When a response is expected from an object (or an avatar) and none is forthcoming, the object that should be responding can be defocused. Note that this integrates nicely with earlier work, such as dead reckoning - an object may be moving under dead reckoning for too long, providing too much error. A blurry moving object tells the user that the object's position may in fact not be correct.

Objects can be defocused more as more delays occur. Eventually, a non-responsive object can fade out entirely. When integrated with motion blurring and transparency manipulation, defocusing can inform a participant when unanticipated delays are occurring and how severe the delays are. This provides a participant with information about when an avatar or object is non-responsive, as opposed to unchanging.
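The progression described above - defocus more as delays grow, then fade out entirely - can be sketched as a mapping from time-since-last-update to a blur amount and an opacity. All thresholds and the linear ramps below are invented for illustration; the paper does not give particular values.

```python
def defocus_amount(seconds_since_update: float,
                   expected_interval: float = 0.1,
                   fade_start: float = 2.0,
                   fade_end: float = 5.0) -> tuple:
    """Map time since the last network update to (blur, opacity).

    blur ramps from 0 to 1 as the update becomes overdue, signalling
    uncertainty about the object's state; past fade_start the object
    also becomes increasingly transparent, and past fade_end a
    non-responsive object has faded out entirely.
    """
    overdue = max(0.0, seconds_since_update - expected_interval)
    blur = min(1.0, overdue / fade_start)
    if seconds_since_update <= fade_start:
        opacity = 1.0
    elif seconds_since_update >= fade_end:
        opacity = 0.0  # fully faded out: non-responsive, not unchanging
    else:
        opacity = 1.0 - (seconds_since_update - fade_start) / (fade_end - fade_start)
    return blur, opacity
```

An object whose updates arrive on time stays sharp and opaque; one that has been silent for ten seconds is fully blurred and invisible, which distinguishes "non-responsive" from "unchanging".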

Applying Visual Effects with a Consistent Visual Language
These visual effects have many parameters. The level of transparency or blurring can be varied in all of them, and some have alternative visual representations. Exactly which parameter and style should be used depends on the exact situation. For example, making grabbed objects transparent may not be appropriate if most objects in the scene are transparent - using wireframe would be more suitable.

While we have not performed user studies, we have found that a consistent choice of parameters and styles can effectively produce a visual language of the effects of lag in a system. Cloudiness as seen in blur effects visually tells the user that the situation is uncertain in ways that precise rendering styles like motion lines do not. We feel that using motion blur to suggest continuous motion, the "taffy" effect for manipulating objects, and defocusing when delays occur is a particularly good example of this kind of visual language.

effects be used in a wide variety of applications, while providing extremely good performance. We tested the nodes in an Inventor-based application called vrapp. This application supports wide-area distribution of immersive environments, including text, audio, and video communication, manipulation of shared objects, avatars, and support of specialized immersive hardware.

All of the transparency or blur-based effects that we describe could be implemented in a straightforward way using multiple copies of the object and varying levels of transparency. Naively, this can be done by using Inventor's support of multiple instantiation of a scene graph, with varying transparency and transformation nodes interspersed between the instantiations. However, most of the visual techniques are amenable to more clever implementations that provide an improved appearance and performance, as detailed in the papers that first described them.
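The naive multiple-instantiation approach above can be sketched in a renderer-agnostic way: draw the same geometry several times at recent transforms with increasing opacity. This Python sketch stands in for the Inventor scene-graph version the paper describes; the alpha falloff and the function name are assumptions.

```python
def motion_blur_passes(history, n_copies=4):
    """Build a draw list for naive multi-instantiation motion blur.

    `history` holds the object's recent transforms, newest last
    (scalars here for simplicity; matrices in a real scene graph).
    Each returned (transform, alpha) pair corresponds to one
    instantiation of the object's scene graph with a transparency
    node and a transformation node interposed, as in the naive
    Inventor approach described in the text.
    """
    recent = history[-n_copies:]
    passes = []
    for i, transform in enumerate(recent):
        # Newest copy is fully opaque; trailing copies fade toward zero,
        # leaving a translucent streak along the recent path.
        alpha = (i + 1) / len(recent)
        passes.append((transform, alpha))
    return passes
```

Rendering every copy multiplies geometry cost by `n_copies`, which is why the cleverer single-pass formulations cited in the text are preferable in practice.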

Future Work
We currently have informal evidence that suggests that these effects make task performance in a wide-area environment more productive. A formal user study that determines whether the added rendering cost of the visual effects provides a measurable user performance benefit should be performed.

The exact styles and parameters that are most effective can also be determined through a suitable user study. It may be the case that some effects work best in concert with others, or with others but only within a certain range of parameters. Choosing particular values, e.g., for level of transparency, at this point is simply guesswork.

It may be possible that the system-oriented methods for addressing lag, such as dead reckoning and decoupling, can be integrated more closely with the visual effects we describe here. If this is the case, it might be possible to provide the user with even more information about what is happening in the virtual world and why, without confusing her.

Acknowledgements
We would like to thank Andries van Dam and the Brown Graphics Group for their help, but most particularly Bob Zeleznik for help in porting his earlier work and in preparing the video and paper.

We would also like to thank our sponsors: grants from NSF, NASA, Microsoft, Sun Microsystems and Taco; hardware from SGI, Hewlett-Packard, Sun.

REFERENCES

1. Ronald Azuma and Gary Bishop. A frequency-domain analysis of head-motion prediction. In Robert Cook, editor, SIGGRAPH 95 Conference Proceedings, Annual Conference Series, pages 401-408. ACM SIGGRAPH, Addison Wesley, August 1995. Held in Los Angeles, California, 06-11 August 1995.

3. John Lasseter. Principles of traditional animation applied to 3D computer animation. Computer Graphics (SIGGRAPH '87 Proceedings), 21(4):35-44, July 1987.

4. Brenda Laurel. Computers as Theatre. Addison Wesley, 1991.

5. M. R. Macedonia, D. P. Brutzmann, M. J. Zyda, D. R. Pratt, P. T. Barham, J. Falby, and J. Locke. NPSNET: A multi-player 3D virtual environment over the Internet. In Pat Hanrahan and Jim Winget, editors, 1995 Symposium on Interactive 3D Graphics, pages 93-94. ACM SIGGRAPH, April 1995. ISBN 0-89791-736-7.

6. Toshikazu Ohshima, Hiroyuki Yamamoto, and Hideyuki Tamura. Gaze-directed adaptive rendering for interacting with virtual space. In Virtual Reality Annual International Symposium, pages 103-110, April 1996.

7. Chris Shaw, Jiandong Liang, Mark Green, and Yunqi Sun. The decoupled simulation model for virtual reality systems. Proceedings of CHI '92, pages 321-328, May 1992.

8. P. Strauss and R. Carey. An object-oriented 3D graphics toolkit. Computer Graphics (SIGGRAPH '92 Proceedings), 26(2):341-349, July 1992.

9. Josie Wernecke. The Inventor Toolmaker. Addison-Wesley, 1994.

10. G. Williams. The Apple Macintosh computer. Byte, 9(2):709-718, 1984.

11. Matthias Wloka and Robert C. Zeleznik. Interactive real-time motion blur. Visual Computer, pages 273-295, 1996.