WebNet 96

Proceedings of WebNet 96—
World Conference of the Web Society
San Francisco, California, USA; October 15-19, 1996

AACE ASSOCIATION FOR THE ADVANCEMENT OF COMPUTING IN EDUCATION

Copyright © 1996 by the Association for the Advancement of Computing in Education (AACE)

All rights reserved. No part of this book may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, without the prior written permission of the publisher.

The publisher is not responsible for the use which might be made of the information contained in this book.

Published by

Association for the Advancement of Computing in Education (AACE)
P.O. Box 2966
Charlottesville, VA 22902 USA

Printed in the USA

FULL PAPERS

BRIAN BEVIRT
Designing hypertext navigation tools
MARTIN BICHLER & STEFAN NUSSER
Developing structured WWW-sites with W3DT
JOHN BIGELOW
Developing an Internet section of a management course: Transporting learning Premises Across Media
MICHAEL BJORN & YIN YUE CHEN
The world-wide market: Living with the realities of censorship on the Internet
MANFRED BOGEN, MICHAEL LENZ & SUSANNE ZIER
A Broadcasting Company goes Internet
WILLIAM A. BOGLEY, JON DORBOLO, ROBERT O. ROBSON & JOHN A. SECHREST
New pedagogies and tools for Web-based calculus
ELLEN BORKOWSKI, DAVID HENRY, LIDA LARSEN & DEBORAH MATEIK
Supporting teaching and learning via the Web: Transforming hardcopy linear mindsets into Web flexible creative thinking
ALEXANDRA BOST
The WWW as a primary source of customer support
WOLFGANG BROLL
VRML and the Web: A fundament for multi-user virtual environments on the Internet
CHRIS BROWN & STEVE BENFORD
Tracking WWW Users: Experience from the Design of HyperVis
PETER BRUSILOVSKY, ELMAR SCHWARZ & GERHARD WEBER
A tool for developing adaptive electronic textbooks on WWW
ROBERT BUCCIGROSSI, ALBERT CROWLEY & DANIEL TURNER
A comprehensive system to develop secure Web accessible databases
ANTONIO CAPANI & GABRIEL DE DOMINICIS
Web Algebra
SCOTT CAPDEVIELLE (BUSINESS/CORPORATE)
Capturing the state of the web
CURTIS A. CARVER & CLARK RAY
Automating hypermedia course creation and maintenance
LEE LI-JEN CHEN & BRIAN R. GAINES
Methodological issues in studying and supporting awareness on the World Wide Web
CAROL A. CHRISTIAN
Innovative resources for educational and public information: Electronic services, data and information from NASA's Hubble Space Telescope and other NASA missions
BETTY COLLIS, TOINE ANDERNACH & NICO VAN DIEPEN
The Web as process tool and product environment for group-based project work in higher education
MARGARET CORBIT & MICHAEL HERZOG
Explorations: The Cornell Theory Center's online science book
CHANTAL D'HALLUIN, STEPHANE RETHORE, BRUNO VANHILLE & CLAUDE VIEVILLE
Designing a course on the Web: The point of view of a training institute
P.M.D. DE BRA
Teaching hypertext and hypermedia through the Web
DIETER W. FELLNER & OLIVER JUCKNATH
MRTspace: Multi-user 3D environments using VRML
BARRY FENN & JENNY SHEARER
Delivering the daily Us
RICHARD H. FOWLER, WENDY A.L. FOWLER & JORGE L. WILLIAMS
3D visualization of WWW semantic content for browsing and query formulation

Tracking WWW Users: Experience from the Design of HyperVisVR

Chris Brown
Communications Research Group
University of Nottingham, University Park
Nottingham NG7 2RD ENGLAND
ccb@cs.nott.ac.uk

Steve Benford
Communications Research Group
University of Nottingham, University Park
Nottingham NG7 2RD ENGLAND
sdb@cs.nott.ac.uk

Abstract: We are generally concerned with the development of a hypertext database visualization framework, HyperVisVR, which supports 3D collaborative visualization, dynamic responsiveness to database updates, and representation of the underlying database users in the visualization. In developing a plug-in database access module for the World Wide Web (WWW), we have needed to solve several problems related to the WWW's loosely-coupled, stateless architecture. One such problem is that of tracking WWW users as they move between pages and servers. This paper discusses current approaches to tracking WWW users, proposes new ones, and explores issues of privacy and mutuality with respect to the monitoring of hypertext access.

Introduction

As the WWW increases in size and complexity, there is a growing need for advanced tools to manage information overload. Image maps, navigation bars, server push/pull, Java, ActiveX and Shockwave are among the layout techniques being used by content providers to generate the site designs that predominate on the WWW today. Although these complex web site layouts may improve navigability within the site itself, when we consider the WWW as a global distributed hypertext database, they can cause confusion as browsing users are met with a bewildering array of non-standard navigational tools and controls. Despite the recent proliferation of these non-standard interfaces, hypertext navigability is not a new problem. In 1990, Jakob Nielsen addressed the issue of users becoming lost in hyperspace and proposed the use of overview diagrams (maps) and fish-eye techniques [Nielsen 1990]. These approaches have been adopted by several researchers trying to improve the navigability of the WWW, particularly Domel with his WebMap tool [Domel 1994], and Mukherjea with the Navigational View Builder [Mukherjea et al. 1995]. The motivation for this paper is a project entitled HyperVisVR, which is a framework for visualization of large hypermedia databases within a 3D virtual world. The driving goals of this framework are:

• Extensible through plug-in modules to support new databases and visualization styles.
• Fully dynamic visualization which is responsive to the viewer's movement through the virtual world.
• Database access is represented in the visualization: users moving through the database have 'embodiments'.
• Modifications to the database are reflected in the visualization.
• HyperVisVR applications support peer-peer communications to represent other visualization viewers in the visualization itself, and to share cached information about the database objects.

As an underlying database for HyperVisVR, the WWW has the benefit of being ubiquitously available throughout the world. However, we believe it poses some unique problems due to its loosely-coupled, stateless architecture. This makes it a particularly interesting first target for our visualization framework. To be more specific, the problems with the WWW as a database for HyperVisVR are:

• It is difficult to extract rich meta information about objects on the WWW.
• Objects and links are stored together, making it difficult to analyse the structure of the WWW without performing an exhaustive search through the objects themselves.
• As a connectionless, stateless protocol, HTTP makes it very difficult to track users and their actions on the database on a global scale.

However, we do not believe that these problems are limited to HyperVisVR; many researchers are trying to find ways around the limitations of HTTP as a hypertext database protocol, and of HTML as a hypertext object description language. Finding backwardly-compatible solutions to these problems would be of benefit to the research community as a whole. In this paper, we turn our attention to the specific problem of tracking users of the WWW as they browse and search the global WWW infrastructure. Potential applications of our ideas include:

• WWW providers might optimise their servers by analysing the paths of users through their servers.
• WWW providers might provide their clients with more detailed usage logs.
• WWW applications might provide a mutual awareness of users, leading to opportunities for encounters.
• Tools to help users understand the relation between their current location and its surrounding context.
• Search tools might use the activity of users around a set of WWW resources to indicate interest in them.

User Tracking

We now focus on the problem of tracking users as they pass through the WWW. Users can be tracked at three different places: a server can track users moving through it, a browser can be modified to provide tracking information whenever a database access is made, and proxy servers can track the users who use the proxy to access other areas of the WWW. The first two sub-sections below assume that all connections are made directly between the user agent and the origin server. Proxies further complicate user tracking techniques, and are considered in section 2.3.

Tracking at the Server

The key reason that user tracking at the server side is so difficult is that HTTP is stateless. Each request for an object from an HTTP server is completely separate, and cannot easily be associated with previous requests from the same user. The following sections discuss methods for user tracking that are available with current and proposed versions of HTTP, extensions to browsers that enable user tracking, and suggestions for improvements to HTTP which would make user tracking cleaner and more reliable.

In HTTP/1.0

There are several request headers specified in HTTP/1.0 [Berners-Lee et al. 1995] which can be used by the server to link up requests into a click-trail for a particular user in a given session. These are From and Referer [sic]. In addition, the IP address of the machine generating the request can be used as a basis for tracking. The From request header is an optional field, specified to contain the Internet e-mail address of the user. However, only a few less popular browsers actually make use of this field, as it is regarded by most to be a breach of the user's privacy to send personal information in each request. From fields are most widely used in WWW robots to identify the administrator in case there is a problem with the robot. The Referer request header can also help track click paths. This field can be used by the browser to specify the address of the resource from which the current request address was obtained. This can be interpreted by the server to link a previous request to the current one.
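
To make the idea concrete, the following sketch (ours, not part of the original HyperVisVR tooling) shows how a server-side CGI script might record one hop of a click-trail from the standard CGI environment variables that carry the client address and the From and Referer request headers. The log format and file path are illustrative assumptions.

```python
#!/usr/bin/env python3
# Hypothetical sketch: record one click-trail hop from CGI environment variables.
# REMOTE_ADDR, HTTP_REFERER and HTTP_FROM are the standard CGI names for the
# client address and the Referer/From request headers (absent if not sent).
import os
import time

LOG_PATH = "/var/log/httpd/clicktrail.log"  # assumed location

def log_hop():
    entry = "%s %s %s %s %s\n" % (
        time.strftime("%Y-%m-%dT%H:%M:%S"),
        os.environ.get("REMOTE_ADDR", "-"),    # basis for grouping requests
        os.environ.get("HTTP_FROM", "-"),      # rarely sent by browsers
        os.environ.get("HTTP_REFERER", "-"),   # page the user came from
        os.environ.get("SCRIPT_NAME", "-"),    # resource being served
    )
    with open(LOG_PATH, "a") as log:
        log.write(entry)

if __name__ == "__main__":
    log_hop()
    # Emit a minimal CGI response so the request still succeeds.
    print("Content-Type: text/plain\r\n\r\nlogged")
```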

As browsers follow outside links from the server to other servers, their movement can be tracked through the use of cgi-redirect scripts. All outside links on a server are modified so that they link to a local cgi-redirect script, and pass the location of the outside link to the script as a parameter. The cgi-redirect script can log the movement, and then issue a redirect to automatically route the browser to the outside resource.
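
A minimal cgi-redirect script along these lines might look as follows. This is our own illustrative sketch (the query-parameter name url and the log path are assumptions), not code from the paper.

```python
#!/usr/bin/env python3
# Hypothetical cgi-redirect sketch: log the outbound click, then bounce the
# browser to the outside resource via an HTTP redirect (Location header).
import os
import time
from urllib.parse import parse_qs

LOG_PATH = "/var/log/httpd/outbound.log"  # assumed location

query = parse_qs(os.environ.get("QUERY_STRING", ""))
target = query.get("url", [""])[0]  # outside link passed as ?url=...

with open(LOG_PATH, "a") as log:
    log.write("%s %s -> %s\n" % (
        time.strftime("%Y-%m-%dT%H:%M:%S"),
        os.environ.get("REMOTE_ADDR", "-"),
        target,
    ))

if target:
    # The web server turns a Location header from a CGI script into a redirect.
    print("Location: %s\r\n" % target)
else:
    print("Content-Type: text/plain\r\n\r\nmissing url parameter")
```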

Some sites use intelligent algorithms which analyse log files and link requests into click-trails through the server, using a combination of the IP address, the From and Referer request headers (if present), a maximum time between subsequent requests, and an understanding of the server structure to recognise which items a user is likely to access based on their current position in the server. It may be possible to do this analysis in real time as each request is served, but it would pose a large overhead on already over-loaded servers.
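
As a rough illustration of this kind of offline analysis, the sketch below groups Common Log Format entries into click-trails by client address and a maximum idle gap between requests. It is our own simplified example (the 30-minute threshold and the log-format assumptions are ours), not the algorithm used by any particular site.

```python
# Hypothetical sketch: group access-log entries into click-trails (sessions)
# using the client IP address and a maximum time gap between requests.
import re
from datetime import datetime, timedelta

MAX_GAP = timedelta(minutes=30)  # assumed session timeout

# Minimal Common Log Format parser: host, timestamp and request path only.
LOG_RE = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "(?:GET|POST|HEAD) (\S+)')

def click_trails(lines):
    trails = {}  # ip -> list of trails, each a list of (time, path)
    for line in lines:
        m = LOG_RE.match(line)
        if not m:
            continue
        ip, stamp, path = m.groups()
        when = datetime.strptime(stamp.split()[0], "%d/%b/%Y:%H:%M:%S")
        sessions = trails.setdefault(ip, [])
        # Start a new trail if there is none yet or the idle gap is too long.
        if not sessions or when - sessions[-1][-1][0] > MAX_GAP:
            sessions.append([])
        sessions[-1].append((when, path))
    return trails

if __name__ == "__main__":
    with open("access.log") as f:
        for ip, sessions in click_trails(f).items():
            for trail in sessions:
                print(ip, " -> ".join(path for _, path in trail))
```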

Authentication is often used by sites needing a fail-safe and universal way to track users through their servers. The advantages of this method are that it provides a method of user tracking which works with most browsers, and that it identifies individual users on subsequent visits even if they are connecting from a different computer. However, it is extremely inconvenient, as it requires users to first register with the site and then remember a username and password that they should use on all subsequent visits to the site. Many users find that it generally isn't possible to use the same username and password at all such sites, as many won't allow users their primary choice of username/password. Furthermore, based on our own experiences, we expect that many users who would otherwise look through the site may be dissuaded by the inconvenience of registering or remembering their username. Thus, in general, we would not recommend this technique for sites hoping to attract a high volume of traffic and which wouldn't otherwise use authentication to control access to their site.

HTTP/1.0 Extensions

The Cookie mechanism for client-side stateful transactions in HTTP is an extension to the HTTP protocol proposed by Netscape Corporation and implemented by the Netscape browser and several servers [Netscape 1995]. When a browser requests a resource from a server for the first time, the server responds with a cookie, which the browser stores and sends as part of each subsequent request. This allows the server to link up requests from a particular browser into a click-trail. Cookies can be persistent, linking requests from one browsing session with requests from the previous one.
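
The header exchange itself is simple. The following sketch (our illustration, with an assumed cookie name trail_id) shows a CGI script that issues a Set-Cookie header to a first-time browser and reads the Cookie header back on later requests to stitch them into one trail.

```python
#!/usr/bin/env python3
# Hypothetical sketch of cookie-based click-trail tracking from a CGI script.
# First visit: issue Set-Cookie with a fresh identifier. Later visits: the
# browser sends the identifier back in the Cookie header (HTTP_COOKIE in CGI).
import os
import uuid

COOKIE_NAME = "trail_id"  # assumed cookie name

def get_trail_id():
    raw = os.environ.get("HTTP_COOKIE", "")
    for part in raw.split(";"):
        name, _, value = part.strip().partition("=")
        if name == COOKIE_NAME and value:
            return value, False          # returning browser
    return uuid.uuid4().hex, True        # first visit: mint a new identifier

trail_id, is_new = get_trail_id()

headers = ["Content-Type: text/plain"]
if is_new:
    # Without an Expires attribute the cookie lasts for the browsing session;
    # adding Expires would make the trail persistent across sessions.
    headers.append("Set-Cookie: %s=%s; Path=/" % (COOKIE_NAME, trail_id))

print("\r\n".join(headers) + "\r\n")
print("your trail id is", trail_id)
```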

The Keep-Alive extension to HTTP/1.0 allows several resources to be requested over a single connection. It is implemented by the Netscape browser and several servers. In principle it allows several requests to be matched up as coming from one browser. However, in practice browsers use it only for single pages, requesting the page itself and all its embedded objects over one connection. This limits the usefulness of the extension for following a browser between distinct pages.

In HTTP/1.1

The HTTP/1.1 proposal introduces a new persistent connection architecture as the default connection type [Fielding et al. 1996]. This supersedes the Keep-Alive extension header described in 2.1.2. Any number of requests can be made on a single connection, until either the server or the browser closes the connection. The specification does not make clear the circumstances under which connections should be closed or maintained, and as, at the time of writing, there are no widespread implementations of the protocol, it is difficult to comment on whether this new architecture will improve user tracking. It is likely, however, that this architecture will not help with matching click-trails where requests are separated by hours or days.

Tracking at the Browser

The chief difficulty with server-side click-trail tracking using any of the mechanisms described in section 2.1 is that you can only track requests to the server. Frequently, browsers cache pages, and provide history mechanisms to allow navigation 'Back' to the previous and 'Forward' to the next page in the history. The browsers quite correctly do not generate new requests for this navigation. A consequence of this, however, is that it is not possible to maintain an accurate position of a user within a site if the user has navigated using the history mechanism. This is a major problem, as a study at the Georgia Institute of Technology analysed browsing strategies and determined that a total of 42.7% of navigation was through the history mechanism [Catledge et al. 1995].

An alternative to user-tracking at the server side is to extend users' browsers to send usage information to interested parties whenever a new page is accessed. This can be implemented very easily with Mosaic's Common Client Interface (CCI) and a small helper application which connects to the CCI port of the browser and relays WWW movement information via TCP or multicast to interested parties. Both WebCast [Burns 1995] and FollowWWW [Brown et al. 1996] are applications which make use of this technique. An alternative would be a Netscape plug-in which monitors the actions of the user and sends movement information.
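
The relay side of such a helper is straightforward. The sketch below is our own (the multicast group, port and one-line message format are assumptions, and it does not use the actual CCI or WebCast protocols); it simply announces each visited URL to interested listeners over UDP multicast.

```python
# Hypothetical sketch of the relay half of a browser-tracking helper: announce
# each page the local browser visits to interested parties via UDP multicast.
# The group address, port and one-line message format are our assumptions.
import socket
import time

MCAST_GROUP = "239.255.42.42"   # assumed administratively-scoped group
MCAST_PORT = 4242               # assumed port

def announce(user, url, sock=None):
    sock = sock or socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # stay local
    message = "%s %s %s" % (time.strftime("%H:%M:%S"), user, url)
    sock.sendto(message.encode("ascii"), (MCAST_GROUP, MCAST_PORT))

if __name__ == "__main__":
    # In a real helper these URLs would come from the browser (e.g. via CCI);
    # here we just simulate two page visits.
    announce("ccb", "http://www.cs.nott.ac.uk/")
    announce("ccb", "http://www.cs.nott.ac.uk/Research/")
```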

The major disadvantage of these schemes is that they need to be explicitly configured by the users. The Netscape plug-in needs to be downloaded and installed; the Mosaic CCI application needs to be downloaded and installed, and then Mosaic needs to be configured and the helper started at the start of each session. It is likely that, due to this inconvenience, few users will download and install the tracking software unless some form of incentive is given (free access to a subscription service, etc.). Furthermore, there are then the problems shared by any public software developer: supporting multiple architectures, user support, fixing bugs and notifying users of updates.

Commentary

The loss of real-time server-side tracking accuracy due to history mechanisms described in Section 2.1 might suggest that a browser-oriented tracking mechanism would be more desirable. However, whilst browser-side tracking is certainly more accurate, it suffers from scalability problems, and potentially low uptake due to the explicit configuration required. It may be possible to increase the accuracy of server-side tracking with certain browsers through several techniques. One such technique is the use of an anchor to an unavailable background image, embedded in the page. Netscape and Mosaic's history mechanism causes all unloaded images to be re-requested whenever the page is revisited, so careful analysis of requests for this unavailable image, including the Referer header, will indicate when a particular page has been revisited through the history mechanism.
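
A small log filter makes this concrete. The sketch below is ours (the tracker-image path /img/track.gif and the combined-log-format assumption are illustrative); it counts page displays from 404 requests for the unavailable image, which can then be compared against ordinary page requests to spot history revisits.

```python
# Hypothetical sketch: infer page displays from requests for a deliberately
# unavailable tracker image embedded in each page. Because the image never
# loads, the browser re-requests it every time the page is shown, including
# displays from the history that generate no request for the page itself.
# The image path and combined-log-format assumption are ours.
import re
from collections import Counter

TRACKER_IMAGE = "/img/track.gif"
# host ident user [date] "GET /path HTTP/1.0" status size "referer" "agent"
LINE_RE = re.compile(r'^(\S+) .*"GET (\S+)[^"]*" (\d{3}) \S+ "([^"]*)"')

def display_counts(lines):
    shown = Counter()  # (client, page) -> number of times the page was displayed
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue
        ip, path, status, referer = m.groups()
        if path == TRACKER_IMAGE and status == "404" and referer:
            shown[(ip, referer)] += 1  # Referer names the page being displayed
    return shown

if __name__ == "__main__":
    with open("access.log") as f:
        for (ip, page), count in display_counts(f).items():
            # Comparing these counts with ordinary requests for the same page
            # reveals displays that came from the history mechanism.
            print(ip, page, "displayed", count, "times")
```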

A hybrid approach which combines server-side and browser-side tracking would be for the server to include a reference to a small Java tracking applet with each page. The applet would have the sole responsibility of contacting the server each time the user departs a page or navigates back to a page through the history mechanism. The applet would be stored in the browser's cache, and so wouldn't be loaded across the network for each page. The Java applet may even be able to track the browsing activity levels on the workstation to determine if a user is actively viewing a page. Doing so, however, might be regarded as a breach of privacy.
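
On the server side of such a hybrid scheme, a very small service is enough to collect the applet's notifications. The sketch below is our own illustration of that receiving end (the port, the query parameters page and event, and the use of simple HTTP GET pings are all assumptions), not code from the HyperVisVR system.

```python
# Hypothetical sketch of a tiny tracking server that an embedded tracking
# applet could ping, e.g. GET /track?page=/index.html&event=leave.
# Port number, parameter names and event vocabulary are our assumptions.
import time
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

PORT = 8765  # assumed port for the tracking service

class TrackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        query = parse_qs(urlparse(self.path).query)
        page = query.get("page", ["-"])[0]
        event = query.get("event", ["-"])[0]   # e.g. "enter", "leave", "revisit"
        print("%s %s %s %s" % (time.strftime("%H:%M:%S"),
                               self.client_address[0], event, page))
        self.send_response(204)  # no content; the ping needs no reply body
        self.end_headers()

    def log_message(self, fmt, *args):
        pass  # keep the built-in per-request logging quiet

if __name__ == "__main__":
    HTTPServer(("", PORT), TrackHandler).serve_forever()
```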

Both of these approaches could be handled within the WWW server which issues the resources, or they could be served by 'tracking servers', in a similar way to the current proliferation of 'page counting servers'. Tracking servers could handle page tracking for a number of different sites. They could act standalone for their benefit only, or make the tracking information available to applications such as HyperVisVR through TCP, UDP or multicast communication.

Cache/Firewall Problems

The need to accurately track the movement of browsers clashes horribly with the application of proxy cache servers and firewalls to drastically reduce the amount of network bandwidth consumed by redundant requests. The crux of the problem is the load-based algorithm proxy caches use to determine how long a particular object should be cached for. This results in the original WWW server 'seeing' extremely few requests from proxy servers for its most popular resources (as the caches store these popular objects), and a disproportionately high volume of requests for its least popular resources. As the popularity of proxy caches increases, this could completely invalidate the use of visualizations such as HyperVisVR, which rely heavily on usage and popularity information. Possible solutions to the proxy cache problem can be broadly categorised into 'ignoring the cache', 'beating the cache', and 'working with the cache'. We now discuss each of these in turn.

It is possible to completely ignore the WWW population which accesses the server from behind a proxy cache by disregarding all requests from the cache which contain a 'Via' header or the word 'via' in the User-Agent header. Ignoring these misleading requests would seem like a good, straightforward approach to the problem. However, it is not possible to assume that the 'direct access population' will be a representative sample of the entire population of requests. Many WWW users access the Web through a cache because of organisational rules or country-based bandwidth problems, so by eliminating these users from the statistics you could unwittingly be excluding whole classes of users from the tracking statistics.

Many content providers have resorted to 'beating the cache' when attempting to obtain full access statistics and tracking information. HTTP/1.0 specifies a 'Pragma: no-cache' header, which is an instruction to the cache server not to cache that object. However, due to the misuse of this header by content providers, many proxy cache administrators have resorted to ignoring it and caching the resource anyway. Cache-busting methods include appending a random segment to the URL, which confuses the cache into thinking that all the resources are different, and generating all pages dynamically through a cgi-script, the results of which do not get cached.
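
As a small illustration of the URL-randomisation trick, the sketch below (ours; the query-parameter name nocache is an assumption) appends a random segment to each link so that a proxy treats every fetch as a distinct resource. This is exactly the behaviour that cache administrators object to; it is shown only to make the technique concrete.

```python
# Hypothetical sketch of the 'random segment' cache-busting trick: rewrite a
# link so each fetch looks like a different resource to a proxy cache.
# The parameter name "nocache" is our assumption.
import random
import string

def bust_cache(url):
    token = "".join(random.choice(string.ascii_lowercase) for _ in range(8))
    separator = "&" if "?" in url else "?"
    return "%s%snocache=%s" % (url, separator, token)

# Example: every generated page would carry links like
#   http://www.example.org/index.html?nocache=qwzortyx
print(bust_cache("http://www.example.org/index.html"))
```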

Working with the cache implies that you allow the proxy to cache your resources, and that you are supplied with a log of cache-served resources originating from your server. This is not feasible without some form of automated system, as the number of proxy-cache servers is huge, and the structure of the proxies is completely uncontrolled. Such systems are under development, and when ready, it may be in the proxy cache's best interest to supply such information, as it might help the cache-busters in their defection from enemy to ally! To send information about each request to the origin server of every resource in the cache would severely impact Internet bandwidth, and reduce the benefit of running a cache in the first place. It may be feasible, however, to consider some form of periodic batched transfer of information to selected origin servers that requested the information by setting a special header when the resource was first requested from the server.

It is worth taking a paragraph to consider the impact of proxy caches on the unavailable image and Java applet solutions presented above. Based on experiments with NCSA httpd 1.5.1, proxies do not cache 404 unavailable responses, which means that this solution will work to track users behind proxy caches and firewalls. Furthermore, it will not impact the network as much as cache-busting algorithms, because no data is being transferred, only the request for the unavailable image. Java applets are cached by the proxy, but they are still allowed to communicate directly with the tracking server. There have been some problems with running networked Java applets from within Netscape behind firewalls. At the time of writing, the author knows of no solution to this particular problem.

This concludes our discussion of current and possible techniques for tracking users as they access the World Wide Web. The following section now briefly discusses some of the ethical issues that arise as a result of this idea.

Privacy and Ethical Considerations

Techniques for tracking users, as discussed in this paper, raise a number of ethical issues concerning privacy and security. Such issues are complex and hardly ever clear cut. However, they are also of great importance and so warrant discussion and consideration during the technical development process. We begin by identifying the kinds of information that might be gathered about the presence and activity of users on the WWW. These include:

• Monitoring general access trends - recording patterns of access by groups of people.
• Anonymous monitoring of individuals - recording details of an individual's access but ignoring their identity.
• Non-anonymous monitoring of individuals - also recording the identities of people accessing the WWW.
• Persistent recording vs instantaneous awareness - deciding whether monitoring information is recorded for subsequent storage, analysis and use, or whether it is only made available at the time of access (e.g. to enable chance encounters and stimulate social interaction).

There may also be many possible uses of such information. For example:

• Making colleagues generally aware of each other's presence, in much the same way that shared buildings and offices support the coordination of activity through casual awareness between their occupants.
• Encouraging chance encounters between people browsing the same or related information.
• By information and service providers in order to enhance services (e.g. developing new paths through information based on analysis of patterns of use) or as part of billing and accounting.
• To enhance security by providing better awareness of who is accessing which sites and information (in the same way that video surveillance improves the security of many urban areas).
• It may be made available to third parties such as advertisers and government agencies.

It is important to point out that this kind of tracking may have clear positive benefits in some circumstances, but may have a negative impact in others. For example, information about an individual's preferences and activities might be used both to their benefit (to tailor systems to their needs) and also to their disadvantage (to compile personal profiles). Consequently, we propose three principles to guide the application and use of tracking techniques:

• Notification - visitors should be notified in advance of the mechanisms and policies that are in operation at a given site so that they can decide whether to visit or not.
• Mutuality - in general, systems should be designed so as to provide some degree of symmetry or mutuality as to awareness of presence. Thus, visitors to a site who are being monitored should be aware of the presence of their observers. Furthermore, it might be sensible to maintain a rough balance between the levels of information known by each party.
• Balance of power - both the observer and the observed should be able to influence the level of awareness or monitoring. In particular, it might be argued that people should not be made visible without their consent and should not be able to become invisible without the consent of those around them.

Related Work to HyperVisVR

Related work fits into three main categories: visualizing hypertext databases, visualizing access to hypertext database servers, and mutual awareness mechanisms for users simultaneously accessing hypertext resources. Visualizing hypertext databases has been a hot topic at WWW and hypertext related conferences. The approaches can be grouped into two categories. The first is those that follow a user through the WWW and create visual maps of browsing history, such as [Ayers et al. 1995], [Domel 1994] and [Takano et al. 1996]. Slightly different from these is HyperSpace, which uses a 3D map to display browsing history; however, it does not update the visualization automatically to show the current browsing position [Wood et al. 1995]. Another area of hypertext visualization research concerns the collection of a dataset from the hypertext database to visualize directly. 2D visualisations include Venn diagrams [Ralha et al. 1995], graphs [Mukherjea et al. 1995] and a distance-representing relief structure [Girardin 1996]. Virtual terrains (2.5D) have been used in the Hyper-G Harmony browser [Andrews 1995]. Full 3D visualisations include WebViz and GopherVR. WebViz uses a batch-oriented approach to retrieve documents from a WWW server, parse them into a hyperbolic tree structure, and display them [Munzner et al. 1995]. GopherVR is a 3D spatial interface to the Gopher system [McCahill et al. 1995]. Other research has focused on visualizing the access patterns on a particular server. Lamm et al. discuss a system which displays a globe in virtual reality, and maps the level of accesses to their WWW server from a particular geographical region onto the height of a bar extending from the appropriate location on the virtual globe [Lamm et al. 1996]. However, caches, firewalls, and service providers upset their technique, as they mask the true origin of the browser.

Turning briefly to some of our own work in this area, the Internet Foyer is a virtual reality WWW visualization of a set of pages, with visual representations of browsing users moving over the structure as they access the pages and move between them. It supports mutual awareness of WWW users on the same or similar pages concurrently. It also has a link to the real world: a real-time image of the visualization is projected onto the wall of a real foyer, and a video wall within the visualization allows VR users to see back into the real foyer. As such it links three spaces: the real world, VR and the WWW, and can be thought of as a "mixed reality" [Benford et al. 1996]. The Internet Foyer was the precursor of HyperVisVR, which adds fully dynamic visualization, peer-peer networking, generic hypertext database support and plug-in visualization styles.

References

[Andrews 1995] Andrews, K. (1995). Visualizing Cyberspace: Information Visualization in the Harmony Internet Browser. First IEEE Symposium on Information Visualization, Atlanta, GA, Oct. 1995, pp. 97-104.

[Ayers et al. 1995] Ayers, E.Z., & Stasko, J.T. (1995). Using Graphic History in Browsing the World Wide Web. Fourth International WWW Conference, 1995.

[Benford et al. 1996] Benford, S.D., Brown, C.C., Reynard, G.T., & Greenhalgh, C.M. (1996). Shared Spaces: Transportation, Artificiality and Spatiality. To appear in Proc. CSCW'96, Boston, November 1996, ACM Press.

[Berners-Lee et al. 1995] Berners-Lee, T., Fielding, R., & Frystyk, H. (1995). Hypertext Transfer Protocol 1.0. Work in progress, expiring on August 19, 1995. <draft-ietf-http-v10-spec-05.txt> (now RFC 1945).

[Brown et al. 1996] Brown, C., Benford, S., & Snowdon, D. (1996). Collaborative Visualization of Large Scale Hypermedia Databases. In CSCW and the Web, Proceedings of the 5th ERCIM/W4G Workshop, Arbeitspapiere der GMD 984, GMD, Sankt Augustin, April 1996. http://orgwis.gmd.de/W4G/proceedings/visual.html

[Burns 1995] Burns, E. (1995). Collaborative Document Sharing via the MBONE. http://www.ncsa.uiuc.edu/SDG/Software/XMosaic/CCI/webcast.html

[Catledge et al. 1995] Catledge, L.D., & Pitkow, J.E. (1995). Characterizing Browsing Strategies in the WWW. Third International WWW Conference, 1995.

[Domel 1994] Domel, P. (1994). WebMap - A Graphical Hypertext Navigation Tool. Second International WWW Conference, 1994. http://www.ncsa.uiuc.edu/SDG/IT94/Proceedings/Searching/doemel/www-fall94.html

[Girardin 1996] Girardin, L. (1996). Mapping the Virtual Geography for the World Wide Web. Poster Proceedings of the Fifth International WWW Conference, 1996. http://heiwww.unige.ch/girardin/cgv/report/

[Fielding et al. 1996] Fielding, R., Frystyk, H., & Berners-Lee, T. (1996). Hypertext Transfer Protocol 1.1. Work in progress, expiring on December 7, 1996. <draft-ietf-http-v11-spec-07.html>

[Lamm et al. 1996] Lamm, S.E., Reed, D.A., & Scullin, W.H. (1996). Real-time geographic visualization of World Wide Web traffic. Computer Networks and ISDN Systems, 28 (1996), 1457-1468.

[McCahill et al. 1995] McCahill, M.P., & Erickson, T. (1995). Design for a 3D Spatial User Interface for Internet Gopher. Proceedings of ED-MEDIA'95, pp. 39-44, Graz, Austria, June 1995. AACE.

[Netscape 1995] Netscape Corporation (1995). Persistent Client State HTTP Cookies Preliminary Specification. http://home.netscape.com/newsref/std/cookie_spec.html

[Mukherjea et al. 1995] Mukherjea, S., & Foley, J. (1995). Visualizing the WWW with the Navigational View Builder. Third International WWW Conference, 1995. http://www.igd.fhg.de/www/www95/proceedings/papers/44/mukh/mukh.html

[Munzner et al. 1995] Munzner, T., & Burchard, P. (1995). Visualizing the Structure of the World Wide Web in 3D Hyperbolic Space. http://www.geom.umn.edu/docs/research/webviz/

[Nielsen 1990] Nielsen, J. (1990). The Art of Navigating Hypertext. Communications of the ACM, 33(3), 297-310. ACM Press.

[Ralha et al. 1995] Ralha, C.G., & Cohn, A.G. (1995). Building maps of hyperspace. Internet Multimedia Information: WWW National Conference, 1995, Minho University, Portugal. http://agora.leeds.ac.uk/spacenet/ghedini.html

[Spero 199?] Spero, S. Progress on HTTP-NG. http://www.w3.org/pub/WWW/Protocols/HTTP-NG/http-ng-status.html

[Takano et al. 1996] Takano, T., Kubo, N., Shimamura, H., & Matsuura, H. (1996). A Directory Service Architecture for the World Wide Web. Poster Proceedings of the Fifth International WWW Conference, 1996.

[Wood et al. 1995] Wood, A.M., Drew, N.S., Beale, R., & Hendley, R.J. (1995). HyperSpace: Browsing with Visualization. Poster Proceedings of the Third International WWW Conference, Darmstadt, Germany. http://www.igd.fhg.de/www/www95/proceedings/posters/35/index.html