
The MAGIC Project: From Vision to Reality

Barbara Fuller, Mitretek Systems
Ira Richer, Corporation for National Research Initiatives
Abstract
In the MAGIC project, three major components — an ATM internetwork, a distributed, network-based storage system, and a terrain visualization application — were designed, implemented, and integrated to create a testbed for demonstrating real-time, interactive exchange of data at high speeds among distributed resources. The testbed was developed as a system, with special consideration to how performance was affected by interactions among the components. This article presents an overview of the project, with emphasis on the challenges associated with implementing a complex distributed system, and with coordinating a multi-organization collaborative project that relied on distributed development. System-level design issues and performance measurements are described, as is a tool that was developed for analyzing performance and diagnosing problems in a distributed system. The management challenges that were encountered and some of the lessons learned during the course of the three-year project are discussed, and a brief summary of MAGIC-II, a recently initiated follow-on project, is given.
Gigabit-per-second networks offer the promise of a major advance in computing and communications: high-speed access to remote resources, including archives, time-critical data sources, and processing power. Over the past six years, there have been several efforts to develop gigabit networks and to demonstrate their utility, the most notable being the five testbeds that were supported by ARPA and National Science Foundation (NSF) funding: Aurora, BLANCA, CASA, Nectar, and VISTAnet [1]. Each of these testbeds comprised a mix of applications and networking technology, with some focusing more heavily on applications and others on networking. The groundbreaking work done in these testbeds had a significant impact on the development of high-speed networking technology and on the rapid progress in this area in the 1990s.

It became clear, however, that a new paradigm for application development was needed in order to realize the full benefits of gigabit networks. Specifically, network-based applications and their supporting resources, such as data servers, must be designed explicitly to operate effectively in a high-speed networking environment. For example, an interactive application working with remote storage devices must compensate for network delays. The MAGIC project, which is the subject of this article, is the first high-speed networking testbed that was implemented according to this paradigm. The major components of the testbed were considered to be interdependent parts of a system, and wherever possible they were designed to optimize end-to-end system performance rather than individual component performance.

The objective of the MAGIC (which stands for "Multidimensional Applications and Gigabit Internetwork Consortium") project was to build a testbed that could demonstrate real-time, interactive exchange of data at gigabit-per-second rates among multiple distributed resources. This objective was pursued through a multidisciplinary effort involving concurrent development and subsequent integration of three testbed components:
• An innovative terrain visualization application that requires massive amounts of remotely stored data
• A distributed image server system with performance sufficient to support the terrain visualization application
• A standards-based high-speed internetwork to link the computing resources required for real-time rendering of the terrain

The three-year project began in mid-1992 and involved the participation, support, and close cooperation of many diverse organizations from government, industry, and academia. These organizations had complementary skills and had the foresight to recognize the benefits of collaboration. The principal MAGIC research participants were:
• Earth Resources Observation System Data Center, U.S. Geological Survey (EDC)¹
• Lawrence Berkeley National Laboratory, U.S. Department of Energy (LBNL)¹
• Minnesota Supercomputer Center, Inc. (MSCI)¹
• MITRE Corporation¹
• Sprint
• SRI International (SRI)¹
• University of Kansas (KU)¹
• US WEST Communications, Inc.
Other MAGIC participants that contributed equipment, facilities, and/or personnel to the effort were:
• Army High-Performance Computing Research Center (AHPCRC)
• Battle Command Battle Laboratory, U.S. Army Combined Arms Command (BCBL)
• Digital Equipment Corporation (DEC)
• Nortel, Inc./Bell Northern Research
• Southwestern Bell Telephone
• Splitrock Telecom

This article presents an overview of the MAGIC project with emphasis on the challenges associated with implementing a complex distributed system. Companion articles [2, 3] focus on a LAN/WAN gateway and a performance analysis tool that were developed for the MAGIC testbed. The article is organized as follows. The following section briefly describes the three major testbed components: the internetwork, the image server system, and the application. The third section discusses some of the system-level considerations that were addressed in designing these components, and the fourth section presents some high-level performance measurements. The fifth (affectionately entitled "Herding Cats") and sixth sections describe how this multi-organizational collaborative project was coordinated, and the technical and managerial lessons learned. Finally, the last section provides a brief summary of MAGIC-II, a follow-on project begun in early 1996.
The work reported here was performed while the authors were with the MITRE Corp. in Bedford, MA, and was supported by the Advanced Research Projects Agency (ARPA) under contract F19628-94-D-001.

¹ These organizations were funded by ARPA.
■ Figure 1. Planned functionality of the MAGIC testbed. [The figure shows the planned data flow: an image server system (storage and transmission of raw image tiles), distributed processing (real-time image processing), an image server system (storage and transmission of processed tiles), a rendering engine (rendering and visualization of terrain), and workstations (over-the-shoulder view of terrain).]
Overview of the MAGIC Testbed

One of the primary goals of the MAGIC project was to create a testbed to demonstrate advanced capabilities that would not be possible without a very high-speed internetwork. MAGIC accomplished this goal by implementing an interactive terrain visualization application, TerraVision, that relies on a distributed image server system (ISS) to provide it with massive amounts of data in real time. The planned functionality of the MAGIC testbed is depicted in Fig. 1. Currently, TerraVision uses data processed off-line and stored on the ISS. In the future the application will be redesigned to enable real-time image processing as well as real-time terrain visualization (see the last section). Note that the workstations which house the application, the servers of the ISS, and the "over-the-shoulder" tool (see the subsection entitled "The Terrain Visualization Application"), as well as those that will perform the on-line image processing, can reside anywhere on the network.
The MAGIC Internetwork
The MAGIC internetwork, depicted in Fig. 2, includes six high-speed local area networks (LANs) interconnected by a wide area network (WAN) backbone. The backbone, which spans a distance of approximately 600 miles, is based on synchronous optical network (SONET) technology and provides OC-48 (2.4 Gb/s) trunks, and OC-3 (155 Mb/s) and OC-12 (622 Mb/s) access ports. The LANs are based on asynchronous transfer mode (ATM) technology. Five of the LANs — those at BCBL in Fort Leavenworth, Kansas, EDC in Sioux Falls, South Dakota, MSCI in Minneapolis, Minnesota, Sprint in Overland Park, Kansas, and U S WEST in Minneapolis, Minnesota — use FORE Systems models ASX-100 and ASX-200 switches with OC-3c and 100 Mb/s TAXI interfaces. The ATM LAN at KU in Lawrence, Kansas, uses a DEC AN2 switch, a precursor to the DEC GigaSwitch/ATM, with OC-3c interfaces. The network uses permanent virtual circuits (PVCs) as well as switched virtual circuits (SVCs) based on both SPANS, a FORE Systems signaling protocol, and the ATM Forum User-Network Interface (UNI) 3.0 Q.2931 signaling standard.

■ Figure 2. Configuration of the MAGIC ATM internetwork. [The figure shows the six sites linked by SONET OC-48 trunks and SONET OC-12 or OC-3 access circuits. Workstations include DEC, SGI, and Sun machines for the ISS and the over-the-shoulder tool, and SGI machines for terrain visualization.]
■ Figure 3. Relationship between tile resolutions and perspective view. (Source: SRI International) [The figure shows image tiles of terrain data at several resolutions contributing to a single perspective view.]
The workstations at the MAGIC sites include models from DEC, SGI, and Sun. As part of MAGIC, an AN2/SONET gateway with an OC-12c interface was developed to link the AN2 LAN at KU to the MAGIC backbone [2].

In addition to implementing the internetwork, a variety of advanced networking technologies were developed and studied under MAGIC. A high-performance parallel interface (HIPPI)/ATM gateway was developed to interface an existing HIPPI network at MSCI to the MAGIC backbone. The gateway is an IP router rather than a network-layer device such as a broadband integrated services digital network (B-ISDN) terminal adapter, and was implemented in software on a high-performance workstation (an SGI Challenge). This architecture provides a programmable platform that can be modified for network research, and in the future can readily take advantage of more powerful workstation hardware. In addition, the platform is general-purpose; that is, it is capable of supporting multiple HIPPI interfaces as well as other interfaces such as fiber distributed data interface (FDDI).

Software was developed to enable UNIX hosts to communicate using Internet Protocol (IP) over an ATM network. This IP/ATM software currently runs on SPARCstations under SunOS 4.1 and includes a device driver for the FORE SBA series of ATM adapters. It supports PVCs, SPANS, and UNI 3.0 signaling, as well as the "classical" IP and Address Resolution Protocol (ARP) over ATM model [4]. The software should be extensible to other UNIX operating systems, ATM interfaces, and IP/ATM address-resolution and routing strategies, and will facilitate research on issues associated with the integration of ATM networks into IP internets.

In order to enhance network throughput, flow-control schemes were evaluated and applied, and IP/ATM host parameters were tuned. Experiments showed that throughput close to the maximum theoretically possible could be attained on OC-3 links over long distances. To achieve high throughput, both the maximum transmission unit (MTU) and the Transmission Control Protocol (TCP) window must be large, and flow control must be used to ensure fairness and to avoid cell loss if there are interacting traffic patterns [5, 6].
The Terrain Visualization Application
TerraVision allows a user to view and navigate through (i.e., "fly over") a representation of a landscape created from aerial or satellite imagery [7]. The data used by TerraVision are derived from raw imagery and elevation information which have been preprocessed by a companion application known as TerraForm. TerraVision requires very large amounts of data in real time, transferred at both very bursty and high steady rates. Steady traffic occurs when a user moves smoothly through the terrain, whereas bursty traffic occurs when the user jumps ("teleports") to a new position. TerraVision is designed to use imagery data that are located remotely and supplied to the application as needed by means of a high-speed network. This design enables TerraVision to provide high-quality, interactive visualization of very large data sets in real time. TerraVision is of direct interest to a variety of organizations, including the Department of Defense. For example, the ability of a military officer to see a battlefield and to share a common view with others can be very effective for command and control. Terrain visualization with TerraVision involves two activities: generating the digital data set required by the application, and rendering the image. MAGIC's approach to accomplishing these activities is described below. Enhancements to the application that provide additional features and capabilities are also described.
Data Preparation — In order to render an image, TerraVision requires a digital description of the shape and appearance of the subject terrain. The shape of the terrain is represented by a two-dimensional grid of elevation values known as a digital elevation model (DEM). The appearance of the terrain is represented by a set of aerial images, known as orthographic projection images (ortho-images), that have been specially processed (i.e., ortho-rectified) to eliminate the effects of perspective distortion, and are in precise alignment with the DEM. To facilitate processing, distributed storage, and high-speed retrieval over a network, the DEM and images are divided into small fixed-size units known as tiles.

Low-resolution tiles are required for terrain that is distant from the viewpoint, whereas high-resolution tiles are required for close-in terrain. In addition, multiple resolutions are required to achieve perspective. These requirements are addressed by preparing a hierarchy of increasingly lower-resolution representations of the DEM and ortho-image tiles in which each level is at half the resolution of the previous level. The tiled, multiresolution hierarchy and the use of multiple resolutions to achieve perspective are shown in Fig. 3.

Rendering of the terrain on the screen is accomplished by combining the DEM and ortho-image tiles for the selected area at the appropriate resolution. As the user travels over the terrain, the DEM tiles and their corresponding ortho-image tiles are projected onto the screen using a perspective transform whose parameters are determined by factors such as the user's viewpoint and field of view. The mapping of a transformed ortho-image to its DEM and the rendering of that image are shown in Fig. 4.

The data set currently used in MAGIC covers a 1200 km² exercise area of the National Training Center at Fort Irwin, California, and is about 1 Gpixel in size. It is derived from aerial photographs obtained from the National Aerial Photography Program archives and DEM data obtained from the U.S. Geological Survey. The images are at approximately 1 m resolution (i.e., the spacing between pixels in the image corresponds to 1 m on the ground). The DEM data are at approximately 30 m resolution (i.e., elevation values in meters are at 30 m intervals).

Software for producing the ortho-images and creating the multiresolution hierarchy of DEM and ortho-image tiles was developed as part of the MAGIC effort. These processes were performed "off-line" on a Thinking Machines Corporation Connection Machine (CM-5) supercomputer owned by the AHPCRC and located at MSCI. The tiles were then stored on the distributed servers of the ISS and used by terrain visualization software residing on rendering engines at several locations.
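As a concrete illustration of the tiled, multiresolution hierarchy, the short sketch below builds the level structure for a data set of roughly the size used in MAGIC. The 1-Gpixel figure comes from the article; the 128-pixel tile edge and the square-image simplification are assumptions made only for illustration.

```python
import math

# Illustrative tile pyramid for a ~1-Gpixel ortho-image data set.
FULL_RES_PIXELS = 1_000_000_000                 # ~1 Gpixel at the finest level (article)
TILE_EDGE = 128                                 # assumed fixed tile edge, in pixels
edge_pixels = int(math.sqrt(FULL_RES_PIXELS))   # treat the data set as square

level = 0
total_tiles = 0
while edge_pixels >= TILE_EDGE:
    tiles_per_side = math.ceil(edge_pixels / TILE_EDGE)
    n_tiles = tiles_per_side ** 2
    total_tiles += n_tiles
    print(f"level {level}: {edge_pixels:>6} px per side, {n_tiles:>7} tiles")
    edge_pixels //= 2          # each level is half the resolution of the previous one
    level += 1

print(f"total tiles in the hierarchy: {total_tiles}")
# Because each coarser level holds one quarter of the pixels of the level below,
# the whole hierarchy costs only about one third more storage than the
# full-resolution level alone.
```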
■ Figure 4. Mapping an ortho-image onto its digital elevation model. (Source: SRI International) [The figure shows an aerial terrain image and an elevation model; the aerial terrain image is mapped onto the elevation model, and the elevation data are rendered with the orthographic image of the terrain.]

Image Rendering — TerraVision provides for two modes of visualization: two-dimensional (2-D) and three-dimensional (3-D). The 2-D mode allows the user to fly over the terrain, looking only straight down. The user controls the view by means of a 2-D input device such as a mouse. Since virtually no processing is required, the speed at which images are generated is limited by the throughput of the system comprising the ISS, the network, and the rendering engine.

In the 3-D mode, the user controls the visualization by means of an input device that allows six degrees of freedom in movement. The 3-D mode is computationally intensive, and satisfactory visualization requires both high frame rates (i.e., 15-30 frames/s) and low latencies (i.e., no more than 0.1 s between the time the user moves an input device and the time the new frame appears on the screen).

High frame rates are achieved by using a local very-high-speed rendering engine, an SGI Onyx, with a cache of tiles covering not only the area currently visible to the user, but also adjacent areas that are likely to be visible in the near future. A high-speed search algorithm is used to identify the tiles required to render a given view. For example, as noted above, perspective (i.e., 3-D) views require higher-resolution tiles in the foreground and lower-resolution tiles in the background. TerraVision requests the tiles from the ISS, places them in memory, and renders the view. Latency is minimized by separating image rendering from data input/output (I/O) so that the two activities can proceed simultaneously rather than sequentially (see the section entitled "Design Considerations").
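The separation of rendering from data I/O can be pictured with the following minimal sketch, which is not TerraVision code: a background thread fills a shared tile cache while the render loop draws whatever tiles have already arrived and never blocks on the network. The queue, the tile naming, the sleep intervals, and the function names are all hypothetical.

```python
import queue
import threading
import time

# Hypothetical illustration of decoupling tile I/O from rendering so the two
# activities proceed concurrently (all names and timings are invented).

tile_requests = queue.Queue()   # tile IDs the renderer wants fetched
tile_cache = {}                 # tile_id -> tile data already received

def fetch_tiles():
    """Background I/O loop: pull requests and populate the cache."""
    while True:
        tile_id = tile_requests.get()
        if tile_id is None:     # shutdown sentinel
            break
        time.sleep(0.02)        # stand-in for network/disk latency
        tile_cache[tile_id] = f"pixels-for-{tile_id}"

def render_frame(needed):
    """Draw a frame from whatever tiles are present; never block on I/O."""
    for tile_id in needed:
        if tile_id not in tile_cache:
            tile_requests.put(tile_id)   # ask for it, but keep rendering
    drawn = [t for t in needed if t in tile_cache]
    print(f"rendered frame with {len(drawn)}/{len(needed)} tiles available")

io_thread = threading.Thread(target=fetch_tiles, daemon=True)
io_thread.start()

needed_tiles = [(3, r, c) for r in range(2) for c in range(2)]   # (level, row, col)
for _ in range(5):              # a few frames roughly 50 ms apart
    render_frame(needed_tiles)
    time.sleep(0.05)
tile_requests.put(None)
```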
Additional Features and Capabilities — TerraVision includes two additional features: superposition of fixed and mobile objects on the terrain, and registration of the user's viewpoint to a map. Both of these features are made possible by precisely aligning the DEM and imagery data with a world coordinate system as well as with each other.

A number of buildings and vehicles have been created and stored on the rendering engine for display as an overlay on the terrain. The locations of vehicles can be updated periodically by transferring vehicle location data, acquired with a global positioning system receiver, to the rendering engine for integration into the terrain visualization displays. Registration of the user's viewpoint to a map enables the user to specify the area he wishes to explore by pointing to it, and it aids the user in orienting himself.

In addition, an over-the-shoulder (OTS) tool was developed to allow a user at a remote workstation to view the terrain as it is rendered. The OTS tool is based on a client/server design and uses X Window System calls. The user can view the entire image on the SGI screen at low resolution, and can also select a portion of the screen to view at higher resolution. The frame rate varies with the size and resolution of the viewed image, and with the throughput of the workstation.
The Image Server System
The ISS stores, organizes, and retrieves the processed imagery and elevation data required by TerraVision for interactive rendering of the terrain. The ISS consists of multiple coordinated workstation-based data servers that operate in parallel and are designed to be distributed around a WAN. This architecture compensates for the performance limitations of current disk technology. A single disk can deliver data at a rate that is about an order of magnitude slower than that needed to support a high-performance application such as TerraVision. By using multiple workstations with multiple disks and a high-speed network, the ISS can deliver data at an aggregate rate sufficient to enable real-time rendering of the terrain. In addition, this architecture permits location-independent access to databases, allows for system scalability, and is low in cost. Although redundant arrays of inexpensive disks (RAID) systems can deliver higher throughput than traditional disks, unlike the ISS they are implemented in hardware and, as such, do not support multiple data layout strategies; furthermore, they are relatively expensive. Such systems are therefore not appropriate for distributed environments with numerous data repositories serving a variety of applications.

The ISS, as currently used in MAGIC, comprises four or five UNIX workstations (including Sun SPARCstations, DEC Alphas, and SGI Indigos), each with four to six fast SCSI disks on two to three SCSI host adapters. Each server is also equipped with either a SONET or a TAXI network interface. The servers, operating in parallel, access the tiles and send them over the network, which delivers the aggregate stream to the host. This process is illustrated in Fig. 5. More details about the design and operation of the ISS can be found in [8].
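A back-of-the-envelope reading of the parallelism argument: the server and disk counts below come from the article, but the per-disk transfer rate and the target delivery rate are assumptions chosen only to show the arithmetic behind "an order of magnitude."

```python
# Rough aggregate-rate estimate for a MAGIC-style ISS configuration.
# Server/disk counts follow the article; the rates are illustrative assumptions.

servers = 5                 # "four or five UNIX workstations"
disks_per_server = 6        # "four to six fast SCSI disks" per server
per_disk_mbit_s = 30        # assumed sustained rate of one mid-1990s SCSI disk

aggregate_mbit_s = servers * disks_per_server * per_disk_mbit_s
target_mbit_s = 300         # assumed rate needed for smooth terrain rendering

print(f"aggregate ISS delivery rate ~ {aggregate_mbit_s} Mb/s")
print(f"a single disk supplies only {per_disk_mbit_s / target_mbit_s:.0%} "
      f"of an assumed {target_mbit_s} Mb/s requirement")
```

Under these assumptions one disk covers about a tenth of the requirement, while the parallel configuration comfortably exceeds it, which is the scaling argument the ISS design relies on.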
Design Considerations
In MAGIC, the single most perspicuous criterion of successful operation is that the end user observes satisfactory performance of the interactive TerraVision application. When the user flies over the terrain, the displayed scene must flow smoothly, and when he teleports to an entirely different location, the new scene must appear promptly. Obtaining such performance might be relatively straightforward if the terrain data were collocated with the rendering engine. However, one of the original premises underlying the MAGIC project is that the data set and the application are not collocated. There are several reasons for this, the most important being that the data set could be extremely large, so it might not be feasible to transfer it to the user's site. Moreover, experience has shown that in many cases the "owner" of a data set is also its "curator" and may be reluctant to distribute it, preferring instead to keep the data locally to simplify maintenance and updates. Finally, it was anticipated that future versions of the application might work with a mobile user and with fused data from multiple sources, and neither of these capabilities would be practical with local data. Therefore, since the data will not be local, the MAGIC components must be designed to compensate for possible delays and other degradations in the end-to-end operation of the system.

In order to understand system-level design issues, it is necessary to outline the sequence of events that occurs when the user moves the input device, causing a new scene to be generated. TerraVision first produces a list of new tiles required for the scene. This list is sent to an ISS master, which performs a name translation, mapping the logical address of each tile (the tile identifier) to its physical address (server/disk/location on disk). The master then sends each server an ordered list of the tiles it must retrieve. The server discards the previous list (even if it has not retrieved all the tiles on that list) and begins retrieving the tiles on the new list; a sketch of this dispatch sequence is given after the list of questions below. Thus, the design for the system comprising TerraVision, the ISS, and the internetwork must address the following questions:
• How can TerraVision compensate for tiles that it needs for the next image but that have not yet been received?
• How often should TerraVision request tiles from the ISS?
• Where should the ISS master be located?
• How should tiles be distributed among the ISS disks?
• How can cell loss be minimized near the rendering site where the tile traffic becomes aggregated and congestion may occur?
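The dispatch sequence outlined above can be sketched as follows. This is not the actual ISS code: the catalog contents, tile naming, and physical addresses are invented for illustration, and only the logical-to-physical translation and the per-server ordered lists reflect the description in the text.

```python
from collections import defaultdict

# Hypothetical sketch of the ISS master's role: translate logical tile IDs into
# physical locations and hand each server an ordered retrieval list.

# logical tile ID (level, row, col) -> (server, disk, offset), as kept by the master
catalog = {
    ("L3", 10, 42): ("server1", "disk1", 0x0400),
    ("L3", 10, 43): ("server2", "disk1", 0x1200),
    ("L4", 5, 21):  ("server1", "disk2", 0x0080),
}

def dispatch(request_list):
    """Map an ordered tile request list onto per-server ordered lists."""
    per_server = defaultdict(list)
    for tile_id in request_list:
        server, disk, offset = catalog[tile_id]
        per_server[server].append((tile_id, disk, offset))
    return per_server

# A new request list replaces whatever each server was working on before.
requests = [("L4", 5, 21), ("L3", 10, 42), ("L3", 10, 43)]
for server, work in dispatch(requests).items():
    print(server, "retrieves, in order:", work)
```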
■ Figure 5. Schematic representation of the operation of the ISS. (Source: Lawrence Berkeley National Laboratory) [The figure shows tiles intersected by the path of travel, the location of those tiles on the ISS servers and disks (e.g., server 1/disk 1, server 1/disk 2, server 2/disk 1), and the parallel retrieval of tiles and their transmission to the application.]
Missing Tiles
Network congestion, an overload at an ISS server, or a component failure could result in the late arrival or loss of tiles that are requested by the application. Several mechanisms were implemented to deal with this problem. First, although the entire set of high-resolution tiles cannot be collocated with the application, it is certainly feasible to store a complete set of lower-resolution tiles. For example, if the entire data set comprises 1 Tbyte of high-resolution tiles, then all of the tiles that are five or more levels coarser would occupy less than 1.5 Mbyte, a readily affordable amount of local storage. If a tile with resolution at, say, level 3 is requested but not delivered in time for the image to be rendered, then, until the missing level-3 tile arrives, the locally available coarser tile from level 5 would be used in place of the 16 level-3 tiles. This substitution manifests itself by the affected portion of the rendered image appearing "fuzzy" for a brief period of time. Temporary substitution of low-resolution tiles for high-resolution tiles is particularly effective for teleporting because that operation requires a large number of new tiles, so it is more likely that one or more will be delayed.

Second, TerraVision attempts to predict the path the user will follow, requesting tiles that might soon be needed, and assigning one of three levels of priority to each tile requested. Priority-1 tiles are needed as soon as possible; the ISS retrieves and dispatches these first. This set of tiles is ordered by TerraVision, with the coarsest assigned the highest priority within the set. The reasons are:
• The rendering algorithm needs the coarse tiles before it needs the next-higher-resolution tiles.
• There are fewer tiles at the coarser resolutions, so it is less likely that they will be delayed.
The priority-2 tiles are those that the ISS should retrieve but should transmit only if there are no priority-1 tiles to be transmitted; that is, priority-2 tiles are put on a lower-priority transmit queue in the I/O buffer of each ISS server. (ATM switches would be allowed to drop the cells carrying these tiles.) Priority-3 tiles are those that should be retrieved and cached at the ISS server; these tiles are less likely to be needed by TerraVision. Note that there is a trade-off between "overpredicting" — requesting too many tiles — which would result in poor ISS performance and high network load, and "underpredicting," which would result in poor application performance.

Finally, a tile will continue to be included in TerraVision's request list if it is still needed and has not yet been delivered. Thus, tiles or tile requests that are dropped or otherwise "lost" in the network will likely be delivered in response to a subsequent request from the application.
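The substitution of a locally held coarser tile for a missing fine tile amounts to a quadtree-ancestor lookup. The sketch below is illustrative only; the tile addressing scheme (level, row, column, with higher level numbers meaning coarser resolution) and the cache contents are assumptions.

```python
# Illustrative fallback from a missing high-resolution tile to the coarsest
# locally cached ancestor.  Tile addressing (level, row, col) is assumed,
# with higher level numbers meaning coarser resolution.

local_cache = {
    (5, 2, 3): "coarse pixels",       # a level-5 tile held locally
}

def coarser_ancestor(level, row, col, max_level=8):
    """Walk up the tile pyramid until a cached ancestor is found."""
    while level <= max_level:
        if (level, row, col) in local_cache:
            return (level, row, col)
        level, row, col = level + 1, row // 2, col // 2   # parent covers 4 children
    return None

# A level-3 tile was requested but has not arrived; render its level-5 ancestor
# (which stands in for 16 level-3 tiles) until the real tile shows up.
missing = (3, 10, 13)
print("substitute tile:", coarser_ancestor(*missing))
```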
Frequency of Requests
Another trade-off pertains to the frequency at which TerraVision sends its request list to the ISS. If the interval between requests is too large, then some tiles will not arrive when needed, resulting in a poor-quality display; in addition, the ISS will be idle and hence not used efficiently. On the other hand, if the interval is too short, then the request list might contain tiles that are currently in transit from servers to the application; this would result in poor ISS performance and redundant network traffic. For a typical MAGIC configuration, the interval between requests is currently set at 200 ms, a value that was found empirically to yield satisfactory performance. This value is based roughly on the measured latency of the ISS (about 100 ms) and on the estimated time required for a tile request to travel through the network from the TerraVision host to the ISS master and then to the most distant ISS server, plus the time for the tile itself to travel back to the host (perhaps a total of 50 ms). Additional measurements and analysis are needed to more precisely determine the appropriate request frequency as a function of the performance and location of system components and of network parameters.
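The 200 ms figure can be roughly reproduced from the latency components quoted above; the safety margin applied at the end is an assumption, not a value given in the article.

```python
# Reconstructing the request-interval choice from the quoted latency figures.

iss_latency_s = 0.100          # measured ISS latency (from the article)
network_round_trip_s = 0.050   # request out to the farthest server plus tile back (article estimate)
safety_margin = 1.3            # assumed cushion so a new list rarely overlaps tiles in transit

interval_s = (iss_latency_s + network_round_trip_s) * safety_margin
print(f"suggested request interval ~ {interval_s * 1e3:.0f} ms")   # ~195 ms, close to the 200 ms used
```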
Location of ISS Master
Since tile requests flow from TerraVision to the ISS master and thence to the servers themselves, the time for delivering the requests to the servers is minimized when the master is collocated with the TerraVision host. However, locating the master with the host is neither desirable nor practical for several reasons. The master is logically part of the ISS; therefore, its location should not be constrained by the application. Also, an ISS may be used with several applications concurrently, by multiple simultaneous users of a particular application, or by a user whose host may be unable to support any ISS functionality (e.g., a mobile user). Moreover, replication of the master would introduce problems associated with maintaining consistency among multiple masters when the ISS is in a read/write environment, as it would be when real-time data are being stored on the servers.

To first order, the delivery time of tile requests is limited by the time t for a request to travel from TerraVision to the ISS server most distant from the TerraVision host. Hence, if the master is approximately on the path from the TerraVision host to that server, then t will not be much greater than when the master and host are collocated. Furthermore, in the current MAGIC testbed, t is much smaller than the sum of the disk latency and the network transit time. In other words, there is considerable freedom in choosing the location of the ISS master. Satisfactory system performance has been demonstrated, for example, with the TerraVision host in Kansas City, the ISS master in Sioux Falls, and servers in Minneapolis and Lawrence. Of course, this conclusion might change if faster servers reduce ISS latency considerably, or if the geographic span of the network were substantially larger.
Distribution of Tiles on ISS Servers
The manner in which data are distributed among the servers determines the degree of parallelism and hence the aggregate throughput which can be obtained from the ISS. The data placement strategy depends on the application and is a function of data type and access patterns. For example, the retrieval pattern for a database of video clips would be quite different from that for a database of images. A strategy was developed for a terrain visualization type of application that minimizes the retrieval time for a set of tiles: the tiles assigned to a given disk are as far apart as possible in the terrain in order to maximize parallelism by minimizing the probability that tiles on a request list are on the same disk; and on each disk, tiles that are near each other in the terrain are placed as close as possible to minimize retrieval time. Although this was shown to be an optimal strategy for terrain path-following as in TerraVision [9], it was subsequently shown that ISS performance with random placement of tiles was only slightly worse. This was partly because tile retrieval time is much less than the latency in the ISS servers and network transit time, and is therefore not currently a significant factor in overall performance. Random placement is simpler to implement and is expected to be satisfactory for many other applications. However, as discussed for the location of the ISS master, this conclusion may have to be revisited if the performance or the geographic distribution of system components changes significantly.
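One simple way to realize the "far apart in the terrain, different disks" placement rule is to stride neighboring tiles across servers and disks. The sketch below is a minimal illustration under an assumed configuration; it is not the placement algorithm of [9].

```python
# Minimal declustering sketch: scatter neighboring tiles across servers/disks
# so that a request list for a contiguous patch of terrain hits many spindles.
# The configuration and the striding constants are assumed.

N_SERVERS = 4
DISKS_PER_SERVER = 3

def place(row, col):
    """Assign tile (row, col) of one resolution level to a (server, disk) pair."""
    stripe = row * 7 + col * 3        # mix row and col so adjacent tiles rarely collide
    server = stripe % N_SERVERS
    disk = (stripe // N_SERVERS) % DISKS_PER_SERVER
    return server, disk

# Neighboring tiles along a path land on different server/disk pairs,
# so they can be retrieved in parallel.
for col in range(6):
    server, disk = place(0, col)
    print(f"tile (0, {col}) -> server {server}, disk {disk}")
```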
Avoiding Cell Loss
When initially implemented, the MAGIC internetwork exhibited very low throughput in certain configurations. One cause of the low throughput was found to be mismatches between the burst rates of components in the communications path. Examples of such rate mismatches were:
• An OC-3 workstation interface transmitting cells at full rate across the network to a 100 Mb/s TAXI interface on another workstation
• Two or more OC-3 input ports at an ATM switch sending data to the same OC-3 output port
A mismatch, coupled with small buffers at the output ports of ATM switches, caused cells to be dropped, which in turn resulted in the retransmission of entire TCP packets, exacerbating the problem. In some cases the measured useful throughput was less than one percent of the capacity of the lower-speed line.

Previously it was noted that in many cases a large MTU can increase throughput. However, once again there is a trade-off. As the MTU size is increased, the number of ATM cells needed to carry the MTU increases. The probability that one or more cells from the MTU will be dropped by the network therefore increases, which in turn increases the probability that the MTU will have to be retransmitted, thus possibly decreasing the effective throughput. Flow-control techniques together with large switch buffers and proper choice of protocol parameters did provide satisfactory performance.
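The MTU trade-off can be quantified with a small calculation: a frame of MTU bytes occupies roughly MTU/48 ATM cells (48-byte payloads), and losing any one cell forces retransmission of the whole frame. The per-cell loss probability used below is an arbitrary assumption for illustration, and the goodput model deliberately ignores framing overhead and TCP dynamics.

```python
import math

# Effective-throughput sketch for the MTU / cell-loss trade-off.
CELL_PAYLOAD = 48           # bytes of payload per ATM cell (AAL5 framing ignored)
p_cell_loss = 1e-5          # assumed probability that any single cell is dropped

for mtu in (1500, 9180, 65536):
    cells = math.ceil(mtu / CELL_PAYLOAD)
    p_frame_loss = 1 - (1 - p_cell_loss) ** cells
    # Crude goodput factor: every lost frame is simply sent again, with no
    # modeling of TCP backoff or switch-buffer behavior.
    goodput_factor = 1 / (1 + p_frame_loss)
    print(f"MTU {mtu:>6} B: {cells:>5} cells, "
          f"frame-loss prob {p_frame_loss:.4%}, relative goodput {goodput_factor:.4f}")
```

Even this crude model shows the tension: larger MTUs amortize per-packet overhead but make each frame a bigger target for a single dropped cell, which is why large switch buffers and flow control were needed alongside large MTUs.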
