`
SWEB: Towards a Scalable World Wide Web Server on Multicomputers
`
`Daniel Andresen Tao Yang Vegard Holmedahl Oscar H. Ibarra
`Department of Computer Science
`University of California
Santa Barbara, CA 93106
{dandrese, tyang, veho, ibarra}@cs.ucsb.edu
`
Abstract
`
`We investigate the issues involved in developing a scalable
`World Wide Web (WWW) server on a cluster of workstations and
`parallel machines. The objective is to strengthen the processing
`capabilities of such a server by utilizing the power of multicom-
`puters to match huge demands in simultaneous access requests
`from the Internet. We have implemented a system called SWEB
on a distributed memory machine, the Meiko CS-2, and networked
workstations. The scheduling component of the system actively
monitors the usages of CPU, I/O channels and the interconnection
network to effectively distribute HTTP requests across processing
`units to exploit task and I/O parallelism. We present the experi-
`mental results on the performance of this system.
`
`1 Motivation
`
`The Scalable Web server (SWEB) project grew out of the needs
`of the Alexandria Digital Library (ADL) project at UCSB [A96].
`Digital library systems, which provide the on-line retrieval and
`processing of digitized documents through Internet, have increas-
`ingly turned into a topic of national importance. The Alexandria
`Project is focused on the design, implementation, and deployment
`of a distributed digital library for spatially-indexed information.
`The collections of the library currently involve geographically-
`referenced materials, such as maps, satellite images, digitized
`aerial photographs, and associated metadata. A rapid prototype
`system for the Alexandria digital library has already been devel-
`oped which allows users to browse spatially-indexed maps and
`images [AC95+]. To expose the system to a broad user commu-
`nity, the next version of the system (available in early 1996) will
`be connected to the World Wide Web (WWW) using the Hypertext
Transport Protocol (HTTP) [A96].
`Our work is motivated by the fact that the Alexandria digital
`library WWW server has a potential to become the bottleneck in
`delivering digitized documents over high-speed Internet. Popu-
lar WWW sites such as Lycos and Yahoo [LYC95] receive
`over one million accesses a day. For WWW-based network in-
`formation systems such as digital libraries, the servers involve
`much more intensive I/O and heterogeneous CPU activities. To
meet such demands, the SWEB project makes use of existing net-
`worked computers and inexpensive disk resources to strengthen
`the processing and storage capabilities of WWW servers. We
`
`expect that using the idle cycles of those processing units and
retrieving files in parallel from inexpensive disks can significantly
improve the scalability of the server in response to a large number
of simultaneous HTTP requests.
Numerous other initiatives to create high-performance HTTP
servers have been reported. The Inktomi server at UC Berkeley is
based on the NOW technology [PR96]. NCSA [KBM94] has built
a multi-workstation HTTP server based on round-robin domain
name resolution (DNS) to assign requests to workstations. The
round-robin technique is effective when HTTP requests access
`HTML information of relatively uniform size and the load and
`computing powers of workstations are relatively comparable. Our
`assumption is that the computing powers of workstations and
`parallel machine resources can be heterogeneous. They can be
`used for other computing needs, and can leave and join the system
`resource pool at any time. Thus scheduling techniques which are
`adaptive to the dynamic change of system load and configuration
`are desirable. The DNS in a round-robin fashion cannot predict
`those changes. Another weakness of the technique is the degree of
`name caching which occurs. DNS caching enables a local DNS
`system to cache the name-to-IP address mapping, so that most
`recently accessed hosts can quickly be mapped. The downside is
`that all requests for a period of time from a DNS server's domain
`will go to a particular IP address.
`We have developed the preliminary version of a scalable
`WWW server (SWEB) [AC95+] running on a Meiko CS-2 dis-
`tributed memory machine and a network of workstations (NOW)
`such as SUN and DEC machines. Each processing unit is capable
of handling a user request following the HTTP protocol. The
`distinguishing feature of SWEB is effective resource utilization
`by close collaboration of multiple processing units. Server scala-
`bility is achieved by actively monitoring the run-time CPU, disk
`I/O, and network loads of system resource units, and dynami-
cally scheduling user HTTP requests to a proper node for efficient
`processing.
`Our dynamic scheduling scheme is closely related to the pre-
`vious work on load balancing on distributed systems, for which a
collection of papers is available in [SHK95]. In these studies, task
arrivals may temporarily be uneven among processors, and the goal
`of load balancing is to adjust the imbalance between processors
`by appropriately transferring tasks from overloaded processors to
`underloaded processors. The task resource requirements are un-
`known, and the criteria for task migration are based on a single
`system parameter, i.e., the CPU load. We call them single-faceted
`
1063-7133/96 $5.00 © 1996 IEEE
`Proceedings of IPPS '96
`
`
`Petitioner Microsoft Corporation - Ex. 1010, p.1
`
`
`
`
scheduling strategies. In our applications, there are multiple
parameters that affect the system performance, including CPU
loads, interconnection network performance and disk channel us-
ages. The optimal HTTP request assignment to processors does
not solely depend on CPU loads. Thus we need to develop a
multi-faceted scheduling scheme that can effectively utilize the
system resources by considering the aggregate impact of multiple
parameters on system performance. In [GDI93], resource re-
quirements are predicted and suggested to guide the load sharing.
They mention multiple factors, but utilize only the CPU factor in
predicting response times.
The paper is organized as follows: Section 2 gives the back-
ground and problem definition. Section 3 discusses the possible
approaches for building a scalable WEB server and the SWEB
organization and load balancing strategies. Section 4 presents the
experimental results showing the performance of SWEB and com-
pares our approach with others. Section 5 discusses conclusions
and future work.
`
2 Background and Problem Definition

The World Wide Web is based on three critical components:
`the Uniform Resource Locator (URL); the HyperText Markup
Language (HTML), and the HyperText Transfer Protocol (HTTP).
The URL defines which resource the user wishes to access, the
HTML language allows the information to be presented in a
platform-independent but still well-formatted manner, and the
HTTP protocol is the application-level mechanism for achieving
`the transfer of information [HT95].
`A simple HTTP request would typically activate a sequence
`of events from initiation to completion as shown in Figure 1.
`First, the client determines the host name from the URL, and
uses the local Domain Name System (DNS) server to determine
its IP address. The local DNS may not know the IP address of
the destination, and may need to contact the DNS system on the
destination side to complete the resolution. After receiving the
IP address, the client then sets up a TCP/IP connection to a well-
known port on the server where the HTTP process is listening. The
request is then passed in through the connection. After parsing
the request, the server sends back a response code (e.g., 200 in
the HTTP protocol stands for "OK. File found.", and 404 is "File
not found.") followed by the results of the query. The connection
is then closed by either the client or the server.
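The exchange just described can be sketched with a pair of hypothetical helper functions (Python is used purely for illustration; the function names and header set are assumptions, not part of any actual client):

```python
def build_request(path, host):
    """Format a minimal HTTP/1.0 GET request for the given URL path."""
    return "GET %s HTTP/1.0\r\nHost: %s\r\n\r\n" % (path, host)

def parse_status(status_line):
    """Extract the numeric response code (e.g., 200 or 404) from the
    first line of the server's reply."""
    version, code, reason = status_line.split(" ", 2)
    return int(code)
```

The client would send the output of `build_request` over the TCP/IP connection and apply `parse_status` to the first response line before reading the body.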
`
We call a system scalable if the system response time for
individual requests is kept as small as theoretically possible when
the number of simultaneous HTTP requests increases, while
maintaining a low request drop rate and achieving a high peak
request rate. We define the response time for a request as the
length of the time period from when a request is initiated until all
the requested information arrives at the client.
`For a single-workstation server, there is an upper bound for the
`number of requests per second (rps) that the server can handle. For
`
`
`Figure 1: A simple HTTP transaction. Client C looks up
the address of server S, sends over request r, and receives
response f.
`
example, the NCSA has performed a number of tests using high-
end workstations, and discovered in their working environment
approximately 5-10 rps could be dealt with using the NCSA httpd
`server [KBM94], which cannot match the current and future loads
`(e.g. a digital library server). Thus multiple servers are needed
`for achieving scalable performance.
`Our overall objective of the system is to reduce and sustain
`the response time under large numbers of simultaneous requests.
`Goals considered in designing this system are twofold. First,
`we hope to demonstrate how to utilize existing inexpensive com-
`modity networks, heterogeneous workstations, and disks to build
a scalable WWW server. Second, we attempt to develop dy-
`namic scheduling algorithms for exploiting task and I/O paral-
`lelism adaptive to the run-time change of system resource loads
and availabilities. The scheduler needs to incorporate multiple
system performance parameters to assign user HTTP requests to
`a proper unit for efficient processing, while adding minimal over-
`head.
There are several performance factors that affect the response
time in processing HTTP requests. These include processor load,
`caused by the overhead necessary to send bytes out on the network
`properly packetized and marshaled; disk I/O, which limits how
`quickly data can be delivered to the processor; the local network,
over which remote data requests must be fetched; and the Internet
`connection to the client, which often is a severe bottleneck.
`Our goal is to get the requested information out of the server
`as fast as possible. Thus our scheduler primarily monitors the
above first three performance factors. The Internet bandwidth in-
`formation is used partially in request re-direction. Our scheduling
`algorithm is multi-faceted in the sense that the proper decision on
`routing HTTP requests needs to aggregate the impact of the above
`multiple system parameters on the overall system response time.
We will present the architecture for SWEB and its scheduling
`algorithm next.
`
`3.1 SWEB Architecture
`
`There are two approaches in designing the scheduler. One is
`to have a centralized scheduler running on one processor such
`that all HTTP requests go through this processor. The scheduler
`monitors the usages of all system resources, makes assignment
`decisions based on this information, and routes requests to appro-
`priate processors. We did not take this approach mainly because
`
`
`
`
Figure 2: The computing and storage architecture of SWEB.
`DNS directs tasks to servers running task scheduler.
`
`Figure 3: The functional modules of a SWEB scheduler in
`a processor.
`
`the single central distributor becomes a single point of failure,
`making the entire system more vulnerable.
`The current version of SWEB uses a distributed scheduler.
`The user requests are first evenly routed to SWEB processors via
`the DNS rotation, as shown in Figure 2. The rotation on avail-
`able workstation network IDs is in a round-robin fashion. This
`functionality is available in current DNS systems. The major ad-
`vantages of this technique are simplicity, ease of implementation,
`and reliability [KBM94].
The DNS assigns the requests without consulting dynamically-
`changing system load information. Then SWEB conducts a fur-
`ther assignment of requests. Each processor in SWEB contains
a scheduler and those processors collaborate with each other to
`exchange system load information. After a request is routed to
`a processor via DNS, the scheduler in that processor makes a
`decision regarding whether to process this request or assign it to
another processor. Two approaches, URL redirection or request
forwarding, could be used to achieve reassignment, and we use
the former. Request forwarding is very difficult to implement
within HTTP. URL redirection gives us excellent compatibility
with current browsers and near-invisibility to users. Any HTTP
`request is not allowed to be redirected more than once to avoid
`the ping-pong effect.
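The redirection path can be sketched as follows (an illustrative Python sketch; the function names and the exact reply text are assumptions, though 302 with a Location header is the standard HTTP redirection mechanism):

```python
def redirect_response(new_host, path):
    """Build the short reply that bounces the browser to another server."""
    return ("HTTP/1.0 302 Moved Temporarily\r\n"
            "Location: http://%s%s\r\n\r\n" % (new_host, path))

def may_redirect(already_redirected, chosen_host, local_host):
    """A request is redirected at most once (avoiding the ping-pong
    effect), and only when the broker picked a different host."""
    return (not already_redirected) and chosen_host != local_host
```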
`The functional structure of the scheduler at each processor is
`depicted in Fig. 3. It contains a httpd daemon based on NCSA
`httpd code for handling httpd requests, with a broker module
`which determines the best possible processor to handle a given
`request. The broker consults with two other modules, the oracle
and the loadd. The oracle is a miniature expert system, which
uses a user-supplied table to characterize the CPU and disk de-
mands for a particular task. The loadd daemon is responsible
for updating the system CPU, network and disk load information
periodically (every 2-3 seconds), and marking those processors
`which have not responded in a preset period of time as unavail-
`able. When a processor leaves or joins the resource pool, the
`loadd daemon will be aware of the change.
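The loadd bookkeeping can be sketched as a load table with timestamps (a minimal Python sketch; the class name, fields, and the 6-second staleness cutoff are illustrative assumptions, since the paper only says "a preset period of time"):

```python
import time

STALE_AFTER = 6.0   # assumed cutoff; loadd broadcasts every 2-3 seconds

class LoadTable:
    """Per-node load vectors plus the time of the last broadcast heard."""
    def __init__(self):
        self.loads = {}       # node -> (cpu, disk, net) load levels
        self.last_seen = {}   # node -> arrival time of last broadcast

    def update(self, node, cpu, disk, net, now=None):
        now = time.time() if now is None else now
        self.loads[node] = (cpu, disk, net)
        self.last_seen[node] = now

    def available(self, now=None):
        """Nodes heard from recently; silent ones are marked unavailable,
        so leaving/joining the pool is noticed automatically."""
        now = time.time() if now is None else now
        return sorted(n for n, t in self.last_seen.items()
                      if now - t <= STALE_AFTER)
```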
`
`3.2 The multi-faceted scheduling algorithm
`
`In this subsection we present the algorithm that decides where
an HTTP request should be routed. As we discussed before, several
`
`system load parameters affect such a decision. In a single-faceted
`scheduling system, a processor can be classified as lightly loaded
`and heavily loaded based on one parameter, e.g. CPU load. One
`purpose of such a classification is to update load information only
`when a classification changes to reduce unnecessary overhead,
e.g., [SHK95]. In our problem context, it is hard to classify a
`processor as heavily or lightly loaded since there are several load
`parameters. A processor could have a light CPU load but its
`local disk may receive many access requests from the network file
`system. Thus each processor in SWEB has a load daemon to detect
`its own CPU, disk, and network load, periodically broadcasting
`such information to other processors. In our experiments we will
`show that such overhead is insignificant.
`We address how multiple system load parameters can be used
together in deciding the assignment of HTTP requests. Since
`our goal is to minimize the response time for each request, we
`design the heuristic based on the estimated cost for processing
`each request using the following formula:
`
t_s = t_redirection + t_data + t_CPU + t_net
t_redirection is the cost to redirect the request to another pro-
cessor, if required. t_data is the time to transfer the required data
from the disk drive, or from the remote disk if the file is not local.
t_CPU is the time to fork a process, perform disk reading to handle
an HTTP request, plus any known associated computational cost if
the request is a CGI operation. t_net is the cost for transferring the
`processing results over the Internet. We discuss and model these
`individual cost terms as follows.
t_data = (requested file size) / (available bandwidth)
`
If the file is local, the time required to fetch the data is simply
the file size divided by the available bandwidth of the local
storage system, b_disk. We also measure the disk channel
load δ1. If there are many requests, the disk transmission
performance degrades accordingly.
If the data is remote, then the file must be retrieved through
the interconnection network. The local network bandwidth,
b_net, and load δ2 must be incorporated. Experimentally,
`
`85 2
`
`Petitioner Microsoft Corporation - Ex. 1010, p.3
`
`
`
`we found on the Meiko approximately a 10% penalty for a
`remote NFS access, and on the SUN workstations connected
by Ethernet the cost increases by 50%-70%.
t_CPU = CPU_load × (no. of operations required) / cpu_speed
The t_CPU term estimates the amount of processing time re-
quired to complete the task. This is based on the speed of the
server, the estimated load on a destination node (CPU_load),
and the estimated number of operations required for the
task. The computation requirement for a particular request
is estimated by the oracle component of the system (see
Figure 3). The parameters for different architectures are
saved in a configuration file. It should be noted that some
estimated CPU cycles may overlap with network and disk
time and the overall cost may be overestimated slightly, but
this conservative estimation works well in our experience.
`The load estimation of remote processors is based on the
periodic updating of information given by those remote pro-
cessors. It is possible that a processor p_r is incorrectly
believed to be lightly loaded by other processors, and many
requests will be redirected to it. To avoid this unsynchro-
nized overloading, we conservatively increase the CPU load
of p_r by σ. This strategy is found to be effective in [SHK95].
We use σ = 30%.
t_net = (number of bytes required) / net_bandwidth
`This term is used to estimate the time necessary to return
`the results back to the client over the network. When the
`scheduler compares processors, we assume all processors
`will have basically the same cost for this term, so it is not
`estimated. Our research goal is to produce the query result
for an HTTP request as fast as possible in the server site.
`
t_redirection

This term measures the time necessary to move an HTTP
request from one server to another. This is set to twice
the estimated latency of the connection between the server
and the client (t_client-server-latency) plus the time for a
server to set up a connection (t_connect). The conceptual
model is that of a very short reply going back to the client
browser, who then automatically issues another request to
the new server address. The transfer time is zero if the task
is already local to the target server. The estimate of the link
latency is available from the TCP/IP implementation, but in
the initial implementation is hand-coded into the server.
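Putting the four terms together, the per-node estimate can be sketched as follows (a Python sketch for illustration only; all names, units, and the exact load adjustments are assumptions — SWEB's real parameters come from its oracle tables and loadd measurements):

```python
SIGMA = 0.30  # conservative CPU-load inflation applied to remote nodes

def t_data(size, file_is_local, b_disk, b_net):
    """Fetch from the local disk, or across the interconnection network."""
    return size / b_disk if file_is_local else size / b_net

def t_cpu(ops, cpu_speed, cpu_load, node_is_remote):
    """CPU_load x ops / speed, with remote loads inflated by sigma."""
    load = cpu_load + SIGMA if node_is_remote else cpu_load
    return load * ops / cpu_speed

def t_redirection(node_is_remote, latency, t_connect):
    """Two client-server latencies plus a connection setup, if redirected."""
    return 2.0 * latency + t_connect if node_is_remote else 0.0

def estimate(node, req, local_name):
    """t_s for one candidate node; t_net is omitted since it is assumed
    equal across all candidate nodes."""
    remote = node["name"] != local_name
    return (t_redirection(remote, node["latency"], node["t_connect"])
            + t_data(req["size"], req["home"] == node["name"],
                     node["b_disk"], node["b_net"])
            + t_cpu(req["ops"], node["cpu_speed"], node["cpu_load"], remote))
```

With illustrative numbers, a lightly loaded node that owns the file can beat the (heavily loaded) node that first received the request even after paying the redirection cost.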
Given the arrival of HTTP request r at processor x, the sched-
uler at processor x goes through the following steps.
1. Preprocess a request. The server parses the HTTP com-
mands, and completes the pathname given, determining ap-
propriate permissions along the way. It also determines
whether the requested document exists, has moved, or is a
CGI program to execute.
2. Analyze request. The server then determines whether itself
or another server should fulfill the request. It does so by
checking the results from the preprocessing phase. If r is
already determined to be a redirection, does not exist, or is
not a retrieval of information¹, then the request is always
completed at x. Otherwise, the request is passed to the broker
for analysis. The broker then:
`
`(a) Determines the server on whose local disk the file
resides (if any).
`
`(b) Calculates an estimated time for each available server-
node for request r.
`
`Having determined the estimated time for each server to fill
`the request, the broker indicates its choice (determined by
`the minimum time to completion) to the main process.
`3. Redirection If the chosen server is not x, the request is
`redirected appropriately.
4. Fulfillment. At this point the request is processed in the
normal HTTP server manner, with any CGIs executed as
needed, and any client responses shuttled out the appropriate
`socket.
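Steps 2 and 3 above can be sketched as follows (illustrative Python; `estimated_time` stands in for the multi-faceted cost function, and all names are assumptions rather than SWEB's actual code):

```python
def choose_server(nodes, estimated_time, request):
    """Steps 2(a)-(b): score every available node; pick the minimum
    estimated time to completion."""
    return min(nodes, key=lambda n: estimated_time(n, request))

def dispatch(request, local_name, nodes, estimated_time):
    """Steps 2-3; fulfillment (step 4) then proceeds as normal."""
    # Already-redirected, nonexistent, or non-retrieval requests are
    # always completed locally.
    if request.get("redirected") or not request.get("is_retrieval", True):
        return local_name
    chosen = choose_server(nodes, estimated_time, request)["name"]
    return chosen   # chosen != local_name means: issue a URL redirection
```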
`
It should be noted that modeling the cost associated with pro-
cessing an HTTP request accurately is not easy. We still need to
investigate further the design of such a function. Our experiments
show that the current cost function does reflect the impact of
`multiple parameters on the overall system response performance,
`and that SWEB delivers acceptable performance based on such a
`heuristic function, adapting to dynamically-changing system re-
`source loads. In our experiments, we will show that the overhead
`of SWEB scheduling is insignificant compared to the system load
`used for request fulfillment and other activities.
`
`3.3 Performance Analysis
`
We provide an analysis of the maximum sustained rps which
is achievable within our schema. We use the following parame-
ters: p is the number of nodes, T* is the theoretical max sustained
rps per node, F is the average file size requested, b1 is the bandwidth of
local disk accessing, b2 is the bandwidth of remote disk accessing,
d is the average redirection probability, A is the overhead in prepro-
cessing a request, O is the overhead of redirection, and H is the
average processing time for each request. Then we can show the
maximum sustained rps for file fetches is
`
p · T*, where 1/T* = (1 - d)(F/b1) + d(F/b2) + A + d(A + O).
`
The full analysis is discussed in [AY95+].
For example, if b1 = 5MB/s and b2 = 4.5MB/s, O ≈ 0,
p = 6, T* = 2.88, then the maximum sustained rps is 17.3 for 6
nodes. We will show this is close to our experimental results in
Section 4.
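A numeric sketch of this bound (assuming a per-request time of (1-d)F/b1 + dF/b2 + A + d(A+O); b1, b2, and p are from the text, while the values of d and A below are purely illustrative):

```python
def sustained_rps(p, F, b1, b2, d, A, O):
    """p * T*, with file sizes in MB and bandwidths in MB/s."""
    per_request = (1 - d) * F / b1 + d * F / b2 + A + d * (A + O)
    return p / per_request

# With b1 = 5 MB/s, b2 = 4.5 MB/s, O ~ 0 and 1.5 MB files, a per-node
# rate near 2.88 rps yields roughly 17.3 rps across six nodes.
```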
`
¹SWEB currently focuses on GET and related commands used in the
HTTP protocol. Other commands (e.g., POST) are not handled, but SWEB
`could be extended to do so in the future.
`
`
`
`
`4 Experimental Results
`
`4.1 Scalability of overall performance
`
`Our primary experimental testbed consists of a Meiko CS-2
distributed memory machine at UCSB. Each node has a scalar
processing unit (a 40MHz SuperSparc chip) with 32MB of RAM
running Solaris 2.3. We mainly use six CS-2 nodes, each of which
is connected to a dedicated 1GB hard drive on which the test files
reside. Disk service is available to all other nodes via NFS mounts.
`These nodes are connected via a modified fat-tree network with a
`peak bandwidth of 40MB/s. Our secondary testbed is a network
`of 4 SparcStation LX's running Solaris 2.4 connected via standard
10 Mb/s Ethernet. Each LX has a local 525 MB hard drive and
`16MB of RAM. The effective bandwidth of this Ethernet is low
`since it is shared by other UCSB machines.
`The complete SWEB system is based on NCSA httpd 1.3
source with modifications to support the distributed functional-
`ity. We plan to move to version 1.5 in the immediate future.
`All software uses the sockets library built on the Solaris TCP/IP
`streams implementation. The use of the built-in TCP/IP library
`was a deliberate decision based on several factors, including the
`following:
`
1. Compatibility. We wanted to use current WWW/HTTP
`compatible clients for testing purposes, as well as our own
`programs. Current browsers typically use TCPiIP and sock-
`ets or the equivalent.
`
`2. Portability. With an implementation built on standard
`libraries available on virtually any UNIX system, our soft-
`ware can easily be moved to a heterogeneous NOW with no
`modifications.
`
`There are, of course, disadvantages. The primary disadvan-
`tage is certainly performance: we were only able to achieve ap-
`proximately 51.5% of the peak communication performance on
`the Meiko, where a DMA transfer handled by the built-in Elan
`communications co-processor can achieve speeds near the peak
`bandwidth. We also felt that additional code complexity was
`caused by not using the built-in messaging libraries such as NX/2
`or Active Messages.
`We ran a series of tests where a burst of requests would ar-
`rive nearly simultaneously, simulating the action of a graphical
`browser such as Netscape where a number of simultaneous con-
`nections are made, one for each graphics image on the page.
`The clients were primarily situated within UCSB. We also tested
`SWEB via requests from the East coast of the US (Rutgers Uni-
`versity) to examine the benefits of scheduling in a high network
`latency situation. We concentrate on the UCSB data on the scal-
`ability of the system believing this more accurately reflects the
`high-bandwidth networks we anticipate.
`It should be noted that the results we report are average per-
`formances by running the same tests multiple times. The test
`performance is affected by dynamically-changed system loads
`since the machines are shared by many active users at UCSB.
`We report our experiments to examine the performance of our
`system in the following aspects: scalability of overall perfor-
`mance, scheduling strategy comparison, and overhead costs.
`
`Maximum rps on Meiko CS-2 and NOW. The first experi-
`ment was run to determine how many requests per second SWEB
`could process. This depends on the average file sizes requested
and the number of nodes. In [KBM94], it is reported that a high-
end workstation running NCSA httpd could fulfill approximately
5 rps. We examine how a one-node NCSA httpd 1.3 server per-
forms, and compare it with the 6-node SWEB on Meiko CS-2 and
`the 4-node SWEB on NOW. The maximum rps is determined by
`fixing the average file size and increasing the rps until requests
`start to fail, which indicates that the system limit is reached. The
`duration of the test which simulates the burst of simultaneous re-
`quests also affects the experimental results. The requests coming
`in a short period can be queued and processed gradually. But
`the requests continuously generated in a long period cannot be
`queued without actively processing them since there are new re-
`quests coming after each second. We use two types of tests. One
`is a short period as a duration of 30 seconds and at each second
`a constant number of requests are launched. The long period has
`120 seconds, in order to obtain the sustained maximum rps.
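The difference between the two test lengths can be seen with a toy queueing sketch (an illustrative model only; the capacity and arrival rates below are assumed, not measured):

```python
def backlog_after(arrival_rps, capacity_rps, seconds):
    """Requests still queued after a constant-rate burst: a short burst
    above capacity is absorbed by the queue, but a long one grows the
    backlog without bound, so requests eventually fail."""
    backlog = 0
    for _ in range(seconds):
        backlog = max(0, backlog + arrival_rps - capacity_rps)
    return backlog
```

For instance, a 30-second burst at 20 rps against a 16 rps server leaves a 120-request backlog, while the same overload sustained for 120 seconds leaves four times as much.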
`It can be seen from Table 1 that the maximum rps of a single-
`node server is improved significantly by the multi-node server.
`The speedups vary for different file sizes. The maximum rps
`obtained in a short period is much higher than that in a long
`period because requests accumulated in a short period can be
queued. For NOW results with 1.5MB file size, 11 rps is reached
`for duration of 30s but only 1 is achieved for sustained maximum
`rps. This is because the maximum disk and Ethernet bandwidth
`limit is reached for this case. These results have been confirmed
via the analysis in Section 3.3, which gave an analytical maximum
sustained 17.8 rps for 1.5MB files on the Meiko, consistent with
the 16 rps achieved in practice. It should be noted that in practice
requests come in periodic bursts. Thus we use a 30-second test
`period in the rest of experiments, representing a non-trivial but
`limited burst of requests.
`Response time and drop rate on Meiko CS-2 and NOW. In
`Table 2, we report the response time (the time from after the client
`sends a request until the completion of this request) when we vary
`the number of server nodes. The system starts to drop requests if
the server reaches its rps limit. For small file requests (1K), the
`multi-node server performs much better than one-node server but
`the response remains constant when using 2+ processors. This is
`because none of the theoretical or practical limits on bandwidth
`or processing power have been reached for small files. The re-
`sponse time does increase to a certain degree after the number of
`processors exceeds 2, which reflects the overhead required by the
`scheduling algorithm and distributed file system.
For relatively large files (1.5MB), the processing time is sub-
`stantially longer. When the number of processors increases, the
`SWEB provides substantially better performance. These results
`are consistent with those of NCSA, and also strongly confirm
`the notion that under heavy loads a distributed server solution
`can achieve significant speedups. Under especially heavy loads,
`which would tend to occur during peak hours at popular sites, a
`single server tends to drop almost half or more of the connections
`made, whereas a distributed server might have a larger overall
`average response time and fill every request. The superlinear
`
`
`
`
`Table 1: Maximum rps for a test duration of 30s and 120s on likiko GS-2 and NOW.
`
speedup we obtain reflects the fact that the total size of mem-
ory in SWEB is much larger than on a one-node server, and that
the multi-node server accommodates more requests within main
`memory while one-node server spends more time in swapping be-
`tween memory and the disk. Additionally, the network overhead
`is distributed among multiple servers rather than concentrated at
`a single node.
`
`4.2 Comparison between different scheduling
`strategies
`
`Our scheduling strategy takes into consideration the locality of
requested files, and also the current resource loads. We compare
`our approach with others: one is the NCSA approach that uni-
`formly distributes requests to nodes (round robin); another is to
`purely exploit the file locality by assigning requests to the nodes
`that own the requested files. We present experimental results in
retrieving a set of files with uniform and non-uniform sizes.
`Request with non-uniform file size. With non-uniform file
`sizes, the load distribution between processors by the initial DNS
assignment is heterogeneous. We expect that the round-robin ap-
proach cannot adapt to such load variations. We tested the ability of
the system to handle requests with sizes varying from short, ap-
proximately 100 bytes, to relatively long, approximately 1.5MB.
`Table 3 shows the actual response time in seconds for this case.
`For lightly loaded systems, SWEB performs comparably with the
others. For heavily loaded systems (rps ≥ 20), SWEB has an
advantage of 15-60% over round robin and file locality.
`
Table 3: Performance under non-uniform requests. 1.5MB
file size, 30 sec. duration, 0% drop rate, Meiko CS-2.
`
It is important to exploit file locality and also consider CPU
`load. We performed a skewed test to illustrate the fundamental
`weakness of the file locality heuristic where each client accessed
`the same file located on a single server, effectively reducing the
`parallel system to a single server. In this situation, round-robin
handily outperforms file locality, with average response times
`of 3.7s and 81.4s, respectively. This test was performed with six
`servers, 8 rps, for 45s, and file size of 1.5MB.
`
`
`We also ran clients on the east coast (Rutgers University, New
Jersey), and the test results show a performance gain of over 10%
`using file locality instead of round robin from an Ethernet-linked
`server, in spite of the poor bandwidth and long latency over the
connection from the east coast to the west coast.
`Requests with uniform file size. For uniform file sizes, the
DNS scheme assigns HTTP requests evenly to nodes, which leads
`to a homogeneous distribution of CPU load, but the network com-
`munication load is not minimized. We expect that the advantages
`of the re-scheduling will manifest in a high communication cost
environment. On Meiko CS-2, we conducted several tests and
the result shows that the three strategies have similar performance.
This is because NFS is implemented in Meiko CS-2 through the
`fast Elan fat tree network and the chance of network contention
`is much smaller than on Ethernet. In a relatively slow, bus-type
`Ethernet in a NOW environment, the advantage of exploiting file
`locality is more clear. This is verified in an experiment shown in
`Table 4.
`
Table 4: Performance under uniform requests on NOW.
File size 1.5MB, 0% drop rate.
`
4.3 Overhead of SWEB
`
Overhead distribution from the client point of view. SWEB
`code was instrumented to determine the overhead cost for a re-
`quest. Table 5 shows the case of a 1.5MB file fetched over a fairly
`heavily loaded system. The results indicate that the overall over-
`head introduced by SWEB analysis and scheduling algorithm is
`insignificant. For small files (approximately lK), the data transfer
time is smaller, but the overhead of SWEB compared to the pre-
processing time for parsing HTTP commands is still very small.
The direct cost of analysis is typically about 1-4 ms. for cost
estimation and 4 ms. to generate a redirection if necessary. Indi-
`rect costs which affect the time required to complete a redirection
depend heavily on the latency between client and server. For a
client fetching a 1.5MB file on t