An Evaluation of Storage Systems based on Network-attached Disks

A. L. Narasimha Reddy
Gang Ma
Dept. of Electrical Engineering
Dept. of Computer Science
Texas A&M University
College Station, TX

This work is supported in part by an NSF CAREER grant and by a grant from the Texas Higher Education Board.
Abstract: The emergence of network-attached disks provides the possibility of transferring data between the storage system and the client directly. This offers new possibilities in building a distributed storage system. In this paper, we examine different storage organizations based on network-attached disks and compare the performance of these systems to a traditional system. Trace-driven simulations are used to measure the average response times of the client requests in two different workloads. The results indicate that it is possible to reduce the workload on the file manager and improve performance in some workloads. However, in other workloads, reduced cache hit ratios may offset the gains of distributing the network processing workload.
1 Introduction

With increasing processor speeds and the increasing use of I/O intensive applications such as video retrieval, distributed and parallel storage organizations have received considerable attention recently. However, it has been observed that even as parallelism is being exploited in data retrieval, the file manager managing the data remains a bottleneck. In a traditional file system, the file server has to be involved intensively in order to satisfy a client's request. With the rapid growth of the number of network users and the number of requests, demands on the file server continue to increase. It has become imperative to find ways to reduce the workload on the file servers (for example, through client caching) and to increase the throughput capacity of the servers (for example, through parallel and distributed systems).
Earlier work has been done in AFS [1] to reduce the file server load. AFS clients use the local disk as a cache to store the files that have been accessed before and will probably be needed in the future. While client caching can significantly reduce the requests going to the file server, the load on the file server could still be too high due to a large number of misses in the client cache caused by file sharing and increasing file sizes.
The NetServer of Auspex exploits a unique functional multiprocessor architecture that couples storage devices to the network as directly as possible [2] to improve file system scalability. Server-level striping is shown to be effective in improving data throughput in Zebra [3] and in Swift [4]. The NCSA HTTP server [5] employs an AFS-based organization to distribute the workload at the file server to multiple HTTP servers. The cooperative caching scheme in the xFS file system [6] utilizes idle clients' caches to obtain a higher global hit ratio.

With the introduction of network-attached disks, the workload of the file server could be reduced by transferring data directly between clients and the storage system. Fibre Channel attached disks [7] and ServerNet-attached disks [8] are examples of such an approach. The file server may need to set up a session between a client's request and the disk controller. After that, data can be transferred directly between the client and the disk without going through the file server. This can reduce the server load and thus can potentially improve the overall performance. Several organizations for a file system based on network-attached disks are studied in [9]. The authors measured the workload in the file server under the proposed NetSCSI and NASD architectures and showed that the server load could be reduced by a factor of up to ten in NFS and five in AFS. However, the average response time of the client requests, an important factor in overall performance, was not measured.

In this paper, we study several issues in organizing a storage system based on network-attached disks. In a regular file system, the file server manages the file name space, allocates physical disk space and supports users' requests for file service. A file server typically uses memory caching to improve response time for user requests. Without extensive modifications of the file system, only a few of these functions can be delegated to other components in the system in order to reduce the load at the file server. Network-attached disks, by
their ability to supply data directly to the user, can reduce the network processing overhead on the server. This is one of the main perceived advantages of a storage system based on network-attached disks. Then how much impact does this make on the user's response time? If the network processing work is to be done at the disks, how much processing capacity do the disks require to offload this work from the server system? Typically disks have a small cache on the disk read-write arm; the size of this cache is in the KB to MB range. Is this cache sufficient for providing better response time than a traditional system that employs server caching? These are some of the issues that will be studied in this paper.

In order to answer the above questions, we use trace-driven simulations to compare storage systems based on network-attached disks with a traditional file system. We use the average response time of client requests as the primary measure of evaluation.

The rest of this paper is organized as follows. Section 2 describes the system organizations we will study in this paper. In Section 3, we discuss the simulations performed. In Section 4, we present and analyze the simulation results. Section 5 provides a summary of the results and directions for future work.
2 Storage System Organizations

In the following subsections, we discuss two organizations for a storage system.

2.1 Server-disk: Regular storage system with server-attached disks

This storage organization is used by current distributed file systems. The disks are attached to the file server via a private bus (typically a SCSI bus). Clients access the file server through a public network (typically a LAN). All the client requests are sent to the file server first. The file server is responsible for handling and interpreting the requests. File servers typically employ caching to reduce the load on the disks at the server and to improve response time for client requests. If a miss occurs in the server cache, the file server will issue a request to the disk, which in turn accesses the data and sends it back to the file server. Finally, the file server will satisfy the client request by sending the data to the client. In this situation, the file server is intensively involved in handling every request from the clients. Fig. 1 shows the sequence of operations that take place to serve a user's request in this organization.
[Figure 1 depicts a client machine connected over a LAN to the file server, whose disks hang off a SCSI bus.]

1. Client sends request to file server.
2. File server forwards the request to disk if necessary.
3. Disk accesses the data and sends reply to file server.
4. File server replies to client.

Figure 1: Server-disk: regular server-attached disk organization.
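To make the four-step path concrete, the following minimal sketch traces a single block read along the server-disk path of Figure 1. This is our illustration rather than the paper's CSIM model; the cache representation and the latency constants are assumptions chosen only for readability.

    # Illustrative sketch (Python) of the server-disk read path of Figure 1.
    # The constants below are assumptions, not the paper's parameters.
    FM_CACHE = set()          # file-manager block cache (hit/miss only)
    FM_SERVICE_MS = 0.1       # assumed request handling time at the file manager
    DISK_ACCESS_MS = 10.0     # assumed disk access latency on a miss
    LAN_TRANSFER_MS = 0.5     # assumed LAN transfer time per block

    def server_disk_read(block_id):
        """Return an illustrative response time (ms) for one block read."""
        t = FM_SERVICE_MS                 # step 1: request arrives, is interpreted
        if block_id not in FM_CACHE:      # steps 2-3: miss, fetch block from disk
            t += DISK_ACCESS_MS
            FM_CACHE.add(block_id)        # block now cached at the server
        t += LAN_TRANSFER_MS              # step 4: file server replies to client
        return t

    print(server_disk_read(7))   # cold read pays the disk access
    print(server_disk_read(7))   # warm read is served from the server cache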
2.2 Net-disk: Storage system based on network-attached disks

In this system, the disks are attached to the file server via a private network. The disks are also directly connected to the LAN which connects the clients and the file server together. Clients still send requests to the file server first. The file server processes the requests and forwards them to the disks via the private network if necessary. Rather than sending the data back to the file server again, the disks send the desired data directly to the client via the LAN. Since the file server is no longer involved in block data transfer, the load on it will possibly be less than in a regular server-disk system. On the other hand, to offload network processing work to the disks, the file server does not employ data caching. Then how does the performance of this organization compare to the regular server-disk organization? If we allow the server to continue to cache and reply data to the client, the network processing cannot be offloaded to the network-attached disks on cache hits. Then the network-attached disks can reply data to clients only on cache misses at the server. Fig. 2 describes the sequence of operations that take place to serve a user's request in this organization.
[Figure 2 depicts the file manager and the net-disks, each with its own cache, joined by a private fast network; both the client machine and the disks are also attached to the LAN.]

1. Client sends request to file server.
2. File server forwards the request to disk if necessary.
3. Disk controller accesses the data from the disk cache if it hits, then transfers it to the client directly.
4. Otherwise it accesses the block data from the disk on a miss, and directly transfers it to the client.
5. Disk sends completion status to file manager.
6. File manager sends request completion code to client.

Figure 2: Net-disk: storage system based on network-attached disks.
Net-disk and server-disk organizations employ caching at different places in the system. A single large cache space in the file manager in server-disk enables efficient utilization of cache space across all the data sets in the system. The smaller caches at each disk in net-disk organizations can lead to inefficiencies if disks are accessed non-uniformly. Our assumption of system-wide disk striping [3, 4], however, reduces this disadvantage. We assume that the file manager caches metadata in both organizations and replies directly to clients on metadata related requests. However, file data caching is performed differently in the different organizations. Server-disk caches file data in the file manager cache, while the net-disk organization caches file data at the disks.

We are also interested in other issues in organizing the storage system. For example, how powerful should the disk processor be in a net-disk system? How large a cache should the disks have in order to maintain comparably high hit ratios? How will the overall performance change if we vary the number of disk nodes? We also want to identify which component among the file server, network and disks is the bottleneck that impacts system performance in these different approaches.
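The corresponding net-disk read path can be sketched the same way as the server-disk path above; note how the file manager's involvement shrinks to request setup and the completion handshake, while the block itself travels from the disk to the client over the LAN. Again, the constants and the cache representation are our assumptions, not the paper's parameters.

    # Illustrative sketch (Python) of the net-disk read path of Figure 2.
    DISK_CACHE = set()        # per-disk block cache
    FM_SETUP_MS = 0.05        # assumed cost to interpret and forward the request
    CTRL_MSG_MS = 0.02        # assumed control-message cost on the private network
    DISK_ACCESS_MS = 10.0     # assumed disk access latency on a miss
    LAN_TRANSFER_MS = 0.5     # assumed LAN transfer time per block

    def net_disk_read(block_id):
        """Return an illustrative response time (ms) for one block read."""
        t = FM_SETUP_MS + CTRL_MSG_MS     # steps 1-2: request forwarded to the disk
        if block_id not in DISK_CACHE:    # step 4: miss, read the block from disk
            t += DISK_ACCESS_MS
            DISK_CACHE.add(block_id)      # block now cached at the disk
        t += LAN_TRANSFER_MS              # steps 3-4: disk ships data to the client
        t += CTRL_MSG_MS                  # steps 5-6: completion status and reply code
        return t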
3 Simulations

In order to measure the overall system performance under the different storage system organizations, we performed a set of trace-driven simulations.

In each organization, the simulated system consists of a file manager and a set of disks. The file manager and the disks are interconnected differently in the different organizations. In the server-disk case, the disks are attached to the file manager through a SCSI bus, and in the net-disk case, the disks are attached to the file manager through a private fast network. In both cases, the cost of a control message to talk to a disk is assumed to be a fixed number of microseconds.
In both organizations, we assume that the file manager caches the metadata of the file system. In the server-disk case, the file manager also caches the block data. The size of the file manager cache is varied from 192 MB to 2176 MB. We vary the size of the disk cache from 4 MB to 128 MB in the network-attached disks. We assume LRU is used as the replacement policy for the caches both at the file manager and at the disks. We assume that the caches are organized on a fixed-size (KB) block basis.
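A minimal sketch of such a block-granular LRU cache is shown below; the 8 KB block size used in the capacity arithmetic is only an example value, not the paper's parameter.

    from collections import OrderedDict

    class LRUBlockCache:
        """Fixed-capacity block cache with least-recently-used replacement."""

        def __init__(self, capacity_blocks):
            self.capacity = capacity_blocks
            self.blocks = OrderedDict()            # block_id -> True, in LRU order

        def access(self, block_id):
            """Return True on a hit; on a miss, insert and evict the LRU block."""
            if block_id in self.blocks:
                self.blocks.move_to_end(block_id)  # mark most recently used
                return True
            if len(self.blocks) >= self.capacity:
                self.blocks.popitem(last=False)    # evict least recently used
            self.blocks[block_id] = True
            return False

    # E.g., a 4 MB disk cache of (assumed) 8 KB blocks holds 4 * 1024 // 8 = 512 blocks.
    disk_cache = LRUBlockCache(capacity_blocks=512)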
It is assumed that data is striped across the disks in order to distribute the load evenly among the disks. The data is stored across all available disks in stripe units of several KB blocks. In our system, the first chunk of each file is stored on a random starting disk s. Subsequent chunks of that file are stored consecutively on disks (s + 1) mod n, (s + 2) mod n, ..., where n is the number of disks. An equal number of disks is employed in both organizations.
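Under this layout, the disk holding any chunk of a file follows directly from the starting disk, as in the short sketch below.

    def chunk_location(start_disk, chunk_index, num_disks):
        """Disk holding chunk i of a file whose first chunk is on disk s:
        round-robin placement, (s + i) mod n."""
        return (start_disk + chunk_index) % num_disks

    # A file starting on disk s = 14 in a 16-disk system wraps around:
    assert [chunk_location(14, i, 16) for i in range(4)] == [14, 15, 0, 1]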
The simulations are written in CSIM. All the requests are sent to the file server first via the regular network (LAN), and then processed in separate ways depending on the organization, as shown in Figures 1 and 2. The file manager processor is assumed to have a processing capacity of 50 MIPS. We vary the disk processor speed from 10 MIPS to 100 MIPS to study the impact of disk processor power on the system performance.

Table 1 lists the costs of different operations that we will use as parameters in the simulations. We assume a fixed disk access latency. The costs for a cache access (hit/miss) in a file manager and the network processing load are based on measurements done on IBM's OS/2 operating system. We assume that the cache search time in the network-attached disk is half of that in the file manager cache because the file manager needs to do extra work (translating the file name into disk block addresses, checking access permissions, etc.). Costs for disk access and network transfer are based on the size of the request. The costs per KB block and per KB chunk reflect the efficiency of transferring data in larger size blocks from the disk and over the network. We assume that the disk has a fixed transfer rate in MB per second. Processor dependent costs are given in numbers of instructions so that the impact of changing the processor MIPS can be easily studied.

Table 1: Costs for different operations.

    Operation                                      Per-block (KB) cost   Per-chunk (KB) cost
    Getattr & similar                              ... ins.              ...
    Cache hit in file manager                      ... ins.              ... ins.
    Cache miss in file manager                     ... ins.              ... ins.
    Cache hit in network-attached disk cache       ... ins.              ... ins.
    Cache miss in network-attached disk cache      ... ins.              ... ins.
    Disk access time                               ... ms                ... ms
    Private network transfer time                  ... ms                ... ms
    Private network latency for control messages   ... us                ... us
    Public network (LAN) transfer time             ... ins.              ... ins.
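Because the processor-dependent costs are expressed in instructions, converting them into service time is a one-line computation. The sketch below shows the conversion; the 20,000-instruction count is a made-up example value, since Table 1's entries are parameters of the simulation.

    def cpu_cost_ms(instructions, mips):
        """Service time (ms) for an instruction count on a processor of the
        given MIPS rating; 1 MIPS = one million instructions per second."""
        return instructions / (mips * 1e6) * 1e3

    # A hypothetical 20,000-instruction request-handling path:
    print(cpu_cost_ms(20_000, 50))    # 0.4 ms on a 50 MIPS processor
    print(cpu_cost_ms(20_000, 100))   # 0.2 ms on a 100 MIPS processor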
In our experiments, we use request response time as the performance measure. Previous studies [9] have used the load on the server as a measure for evaluating the network-attached disk organizations. The goal of our study is to investigate the impact of the new organizations on the response time of users' requests. The request response time is measured as the time interval to service all the blocks in a given request.

We use two different traces in our simulations.
A trace based on an NFS workload and a trace based on web accesses are used in this study. The NFS trace data is obtained from the University of California at Berkeley. This trace consists of network requests from 237 clients during a one week period that are serviced by an Auspex file server. The trace was taken by snooping the network, so it only contains the post-client-cache request data. In other words, these requests are actually the local misses that occurred at the client caches. The original NFS trace includes a large amount of backup activity over weekends and at night. We only used the daytime activity on the server as input to our simulations.

The other trace we used in our simulations is the workload trace of the ClarkNet WWW server. ClarkNet is a busy Internet service provider for the Metro Baltimore-Washington DC area. This trace contains the requests from a large number of clients during a two week period. The original trace consists of successful accesses as well as unsuccessful accesses. We only consider the successful accesses since unsuccessful accesses do not incur any data transfer. Table 2 lists a brief description of these two trace files.

Table 2: Description of the two trace files.

    Metric                       NFS trace   Web trace
    Number of client machines    ...         ...
    Total number of requests     ...         ...
    Requested bytes (MB)         ...         ...
    Trace duration (hours)       ...         ...

The NFS trace file consists of three major types of activities: getattr & setattr, block read & write, and directory read & write. In all the organizations we study here, the file manager has to deal with accesses to file metadata and the translation of client requests into disk commands. Therefore the getattr & setattr and directory read & write operations are handled by the file manager in all the systems. In the net-disk organizations, block reads & writes occur between the disk (or disk cache) and the client without going through the file manager. The Internet trace file simply consists of file access operations, mostly file read operations.
4 Results

4.1 NFS trace

Fig. 3 shows the average request response time as a function of the disk processor MIPS. We used one file manager and 16 disk nodes in this experiment. The file manager has a processor capacity of 50 MIPS and a 128 MB cache. The disk cache in the net-disk organizations is varied from 4 MB to 32 MB. Since requests are processed entirely at the file manager in the server-disk organization, changes in disk processing capacity do not affect its performance. Higher disk processor capacity improves performance considerably in net-disk organizations, since block data transfer and network processing are performed by the disk processor. However, we see that the response time is not always better than in the server-disk organization. With a 4 MB disk cache, the net-disk organization has worse performance than the server-disk organization. Only with larger disk caches do net-disk organizations have better response times than the server-disk organization. This shows that the benefit of the distribution of data transfer brought by network-attached disks can be offset by the costs of extra disk accesses introduced due to insufficient caching. With 32 MB of disk cache and 100 disk MIPS, we see an improvement in overall response time compared to the server-disk organization. Even at a low disk processing speed of 10 MIPS, the processing capacity at the disks (a total of 160 MIPS) outweighs the capacity at the file manager. This is part of the reason why a net-disk organization outperforms the server-disk organization. The net-disk organization also has an extra amount of cache at the disk nodes. It is not clear whether the higher MIPS in the system, the larger amount of cache space, or the distribution of workload contributes to the improvement in response time in the net-disk organization. We will explore these issues further in later experiments.

[Figure 3 plots average response time (ms) against individual disk MIPS (10 to 100) for server-disk and for net-disk with 4, 8, 16 and 32 MB disk caches.]

Figure 3: Response time vs. disk processor MIPS.
Fig. 4 shows the average request response time as a function of the number of disk nodes. We assumed that the system has a file manager with 50 MIPS and 128 MB of cache. With a larger number of disks, the disk accesses can be distributed over more disks to reduce the disk waiting times in all the organizations. Net-disk organizations see even more improvement because the data transfer load can be distributed to more disk nodes. However, we notice that with 4-8 MB of disk cache, the net-disk organization has a higher response time than the server-disk organization. From Figures 3 and 4, it seems that a disk cache of 16-32 MB is needed for this NFS workload for net-disk organizations to outperform the server-disk organization.

[Figure 4 plots average response time (ms) against the number of disk nodes (4 to 16) for server-disk and for net-disk with 4, 8, 16 and 32 MB disk caches.]

Figure 4: Response time vs. number of disks.

So far, we have considered varying the disk parameters (number, MIPS and cache sizes) while keeping the configuration of the file manager constant in all the organizations. The server-disk organization cannot effectively utilize the enhancements at the disk (increased MIPS or increased caches). Given the same processing capacity and cache memory, the different organizations can utilize them more effectively in different locations in the system. How do the systems compare if we put the extra memory and the extra processing power at the file manager in the server-disk organization? To answer this question, we consider systems with equal processing capacity and equal amounts of cache memory.

Fig. 5 shows the average request response time as a function of the total memory in the system. In the server-disk organization, all the memory is located at the file manager. We assume that the file manager in the net-disk organization has 128 MB and that the rest of the memory is distributed equally across the 16 disks in the system. For example, with a total memory of 192 MB, each disk has (192 - 128)/16 = 4 MB in the net-disk organization. We assume that the file managers have an equal processing capacity of 50 MIPS in all the organizations. The disks are assumed to have a capacity of 25 MIPS. We notice that the response time of the server-disk organization is better than that of the net-disk organization in all the cases. However, as the amount of memory on each disk increases, the differences in response times reduce. Beyond a few tens of MB per disk, the difference in response times does not reduce significantly further.
[Figure 5 plots average response time (ms) against the total amount of memory (192 to 2176 MB) for the server-disk and net-disk organizations.]

Figure 5: Response time vs. total amount of memory.
Next we considered file managers with different processing capacities. In net-disk organizations, the processing capacity and the load are distributed between the file manager and the disks. In the server-disk organization, the processing capacity at the disks is unutilized, and hence it seems wasteful to locate any MIPS at the disks. For example, a net-disk organization with 50 MIPS at the file manager and 10 MIPS at each of the 16 disks in the system has a total processing capacity of 50 + 10 * 16 = 210 MIPS. How would such a system compare to a server-disk organization where all the 210 MIPS are located at the file manager? Fig. 6 shows the results from such experiments. We consider two processing capacities: 210 MIPS, as explained above, and 450 MIPS, with 25 MIPS at the 16 disks and a 50 MIPS file manager in the net-disk organization. We also kept the total amount of memory the same in both the organizations. The server-disk organization has significantly better response times for both the processing capacities. With increased amounts of memory, the performance differences are reduced. It is also noticed that the differences in response times have increased from the earlier results in Figure 5. It could be possible to divide the total MIPS more optimally in the net-disk organization to reduce the differences in performance. But Fig. 6 indicates that when the total memory in the two systems is the same, with the NFS workload, the net-disk organization has worse performance even with extra processing capacity.
[Figure 6 plots average response time (ms) against the total amount of memory (192 to 2176 MB) for server-disk and net-disk at total processing capacities of 210 and 450 MIPS.]

Figure 6: Response time with equal system MIPS.
Metadata requests constitute a significant fraction of the NFS workload requests but a small fraction of the bytes accessed. These requests are processed at the file manager in both the server-disk and net-disk organizations. With lower processor MIPS at the file manager in the net-disk organization, these requests will experience worse response times than in the server-disk organization. What about the response times for data requests? We observed that the data requests experience lower hit ratios in the net-disk organization, as shown in Figure 7. This reduced hit ratio offsets the gains due to distributing the network processing load. Hence, the overall response time is higher in the net-disk organization.
[Figure 7 plots cache hit ratio against the total amount of memory (192 to 2176 MB) for block data and metadata in the server-disk and net-disk organizations.]

Figure 7: Hit ratios as a function of memory.
Fig. 8 shows the different components contributing to the response times in the server-disk (left) and net-disk (right) organizations. Fig. 8 shows that the disk access times are significantly higher in the net-disk organization. As shown earlier, this is mainly due to the lower hit ratios for block data at the disks in the net-disk organization. This lower hit ratio, which results in higher average disk service times, dominates the other possible advantages of the net-disk organization for the NFS workload.
[Figure 8 plots, for the NFS trace, the components of request response time (file manager queueing and service times; disk network, queueing and service times) against the total amount of memory (192 to 2176 MB), with server-disk on the left and net-disk on the right.]

Figure 8: Components of request response time (NFS trace).
4.2 Web trace

We performed the same experiments using the ClarkNet trace file. In Web access, popular files are accessed frequently and other files are accessed rarely; it has been shown that a small percentage of the distinct files was responsible for the bulk of all the requests received by the server. Fig. 9 shows the performance of the different organizations as a function of the total memory in the system, with a file manager of 50 MIPS and 128 MB of cache memory and 16 disks each with 25 MIPS. In all our experiments, the cache hit ratio is already so high that increasing the total memory in the system does not show much performance improvement beyond the smallest configuration. We observe that the net-disk organization has a noticeably lower response time than the server-disk organization. This is in contrast to the earlier results observed for the NFS workload as shown in Fig. 5. This is mainly due to the high hit ratios observed even with small caches at the disks in the web workload.
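This sensitivity to popularity skew is easy to reproduce with a toy model: under a Zipf-like request distribution, a small LRU cache already absorbs most of the traffic. The sketch below is our illustration, not the paper's methodology, and all the sizes in it are made-up example values.

    import random
    from collections import OrderedDict

    random.seed(1)
    NUM_FILES, CACHE_SIZE, NUM_REQUESTS = 10_000, 200, 50_000
    weights = [1.0 / rank for rank in range(1, NUM_FILES + 1)]  # Zipf(1) popularity

    cache, hits = OrderedDict(), 0
    for f in random.choices(range(NUM_FILES), weights=weights, k=NUM_REQUESTS):
        if f in cache:
            hits += 1
            cache.move_to_end(f)           # mark most recently used
        else:
            if len(cache) >= CACHE_SIZE:
                cache.popitem(last=False)  # evict least recently used
            cache[f] = True

    # A cache holding 2% of the files serves well over half of the requests.
    print(f"hit ratio: {hits / NUM_REQUESTS:.2f}")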
[Figure 9 plots average response time (ms) against the total amount of memory (192 to 2176 MB) for the server-disk and net-disk organizations on the Web trace.]

Figure 9: Response time vs. total amount of memory (Web trace).
Fig. 10 shows the contribution of the different components to response times in both the organizations configured as explained above. Service times and waiting times at the file manager are the main contributors to the response time in the server-disk organization. The disk access time is not a major factor. It is observed that the distribution of the network processing cost to the disks in the net-disk organization results in smaller service and waiting times at the file manager. This results in improved performance in the net-disk organization. At smaller amounts of disk cache, the disk access times become significant because of the lower hit ratios in the net-disk organization.

[Figure 10 plots, for the Web trace, the components of request response time (file manager queueing and service times; disk network, queueing and service times) against the total amount of memory (192 to 2176 MB), with server-disk on the left and net-disk on the right.]

Figure 10: Components of request response time (Web trace).
5 Conclusions and future work

In this paper, we studied a number of issues in organizing a storage system based on network-attached disks. Through trace-driven simulations, we found that although using network-attached disks can reduce the workload at the file server, if sufficient caching is not employed at the disks, the overall system performance will be worse than in the traditional system.

The two traces we studied exhibited different behavior. Caching made a bigger impact on the NFS workload, and distribution of the network processing load made a bigger impact on the ClarkNet Web workload. With the NFS workload, distributed caching at multiple disk caches performed worse than a single centralized pool at the file manager, offsetting any possible gains due to distributing the network processing load. However, in the ClarkNet Web workload, where a small cache is enough to make a big contribution to the hit ratio, network processing played a bigger role in determining the performance. We conducted a number of other experiments to study the impact of varying the cost of a network reply, the cost of an average disk access, the fraction of metadata related requests in the workload, and the amount of client caching. These results can be found in the full version of this paper.

When systems with equal memory and equal processing capacity are compared, the traditional server-disk organization performed better than any net-disk organization in both workloads. If we relax the constraint of equal processing capacity, the net-disk organizations could provide better performance than a server-disk organization in the web workload. If the server has to process the data, rather than just supplying it to the client, the server has to store this data in memory. In this case, caching at the disks in net-disk will not be useful for processing the data at the server. Database and transaction processing systems which process retrieved data may not be able
to exploit the advantages of network-attached disks as well as a file system does.

A number of issues require further study. Can a net-disk organization support higher loads due to the reduced load on the file manager? How does the workload impact the minimum required size of the disk cache in the net-disk organization? What workload characteristics determine whether a net-disk organization can provide better performance than a server-disk organization?
References

[1] J. H. Howard, M. L. Kazar, S. G. Menees, D. A. Nichols, M. Satyanarayanan, R. N. Sidebotham, and M. J. West. Scale and performance in a distributed file system. ACM Transactions on Computer Systems, 6(1):51-81, Feb. 1988.

[2] P. Trautman, B. Nelson, and Auspex Engineering. An overview of NFS server using functional multiprocessing. Technical report, Auspex Corporation.

[3] J. H. Hartman and J. K. Ousterhout. The Zebra striped network file system. In Proc. of the 14th SOSP, pages 29-43, Dec. 1993.

[4] D. D. E. Long, B. R. Montague, and L. Cabrera. Swift/RAID: A distributed RAID system. Computing Systems, 7(3), 1994.

[5] E. D. Katz, M. Butler, and R. McGrath. A scalable HTTP server: The NCSA prototype. In Proceedings of the First International WWW Conference, May 1994.

[6] M. D. Dahlin, R. Y. Wang, T. E. Anderson, and D. A. Patterson. Cooperative caching: Using remote client memory to improve file system performance. In Proceedings of the First Symposium on Operating Systems Design and Implementation, pages 267-280, November 1994.

[7] Seagate Corporation. Fibre Channel: The digital highway made practical. Technical report, Seagate Corporation. http://www.seagate.com/support/disc/papers/fibp.shtml.

[8] Robert W. Horst. TNet: A reliable system area network. IEEE Micro, 15(1):37-45, Feb. 1995.

[9] Garth A. Gibson et al. File server scaling with network-attached secure disks. In Proceedings of the Sigmetrics Conference on Measurement and Modeling of Computer Systems, Seattle, June 1997.

[10] D. A. Patterson, G. Gibson, and R. H. Katz. A case for redundant arrays of inexpensive disks (RAID). In Proceedings of the ACM SIGMOD Conference, pages 109-116, June 1988.