Figure 4.8. Distributed File Models: (a) Download-Upload; (b) Remote Access

are going to briefly review some successful systems that have answered the
previous questions in different ways.

4.2.1 NFS

The Network File System (NFS), developed by Sun Microsystems in 1984, was
the first commercial distributed file system. It is a striking example of good
distributed systems design.
The NFS is built using a client-server approach. Servers store files and respond
to requests from remote applications that need to access those files. However,
the applications are not required to contact the server directly. Instead,
file system calls from the application are redirected to a proxy of the remote
service that executes on the client's machine. This proxy is simply called the
NFS client, and it takes care of the interaction with the remote server. This
interaction is performed using Sun's remote procedure call service. The design
decision of using RPC to access the server, along with the strategic option of
making the server's interface public, was one of the reasons that made NFS so
popular. Since the interfaces and the format of the associated messages were
known, it was possible for different companies to develop NFS components for
many different architectures and operating systems.
The use of the client-server approach is made transparent to the client
application through the addition of a Virtual File System (VFS) layer to the Unix
kernel. The layer introduces an additional level of indirection in the file system
calls. Instead of calling a specific file system primitive, the application calls the
VFS interface that, in turn, calls the NFS client primitives. The VFS allows
the kernel to support several file systems simultaneously, including the NFS
and the native file system.
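
The following fragment is a minimal sketch of this indirection, written in
Python rather than kernel C, with all names invented for illustration: a
dispatch table maps mount prefixes to file system implementations that share
one interface, so callers never know which implementation serves a given path.

    class LocalFS:
        def read(self, path):
            return f"local read of {path}"

    class NFSClientFS:
        def read(self, path):
            return f"RPC read of {path} from a remote server"

    class VFS:
        def __init__(self):
            self.mounts = {}            # mount prefix -> file system object

        def mount(self, prefix, fs):
            self.mounts[prefix] = fs

        def read(self, path):
            # Route the call to the file system owning the longest prefix.
            prefix = max((p for p in self.mounts if path.startswith(p)),
                         key=len)
            return self.mounts[prefix].read(path)

    vfs = VFS()
    vfs.mount("/", LocalFS())
    vfs.mount("/home", NFSClientFS())
    print(vfs.read("/etc/passwd"))      # served by the native file system
    print(vfs.read("/home/ana/notes"))  # transparently served by NFS
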
The NFS server uses a stateless approach, i.e., servers do not keep any state
about open files. Also, servers keep no information about the number and
state of their clients. This means that each request must be self-contained, i.e.,
the server must be able to process the request just by looking at its parameters,
without any knowledge of past requests. Since the server remembers nothing,
it does not lose any relevant information when it crashes. Thus, a stateless
approach has some advantages from the fault tolerance point of view: a server
may crash, recover, pick a new request and continue as if nothing happened (we
postpone a further discussion of fault tolerance issues to the next part of
the book). On the other hand, since the server does not keep state, it is unable
to check whether a file is being accessed by one or more clients. This, as we will
see, makes the task of preserving the Unix semantics of file sharing impossible.
For this reason, NFS does not support the open primitive (open semantics
require the system to "remember" that the file has been opened). Instead, a
lookup primitive, which provides a handle for a file name, is provided.
A partial list of the NFS server interface is presented in Table 4.1. The
reader will notice that read and write calls include an offset parameter. This
is required because the server, being stateless, does not store the file pointer
on behalf of the client. Instead, the file pointer must be stored by the NFS
client and sent explicitly on read and write calls. The cookie parameter in the
readdir call plays a similar role, allowing a client to read a large directory in
pieces (the cookie points to the next directory entry to be read). The lookup
primitive returns a file handle that consists of a pair: a unique file system
identifier, and a unique file identifier. The unique file identifier is made of the
Unix inode1 of the file and a generation number, which is incremented whenever
the inode is re-used. This ensures that identifiers are not reusable, and that
the identifier of a new file cannot be confused with the identifier of a previous
file on the same file system.

Table 4.1. NFS Interface (partial)

Name (parameters)                  Returns
lookup (dirfh, name)               (fh, attr)
create (dirfh, name, attr)         (newfh, attr)
remove (dirfh, name)               (status)
read (fh, offset, count)           (attr, data)
write (fh, offset, count, data)    (attr)
mkdir (dirfh, name, attr)          (newfh, attr)
readdir (dirfh, cookie, count)     (direntries)

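To make the impact of statelessness concrete, the following Python sketch (a
toy model with invented names, not the actual NFS protocol) shows a read
call that is fully self-contained: the server answers from the request
parameters alone, while a client-side wrapper keeps the file pointer.

    # (fsid, inode) -> data; the server holds file contents but no sessions.
    FILES = {("fs0", 42): b"hello, stateless world"}

    def server_read(fh, offset, count):
        # The server satisfies the call from the parameters alone.
        return FILES[fh][offset:offset + count]

    class NFSClientFile:
        """Client-side wrapper storing the file pointer for the application."""
        def __init__(self, fh):
            self.fh, self.pos = fh, 0

        def read(self, count):
            data = server_read(self.fh, self.pos, count)  # explicit offset
            self.pos += len(data)
            return data

    f = NFSClientFile(("fs0", 42))
    print(f.read(5), f.read(5))   # b'hello' b', sta'
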
Consider now an application that reads a file, reading only one byte at a
time. If a remote procedure call were to be performed for each byte read, the
performance of the system would be unacceptable. This problem is not specific
to distributed systems; in a centralized system one also avoids performing an
I/O operation for each byte read by caching one or more file blocks in main
memory.

1 In the Unix file system, files are uniquely identified by an index node, or simply, inode.

In the NFS a similar approach can be followed, by caching the block
both in the server's memory and in the client's memory. Unfortunately, having
the same data cached in different machines introduces the problem of cache
coherence. In a centralized system, the cache is physically shared by all
processes. Since a single copy of the cached data exists, the sharing semantics
of a centralized Unix file system is the following: if a process does a write,
the results become immediately visible to all other processes. Clearly this is
very expensive to obtain in a distributed system, since propagating an update
requires at least one remote procedure call. The NFS approach implements a
weaker consistency model that tries to balance consistency with performance
requirements.
When reading a file, the NFS client reads a complete disk block (which is 8
kilobytes in the Unix BSD 4.2 Fast File System). The block is cached in the
client's memory and is considered valid for some amount of time (typically, 3
seconds for files). Thus, subsequent reads that fall into the same block do not
require a remote access to the server. After this period, a new access must first
check with the server if the cache is still valid. For this purpose, the client also
remembers the "version" it has cached, more precisely, the last time the data
was updated on the server. If the cache is still valid, it is assumed valid for
another 3-second period; if not, the new version of the block is fetched again
from the server.
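
The revalidation logic just described can be summarized in a few lines of
Python. This is an illustrative model only: the function names are invented,
and using a server-side modification time as the "version" is an assumption
based on the description above.

    import time

    TTL = 3.0   # seconds during which a cached block is trusted blindly

    def read_block(cache, key, fetch_block, get_server_mtime):
        """Return block data, revalidating with the server only after TTL."""
        blk = cache.get(key)
        if blk is not None and time.time() - blk["checked_at"] < TTL:
            return blk["data"]                  # no remote access needed
        mtime = get_server_mtime(key)           # small RPC: "still valid?"
        if blk is not None and blk["mtime"] == mtime:
            blk["checked_at"] = time.time()     # valid for another period
            return blk["data"]
        data = fetch_block(key)                 # stale or absent: refetch
        cache[key] = {"data": data, "mtime": mtime,
                      "checked_at": time.time()}
        return data
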
Writes are executed in a similar manner. Instead of contacting the server
each time a byte is written, the client caches all the writes for some time. The
cache has to be flushed if the file is closed or if a sync call is performed by the
application. Otherwise, updates are sent asynchronously to the server, using
periods of low activity in the client. This task is performed by a Unix daemon,
called the block io daemon (or simply the bio-daemon). The daemon can also
try to optimize reads by performing read-ahead, i.e., requesting in anticipation
the next block of a file being read by an application. To ensure that writes are
guaranteed to be stored on disk when the remote procedure call returns, the
cache on the server operates in write-through mode, i.e., writes are immediately
forced to disk.
So far, we have presented the interaction between a client and a server. We
have not discussed how the client finds the appropriate server in the first place.
The names of the servers storing remote files are configured at each client using
an extension to the Unix mount facility. The mount mechanism allows a file
system to be "attached" to another file system at a given directory, called the
mount point. The NFS mount procedure allows a sub-tree of the server's file
system to be mounted on a specific directory of the client machine (if the client
has no disk, it can be mounted on the root directory of the client machine).
When performing the mount operation the client contacts the server, which
checks access rights and, in case of success, returns to the client a file handle
to the mounted directory. If, when translating a textual file name into a file
handle, the NFS client traverses a mount point, it uses the file handle returned
by the mount operation to perform the subsequent lookups.
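
A toy Python model of this translation (all names hypothetical) illustrates
how the handle returned by mount seeds a chain of lookup calls, one per path
component, matching the lookup primitive of Table 4.1.

    def resolve(path, mount_handle, lookup):
        """lookup(dirfh, name) -> file handle, as in Table 4.1."""
        fh = mount_handle                  # handle obtained at mount time
        for name in path.strip("/").split("/"):
            fh = lookup(fh, name)          # one lookup RPC per component
        return fh

    # Example with a toy directory tree standing in for the server:
    TREE = {"root": {"home": "h1"}, "h1": {"ana": "h2"}, "h2": {"notes": "h3"}}
    fh = resolve("/home/ana/notes", "root", lambda d, n: TREE[d][n])
    print(fh)   # h3
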

4.2.2 AFS

The Andrew File System (AFS) was originally developed at Carnegie-Mellon
University and later became a product of Transarc. The major design goal
of AFS was to achieve scalability in terms of the number of clients. Many of the
design decisions behind the development of AFS aimed at overcoming known
limitations of previous systems. For instance, with regard to the NFS file
system described above, it was observed that the cache validation procedure,
where clients ask the servers about the validity of cached data, could easily
overload the server with too many requests. Furthermore, it was noticed that
most of these requests were unnecessary, given that the majority of files are not
shared and, thus, caches are usually valid.
To support their design decisions, the AFS development team made extensive
measurements of file usage in their academic environment. The observations
made at that time and in those environments have shown interesting facts: files
were usually small; reads were much more common than writes and typically
sequential; most files were updated by a single user; and when one file was
accessed it was likely to be accessed again in the near future.
To address these access patterns, and assuming that local disks were available
at client machines, a file system based on whole file caching was proposed, i.e.,
in AFS clients cache complete files (this approach was later relaxed to
accommodate very large files, by supporting caching of file portions). Once
a file is cached, all read and write accesses are purely local and require no
synchronization with the server. Once closed, the file remains in cache. When
re-opened, the local cache is used whenever possible.
Enough disk space should be reserved in the clients' cache to hold the files
needed for the typical operation of most applications. A daemon process in
the client, called Venus, is responsible for managing the AFS cache and for
transferring files from and back to the server. The counterpart of Venus on the
server is called Vice. To the clients, the file system appears seamless, though
some files are local and others shared through Venus. This architecture is
illustrated in Figure 4.9.
AFS supports read-write files accessed concurrently by clients, but a RW
file can have several read-only copies hosted at more than one Vice. Whenever
the master file is updated, a release command makes sure that the RO copies
are also updated. The above characteristics make AFS well-suited for a few
classes of applications over file systems:
`
`• shared read-only repositories, with occasional updates (e.g., news, price
`lists)- typically using RO copies for wider availability to many readers,
`single-client updates;
`• shared read-write repositories, with infrequent updates (e.g., cooperative
`editing)- competitive few-writer activity, local caches remain valid for long;
`• non-shared repositories (e.g., personal files)- single-writer activity, local cache
`remains valid wherever user is.

Figure 4.9. AFS Architecture

The goal of whole file caching is to relieve the server from unnecessary load.
But, if the client should try to limit the interaction with a server to a minimum,
how can it check the validity of data in its cache? The AFS approach consists
of delegating to the server the responsibility for invalidating the client's cache
when some other client updates the same file (actually, this optimization was
only implemented in the second version of AFS).
When Vice gives Venus a copy of a file, it also makes a callback promise,
or CBP. A CBP remains valid until hearing otherwise from Vice. When a file
is changed at Vice, it calls back all clients holding valid CBPs for that file, so
that they cancel the CBP, which invalidates the file cache. When Venus opens
a file, it analyzes the cache. If the file is not in cache, it requests a copy from
Vice. If the file is in cache, it checks the CBP. If the CBP is valid, it opens the
local copy; if not, it requests a current copy from Vice.
The reader will note that there is a window during which a local copy may
be opened that is not the most recent one2. In order to reduce this risk, an
expiration mechanism exists that supersedes the algorithm described above: a
file is only opened locally if less than T has elapsed since the local Venus last
heard from Vice concerning this file. That is, if a file is opened after T, the
latest version is downloaded from Vice. A typical value for T is 10 minutes.
This mechanism also recovers from the loss of callback messages.

2 Earlier versions of AFS checked directly with Vice before opening, instead of relying on the CBP, but this did not scale well.
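
The open-time decision just described, combining the callback promise with
the expiration window T, can be sketched as follows. This is a simplified
Python model, not AFS code, and the structure of the cache entries is
invented for illustration.

    import time

    T = 600.0   # expiration window in seconds (10 minutes in the text)

    def venus_open(name, cache, fetch_from_vice):
        """Decide, at open time, whether the cached copy can be used."""
        entry = cache.get(name)
        fresh = entry is not None and time.time() - entry["last_heard"] < T
        if entry is not None and entry["cbp_valid"] and fresh:
            return entry["data"]              # open the local copy
        data = fetch_from_vice(name)          # fetch the current copy;
        cache[name] = {"data": data,          # Vice renews the CBP here
                       "cbp_valid": True,
                       "last_heard": time.time()}
        return data

    def callback_break(name, cache):
        """Run when Vice calls back: cancel the promise for this file."""
        if name in cache:
            cache[name]["cbp_valid"] = False
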
If the client crashes it may miss one or several callbacks from the server.
Thus, when a client recovers it has to contact the server and determine the
status of all the files it holds, by checking the timestamps of the relevant CBPs
against the file information on the server. To prevent the server from storing
callback information indefinitely, access rights (also called the file tokens) are
only valid for some limited period.
The semantics of AFS is not exactly one-copy. When more than one client
opens a file concurrently, the server will hold the state of the last copy to be
closed. This form of consistency is, however, adequate for the example classes
we have enumerated earlier. Furthermore, applications can always superimpose
their own synchronization on top of these basic mechanisms.

4.2.3 Coda

The Coda file system is a follow-up of the AFS project at CMU, led by some
members of the AFS development team. The main goals of Coda were to
improve reliability and availability vis-a-vis partitioning, and to support nomadic
and mobile computing. This was achieved by whole volume replication, and by
disconnected operation. Coda supports the use of portable computers as file
system clients, and tries to offer what the authors call constant data availability.
Coda can be in one of three states (Figure 4.10). The whole file caching
approach of AFS allows the client to cache in its local disk the files that it
will need while disconnected. Caching files that are going to be needed in
the future is called hoarding. Manual hoarding is possible, but the authors have
studied techniques to automate the task of selecting which files to cache. When
the portable computer disconnects from the network, Coda is in the emulation
phase: the user can work on the files cached in the local disk.
The servers containing replicas of a file form its volume storage group, or VSG.
Often, only part of the replicas are available (due to partitioning or
disconnection); these form the available VSG, or AVSG. Opening a file consists
of reading it from one of the AVSG replicas and caching it locally. When the file
is closed, it remains valid at the client, and a copy (reconciliation) is made to
all AVSG servers.
Naturally, while disconnected the client is unable to receive any callbacks
from the servers, and this presents an opportunity for conflicting updates on
the same file. When the portable is later reconnected to the network, an
automatic file system reintegration procedure is used, as illustrated in Figure 4.10.
The procedure compares the versions of the client files with those of the server
files and checks for conflicts. When no conflicts are found, the system
automatically reconciles both versions of the file system. When conflicts are found,
two versions of the conflicting files are stored and manual intervention of the
user is requested.
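
A rough Python sketch of the conflict check follows. Coda's actual
reintegration uses richer version metadata; here each file simply carries a
version counter, and the version recorded at disconnection time serves as the
common base, an assumption made purely for illustration.

    def reintegrate(client_files, server_files, base_versions):
        """Classify each cached file after reconnection (names hypothetical)."""
        updates, fetches, conflicts = [], [], []
        for name, (data, client_v) in client_files.items():
            base_v = base_versions[name]
            server_v = server_files[name][1]
            client_wrote = client_v > base_v
            server_wrote = server_v > base_v
            if client_wrote and server_wrote:
                conflicts.append(name)        # keep two versions, ask the user
            elif client_wrote:
                updates.append((name, data))  # propagate disconnected updates
            elif server_wrote:
                fetches.append(name)          # refresh the stale local copy
        return updates, fetches, conflicts
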

Figure 4.10. Coda Operation

4.3 DISTRIBUTED COMPUTING ENVIRONMENT (DCE)

In this part of the book we have referred to a number of technologies that
help in the design and implementation of distributed applications and systems.
Examples of these technologies are remote procedure call services, directory
services, time services, distributed file systems, security systems, etc. For each
`
`Exhibit 2026 Page 165
`
`

`

`DISTRIBUTED SYSTEMS AND PLATFORMS
`
`147
`
`Hoarding
`
`~
`
`Figure 4.10.
`
`Coda Operation
`
of these services, several commercial products have emerged on the market.
The diversity of technologies, often incompatible, makes the task of integrating
applications built by different developers extremely hard.
The Distributed Computing Environment (DCE) is a standard endorsed
by a consortium of several companies, including major players such as IBM,
the former DEC, and Hewlett-Packard, under the name of Open Software
Foundation (OSF). DCE selects a particular technology for each of the services
previously listed and offers them in an integrated package. The package was
initially supported on Unix but was later ported to other operating systems as well.
Conceptually, there is not much really exciting in DCE, even though some
of its services represent excellent technical solutions. This does not come as
a surprise, since DCE adopted standards and technologies that had already
proven their value, some of which are addressed in this book. For instance,
the directory service is based on X.500, the file system is derived from AFS, and
the time service is derived from NTP. The merit of DCE is to provide all these
technologies as a coherent set.
Another feature that makes it hard to describe DCE in a few lines is that
it contains components that operate at different levels of abstraction. DCE is
independent of the platform and operating system, thus these two layers are
somewhat outside the scope of the DCE package. On top of the native
operating system, DCE adds a dedicated threads interface. The availability of
threads was felt to be a fundamental requirement for the efficient
implementation of distributed applications (in particular servers), so DCE includes its own
thread package. The thread package, like most DCE components, has plenty
of options and operational modes, enough to make almost everybody happy.
For instance, three different scheduling policies are supported: priority-based,
round-robin within the same priority, and time-sliced round-robin; three types
of mutexes are available; and so on.
Using the operating system (augmented with the DCE thread interface), the
DCE remote procedure call package is implemented. The main computational
model supported by DCE is client-server, and RPC is a fundamental building
block for the remaining services. Like almost every RPC package, DCE RPC
allows server interfaces to be written in an Interface Definition Language and
provides the compiler to automatically generate client and server stubs from
these interfaces. Each service is identified by a unique identifier. To help
programmers obtain identifiers that are really unique, a unique identifier
generator is also provided (which encodes the location and date of generation).
The RPC package provides optional authentication and encryption (see Secure
Client-Server with RPC in Chapter 18). On top of the DCE RPC service, the
time service, the directory service and the security service are implemented.
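
A close descendant of that generator survives in today's version-1 UUIDs,
which trace back to the DCE identifier format. Python's standard uuid module
illustrates the principle (a small demonstration, not the DCE API):

    import uuid

    # A version-1 UUID embeds the generation time and the host's hardware
    # address, the same "location and date" idea described above.
    service_id = uuid.uuid1()
    print(service_id)        # unique across hosts and across time
    print(service_id.time)   # the embedded 60-bit timestamp field
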
The Distributed Time Service (DTS) is an evolution of NTP (see Network
Time Protocol in Chapter 14). Its role is, of course, to keep local clocks
synchronized. The service is of paramount importance for many other services.
Among other applications, the global notion of time is used by the file system
to timestamp updates and compare file versions. It is also used by the security
service to check the validity of a credential. An interesting feature of the DCE
time service is that the user is informed of the actual accuracy of the value
provided. DTS uses this information when comparing two dates, to check that
the timestamping error is small enough for them to be comparable.
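
The comparison rule can be stated compactly: given two timestamps with
their inaccuracies, the two are ordered only if their uncertainty intervals do
not overlap. The Python sketch below illustrates that rule; it is not the DTS
API, and the function name is invented.

    def compare(t1, e1, t2, e2):
        """Return -1, 1, or None when the intervals overlap (not comparable)."""
        if t1 + e1 < t2 - e2:
            return -1          # t1 is definitely earlier
        if t2 + e2 < t1 - e1:
            return 1           # t1 is definitely later
        return None            # error bounds too large to order the events

    print(compare(100.0, 0.5, 103.0, 0.5))   # -1: clearly ordered
    print(compare(100.0, 2.0, 103.0, 2.0))   # None: intervals overlap
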
The directory service (names and structure are inspired by X.500, see X.500
in this chapter) is organized as a set of cooperative cells. Each cell manages its
own name space and has a local Cell Directory Server (CDS). To "glue" different
cells, two global directory servers can be used: the DCE Global Directory Server
or the Internet DNS. Cell directory servers interact with the global service through
a Global Directory Agent, which shields the CDS from the details of the GDS or
DNS.
The security server of DCE is based on the Kerberos security server (see
Kerberos in Chapter 19). It manages access rights based on Access Control
Lists and implements authenticated RPCs.
Finally, we can find the DCE file system, called the Distributed File System
(DFS). It contains two main components: a local component, called Episode,
and a global component based on AFS (see AFS in this chapter). An interesting
feature of the DFS is that the file naming service is integrated with the directory
service CDS, so files can be relocated just by updating directory data.

4.4 OBJECT-ORIENTED ENVIRONMENTS (CORBA)

In some sense, CORBA, the Common Object Request Broker Architecture, is an
object-oriented DCE. It has also been proposed by a consortium of major
industry companies, the Object Management Group (OMG). Essentially, CORBA
also follows a client-server approach but at a higher level of abstraction. Instead
of having client processes interacting with server processes, CORBA provides
the basis for having objects interacting with other objects.
The state of a CORBA object is encapsulated by a well-defined interface.
As in an RPC system, object interfaces are written in an Interface Definition
Language, in this case in CORBA IDL, whose grammar is a subset of C++.
Following object-oriented principles, CORBA IDL supports inheritance, thus
new interfaces can be built by extending previously defined interfaces.
OMG started by defining the architecture illustrated in Figure 4.11. The
core of the architecture is the Object Request Broker, an abstraction that
supports interaction among objects. The broker is responsible for making sure that
an object can invoke another object, implementing the required protocols to
send the requests and receive the replies. Of course, application programs need
an interface to the broker in order to instantiate objects, create references to
remote objects, issue object invocations and so on. The first CORBA 1.1
specification defined the CORBA IDL, the IDL mappings to common programming
languages, and the application programming interfaces to the ORB. This
allowed developers to write application code that was more or less portable across
ORBs from different vendors. The "catch" was that some vendors did include
non-standard features in their ORBs. These features were added to enhance
the standard ORB functionality, but in practice, these proprietary
enhancements prevented seamless portability. Another catch was that protocols and
message formats were not part of the standard. The idea was to give room for
each vendor to pick the most appropriate solution for their target market and
architectures. The less positive aspect of that decision was that ORBs from
different vendors did not inter-operate. This problem was eventually fixed with
the release of CORBA 2.0, which defined a common protocol to be supported by
every ORB, the Internet Inter-ORB Protocol, or simply IIOP (to be more
precise, IIOP is an implementation of a more general protocol, the General
Inter-ORB Protocol (GIOP), over the TCP/IP protocol suite).

Figure 4.11. CORBA Architecture

If you have read the previous section on DCE, you already know that in order
to build useful complex distributed applications you need more than remote
invocations. CORBA has defined an extensive set of services, characterized
by their standard CORBA IDL interfaces. No fewer than 15 services have been
defined. We can describe some of the most relevant:
• Naming Service: as in any other system, it keeps associations between names
and object references (a minimal sketch follows this list).
• Persistence Service: defines an interface to store the object state on storage
servers, which can be implemented using traditional file systems or advanced
database systems.
• Concurrency Control Service: provides a lock manager that can be used in
the context of transactions to enforce concurrency control.
• Transaction Service: supports transactions, offering atomic commitment
services.
• Event Service: allows some components to produce events that are
distributed through an event channel to all interested subscribers.
• Time Service: provides a common time frame in the distributed system.
• Trader Service: allows objects to register their properties and clients to
search for appropriate servers using this information.
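
As promised above, the Python sketch below reduces the naming service to
its essence: a table of bindings from names to opaque object references, with
resolve as the lookup operation. The names and the reference format are
invented for illustration.

    class NamingService:
        def __init__(self):
            self.bindings = {}

        def bind(self, name, obj_ref):
            self.bindings[name] = obj_ref

        def resolve(self, name):
            return self.bindings[name]   # raises KeyError if unbound

    ns = NamingService()
    ns.bind("bank/accounts", "IOR:0000...")   # an opaque object reference
    print(ns.resolve("bank/accounts"))
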

Other services include the Life Cycle Service, the Relationship Service, the
Externalization Service, the Query Service, the Licensing Service, and the
Collection Service. In addition to these general purpose services, many interfaces
have been standardized for specific business domains.
Of course, it is possible to build applications using just a few of these services.
In fact, none of them is mandatory (but it is hard to build something useful
without the naming service). The basic Corba functionality is pretty
"conventional" when compared with an RPC system. The programmer writes the
object interface in IDL. The IDL specification is compiled and a description of
the interface is stored in the Interface Repository. From the IDL specification,
both client and server stubs are created for the target programming language
(support for at least C++ and Java is now fairly common). The application
programmer still has to write the actual object code, which is linked with the
server stub and with an Object Adapter. The adapter supports the interface
between the ORB and the object, providing the functionality to register the
object within the ORB, to dispatch requests to the appropriate objects, and to
send back replies.
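
The adapter's dispatching role can be modeled in a few lines. The following
Python sketch is a deliberately naive stand-in for a real object adapter, with
invented names, but it shows the essential flow: servants are registered under
object identifiers, and each incoming request is routed to the right servant.

    class ObjectAdapter:
        def __init__(self):
            self.servants = {}

        def register(self, object_id, servant):
            self.servants[object_id] = servant

        def dispatch(self, object_id, method, *args):
            servant = self.servants[object_id]      # locate the target object
            return getattr(servant, method)(*args)  # invoke, return the reply

    class Counter:
        def __init__(self):
            self.n = 0

        def increment(self):
            self.n += 1
            return self.n

    adapter = ObjectAdapter()
    adapter.register("counter-1", Counter())
    print(adapter.dispatch("counter-1", "increment"))   # 1
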
The most straightforward way to activate an object is to execute it in the
context of a dedicated process (this is called the unshared server
approach). This process can be started when the system boots or just when an
invocation is received by the ORB. As long as this process remains active, all
invocations are forwarded to it. However, other policies can be implemented.
For instance, it is possible to create a different process to execute each method.
This approach, called the server-per-method approach, is more suitable for
stateless objects, where no shared state needs to be preserved in volatile memory
across invocations.
Corba can be used to develop new applications from scratch. As with DCE,
one advantage of using Corba is that many of the annoying details related
to the implementation of RPC, server and client instantiation, etc., are
handled by the ORB. An application built this way will be able to inter-operate
with any other application adhering to the Corba standards. Additionally,
"transformer" objects can be built to wrap legacy applications, adding
Corba-compliant interfaces to old code. This type of architecture is known as a "Corba
3-tier client-server architecture" and is illustrated in Figure 4.12. Using this
3-tier architecture and Corba IIOP, it is also possible to build powerful
applications for the World-Wide Web, but this is the topic of our next section.

Figure 4.12. Corba 3-tier Architecture

4.5 WORLD-WIDE WEB

In the late 80's, despite the enormous potential and relative maturity of
distributed systems technologies, it was felt among researchers that a "killer
application" was lacking, one that could show the benefits of distributed systems
in an obvious and indisputable way. The World-Wide Web (known simply as
WWW or the web) was the killer application of the nineties.
It is interesting to observe that the application that had such a big impact on
industry and society was, at its inception, relatively simple in terms of distributed
systems concepts. It is basically a client-server application: clients, known as
browsers, make requests to WWW servers in order to fetch documents and
launch the execution of commands.
The WWW was created by Tim Berners-Lee and Robert Cailliau, working
at CERN, with the goal of supporting information sharing among physics
scientists. We can now say that the system was extremely successful in that task;
it was the genesis of a global infrastructure that supports sharing and
dissemination of information at a scale never seen before. The key to this success
lies in the simplicity of its interface. Using a browser, remote information
can be accessed just by clicking a button. Previously, cumbersome and often
arcane sequences of commands had to be issued to achieve the same goal.
Documents in the WWW have a structured impure name called the Uniform
Resource Locator (URL). The URL has the following format:
<protocol>://<serveraddr>{/<path>}
where: the first field specifies which protocol must be used to interact with
the server (several protocols are supported, HTTP being the most common);
serveraddr is the address of the server to be contacted; and path indicates
which document should be fetched (if no name is specified, a document called
index.html is read). Documents may be stored in several formats, from simple
text to audio and video. Some of these formats are recognized and interpreted
by the browser itself. Others are interpreted by companion applications that
can be launched by the browser in order to display the document.
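
As a small illustration (using Python's standard library and a made-up
address, not anything from the text above), a URL parser splits an address
into exactly the fields just described:

    from urllib.parse import urlparse

    parts = urlparse("http://www.example.org/docs/paper.html")
    print(parts.scheme)   # 'http': protocol used to contact the server
    print(parts.netloc)   # 'www.example.org': the server address
    print(parts.path)     # '/docs/paper.html': the document to fetch
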
Having a simple way to name and access documents is already a major
contribution of the WWW architecture. However, if users were required to
memorize the URL of all documents they were interested in, the WWW would not
have been such a big success. The other key factor of success was the use of
hypermedia documents, which include links to other documents. The browser is
able to interpret and display hypermedia documents in the HyperText Markup
Language (HTML). Additionally, the browser allows the user to activate a link
(typically, by clicking a mouse button) and automatically fetches the document
whose URL is associated with the link. In this way, the user just has to
remember the URL of the main page of an information repository or site, also
known as the home page, which in turn holds the links for all other relevant
documents (actually, most browsers have a way to store URLs, so that users
do not even need to memorize the URLs of home pages). Today, it is a major
business to provide pages, known as portals, with links to useful information on
the web, shops, advertising and much more. From a major portal, and just by
clicking, the user is able to navigate through a huge net of documents, an often
addictive activity also known as surfing the web.
The infrastructure we have just described is extremely useful and efficient,
but it is somewhat limited since it only supports the flow of information from
the server to the client. Often, we also want the clients to send information to
the server and request the execution of remote actions. For instance, the user
may want to perform a query on a database, or issue an order when buying
some goods. Thus, the first step to make the web more interactive was to allow
clients to request the execution of programs in the servers. To support that
type of interaction, servers were extended in several ways, the most common of
which is an interface called the Common Gateway Interface (CGI). The CGI
specifies how the browser indicates which programs should be executed and how
parameters are sent to that program (and results sent back to the browser).
According to this interface, WWW servers are able to launch programs upon
request, which are executed as a separate process in the server machine. Th
