A Note on Distributed Computing

Jim Waldo
Geoff Wyant
Ann Wollrath
Sam Kendall

SMLI TR-94-29

November 1994

Abstract:

We argue that objects that interact in a distributed system need to be dealt with in ways that are intrinsically different from objects that interact in a single address space. These differences are required because distributed systems require that the programmer be aware of latency, have a different model of memory access, and take into account issues of concurrency and partial failure.

We look at a number of distributed systems that have attempted to paper over the distinction between local and remote objects, and show that such systems fail to support basic requirements of robustness and reliability. These failures have been masked in the past by the small size of the distributed systems that have been built. In the enterprise-wide distributed systems foreseen in the near future, however, such a masking will be impossible.

We conclude by discussing what is required of both systems-level and application-level programmers and designers if one is to take distribution seriously.

A Sun Microsystems, Inc. Business

M/S 29-01
2550 Garcia Avenue
Mountain View, CA 94043

email addresses:
jim.waldo@east.sun.com
geoff.wyant@east.sun.com
ann.wollrath@east.sun.com
sam.kendall@east.sun.com

A Note on Distributed Computing

Jim Waldo, Geoff Wyant, Ann Wollrath, and Sam Kendall

Sun Microsystems Laboratories
2550 Garcia Avenue
Mountain View, CA 94043

1 Introduction

1.1 Terminology

In what follows, we will talk about local and distributed computing. By local computing (local object invocation, etc.), we mean programs that are confined to a single address space. In contrast, we will use the term distributed computing (remote object invocation, etc.) to refer to programs that make calls to other address spaces, possibly on another machine. In the case of distributed computing, nothing is known about the recipient of the call (other than that it supports a particular interface). For example, the client of such a distributed object does not know the hardware architecture on which the recipient of the call is running, or the language in which the recipient was implemented.

Given the above characterizations of “local” and “distributed” computing, the categories are not exhaustive. There is a middle ground, in which calls are made from one address space to another but in which some characteristics of the called object are known. An important class of this sort consists of calls from one address space to another on the same machine; we will discuss these later in the paper.

Much of the current work in distributed, object-oriented systems is based on the assumption that objects form a single ontological class. This class consists of all entities that can be fully described by the specification of the set of interfaces supported by the object and the semantics of the operations in those interfaces. The class includes objects that share a single address space, objects that are in separate address spaces on the same machine, and objects that are in separate address spaces on different machines (with, perhaps, different architectures). On the view that all objects are essentially the same kind of entity, these differences in relative location are merely an aspect of the implementation of the object. Indeed, the location of an object may change over time, as an object migrates from one machine to another or the implementation of the object changes.

It is the thesis of this note that this unified view of objects is mistaken. There are fundamental differences between the interactions of distributed objects and the interactions of non-distributed objects. Further, work in distributed object-oriented systems that is based on a model that ignores or denies these differences is doomed to failure, and could easily lead to an industry-wide rejection of the notion of distributed object-based systems.

2 The Vision of Unified Objects

There is an overall vision of distributed object-oriented computing in which, from the programmer’s point of view, there is no essential distinction between objects that share an address space and objects that are on two machines with different architectures located on different continents. While this view can most recently be seen in such works as the Object Management Group’s Common Object Request Broker Architecture (CORBA) [1], it has a history that includes such research systems as Arjuna [2], Emerald [3], and Clouds [4].

In such systems, an object, whether local or remote, is defined in terms of a set of interfaces declared in an interface definition language. The implementation of the object is independent of the interface and hidden from other objects. While the underlying mechanisms used to make a method call may differ depending on the location of the object, those mechanisms are hidden from the programmer, who writes exactly the same code for either type of call; the system takes care of delivery.
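
As a minimal illustration of what this unification looks like to a client (the small C++ sketch below is ours and is not drawn from CORBA or any of the cited systems; the interface, the proxy class, and the faked transport are purely hypothetical), code written against such an interface cannot tell which kind of object it has been handed:

    #include <iostream>

    // The interface as the client sees it: no hint of location.
    class Account {
    public:
        virtual ~Account() = default;
        virtual void deposit(long cents) = 0;
        virtual long balance() const = 0;
    };

    // A same-address-space implementation.
    class LocalAccount : public Account {
        long cents_ = 0;
    public:
        void deposit(long cents) override { cents_ += cents; }
        long balance() const override { return cents_; }
    };

    // A stand-in for a generated stub. A real stub would marshal the
    // arguments, ship them to another address space, and unmarshal the
    // reply; here the "wire" is faked so the sketch runs on its own.
    class RemoteAccountProxy : public Account {
        LocalAccount pretendServer_;  // pretend this lives on another machine
    public:
        void deposit(long cents) override { pretendServer_.deposit(cents); }
        long balance() const override { return pretendServer_.balance(); }
    };

    // Client code is identical for both; location is an implementation detail.
    void payday(Account& account) { account.deposit(250000); }

    int main() {
        LocalAccount local;
        RemoteAccountProxy remote;
        payday(local);
        payday(remote);
        std::cout << local.balance() << " " << remote.balance() << "\n";
        return 0;
    }

What a picture like this hides, and what the rest of this note returns to, is that latency, memory access, partial failure, and concurrency remain behind the interface whether or not the client can see them.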

This vision can be seen as an extension of the goal of remote procedure call (RPC) systems to the object-oriented paradigm. RPC systems attempt to make cross-address space function calls look (to the client programmer) like local function calls. Extending this to the object-oriented programming paradigm allows papering over not just the marshalling of parameters and the unmarshalling of results (as is done in RPC systems) but also the locating and connecting to the target objects. Given the isolation of an object’s implementation from clients of the object, the use of objects for distributed computing seems natural. Whether a given object invocation is local or remote is a function of the implementation of the objects being used, and could possibly change from one method invocation to another on any given object.

Implicit in this vision is that the system will be “objects all the way down”; that is, that all current invocations or calls for system services will be eventually converted into calls that might be to an object residing on some other machine. There is a single paradigm of object use and communication used no matter what the location of the object might be.

In actual practice, of course, a local member function call and a cross-continent object invocation are not the same thing. The vision is that developers write their applications so that the objects within the application are joined using the same programmatic glue as objects between applications, but it does not require that the two kinds of glue be implemented the same way. What is needed is a variety of implementation techniques, ranging from same-address-space implementations like Microsoft’s Object Linking and Embedding [5] to typical network RPC; different needs for speed, security, reliability, and object co-location can be met by using the right “glue” implementation.

Writing a distributed application in this model proceeds in three phases. The first phase is to write the application without worrying about where objects are located and how their communication is implemented. The developer will simply strive for the natural and correct interface between objects. The system will choose reasonable defaults for object location, and depending on how performance-critical the application is, it may be possible to alpha test it with no further work. Such an approach will enforce a desirable separation between the abstract architecture of the application and any needed performance tuning.

The second phase is to tune performance by “concretizing” object locations and communication methods. At this stage, it may be necessary to use as yet unavailable tools to allow analysis of the communication patterns between objects, but it is certainly conceivable that such tools could be produced. Also during the second phase, the right set of interfaces to export to various clients—such as other applications—can be chosen. There is obviously tremendous flexibility here for the application developer. This seems to be the sort of development scenario that is being advocated in systems like Fresco [6], which claim that the decision to make an object local or remote can be put off until after initial system implementation.

The final phase is to test with “real bullets” (e.g., networks being partitioned, machines going down). Interfaces between carefully selected objects can be beefed up as necessary to deal with these sorts of partial failures introduced by distribution by adding replication, transactions, or whatever else is needed. The exact set of these services can be determined only by experience that will be gained during the development of the system and the first applications that will work on the system.

A central part of the vision is that if an application is built using objects all the way down, in a proper object-oriented fashion, the right “fault points” at which to insert process or machine boundaries will emerge naturally. But if you initially make the wrong choices, they are very easy to change.

One conceptual justification for this vision is that whether a call is local or remote has no impact on the correctness of a program. If an object supports a particular interface, and the support of that interface is semantically correct, it makes no difference to the correctness of the program whether the operation is carried out within the same address space, on some other machine, or off-line by some other piece of equipment. Indeed, seeing location as a part of the implementation of an object, and therefore as part of the state that an object hides from the outside world, appears to be a natural extension of the object-oriented paradigm.

Such a system would enjoy many advantages. It would allow the task of software maintenance to be changed in a fundamental way. The granularity of change, and therefore of upgrade, could be changed from the level of the entire system (the current model) to the level of the individual object. As long as the interfaces between objects remain constant, the implementations of those objects can be altered at will. Remote services can be moved into an address space, and objects that share an address space can be split and moved to different machines, as local requirements and needs dictate. An object can be repaired and the repair installed without worry that the change will impact the other objects that make up the system. Indeed, this model appears to be the best way to get away from the “Big Wad of Software” model that currently is causing so much trouble.

This vision is centered around the following principles that may, at first, appear plausible:

• there is a single natural object-oriented design for a given application, regardless of the context in which that application will be deployed;
• failure and performance issues are tied to the implementation of the components of an application, and consideration of these issues should be left out of an initial design; and
• the interface of an object is independent of the context in which that object is used.

Unfortunately, all of these principles are false. In what follows, we will show why these principles are mistaken, and why it is important to recognize the fundamental differences between distributed computing and local computing.

3 Déjà Vu All Over Again

For those of us either old enough to have experienced it or interested enough in the history of computing to have learned about it, the vision of unified objects is quite familiar. The desire to merge the programming and computational models of local and remote computing is not new.

Communications protocol development has tended to follow two paths. One path has emphasized integration with the current language model. The other path has emphasized solving the problems inherent in distributed computing. Both are necessary, and successful advances in distributed computing synthesize elements from both camps.

Historically, the language approach has been the less influential of the two camps. Every ten years (approximately), members of the language camp notice that the number of distributed applications is relatively small. They look at the programming interfaces and decide that the problem is that the programming model is not close enough to whatever programming model is currently in vogue (messages in the 1970s [7], [8], procedure calls in the 1980s [9], [10], [11], and objects in the 1990s [1], [2]). A furious bout of language and protocol design takes place and a new distributed computing paradigm is announced that is compliant with the latest programming model. After several years, the percentage of distributed applications is discovered not to have increased significantly, and the cycle begins anew.

A possible explanation for this cycle is that each round is an evolutionary stage for both the local and the distributed programming paradigm. The repetition of the pattern is a result of neither model being sufficient to encompass both activities at any previous stage. However (this explanation continues), each iteration has brought us closer to a unification of the local and distributed computing models. The current iteration, based on the object-oriented approach to both local and distributed programming, will be the one that produces a single computational model that will suffice for both.

A less optimistic explanation of the failure of each attempt at unification holds that any such attempt will fail for the simple reason that programming distributed applications is not the same as programming non-distributed applications. Just making the communications paradigm the same as the language paradigm is insufficient to make programming distributed programs easier, because communicating between the parts of a distributed application is not the difficult part of that application.

The hard problems in distributed computing are not the problems of how to get things on and off the wire. The hard problems in distributed computing concern dealing with partial failure and the lack of a central resource manager. The hard problems in distributed computing concern insuring adequate performance and dealing with problems of concurrency. The hard problems have to do with differences in memory access paradigms between local and distributed entities. People attempting to write distributed applications quickly discover that they are spending all of their efforts in these areas and not on the communications protocol programming interface.

This is not to argue against pleasant programming interfaces. However, the law of diminishing returns comes into play rather quickly. Even with a perfect programming model of complete transparency between “fine-grained” language-level objects and “larger-grained” distributed objects, the number of distributed applications would not be noticeably larger if these other problems were not addressed.

All of this suggests that there is interesting and profitable work to be done in distributed computing, but it needs to be done at a much higher level than that of “fine-grained” object integration. Providing developers with tools that help manage the complexity of handling the problems of distributed application development, as opposed to generic application development, is an area that has been poorly addressed.

4 Local and Distributed Computing

The major differences between local and distributed computing concern the areas of latency, memory access, partial failure, and concurrency.¹ The difference in latency is the most obvious, but in many ways is the least fundamental. The often overlooked differences concerning memory access, partial failure, and concurrency are far more difficult to explain away, and the differences concerning partial failure and concurrency make unifying the local and remote computing models impossible without making unacceptable compromises.

¹ We are not the first to notice these differences; indeed, they are clearly stated in [12].

4.1 Latency

The most obvious difference between a local object invocation and the invocation of an operation on a remote (or possibly remote) object has to do with the latency of the two calls. The difference between the two is currently between four and five orders of magnitude, and given the relative rates at which processor speed and network latency are changing, the difference in the future promises to be at best no better, and will likely be worse. It is this disparity in efficiency that is often seen as the essential difference between local and distributed computing.
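
To put rough numbers on that claim (the figures below are illustrative assumptions about mid-1990s hardware and networks, not measurements reported in this note), the arithmetic is easy to carry out:

    #include <cstdio>

    int main() {
        // Assumed, order-of-magnitude figures; adjust to taste.
        const double local_call_seconds  = 1.0e-7;  // ~100 ns for an in-process call
        const double remote_call_seconds = 5.0e-3;  // ~5 ms for a LAN round trip plus marshalling

        // Roughly 50,000x, i.e., between four and five orders of magnitude.
        const double ratio = remote_call_seconds / local_call_seconds;

        // A design that casually turns 1,000 local calls into remote ones
        // turns about 0.1 ms of work into about 5 seconds of waiting.
        std::printf("ratio: %.0fx; 1000 remote calls: %.1f s\n",
                    ratio, 1000.0 * remote_call_seconds);
        return 0;
    }

Which calls fall on which side of that ratio is therefore a design decision, not a tuning detail.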

Ignoring the difference between the performance of local and remote invocations can lead to designs whose implementations are virtually assured of having performance problems, because the design requires a large amount of communication between components that are in different address spaces and on different machines. Ignoring the difference in the time it takes to make a remote object invocation and the time it takes to make a local object invocation is to ignore one of the major design areas of an application. A properly designed application will require determining, by understanding the application being designed, what objects can be made remote and what objects must be clustered together.

The vision outlined earlier, however, has an answer to this objection. The answer is two-pronged. The first prong is to rely on the steadily increasing speed of the underlying hardware to make the difference in latency irrelevant. This, it is often argued, is what has happened to efficiency concerns having to do with everything from high-level languages to virtual memory. Designing at the cutting edge has always required that the hardware catch up before the design is efficient enough for the real world. Arguments from efficiency seem to have gone out of style in software engineering, since in the past such concerns have always been answered by speed increases in the underlying hardware.

The second prong of the reply is to admit to the need for tools that will allow one to see what the pattern of communication is between the objects that make up an application. Once such tools are available, it will be a matter of tuning to bring objects that are in constant contact to the same address space, while moving those that are in relatively infrequent contact to wherever is most convenient. Since the vision allows all objects to communicate using the same underlying mechanism, such tuning will be possible by simply altering the implementation details (such as object location) of the relevant objects. However, it is important to get the application correct first, and after that one can worry about efficiency.

Whether or not it will ever become possible to mask the efficiency difference between a local object invocation and a distributed object invocation is not answerable a priori. Fully masking the distinction would require not only advances in the technology underlying remote object invocation, but also changes to the general programming model used by developers.

If the only difference between local and distributed object invocations were the difference in the amount of time it takes to make the call, one could strive for a future in which the two kinds of calls would be conceptually indistinguishable. Whether the technology of distributed computing has moved far enough along to allow one to plan products based on such technology would be a matter of judgement, and rational people could disagree as to the wisdom of such an approach.

However, the difference in latency between the two kinds of calls is only the most obvious difference. It is not really the fundamental difference between the two kinds of calls; even if it were possible to develop the technology of distributed calls to the point that the difference in latency between the two sorts of calls was minimal, it would be unwise to construct a programming paradigm that treated the two calls as essentially similar. In fact, the difference in latency between local and remote calls, because it is so obvious, has been the only difference most see between the two, and has tended to mask the more irreconcilable differences.

4.2 Memory access

A more fundamental (but still obvious) difference between local and remote computing concerns access to memory in the two cases—specifically in the use of pointers. Simply put, pointers in a local address space are not valid in another (remote) address space. The system can paper over this difference, but for such an approach to be successful, the transparency must be complete. Two choices exist: either all memory access must be controlled by the underlying system, or the programmer must be aware of the different types of access—local and remote. There is no in-between.
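
A small sketch makes the point (the types and the handle scheme below are hypothetical; no particular marshalling system is implied): a raw pointer is meaningful only inside the address space that produced it, so a call that crosses address spaces must either ship a copy of the data or replace the pointer with a location-independent handle that the receiving side can resolve.

    #include <cstdint>
    #include <string>
    #include <unordered_map>

    struct Employee { std::string name; };

    // Fine locally: the callee dereferences the caller's pointer directly.
    std::string localLookup(const Employee* e) { return e->name; }

    // Meaningless remotely: the numeric value of a pointer refers to memory in
    // the caller's address space, not the server's. A cross-address-space call
    // has to be phrased in terms the server can interpret, such as an identifier.
    using EmployeeId = std::uint64_t;

    // Server-side table mapping handles back to real objects.
    std::unordered_map<EmployeeId, Employee> directory = { {42, {"Ada"}} };

    // What the "remote" form of the same operation ends up looking like.
    std::string remoteLookup(EmployeeId id) { return directory.at(id).name; }

    int main() {
        Employee local{"Grace"};
        localLookup(&local);   // valid only within this process
        remoteLookup(42);      // valid across address spaces
        return 0;
    }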

If the desire is to completely unify the programming model—to make remote accesses behave as if they were in fact local—the underlying mechanism must totally control all memory access. Providing distributed shared memory is one way of completely relieving the programmer from worrying about remote memory access (or the difference between local and remote). Using the object-oriented paradigm to the fullest, and requiring the programmer to build an application with “objects all the way down” (that is, only object references or values are passed as method arguments), is another way to eliminate the boundary between local and remote computing. The layer underneath can exploit this approach by marshalling and unmarshalling method arguments and return values for inter-address space transmission.

But adding a layer that allows the replacement of all pointers to objects with object references only permits the developer to adopt a unified model of object interaction. Such a unified model cannot be enforced unless one also removes the ability to get address-space-relative pointers from the language used by the developer. Such an approach erects a barrier to programmers who want to start writing distributed applications, in that it requires that those programmers learn a new style of programming which does not use address-space-relative pointers. In requiring that programmers learn such a language, moreover, one gives up the complete transparency between local and distributed computing.

Even if one were to provide a language that did not allow obtaining address-space-relative pointers to objects (or returned an object reference whenever such a pointer was requested), one would need to provide an equivalent way of making cross-address space references to entities other than objects. Most programmers use pointers as references for many different kinds of entities. These pointers must either be replaced with something that can be used in cross-address space calls or the programmer will need to be aware of the difference between such calls (which will either not allow pointers to such entities, or do something special with those pointers) and local calls. Again, while this could be done, it does violate the doctrine of complete unity between local and remote calls. Because of memory access constraints, the two have to differ.

The danger lies in promoting the myth that “remote access and local access are exactly the same” and not enforcing the myth. An underlying mechanism that does not unify all memory accesses while still promoting this myth is both misleading and prone to error. Programmers buying into the myth may believe that they do not have to change the way they think about programming: “remote is just like local,” such programmers think, “so we have just one unified programming model.” In an incomplete implementation of the underlying mechanism, or one layered over an implementation language that allows direct access to local memory, the system does not take care of all memory accesses, and errors are bound to occur. Such a programmer is quite likely to use a pointer in the wrong context, producing incorrect results, precisely because he or she is unaware of the difference between local and remote access and of what is actually happening “under the covers.”

The alternative is to explain the difference between local and remote access, making the programmer aware that remote address space access is very different from local access. Even if some of the pain is taken away by using an interface definition language like that specified in [1] and having it generate an intelligent language mapping for operation invocation on distributed objects, the programmer aware of the difference will not make the mistake of using pointers for cross-address space access. The programmer will know it is incorrect. By not masking the difference, the programmer is able to learn when to use one method of access and when to use the other.

Just as with latency, it is logically possible that the difference between local and remote memory access could be completely papered over and a single model of both presented to the programmer. When we turn to the problems introduced to distributed computing by partial failure and concurrency, however, it is not clear that such a unification is even conceptually possible.

4.3 Partial failure and concurrency

While unlikely, it is at least logically possible that the differences in latency and memory access between local computing and distributed computing could be masked. It is not clear that such a masking could be done in such a way that the local computing paradigm could be used to produce distributed applications, but it might still be possible to allow some new programming technique to be used for both activities. Such a masking does not even seem to be logically possible, however, in the case of partial failure and concurrency. These aspects appear to be different in kind in the case of distributed and local computing.²

² In fact, authors such as Schroeder [12] and Hadzilacos and Toueg [13] take partial failure and concurrency to be the defining problems of distributed computing.

Partial failure is a central reality of distributed computing. Both the local and the distributed world contain components that are subject to periodic failure. In the case of local computing, such failures are either total, affecting all of the entities that are working together in an application, or detectable by some central resource allocator (such as the operating system on the local machine).

This is not the case in distributed computing, where one component (machine, network link) can fail while the others continue. Not only is the failure of the distributed components independent, but there is no common agent that is able to determine what component has failed and inform the other components of that failure, no global state that can be examined that allows determination of exactly what error has occurred. In a distributed system, the failure of a network link is indistinguishable from the failure of a processor on the other side of that link.
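
A minimal sketch of what this looks like from the caller's side (the "remote" call here is simulated with a random outcome so the example runs on its own; no real transport is involved): every failure case produces the same non-answer, so the caller cannot tell whether the work was done.

    #include <iostream>
    #include <optional>
    #include <random>

    // Simulated remote deposit. The three failure cases correspond to very
    // different realities on the server side, yet produce the same observation.
    std::optional<long> remoteDeposit(long cents, std::mt19937& rng) {
        switch (std::uniform_int_distribution<int>(0, 3)(rng)) {
            case 0:  return std::nullopt;  // request lost: work not done
            case 1:  return std::nullopt;  // server crashed mid-call: unknown
            case 2:  return std::nullopt;  // reply lost: work was done
            default: return cents;         // normal completion
        }
    }

    int main() {
        std::mt19937 rng(7);
        if (auto reply = remoteDeposit(100, rng)) {
            std::cout << "deposited " << *reply << "\n";
        } else {
            // All three failure cases look identical from here. Retrying blindly
            // risks depositing twice; giving up risks not depositing at all.
            std::cout << "no reply: the state of the account is unknown\n";
        }
        return 0;
    }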

These sorts of failures are not the same as mere exception raising or the inability to complete a task, which can occur in the case of local computing. This type of failure is caused when a machine crashes during the execution of an object invocation or a network link goes down, occurrences that cause the target object to simply disappear rather than return control to the caller. A central problem in distributed computing is insuring that the state of the whole system is consistent after such a failure; this is a problem that simply does not occur in local computing.

The reality of partial failure has a profound effect on how one designs interfaces and on the semantics of the operations in an interface. Partial failure requires that programs deal with indeterminacy. When a local component fails, it is possible to know the state of the system that caused the failure and the state of the system after the failure. No such determination can be made in the case of a distributed system. Instead, the interfaces that are used for the communication must be designed in such a way that it is possible for the objects to react in a consistent way to possible partial failures.

Being robust in the face of partial failure requires some expression at the interface level. Merely improving the implementation of one component is not sufficient. The interfaces that connect the components must be able to state whenever possible the cause of failure, and there must be interfaces that allow reconstruction of a reasonable state when failure occurs and the cause cannot be determined.
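
One way to read this requirement in code (a hedged sketch of the idea, not a prescription from this note; the operations and status values are invented for illustration) is that the remote flavor of an interface admits failure explicitly, reports a cause when one is known, and offers a way to rebuild a usable state when it is not:

    #include <string>

    // A local-flavored interface: failure is simply not part of the contract.
    class Printer {
    public:
        virtual ~Printer() = default;
        virtual void print(const std::string& document) = 0;
    };

    // The same service designed with partial failure in the interface.
    enum class CallStatus { Ok, TimedOut, Rejected, Unknown };

    class RemotePrinter {
    public:
        virtual ~RemotePrinter() = default;

        // Every operation reports whether it is known to have taken effect.
        virtual CallStatus print(const std::string& document, std::string& jobId) = 0;

        // Lets a client that lost a reply find out what actually happened,
        // instead of guessing and retrying blindly.
        virtual CallStatus queryJob(const std::string& jobId, bool& completed) = 0;

        // Reconstruction hook for when the cause of a failure cannot be
        // determined: discard the state tied to this client and start over.
        virtual CallStatus resetSession() = 0;
    };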

If an object is coresident in an address space with its caller, partial failure is not possible. A function may not complete normally, but it always completes. There is no indeterminism about how much of the computation completed. Partial completion can occur only as a result of circumstances that will cause the other components to fail.

The addition of partial failure as a possibility in the case of distributed computing does not mean that a single object model cannot be used for both distributed computing and local computing. The question is not “can you make remote method invocation look like local method invocation?” but rather “what is the price of making remote method invocation identical to local method invocation?” One of two paths must be chosen if one is going to have a unified model.

The first path is to treat all objects as if they were local and design all interfaces as if the objects calling them, and being called by them, were local. The result of choosing this path is that the resulting model, when used to produce distributed systems, is essentially indeterministic in the face of partial failure and consequently fragile and non-robust. This path essentially requires ignoring the extra failure modes of distributed computing. Since one can’t get rid of those failures, the price of adopting the model is to require that such failures are unhandled and catastrophic.

The other path is to design all interfaces as if they were remote. That is, the semantics and operations are all designed to be deterministic in the face of failure, both total and partial. However, this introduces unnecessary guarantees and semantics for objects that are never intended to be used remotely. Like the approach to memory access that attempts to require that all access is through system-defined references instead of pointers, this approach must also either rely on the discipline of the programmers using the system or change the implementation language so that all of the forms of distributed indeterminacy are forced to be dealt with on all object invocations.

This approach would also defeat the overall purpose of unifying the object models. The real reason for attempting such a unification is to make distributed computing more like local computing and thus make distributed computing easier. This second approach to unifying the models makes local computing as complex as distributed computing. Rather than encouraging the production of distributed applications, such a model will discourage its own adoption by making all object-based computing more difficult.

Similar arguments hold for concurrency. Distributed objects by their nature must handle concurrent method invocations. The same dichotomy applies if one insists on a unified programming model. Either all objects must bear the weight of concurrency semantics, or all objects must ignore the problem and hope for the best when distributed. Again, this is an interface issue and not solely an implementation issue, since dealing with concurrency can take place only by passing information from one object to another through the agency of the interface. So either the overall programming model must ignore significant modes of failure, resulting in a fragile system; or the overall programming model must assume a worst-case complexity model for all objects within a program, making the production of any program, distributed or not, more difficult.
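
The dichotomy is easy to see in a small sketch (illustrative only; the classes are not drawn from any of the systems discussed): a counter that is perfectly correct under the single-caller, single-address-space assumption silently loses updates once concurrent invocations are possible, so a unified model must either put the locking burden on every object or leave the race in place.

    #include <iostream>
    #include <mutex>
    #include <thread>
    #include <vector>

    // Correct under the local, single-caller assumption; wrong once concurrent
    // (for example, remote) invocations are possible: ++ is a read-modify-write.
    class NaiveCounter {
        long n_ = 0;
    public:
        void increment() { ++n_; }
        long value() const { return n_; }
    };

    // The "design every object as if remote" alternative: every object pays
    // for synchronization whether or not it is ever shared.
    class GuardedCounter {
        mutable std::mutex m_;
        long n_ = 0;
    public:
        void increment() { std::lock_guard<std::mutex> g(m_); ++n_; }
        long value() const { std::lock_guard<std::mutex> g(m_); return n_; }
    };

    int main() {
        NaiveCounter naive;
        GuardedCounter guarded;
        auto client = [&] {
            for (int i = 0; i < 100000; ++i) { naive.increment(); guarded.increment(); }
        };
        std::vector<std::thread> clients;
        for (int i = 0; i < 4; ++i) clients.emplace_back(client);
        for (auto& t : clients) t.join();
        // Both "should" read 400000; the naive counter will usually fall short.
        std::cout << "naive: " << naive.value() << " guarded: " << guarded.value() << "\n";
        return 0;
    }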

One might argue that a multi-threaded application needs to deal with these same issues. However, there is a subtle difference. In a multi-threaded application, there is no real source of indeterminacy of invocations of operations. The application programmer has complete control over invocation order when desired. A distributed system by its nature
