A NOTE ON DISTRIBUTED COMPUTING

Ignoring the difference between the performance of local and remote invocations can lead to designs whose implementations are virtually assured of having performance problems, because the design requires a large amount of communication between components that are in different address spaces and on different machines. Ignoring the difference between the time it takes to make a remote object invocation and the time it takes to make a local object invocation is to ignore one of the major design areas of an application. A properly designed application will require determining, by understanding the application being designed, what objects can be made remote and what objects must be clustered together.
The vision outlined earlier, however, has an answer to this objection. The answer is two-pronged. The first prong is to rely on the steadily increasing speed of the underlying hardware to make the difference in latency irrelevant. This, it is often argued, is what has happened to efficiency concerns having to do with everything from high-level languages to virtual memory. Designing at the cutting edge has always required that the hardware catch up before the design is efficient enough for the real world. Arguments from efficiency seem to have gone out of style in software engineering, since in the past such concerns have always been answered by speed increases in the underlying hardware.
The second prong of the reply is to admit to the need for tools that will allow one to see what the pattern of communication is between the objects that make up an application. Once such tools are available, it will be a matter of tuning to bring objects that are in constant contact to the same address space, while moving those that are in relatively infrequent contact to wherever is most convenient. Since the vision allows all objects to communicate using the same underlying mechanism, such tuning will be possible by simply altering the implementation details (such as object location) of the relevant objects. However, it is important to get the application correct first, and after that one can worry about efficiency.
Whether or not it will ever become possible to mask the efficiency difference between a local object invocation and a distributed object invocation is not answerable a priori. Fully masking the distinction would require not only advances in the technology underlying remote object invocation, but would also require changes to the general programming model used by developers.
If the only difference between local and distributed object invocations was the difference in the amount of time it took to make the call, one could strive for a future in which the two kinds of calls would be conceptually indistinguishable. Whether the technology of distributed computing has moved far enough along to allow one to plan products based on such technology would be a matter of judgement, and rational people could disagree as to the wisdom of such an approach. However, the difference in latency between the two kinds of calls is only the most obvious difference. Indeed, this difference is not really the fundamental difference between the two kinds of calls; even if it were possible to develop
the technology of distributed calls to an extent that the difference in latency between the two sorts of calls was minimal, it would be unwise to construct a programming paradigm that treated the two calls as essentially similar. In fact, the difference in latency between local and remote calls, because it is so obvious, has been the only difference most see between the two, and has tended to mask the more irreconcilable differences.

A.4.2 Memory Access

A more fundamental (but still obvious) difference between local and remote computing concerns the access to memory in the two cases, specifically in the use of pointers. Simply put, pointers in a local address space are not valid in another (remote) address space. The system can paper over this difference, but for such an approach to be successful, the transparency must be complete. Two choices exist: either all memory access must be controlled by the underlying system, or the programmer must be aware of the different types of access, local and remote. There is no in-between.

If the desire is to completely unify the programming model, to make remote accesses behave as if they were in fact local, the underlying mechanism must totally control all memory access. Providing distributed shared memory is one way of completely relieving the programmer from worrying about remote memory access (or the difference between local and remote). Using the object-oriented paradigm to the fullest, and requiring the programmer to build an application with "objects all the way down" (that is, only object references or values are passed as method arguments), is another way to eliminate the boundary between local and remote computing. The layer underneath can exploit this approach by marshalling and unmarshalling method arguments and return values for inter-address space transmission.
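The marshalling step can be sketched in a few lines of C++; the `Point` value type and the flat byte-buffer wire format below are illustrative assumptions, not the mechanism of any particular object system. The essential property is that no pointer from the caller's address space ever crosses the boundary, only bytes do.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Hypothetical value type passed as a method argument.
struct Point { int32_t x; int32_t y; };

// Marshal a Point into a flat byte sequence. A real system would also
// fix byte order and alignment; this sketch assumes a single machine
// format on both sides.
std::vector<uint8_t> marshal(const Point& p) {
    std::vector<uint8_t> buf(sizeof(Point));
    std::memcpy(buf.data(), &p, sizeof(Point));
    return buf;
}

// Unmarshal on the receiving side, reconstructing an equivalent value
// inside the remote address space.
Point unmarshal(const std::vector<uint8_t>& buf) {
    Point p;
    std::memcpy(&p, buf.data(), sizeof(Point));
    return p;
}
```

A round trip through `marshal` and `unmarshal` yields a copy of the value, never a pointer into the sender's memory.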

But adding a layer that allows the replacement of all pointers to objects with object references only permits the developer to adopt a unified model of object interaction. Such a unified model cannot be enforced unless one also removes the ability to get address-space-relative pointers from the language used by the developer. Such an approach erects a barrier to programmers who want to start writing distributed applications, in that it requires that those programmers learn a new style of programming which does not use address-space-relative pointers. In requiring that programmers learn such a language, moreover, one gives up the complete transparency between local and distributed computing.
Even if one were to provide a language that did not allow obtaining address-space-relative pointers to objects (or returned an object reference whenever such a pointer was requested), one would need to provide an equivalent way of making
cross-address space references to entities other than objects. Most programmers use pointers as references for many different kinds of entities. These pointers must either be replaced with something that can be used in cross-address space calls, or the programmer will need to be aware of the difference between such calls (which will either not allow pointers to such entities, or do something special with those pointers) and local calls. Again, while this could be done, it does violate the doctrine of complete unity between local and remote calls. Because of memory access constraints, the two have to differ.
The danger lies in promoting the myth that "remote access and local access are exactly the same" and not enforcing the myth. An underlying mechanism that does not unify all memory accesses while still promoting this myth is both misleading and prone to error. Programmers buying into the myth may believe that they do not have to change the way they think about programming. The programmer is therefore quite likely to make the mistake of using a pointer in the wrong context, producing incorrect results. "Remote is just like local," such programmers think, "so we have just one unified programming model." Seemingly, programmers need not change their style of programming. In an incomplete implementation of the underlying mechanism, or one that allows an implementation language that in turn allows direct access to local memory, the system does not take care of all memory accesses, and errors are bound to occur. These errors occur because the programmer is not aware of the difference between local and remote access and of what is actually happening "under the covers."
The alternative is to explain the difference between local and remote access, making the programmer aware that remote address space access is very different from local access. Even if some of the pain is taken away by using an interface definition language like that specified in [1] and having it generate an intelligent language mapping for operation invocation on distributed objects, the programmer aware of the difference will not make the mistake of using pointers for cross-address space access. The programmer will know it is incorrect. By not masking the difference, the programmer is able to learn when to use one method of access and when to use the other.
Just as with latency, it is logically possible that the difference between local and remote memory access could be completely papered over and a single model of both presented to the programmer. When we turn to the problems introduced to distributed computing by partial failure and concurrency, however, it is not clear that such a unification is even conceptually possible.

A.5 Partial Failure and Concurrency

While unlikely, it is at least logically possible that the differences in latency and memory access between local computing and distributed computing could be masked. It is not clear that such a masking could be done in such a way that the local computing paradigm could be used to produce distributed applications, but it might still be possible to allow some new programming technique to be used for both activities. Such a masking does not even seem to be logically possible, however, in the case of partial failure and concurrency. These aspects appear to be different in kind in the case of distributed and local computing.2
Partial failure is a central reality of distributed computing. Both the local and the distributed world contain components that are subject to periodic failure. In the case of local computing, such failures are either total, affecting all of the entities that are working together in an application, or detectable by some central resource allocator (such as the operating system on the local machine).
This is not the case in distributed computing, where one component (machine, network link) can fail while the others continue. Not only is the failure of the distributed components independent, but there is no common agent that is able to determine what component has failed and inform the other components of that failure, no global state that can be examined that allows determination of exactly what error has occurred. In a distributed system, the failure of a network link is indistinguishable from the failure of a processor on the other side of that link.
These sorts of failures are not the same as mere exception raising or the inability to complete a task, which can occur in the case of local computing. This type of failure is caused when a machine crashes during the execution of an object invocation or a network link goes down, occurrences that cause the target object to simply disappear rather than return control to the caller. A central problem in distributed computing is insuring that the state of the whole system is consistent after such a failure; this is a problem that simply does not occur in local computing.
The reality of partial failure has a profound effect on how one designs interfaces and on the semantics of the operations in an interface. Partial failure requires that programs deal with indeterminacy. When a local component fails, it is possible to know the state of the system that caused the failure and the state of the system after the failure. No such determination can be made in the case of a distributed system. Instead, the interfaces that are used for the communication must be designed in such a way that it is possible for the objects to react in a consistent way to possible partial failures.

2 In fact, authors such as Schroeder and Hadzilacos and Toueg take partial failure and concurrency to be the defining problems of distributed computing.
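This indeterminacy can be stated concretely: after a timeout, the caller cannot tell whether the invocation ran. The following sketch shows the three outcomes a caller of a remote operation can actually observe; the enum and function names are invented for illustration and belong to no particular RPC system.

```cpp
// Possible outcomes of a remote invocation as seen by the caller.
enum class RemoteOutcome {
    Succeeded,   // reply received: the operation ran
    Rejected,    // reply received: the operation was refused
    Unknown      // timeout: the request, the server, or the reply may
                 // have been lost; the operation may or may not have run
};

// Interpret a transport-level result. A timeout maps to Unknown,
// because a dead network link and a dead server on the far side of
// that link look identical from the caller's vantage point.
RemoteOutcome interpret(bool reply_received, bool reply_ok) {
    if (!reply_received) return RemoteOutcome::Unknown;
    return reply_ok ? RemoteOutcome::Succeeded : RemoteOutcome::Rejected;
}
```

A local call has no counterpart to the `Unknown` case: it either returns or the whole program has failed with it.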
`
`I 3
`
`19
`
`319
`
`
`
`317
`
`at
`
`A NOTE ON DISTRIBUTED C0MPUTING
`
Being robust in the face of partial failure requires some expression at the interface level. Merely improving the implementation of one component is not sufficient. The interfaces that connect the components must be able to state whenever possible the cause of failure, and there must be interfaces that allow reconstruction of a reasonable state when failure occurs and the cause cannot be determined.

If an object is co-resident in an address space with its caller, partial failure is not possible. A function may not complete normally, but it always completes. There is no indeterminism about how much of the computation completed. Partial completion can occur only as a result of circumstances that will cause the other components to fail.
The addition of partial failure as a possibility in the case of distributed computing does not mean that a single object model cannot be used for both distributed computing and local computing. The question is not "can you make remote method invocation look like local method invocation?" but rather "what is the price of making remote method invocation identical to local method invocation?" One of two paths must be chosen if one is going to have a unified model.
The first path is to treat all objects as if they were local and design all interfaces as if the objects calling them, and being called by them, were local. The result of choosing this path is that the resulting model, when used to produce distributed systems, is essentially indeterministic in the face of partial failure and consequently fragile and non-robust. This path essentially requires ignoring the extra failure modes of distributed computing. Since one can't get rid of those failures, the price of adopting the model is to require that such failures are unhandled and catastrophic.
The other path is to design all interfaces as if they were remote. That is, the semantics and operations are all designed to be deterministic in the face of failure, both total and partial. However, this introduces unnecessary guarantees and semantics for objects that are never intended to be used remotely. Like the approach to memory access that attempts to require that all access is through system-defined references instead of pointers, this approach must also either rely on the discipline of the programmers using the system or change the implementation language so that all of the forms of distributed indeterminacy are forced to be dealt with on all object invocations.
This approach would also defeat the overall purpose of unifying the object models. The real reason for attempting such a unification is to make distributed computing more like local computing and thus make distributed computing easier. This second approach to unifying the models makes local computing as complex as distributed computing. Rather than encouraging the production of distributed applications, such a model will discourage its own adoption by making all object-based computing more difficult.
Similar arguments hold for concurrency. Distributed objects by their nature must handle concurrent method invocations. The same dichotomy applies if one insists on a unified programming model. Either all objects must bear the weight of concurrency semantics, or all objects must ignore the problem and hope for the best when distributed. Again, this is an interface issue and not solely an implementation issue, since dealing with concurrency can take place only by passing information from one object to another through the agency of the interface. So either the overall programming model must ignore significant modes of failure, resulting in a fragile system; or the overall programming model must assume a worst-case complexity model for all objects within a program, making the production of any program, distributed or not, more difficult.
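The weight of concurrency semantics shows up in even a trivial object. In the sketch below (a generic example, not drawn from any particular system), the lock is pure overhead for a single-threaded local caller, but it cannot be omitted once invocations can arrive concurrently, as they can for any remotely accessible object.

```cpp
#include <mutex>

// A counter written as every remotely accessible object must be
// written: assuming concurrent method invocations. For a purely
// local, single-threaded client the mutex is wasted cost.
class Counter {
public:
    void increment() {
        std::lock_guard<std::mutex> lock(mu_);
        ++count_;
    }
    int value() {
        std::lock_guard<std::mutex> lock(mu_);
        return count_;
    }
private:
    std::mutex mu_;
    int count_ = 0;
};
```

Imposing this discipline on all objects is exactly the "worst-case complexity model" described above.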
One might argue that a multi-threaded application needs to deal with these same issues. However, there is a subtle difference. In a multi-threaded application, there is no real source of indeterminacy of invocations of operations. The application programmer has complete control over invocation order when desired. A distributed system by its nature introduces truly asynchronous operation invocations. Further, a non-distributed system, even when multi-threaded, is layered on top of a single operating system that can aid the communication between objects and can be used to determine and aid in synchronization and in the recovery from failure. A distributed system, on the other hand, has no single point of resource allocation, synchronization, or failure recovery, and thus is conceptually very different.

A.6 The Myth of “Quality of Service”

One could take the position that the way an object deals with latency, memory access, partial failure, and concurrency control is really an aspect of the implementation of that object, and is best described as part of the "quality of service" provided by that implementation. Different implementations of an interface may provide different levels of reliability, scalability, or performance. If one wants to build a more reliable system, one merely needs to choose more reliable implementations of the interfaces making up the system.
On the surface, this seems quite reasonable. If I want a more robust system, I go to my catalog of component vendors. I quiz them about their test methods. I see if they have ISO 9000 certification, and I buy my components from the one I trust the most. The components all comply with the defined interfaces, so I can plug them right in; my system is robust and reliable, and I'm happy.
Let us imagine that I build an application that uses the (mythical) queue interface to enqueue work for some component. My application dutifully enqueues records that represent work to be done. Another application dutifully dequeues them and performs the work. After a while, I notice that my application crashes
due to time-outs. I find this extremely annoying, but realize that it's my fault. My application just isn't robust enough. It gives up too easily on a time-out. So I change my application to retry the operation until it succeeds. Now I'm happy. I almost never see a time-out. Unfortunately, I now have another problem. Some of the requests seem to get processed two, three, four, or more times. How can this be? The component I bought which implements the queue has allegedly been rigorously tested. It shouldn't be doing this. I'm angry. I call the vendor and yell at him. After much finger-pointing and research, the culprit is found. The problem turns out to be the way I'm using the queue. Because of my handling of partial failures (which, in my naivete, I had thought to be total), I have been enqueuing work requests multiple times.
Well, I yell at the vendor that it is still their fault. The queue should be detecting the duplicate entry and removing it. I am not going to continue using this software unless this is fixed. But, since the entities being enqueued are just values, there is no way to do duplicate elimination. The only way to fix this is to change the protocol to add request IDs. But since this is a standardized interface, there is no way to do this.
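Had the protocol carried request IDs, the queue could have recognized a retried enqueue after an ambiguous timeout. A sketch of the idea follows; the class shape and naming are hypothetical, not the standardized interface of the tale.

```cpp
#include <cstddef>
#include <set>
#include <string>
#include <vector>

// A queue whose protocol carries a client-chosen request ID, so that
// a retry of an enqueue whose outcome was unknown can be detected
// and dropped instead of being processed a second time.
class DedupQueue {
public:
    // Returns true if the record was accepted, false if this request
    // ID was already seen (i.e., the call was a duplicate retry).
    bool enqueue(const std::string& request_id, const std::string& record) {
        if (!seen_.insert(request_id).second) return false;
        items_.push_back(record);
        return true;
    }
    std::size_t size() const { return items_.size(); }
private:
    std::set<std::string> seen_;
    std::vector<std::string> items_;
};
```

Note that this is an interface change, not an implementation change: the client must supply the ID, so the protocol itself has to be revised.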
The moral of this tale is that robustness is not simply a function of the implementations of the interfaces that make up the system. While robustness of the individual components has some effect on the robustness of the overall system, it is not the sole factor determining system robustness. Many aspects of robustness can be reflected only at the protocol/interface level.
Similar situations can be found throughout the standard set of interfaces. Suppose I want to reliably remove a name from a context. I would be tempted to write code that looks like:

    while (true) {
        try {
            context->remove(name);
            break;
        } catch (NotFoundInContext) {
            break;
        } catch (NetworkServerFailure) {
            continue;
        }
    }

That is, I keep trying the operation until it succeeds (or until I crash). The problem is that my connection to the name server may have gone down, but another client's may have stayed up. I may have, in fact, successfully removed the name but not
discovered it because of a network disconnection. The other client then adds the same name, which I then remove. Unless the naming interface includes an operation to lock a naming context, there is no way that I can make this operation completely robust. Again, we see that robustness/reliability needs to be expressed at the interface level. In the design of any operation, the question has to be asked: what happens if the client chooses to repeat this operation with the exact same parameters as previously? What mechanisms are needed to ensure that they get the desired semantics? These are things that can be expressed only at the interface level. These are issues that can't be answered by supplying a "more robust implementation," because the lack of robustness is inherent in the interface and not something that can be changed by altering the implementation.
Similar arguments can be made about performance. Suppose an interface describes an object which maintains sets of other objects. A defining property of sets is that there are no duplicates. Thus, the implementation of this object needs to do duplicate elimination. If the interfaces in the system do not provide a way of testing equality of reference, the objects in the set must be queried to determine equality. Thus, duplicate elimination can be done only by interacting with the objects in the set. It doesn't matter how fast the objects in the set implement the equality operation. The overall performance of eliminating duplicates is going to be governed by the latency in communicating over the slowest communications link involved. There is no change in the set implementations that can overcome this. An interface design issue has put an upper bound on the performance of this operation.
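The latency bound can be made visible by counting the equality queries such a set must perform. In the sketch below (all names invented), each call to `equals` stands for a full invocation, potentially a network round trip, on a member of the set; inserting into a set of n members costs up to n such round trips no matter how fast each member computes equality.

```cpp
#include <cstddef>
#include <vector>

// Stand-in for an object reachable only through its interface; every
// equality test is an invocation, possibly over the slowest link.
struct RemoteObject {
    int id;
    static std::size_t invocations;  // counts simulated round trips
    bool equals(const RemoteObject& other) const {
        ++invocations;
        return id == other.id;
    }
};
std::size_t RemoteObject::invocations = 0;

// Insert with duplicate elimination. Without an identity test in the
// interface, each insert must query the members already in the set.
void insert_unique(std::vector<RemoteObject>& set, const RemoteObject& obj) {
    for (const auto& member : set)
        if (member.equals(obj)) return;
    set.push_back(obj);
}
```

Making the equality operation on each object faster changes nothing here; only an interface that permits testing equality of reference without invoking the objects would.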

A.7 Lessons From NFS

We do not need to look far to see the consequences of ignoring the distinction between local and distributed computing at the interface level. NFS®, Sun's distributed file system, is an example of a non-distributed application programming interface (API) (open, read, write, close, etc.) re-implemented in a distributed way.
Before NFS and other network file systems, an error status returned from one of these calls indicated something rare: a full disk, or a catastrophe such as a disk crash. Most failures simply crashed the application along with the file system. Further, these errors generally reflected a situation that was either catastrophic for the program receiving the error or one that the user running the program could do something about.
NFS opened the door to partial failure within a file system. It has essentially two modes for dealing with an inaccessible file server: soft mounting and hard mounting. But since the designers of NFS were unwilling (for easily understandable reasons) to change the interface to the file system to reflect the new, distributed nature of file access, neither option is particularly robust.
Soft mounts expose network or server failure to the client program. Read and write operations return a failure status much more often than in the single-system case, and programs written with no allowance for these failures can easily corrupt the files used by the program. In the early days of NFS, system administrators tried to tune various parameters (time-out length, number of retries) to avoid these problems. These efforts failed. Today, soft mounts are seldom used, and when they are used, their use is generally restricted to read-only file systems or special applications.
Hard mounts mean that the application hangs until the server comes back up. This generally prevents a client program from seeing partial failure, but it leads to a malady familiar to users of workstation networks: one server crashes, and many workstations, even those apparently having nothing to do with that server, freeze. Figuring out the chain of causality is very difficult, and even when the cause of the failure can be determined, the individual user can rarely do anything about it but wait. This kind of brittleness can be reduced only with strong policies and network administration aimed at reducing interdependencies. Nonetheless, hard mounts are now almost universal.

Note that because the NFS protocol is stateless, it assumes clients contain no state of interest with respect to the protocol; in other words, the server doesn't care what happens to the client. NFS is also a "pure" client-server protocol, which means that failure can be limited to three parties: the client, the server, or the network. This combination of features means that failure modes are simpler than in the more general case of peer-to-peer distributed object-oriented applications, where no such limitation on shared state can be made and where servers are themselves clients of other servers. Such peer-to-peer distributed applications can and will fail in far more intricate ways than are currently possible with NFS.
The limitations on the reliability and robustness of NFS have nothing to do with the implementation of the parts of that system. There is no "quality of service" that can be improved to eliminate the need for hard mounting NFS volumes. The problem can be traced to the interface upon which NFS is built, an interface that was designed for non-distributed computing, where partial failure was not possible. The reliability of NFS cannot be changed without a change to that interface, a change that will reflect the distributed nature of the application.
This is not to say that NFS has not been successful. In fact, NFS is arguably the most successful distributed application that has been produced. But the limitations on the robustness have set a limitation on the scalability of NFS. Because of the intrinsic unreliability of the NFS protocol, use of NFS is limited to fairly small numbers of machines, geographically co-located and centrally administered. The way NFS has dealt with partial failure has been to informally require a centralized
resource manager (a system administrator) who can detect system failure, initiate resource reclamation, and insure system consistency. But by introducing this central resource manager, one could argue that NFS is no longer a genuinely distributed application.

A.8 Taking the Difference Seriously

Differences in latency, memory access, partial failure, and concurrency make merging of the computational models of local and distributed computing both unwise to attempt and unable to succeed. Merging the models by making local computing follow the model of distributed computing would require major changes in implementation languages (or in how those languages are used) and make local computing far more complex than is otherwise necessary. Merging the models by attempting to make distributed computing follow the model of local computing requires ignoring the different failure modes and basic indeterminacy inherent in distributed computing, leading to systems that are unreliable and incapable of scaling beyond small groups of machines that are geographically co-located and centrally administered.
A better approach is to accept that there are irreconcilable differences between local and distributed computing, and to be conscious of those differences at all stages of the design and implementation of distributed applications. Rather than trying to merge local and remote objects, engineers need to be constantly reminded of the differences between the two, and know when it is appropriate to use each kind of object.
Accepting the fundamental difference between local and remote objects does not mean that either sort of object will require its interface to be defined differently. An interface definition language such as IDL can still be used to specify the set of interfaces that define objects. However, an additional part of the definition of a class of objects will be the specification of whether those objects are meant to be used locally or remotely. This decision will need to consider what the anticipated message frequency is for the object, and whether clients of the object can accept the indeterminacy implied by remote access. The decision will be reflected in the interface to the object indirectly, in that the interface for objects that are meant to be accessed remotely will contain operations that allow reliability in the face of partial failure.
It is entirely possible that a given object will often need to be accessed by some objects in ways that cannot allow indeterminacy, and by other objects relatively rarely and in a way that does allow indeterminacy. Such cases should be split into two objects (which might share an implementation), with one having an interface that is best for local access and the other having an interface that is best for remote access.

A compiler for the interface definition language used to specify classes of objects will need to alter its output based on whether the class definition being compiled is for a class to be used locally or a class to be used remotely. For interfaces meant for distributed objects, the code produced might be very much like that generated by RPC stub compilers today. Code for a local interface, however, could be much simpler, probably requiring little more than a class definition in the target language.
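The two kinds of compiler output can be sketched as two C++ renderings of the same logical interface. Everything below is an assumption about what such a compiler might emit; the names, the example directory interface, and the choice of an exception for failure reporting are all illustrative.

```cpp
#include <stdexcept>
#include <string>

// What the compiler might emit for a class declared *local*: little
// more than an abstract class in the target language.
class LocalDirectory {
public:
    virtual ~LocalDirectory() = default;
    virtual std::string lookup(const std::string& name) = 0;
};

// What it might emit for the same interface declared *remote*: every
// operation can additionally report partial failure, here as an
// exception type the caller is forced to confront.
struct RemoteFailure : std::runtime_error {
    using std::runtime_error::runtime_error;
};

class RemoteDirectory {
public:
    virtual ~RemoteDirectory() = default;
    // May throw RemoteFailure when the outcome of the call is unknowable.
    virtual std::string lookup(const std::string& name) = 0;
};

// Trivial concrete implementation, purely to exercise the local shape;
// the returned path is an invented example value.
class FixedDirectory : public LocalDirectory {
public:
    std::string lookup(const std::string& name) override {
        return name == "home" ? "/export/home" : "";
    }
};
```

The point of the contrast is that the remote rendering changes what callers must be prepared for, not merely how the call is transported.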
While writing code, engineers will have to know whether they are sending messages to local or remote objects, and access those objects differently. While this might seem to add to the programming difficulty, it will in fact aid the programmer by providing a framework under which he or she can learn what to expect from the different kinds of calls. To program completely in the local environment, according to this model, will not require any changes from the programmer's point of view. The discipline of defining classes of objects using an interface definition language will insure the desired separation of interface from implementation, but the actual process of implementing an interface will be no different than what is done today in an object-oriented language.
Programming a distributed application will require the use of different techniques than those used for non-distributed applications. Programming a distributed application will require thinking about the problem in a different way than it was thought about when the solution was a non-distributed application. But that is only to be expected. Distributed objects are different from local objects, and keeping that difference visible will keep the programmer from forgetting the difference and making mistakes. Knowing that an object is outside of the local address space, and perhaps on a different machine, will remind the programmer that he or she needs to program in a way that reflects the kinds of failures, indeterminacy, and concurrency constraints inherent in the use of such objects. Making the difference visible will aid in making the difference part of the design of the system.
Accepting that local and distributed computing are different in an irreconcilable way will also allow an organization to allocate its research and engineering resources more wisely. Rather than using those resources in attempts to paper over the differences between the two kinds of computing, resources can be directed at improving the performance and reliability of each.
One consequence of the view espoused here is that it is a mistake to attempt to construct a system that is "objects all the way down" if one understands the goal as a distributed system constructed of the same kind of objects all the way down. There will be a line where the object model changes; on one side of the line will be distributed objects, and on the other side of the line there will (perhaps) be
local objects. On either side of the line, entities on the other side of the line will be opaque; thus one distributed object will not know (or care) if the implementation of another distributed object with which it communicate