CHORUS Distributed Operating Systems

M. Rozier, V. Abrossimov, F. Armand,
I. Boule, M. Gien, M. Guillemont,
F. Herrmann, C. Kaiser, S. Langlois,
P. Léonard, and W. Neuhauser

Chorus systèmes
`
CHORUS: in ancient Greek drama, a company of performers providing explanation and elaboration of the main action. (Webster's New World Dictionary)
`
ABSTRACT: The CHORUS technology has been designed for building "new generations" of open, distributed, scalable Operating Systems. CHORUS has the following main characteristics:
`
• a communication-based technology, relying on a minimal Nucleus integrating distributed processing and communication at the lowest level, and providing generic services used by a set of subsystem servers to provide extended standard operating system interfaces (a UNIX interface has been developed; others such as OS/2 and object-oriented systems are envisaged);
`
© Computing Systems, Vol. 1, No. 4, Fall 1988
`
`
`
`
• real-time services provided by the real-time Nucleus, and accessible by "system programmers" at the different system levels;

• a modular architecture providing scalability, and allowing in particular dynamic configuration of the system and its applications over a wide range of hardware and network configurations, including parallel and multiprocessor systems.
`
CHORUS-V3 is the current version of the CHORUS Distributed Operating System, developed by Chorus systèmes. Earlier versions were studied and implemented within the Chorus research project at INRIA between 1979 and 1986.
`
This paper presents the CHORUS architecture and the facilities provided by the CHORUS-V3 Nucleus. It describes the UNIX subsystem built with the CHORUS technology that provides:

• binary compatibility with UNIX;

• extended UNIX services supporting distributed applications (network IPC, distributed virtual memory), light-weight processes, and real-time facilities.
`
1. Introduction

The evolution of computer applications has led to the design of large distributed systems for which the requirements for efficiency and availability have increased, as has the need for higher-level tools to help in their construction, operation and administration.
This evolution introduces requirements for new system structures which are difficult to fulfill merely by extending current monolithic operating systems into networks of cooperating systems. This has led to a new generation of distributed operating systems.
`
• Separate applications running on different machines, from different suppliers, supporting different operating systems, and written in a variety of programming languages need to be tightly coupled and logically integrated. The loose coupling provided by current computer networking is insufficient. The requirement is for tighter logical coupling.

• Applications often evolve by growing in size. Typically this leads to distributing programs on several machines, to grouping several geographically distributed sets of files into a unique logical one, to upgrading hardware and software to take advantage of the latest technologies, newer releases, etc. The requirement is for a gradual on-line evolution.

• Applications grow in complexity and get more and more difficult to master, i.e., to specify, to debug, to tune. The requirement is for a clear, logical architecture which allows the modularity of the application to be mapped onto the operational system, and which hides distribution when it does not directly reflect the distributed nature of organizations.
`
These structural properties can best be accomplished through a set of unified, coherent and standard basic concepts and structures providing a rigorous framework adapted to constructing distributed operating systems.

The CHORUS architecture has been designed to meet these requirements. Its foundation is a generic Nucleus running on each machine; communication and distribution are managed at the lowest level by this Nucleus; customary operating systems are built as subsystems on top of the generic Nucleus using its basic services; user application programs run in the context of these operating systems. CHORUS provides the generic Nucleus and a set of servers implementing generic operating system services, which are used to build complete host operating systems. The generic CHORUS Nucleus implements the real-time services required by real-time users. Although it is not dedicated to a particular system, CHORUS also provides a standard UNIX subsystem that can execute UNIX programs with a distributed architecture, as a direct result of the CHORUS technology.
`
`
`
`
This paper focuses on the CHORUS architecture, the facilities provided by its Nucleus, and the (distributed) UNIX subsystem implementation. Extensions to UNIX services concerning real-time, multi-thread processes, distributed applications and servers are outlined.

The CHORUS history and its transition from research to industry are summarized in section 2. Section 3 introduces the key concepts of the CHORUS architecture and the facilities provided by the CHORUS Nucleus. Section 4 explains how the "old" UNIX kernel has been adjusted to state-of-the-art operating system technology while preserving its semantics, and gives examples of how its services can then be easily extended to handle distribution. Section 5 gives some implementation considerations and concluding remarks.

Comments about some of the important design choices, often related to previous experience, are given in small paragraphs entitled "RATIONALE."
`
2. Background and Related Work

"Chorus" was a research project on Distributed Systems at INRIA¹ in France from 1979 to 1986. Three iterations were developed, referred to as CHORUS-V0, CHORUS-V1, and CHORUS-V2, all based on a communications-oriented kernel [Zimmermann et al. 1981; Guillemont 1982(2); Zimmermann et al. 1984; Rozier & Legatheaux-Martins 1987]. The basic concept for handling distributed computing within CHORUS, for system as well as application services, is for a "Nucleus" to manage the exchange of "Messages" between "Ports" attached to "Actors."

While early versions of CHORUS had a custom interface, CHORUS-V2 [Armand et al. 1986] was compatible with UNIX System V, and had been used as a basis for supporting half a dozen research distributed applications. CHORUS-V3 is the current version, developed by Chorus systèmes. It builds on previous CHORUS experience [Rozier & Legatheaux-Martins 1987] and integrates many concepts from state-of-the-art distributed systems developed in several research projects, while taking into account constraints of the industrial environment.

¹ INRIA: Institut National de Recherche en Informatique et Automatique.
The CHORUS-V3 message-passing Nucleus compares to the V-system [Cheriton 1988(1)] of Stanford University; distributed virtual memory and "threads" are similar to those of Mach [Accetta et al. 1986] of Carnegie Mellon University; network addressing incorporates ideas from Amoeba [Mullender et al. 1987] of the University of Amsterdam; and uniform file naming is based on a scheme similar to the one used in Bell Laboratories' 9th Edition UNIX [Presotto 1986; Weinberger 1986].

This technology has been used to implement a distributed UNIX system [Herrmann et al. 1988] as a set of servers using the generic services provided by the CHORUS Nucleus.
`
2.1 Early Research

The Chorus project at INRIA was initiated with combined experience from previous research in packet-switching computer networks — Cyclades [Pouzin et al. 1982] — and time-sharing operating systems — Esope [Bétourné et al. 1970]. The idea was to bring distributed control techniques that originated in the context of packet-switching networks into distributed operating systems.

In 1979 INRIA also launched another project, Sol, to reimplement a complete UNIX environment on French micro and mini computers [Gien 1983]. The Sol team joined Chorus in 1984, bringing their UNIX expertise to the project.
`
2.2 CHORUS-V0 (1980-1982)

CHORUS-V0 experimented with three main concepts:

• A distributed application was an ensemble of independent actors communicating exclusively by exchange of messages through ports or groups of ports; port management and naming were designed so as to allow port migration and dynamic reconfiguration of applications.

• The operation of an actor was an alternate sequence of indivisible execution phases, called processing-steps, and of communication phases; it provided a message-driven automaton style of processing.

• The operating system was built as a small nucleus, simple and reliable, replicated on each site and complemented by distributed system actors in charge of ports, actors, files, terminal and network management.

These original design choices were revealed to be sound and were maintained in subsequent versions.

These CHORUS concepts have been applied in particular to fault tolerance: the "coupled actors" scheme [Banino & Fabre 1982] provided a basis for non-stop services.

CHORUS-V0 was implemented on Intel 8086 machines, interconnected by a 50 Kb/s ring network (Danube). The prototype was written in UCSD Pascal and the code was interpreted. It was running by mid-1982.
`
2.3 CHORUS-V1 (1982-1984)

This version moved CHORUS from a prototype to a real system. The sites were SM90 multi-processor micro-computers — based on Motorola 68000 and later 68020 — interconnected by a 10 Mb/s Ethernet. In a multi-processor configuration, one processor ran UNIX, as a development system and for managing the disk; the other processors (up to seven) ran CHORUS, one of them interfacing to the network. The Pascal code was compiled.

The main focus of this version was experimentation with a native implementation of CHORUS on a multi-processor architecture. The design had few changes from CHORUS-V0, namely:

• Structured messages were introduced to allow embedding protocols and migrating their contexts.

• The concept of an activity message, transporting the context of embedded computations and the graph of future distributed computation, was experimented with in a fault-tolerant application [Banino et al. 1985(1)].
`
`
`
`
CHORUS-V1 was running in mid-1984. It was distributed to a dozen places, some of which still use it in 1988. It was documented for these users.
`
2.4 CHORUS-V2 (1984-1986)

Adopting UNIX forced the CHORUS interface to be recast and the system actors to be changed. The Nucleus, on the other hand, did not change a great deal. The UNIX subsystem was developed partly from results of the Sol project (File Manager) and partly from scratch (Process Manager). Concepts such as ports, messages, processing steps, and remote procedure calls were revisited in order to be closer to UNIX semantics and to allow a protection scheme à la UNIX. The UNIX interface was extended to support distributed applications (distant fork, distributed signals, distributed files).

CHORUS-V2 was an opportunity to reconsider the whole UNIX kernel architecture with two objectives:

1. Modularity: all UNIX services were split into several independent actors. This implied splitting UNIX kernel data into several independent CHORUS actors along with cooperation protocols between these actors.

2. Distribution: objects managed by system actors (files, processes) could be distributed; services offered by system actors could also be distributed (e.g., distant fork, remote file access); this implied new protocols, naming, localization, etc.; the designation and naming levels for distributed objects, groups and the communication protocols were redesigned.

A distributed file system was implemented. A distributed shell for UNIX was also developed.

All this work was an irreplaceable exercise for CHORUS-V3: CHORUS-V2 may be considered as the draft of the current version. CHORUS-V2 was running at the end of 1986. It has been documented and used by research groups outside the Chorus project.
`
`
`
`
2.5 CHORUS-V3 (1987- )

The objectives of this current version are to provide an industrial product integrating all positive aspects of the previous experiences and research versions of CHORUS and of other systems, along with several significant new features. CHORUS-V3 is described in the rest of the paper.
`
2.6 Appraisal of Four CHORUS Versions

The first lesson that can be drawn from the CHORUS story is that several steps and successive whole redesigns and implementations of the same basic concepts provide an exceptional opportunity for refining, maturing and validating initial intuitions: think about UNIX!

On the technical side, the basic modular structure of kernel and system actors never really changed; some concepts also survived all versions: ports, port groups, messages.

However, the style of communication (IPC) evolved in each version: naming and protection of ports experimented with local names, global names, and protection identifiers. The protocols, which were purely asynchronous at the beginning, moved by steps to synchronous communications and finally led to synchronous RPC. Consequently, structured messages were no longer useful, and processing steps within an actor were in contradiction with the extent of the RPC.

Actors evolved from a purely sequential automaton with processing steps to a real-time multi-thread virtual machine, which is now used for resource allocation and as an addressing space.

Protection and fault tolerance are still open questions, since UNIX leaves few choices and because earlier experiments were not convincing as to the value of implementing specific mechanisms inside the kernel (e.g., reliable broadcast, atomic transactions, commit, redundancy).

Early versions of CHORUS handled fixed memory spaces, with the possibility of using memory management units for relocation. This evolved to dynamic virtual memory systems with demand paging, mapped into distributed and sharable segments.
`
`
`
`
Finally, although Pascal did not cause any major problem as an implementation language, it has been replaced by C++, which can rely on the wider audience that C now has in industry. C++ also brings facilities (classes, inheritance, tight coupling with C) that have been quite useful as a system language.

Since the beginning of the project, most design concepts and experiments have been reported. A summary of these publications is given in §8.
`
3. CHORUS Concepts and Facilities

3.1 The CHORUS Architecture

3.1.1 Overall Organization

A CHORUS System is composed of a small-sized Nucleus and a number of System Servers. These servers cooperate in the context of Subsystems (e.g., UNIX), providing a coherent set of services and interfaces to their "users" (Figure 1).
`
RATIONALE. This overall organization is a logical view of an open operating system. It can be mapped on a centralized as well as on a distributed configuration. At this level, distribution is hidden.

The choice has been to build a two-level logical structure, with a "generic nucleus" at the lowest level, and almost autonomous "subsystems" providing applications with usual operating system services.

Therefore the CHORUS Nucleus has not been built as the core of a specific operating system (e.g., UNIX); rather, it provides generic tools designed to support a variety of host subsystems, which can co-exist on top of the Nucleus.

This structure allows support of application programs which already run on existing (usually centralized) operating systems, by reproducing those existing operating system interfaces within a given subsystem. This approach is illustrated with UNIX in this paper.
Note also the now classic idea of separating the functions of an operating system into groups of services provided by autonomous servers inside subsystems. In monolithic systems, these functions are usually part of the "kernel." This separation of functions increases modularity and therefore portability and scalability of the overall system.

[Figure 1: The CHORUS Architecture — application programs and libraries above the system servers, which sit above the CHORUS (generic) Nucleus]
3.1.1.1 THE CHORUS NUCLEUS The CHORUS Nucleus (Figure 2) plays a double role:

1. Local services: It manages, at the lowest level, the local physical computing resources of a "computer," called a Site, by means of three clearly identified components:

• allocation of local processor(s) is controlled by a Real-time multi-tasking Executive; this executive provides fine-grain synchronization and priority-based preemptive scheduling;

• local memory is managed by the Virtual Memory Manager, controlling memory space allocation and structuring virtual memory address spaces;
`
`
`
`
`
`
[Figure 2: The CHORUS Nucleus — the portable IPC Manager, Real-time Executive and Virtual Memory Manager, and the machine-dependent Supervisor, layered over the hardware]
`
• external events — interrupts, traps, exceptions — are dispatched by the Supervisor.

2. Global services: The IPC Manager provides the communication service, delivering messages regardless of the location of their destination within a CHORUS distributed system. It supports RPC (Remote Procedure Call) facilities and asynchronous message exchange, and implements multicast as well as functional addressing. It may rely on external system servers (i.e., Network Managers) to operate all kinds of network protocols.
`
RATIONALE. Surprisingly, the structure of the Nucleus is also logical, and distribution is almost hidden. Local services deal with local resources and can be managed mostly with local information only. Global services involve cooperation between nuclei to cope with distribution.

In CHORUS-V3 it was decided, for efficiency reasons experienced in CHORUS-V2, to include in the nucleus some functions which could have been provided by system servers:
`
`
`
`
actor and port management (creation, destruction, localization), name management, and RPC management.

The "standard" CHORUS IPC is the only means — or "tool" — used to communicate between managers of different sites; they all use it rather than dedicated protocols — for example, Virtual Memory managers requesting a remote segment to service a page fault.

The nucleus has also been designed to be highly portable, even if this prevents the use of some specific features of the underlying hardware. Experience with porting the nucleus to half a dozen different Memory Management Units (MMUs) on three chip sets has shown the validity of this choice.
3.1.1.2 THE SUBSYSTEMS System servers implement high-level system services and cooperate to provide a coherent operating system interface. They communicate via the Inter-Process Communication facility (IPC) provided by the CHORUS Nucleus.

3.1.1.3 SYSTEM INTERFACES A CHORUS system exhibits several levels of interface (Figure 1):
`
• Nucleus Interface: The Nucleus interface is composed of a set of procedures providing access to the services of the Nucleus. If the Nucleus cannot render the service directly, it communicates with a distant Nucleus via the IPC.

• Subsystem Interface: This interface is composed of a set of procedures accessing the Nucleus interface and some Subsystem-specific protected data. If a service cannot be rendered directly from this information, these procedures "call" (RPC) the services provided by System Servers.

The Nucleus and Subsystem interfaces are enriched by libraries. Such libraries permit the definition of programming-language-specific access to System functionalities. These libraries (e.g., the "C" library) are made up of functions linked into and executed in the context of user programs.
`
3.1.2 Basic Abstractions Implemented by the CHORUS Nucleus

The basic abstractions implemented and managed by the CHORUS Nucleus are:
`
`
`
`
Actor                    unit of resource collection and memory address space
Thread                   unit of sequential execution
Message                  unit of communication
Port, Port Group         unit of addressing and basis of (re)configuration
Unique Identifier (UI)   global name
Region                   unit of structuring of an actor address space
`
These abstractions (Figure 3) correspond to object classes which are private to the CHORUS Nucleus: both the object representation and the operations on the objects are managed by the Nucleus. These basic abstractions are the object classes to which the services invoked at the Nucleus interface are related.

Three other abstractions are also managed both by the CHORUS Nucleus and by Subsystem Actors:
`
[Figure 3: CHORUS Main Abstractions — actors on each site holding ports and exchanging messages, the sites linked by a communication medium]
`
`
`
Segment                  unit of data encapsulation
Capability               unit of data access control
Protection Identifier    unit of authentication
`
RATIONALE. Each of the above abstractions plays a specific role in the System.

An Actor encapsulates a set of resources:

• a virtual memory context divided into Regions, coupled with local or distant segments;

• a communication context, composed of a set of ports;

• an execution context, composed of a set of threads.

A Thread is the grain of execution and corresponds to the usual notion of a process or task. A thread is tied to one and only one actor, sharing the actor's resources with the other threads of that actor.

Messages are byte strings addressed to ports.

Upon creation, a Port is attached to one actor, allowing (the threads of) that actor to receive messages on that port. Ports can migrate from one actor to another. Any thread knowing a port can send messages to it.

Ports can be grouped dynamically into Port Groups, providing multicast or functional addressing facilities.

Actors, ports and port groups receive Unique Identifiers (UIs) which are global (location independent) and unique in space and in time.

Segments are collections of data located anywhere in the system and referred to independently of the type of device used to store them. Segments are managed by System Servers, which define the way they are designated and handle their storage.

Two mechanisms are provided for building access control and authentication:

Resources (e.g., segments) can be identified within their servers by a key which is server dependent. Since keys have no meaning outside a server, they are associated with the port UI of the server to form a (global) Capability.

Actors and ports receive Protection Identifiers, which the nuclei use to stamp all messages sent and which receiving actors use for authentication.
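This two-level naming scheme can be sketched in C. The struct layout and the helper names (cap_make, cap_is_for_server) are illustrative assumptions, not the CHORUS-V3 definitions:

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the capability scheme described above: a server-local key
 * has no meaning outside its server, so it is paired with the server
 * port's Unique Identifier to form a global Capability. Field layout
 * and names are hypothetical, for illustration only. */

typedef struct {
    uint64_t ui;          /* globally unique, location-independent port name */
} PortUI;

typedef struct {
    PortUI   server_port; /* where requests for this resource are sent   */
    uint64_t key;         /* server-dependent key naming the resource    */
} Capability;

/* A server builds a capability for a resource it holds. */
static Capability cap_make(PortUI server_port, uint64_t key) {
    Capability c = { server_port, key };
    return c;
}

/* A server first checks that the capability presented names one of its
 * own ports before interpreting the server-local key. */
static int cap_is_for_server(Capability c, PortUI my_port) {
    return c.server_port.ui == my_port.ui;
}
```

A file server, for instance, could hand out capabilities whose keys index its internal segment table; clients never interpret the key themselves.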
`
`
`
`
3.2 Active Entities

3.2.1 Physical Support: Sites

The physical support of a CHORUS system is composed of an ensemble of sites ("machines" or "boards"), interconnected by a communication network (or bus). A site is a grouping of tightly coupled physical resources: one or more processors, central memory, and attached I/O devices. There is one CHORUS Nucleus per site.

RATIONALE. A Site is not a basic CHORUS abstraction (neither are devices). Site management is performed by site servers, i.e., System administrators, and the site abstraction is implemented by these servers.
`
3.2.2 Virtual Machines: Actors

The actor is the logical "unit of distribution" and of collection of resources in a CHORUS system. An actor defines a protected (paged) address space supporting the execution of threads (light-weight processes or tasks) that share the address space of the actor. An address space is split into a "user" address space and a "system" address space. On a given site, each actor's "system" address space is identical and its access is restricted to privileged levels of execution (Figure 4).

A given site may support many simultaneous actors. Since each has its own "user" address space, actors define protected "virtual machines" to the user.

Any given actor is tied to one site and the threads supported by any given actor are always executed on the site to which that actor is tied. The physical memory used by the code and data of a thread is always that of the actor's site. Actors (and threads) cannot migrate from one site to another.

RATIONALE. Because each actor is tied to one site, the state of the actor (i.e., its contexts) is precisely defined — there is no uncertainty due to distribution, since it depends only on the status of its supporting site. The state of an actor can thus be known rapidly and decisions can be taken easily. The crash of a site leads to the complete crash of its actors — there is no partially crashed actor.
`
`
`
`
`
[Figure 4: Actor Address Spaces — each actor (Actor 1, Actor 2, ..., Actor i) has its own "user" address space; all actors on a site share a single "system" address space]
`
Actors are designated by capabilities built from a UI (the UI of the actor's default port) and a manipulation key. Knowledge of the capability of an actor yields all of the rights on that actor (creating ports, threads and regions in the actor, destroying it, etc.). By default, only the creator of an actor knows the capability of the created actor; however, the creator can transmit it to others.

The resources held by an actor (the ports that are attached to the actor, the threads, the memory regions) are designated within the actor's context with Contextual Identifiers (i.e., Local Descriptors). The scope of such identifiers is limited to the specific actor which uses the resource.
`
3.2.3 Processes: Threads

The thread is the "unit of execution" in the CHORUS system. A thread is a sequential flow of control and is characterized by a thread context corresponding to the state of the processor (registers, program counter, stack pointer, privilege level, etc.).
`
`
`
`
A thread is always tied to one and only one actor, which defines the address space in which the thread can operate. The actor thus constitutes the execution environment of the thread. Within the actor, many threads can be created and can run in parallel. These threads share the resources (memory, ports, etc.) of that actor and of that actor only. When a site supports multiple processors, the threads of an actor can be made to run in parallel on those different processors.

Threads are scheduled as independent entities. The basic scheme is preemptive priority-based scheduling, but the Nucleus also implements time-slicing and priority degradation on a per-thread basis. This allows, for example, real-time applications and multi-user interactive environments to be supported by the same Nucleus according to their respective needs and constraints.

Threads communicate and synchronize by exchanging messages using the CHORUS IPC (see §3.3), even if they are located on the same site. However, as the threads of an actor share the same address space, communication and synchronization mechanisms based on shared memory can also be used inside one actor. In most cases, when the machine instruction set allows it, the implementation of such synchronization tools avoids invoking the nucleus.
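As a sketch of that last point, an intra-actor lock built on an atomic test-and-set instruction stays entirely in user space on its fast path. This C11 spinlock illustrates the idea only; it is not CHORUS code, and a production tool would fall back to the nucleus under contention rather than busy-wait:

```c
#include <stdatomic.h>

/* Illustrative intra-actor lock: threads of one actor share an address
 * space, so an atomic test-and-set in shared user memory suffices to
 * synchronize them without any nucleus call on the uncontended path. */

typedef struct {
    atomic_flag locked;
} spinlock_t;

static void spin_init(spinlock_t *l) {
    atomic_flag_clear(&l->locked);
}

static void spin_lock(spinlock_t *l) {
    /* Busy-wait entirely in user space; a real implementation would
     * invoke the nucleus to block after spinning for a while. */
    while (atomic_flag_test_and_set_explicit(&l->locked, memory_order_acquire))
        ;
}

static void spin_unlock(spinlock_t *l) {
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```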
`
RATIONALE. Why threads?

• Because one actor corresponds to one virtual address space and is tied to one site, threads allow multiple processes on a site corresponding to a machine with no virtual memory (i.e., one which provides only one addressing space, such as a Transputer).

• Threads provide a powerful tool for programming I/O drivers. Drivers are bound to interrupts, and associating one thread with each I/O stream simplifies driver programming.

• Threads allow multi-programming servers, providing a good match to the "client-server" style of programming.

• Threads allow using multiple processors within one actor, e.g., on a shared-memory symmetric multi-processor site.
`
`
`
`
• Threads are light-weight processes, whose context switching is far less expensive than an actor context switch.
`
3.2.4 Actors and Threads Nucleus Interface

The Nucleus interface for actor and thread management is summarized in Table 1:

actorCreate     Create an actor
actorDelete     Delete an actor
actorStop       Stop the actor's threads
actorStart      Restart the actor's threads
actorSetPar     Set actor parameters
threadCreate    Create a thread
threadDelete    Delete a thread
threadStop      Stop a thread
threadStart     Restart a thread
threadSetPar    Set thread parameters

Table 1: Actors and Threads Services
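A toy in-memory model can make the Table 1 semantics concrete: each thread is tied to exactly one actor, and actorStop, actorStart and actorDelete act on all of that actor's threads. The data layout, fixed-size tables and simplified signatures below are illustrative assumptions; the paper does not give the actual interface definitions.

```c
#include <stddef.h>

/* Illustrative model of the actor/thread services of Table 1. */

enum { MAX_THREADS = 16 };
typedef enum { T_FREE, T_RUNNING, T_STOPPED } ThreadState;

typedef struct { int alive; } Actor;

typedef struct {
    ThreadState state;
    Actor *actor;          /* the one and only actor this thread is tied to */
} Thread;

static Thread threads[MAX_THREADS];   /* zero-initialized: all T_FREE */

static void actorCreate(Actor *a) { a->alive = 1; }

/* A thread is created inside an actor and shares its resources. */
static Thread *threadCreate(Actor *a) {
    for (int i = 0; i < MAX_THREADS; i++)
        if (threads[i].state == T_FREE) {
            threads[i].state = T_RUNNING;
            threads[i].actor = a;
            return &threads[i];
        }
    return NULL;
}

/* actorStop / actorStart act on every thread of the actor. */
static void actorStop(Actor *a) {
    for (int i = 0; i < MAX_THREADS; i++)
        if (threads[i].actor == a && threads[i].state == T_RUNNING)
            threads[i].state = T_STOPPED;
}

static void actorStart(Actor *a) {
    for (int i = 0; i < MAX_THREADS; i++)
        if (threads[i].actor == a && threads[i].state == T_STOPPED)
            threads[i].state = T_RUNNING;
}

/* Deleting an actor reclaims all of its threads. */
static void actorDelete(Actor *a) {
    for (int i = 0; i < MAX_THREADS; i++)
        if (threads[i].actor == a) {
            threads[i].state = T_FREE;
            threads[i].actor = NULL;
        }
    a->alive = 0;
}
```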
`
3.3 Communication Entities

3.3.1 Overview

Threads synchronize and communicate using a single basic mechanism: the exchange of messages via message queues called Ports.

Inside an actor, ports are locally used as message semaphores. More generally, unique and global names (UIs) may be given to ports, allowing message communications to cross the actor's boundaries. This facility, known as IPC (Inter-Process Communication), allows any thread to communicate and synchronize with any other thread on any site.

The main characteristic of the CHORUS IPC is its transparency with respect to the location of threads: communication is expressed through a uniform interface (ports), regardless of whether the communication is between two threads in a single actor, between two threads in two different actors on the same site, or between two threads in two different actors on two different sites. Messages are transferred from a sending port to a receiving port.
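A minimal single-site sketch of this model: a port is a message queue, messages are byte strings copied into it, and a thread of the owning actor later consumes them. The names ipcSend/ipcReceive, the fixed queue depth, and the non-blocking receive are illustrative assumptions, not the CHORUS interface:

```c
#include <string.h>

/* Toy port: a bounded FIFO of byte-string messages. */
enum { QDEPTH = 8, MSGMAX = 64 };

typedef struct {
    char   msgs[QDEPTH][MSGMAX];
    size_t len[QDEPTH];
    int    head, count;
} Port;

/* Logically copy a message into the port's queue; -1 if full/too big. */
static int ipcSend(Port *p, const void *buf, size_t n) {
    if (p->count == QDEPTH || n > MSGMAX) return -1;
    int slot = (p->head + p->count) % QDEPTH;
    memcpy(p->msgs[slot], buf, n);
    p->len[slot] = n;
    p->count++;
    return 0;
}

/* Consume the oldest message; a real receive would block when empty. */
static long ipcReceive(Port *p, void *buf, size_t max) {
    if (p->count == 0) return -1;
    size_t n = p->len[p->head];
    if (n > max) return -1;
    memcpy(buf, p->msgs[p->head], n);
    p->head = (p->head + 1) % QDEPTH;
    p->count--;
    return (long)n;
}
```

The point of the uniform interface is that a caller of ipcSend need not know whether the receiving port sits in the same actor, another actor on the same site, or a remote site.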
`
`
`
`
3.3.2 Messages

A message is basically a contiguous byte string, logically copied from the sender's address space to the receiver(s)' address space(s). Using a coupling between virtual memory management and IPC, large messages may be transferred efficiently by deferred copying (copy-on-write), or even by simply moving page descriptors (on a given site).
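The deferred-copy idea can be sketched at user level with a reference count: sender and receiver share one underlying buffer, and a private copy is made only when one side writes. This models the principle only; CHORUS applies it at the level of pages and page descriptors, and this toy deliberately ignores deallocation:

```c
#include <stdlib.h>
#include <string.h>

/* Toy copy-on-write buffer: sharing is free, copying is deferred
 * until the first write to a still-shared buffer. */
typedef struct {
    char  *data;
    size_t n;
    int   *refs;           /* shared reference count */
} CowBuf;

static CowBuf cow_new(const char *src, size_t n) {
    CowBuf b = { malloc(n), n, malloc(sizeof(int)) };
    memcpy(b.data, src, n);
    *b.refs = 1;
    return b;
}

/* "Send": the receiver shares the same storage; nothing is copied. */
static CowBuf cow_share(CowBuf *b) {
    (*b->refs)++;
    return *b;
}

/* Copy only on write, and only while the buffer is still shared. */
static void cow_write(CowBuf *b, size_t i, char c) {
    if (*b->refs > 1) {
        (*b->refs)--;
        char *priv = malloc(b->n);
        memcpy(priv, b->data, b->n);
        b->data = priv;
        b->refs = malloc(sizeof(int));
        *b->refs = 1;
    }
    b->data[i] = c;
}
```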
`
RATIONALE. Why messages rather than shared memory?

• Messages make the exchange of information explicit, thus clarifying all actions.

• Messages make debugging of a distributed application easier, especially when using RPC, which involves sequential processing steps in different actors.

• Messages are easier to manage than shared memory in a heterogeneous environment.

• The state of an actor can be known more precisely (before a message transmission, after receiving a message, etc.).

• The cost of information exchange is better isolated and evaluated when it is done through messages — since there are explicit calls to the nucleus — than the cost of memory accesses, which depends on traffic on the bus, memory contention, memory locking, etc. The grain of information exchange is bigger, better defined, and its cost better known.

• Performance of local communications is still preserved by implementation hints and local optimizations (see §5).
`
3.3.3 Ports

Messages are not addressed directly to threads (nor to actors), but to intermediate entities called ports. The notion of a port provides the necessary decoupling between the interface of a service and its implementation. In particular, it provides the basis for dynamic reconfiguration (see §3.4.4).
`
`
`
`
A port represents both:

• a resource (essentially a message queue holding the messages received by the port but not yet consumed by the receiving threads), and

• an address to which messages can be sent.

When created, a port is attached to one actor. The threads of this actor (and only they) may receive messages on the port. A port can only be attached to a single actor at a time, but it can be "used" by different threads within that actor.

A port can be successively attached to different actors: i.e., a port can migrate from one actor to another. This migration can also be applied to the messages already received by the port.
`
RATIONALE. Why ports?

Decoupling communication from execution, a Port is a functional name for receiving messages:

• one actor may have several ports, and therefore communication can be multiplexed;

• a port can be used successively by several actors (actors grouped, and functionally equivalent);

• multiple threads may share a single port, providing cheap expansion of server performance on multiprocessor machines;

• the notion of "port" provides the basis for dynamic reconfiguration: the extra level of indirection (the port) between any two communicating threads means that the thread supplying a given service can be changed from a thread of one actor to a thread of another actor. This is done by changing the attachment of the appropriate port from the first thread's actor to the new thread's actor (see §3.4.4).
`
When a port is considered as a res