`
Architectural issues in microkernel-based operating systems: the CHORUS experience

Allan Bricker, Michel Gien, Marc Guillemont, Jim Lipkis, Doug Orr and Marc Rozier consider pragmatic architectural issues in the development of the CHORUS OS
`
`
`
An important trend in operating system development is the restructuring of the traditional monolithic operating system kernel into independent servers running on top of a minimal nucleus or ‘microkernel’. This approach arises out of the need for modularity and flexibility in managing the ever-growing complexity caused by the introduction of new functions and new architectures. In particular, it provides a solid architectural basis for distribution, fault-tolerance and security. Microkernel-based operating systems have been a focus of research for a number of years, and are now beginning to play a role in commercial UNIX* systems. The ultimate feasibility of this attractive approach is not yet widely recognized, however. A primary concern is efficiency: can a microkernel-based modular operating system provide performance comparable to that of a monolithic kernel when running on comparable architectures? The elegance and flexibility of the client-server model may exact a cost in message-handling and context-switching overhead. If this penalty is too great, commercial acceptance will be limited. Another pragmatic concern is compatibility: in an industry relying increasingly on portability and standardization, compatible interfaces are needed not only at the level of application programs, but also for device drivers, streams modules and other components. In many cases, binary as well as source compatibility is required. These concerns affect the structure and organization of the operating system. The Chorus team has spent the past six years studying and experimenting with UNIX ‘kernelization’ as an aspect of its work in distributed and real-time modular systems. Aspects of the current CHORUS† system are examined here in terms of its evolution from the previous version. The focus is on pragmatic issues such as performance and compatibility, as well as considerations of modularity and software engineering.

Chorus systèmes, 6, Ave Gustave Eiffel, F-78182 Saint-Quentin-en-Yvelines CEDEX, France
*UNIX is a trademark of UNIX Systems Labs, Inc.
`
Keywords: operating systems, architecture, compatibility, distributed computing
`
`MICROKERNEL ARCHITECTURES
`
A recent trend in operating system development consists of structuring the operating system as a modular set of system servers which sit on top of a minimal microkernel, rather than using the traditional monolithic structure. This new approach promises to help meet system and platform builders’ needs for a sophisticated operating system development environment that can cope with growing complexity, new architectures, and changing market conditions.
`
†CHORUS is a registered trademark of Chorus systèmes.
`
`
In this operating system architecture, the microkernel provides system servers with generic services, such as processor scheduling and memory management, independent of a specific operating system. The microkernel also provides a simple Inter-Process Communication (IPC) facility that allows system servers to call each other and exchange data independent of where they are executed, in a multiprocessor, multicomputer or network configuration.
This combination of primitive services forms a standard base which in turn supports the implementation of functions that are specific to a particular operating system or environment. These system-specific functions can then be configured, as appropriate, into system servers managing the other physical and logical resources of a computer system, such as files, devices and high-level communication services. We refer to such a set of system servers as a subsystem. Real-time systems tend to be built along similar lines, with a very simple generic executive supporting application-specific real-time tasks.
`
`UNIX and microkernels
`
UNIX introduced the concept of a standard, hardware-independent operating system, whose portability allowed platform builders to reduce their time to market by obviating the need to develop proprietary operating systems for each new platform.
However, as more function and flexibility is continually demanded, it is unavoidable that today’s versions become increasingly complex. For example, UNIX is being extended with facilities for real-time applications and on-line transaction processing (OLTP). Even more fundamental is the move toward distributed systems. It is desirable in today’s computing environments that new hardware and software resources, such as specialized servers and applications, be integrated into a single system, distributed over a network. The range of communication media commonly encountered includes shared memory, buses, high-speed networks, local area networks (LAN) and wide area networks (WAN). This trend to integrate new hardware and software components will become fundamental as collective computing environments emerge.
To support the addition of function to UNIX and its migration to distributed environments, it is desirable to map UNIX onto a microkernel architecture, where machine dependencies may be isolated from unrelated abstractions and facilities for distribution may be incorporated at a very low level.
The attempt to reorganize UNIX to work within a microkernel framework poses problems, however, if the resultant system is to behave exactly as a traditional UNIX implementation. A primary concern is efficiency: a microkernel-based modular operating system must provide performance comparable to that of a monolithic kernel. The elegance and flexibility of the client-server model may exact a cost in message-handling and context-switching overhead. If this penalty is too great, commercial acceptance will be limited. Another pragmatic concern is compatibility: in an industry relying increasingly upon portability and standardization, compatible interfaces are needed not only at the level of application programs, but also for device drivers, streams modules, and other components. In many cases, binary as well as source compatibility is required. These concerns affect the structure and organization of the operating system.
There is work in progress on a number of fronts to emulate UNIX on top of a microkernel architecture, including the Mach, V, and Amoeba projects. Plan 9 from Bell Labs is a distributed UNIX-like system based on the ‘minimalist’ approach. CHORUS versions V2 and V3 represent the work we have done to solve the problems of compatibility and efficiency.
`
`CHORUS microkernel technology
`
The Chorus team has spent the past six years studying and experimenting with UNIX ‘kernelization’ as an aspect of its work in modular, distributed and real-time systems. The first implementation of a UNIX-compatible microkernel-based system was developed between 1984 and 1986 as a research project at INRIA. Among the goals of this project were to explore the feasibility of shifting as much function as possible out of the kernel, and to demonstrate that UNIX could be implemented as a set of modules that did not share memory. In late 1986, an effort to create a new version, based on an entirely rewritten CHORUS nucleus, was launched at Chorus systèmes. The current version maintains many of the goals of its predecessor, and adds some new ones, including real-time support and — not coincidentally — commercial viability. A UNIX subsystem compatible with System V Release 3.2 is currently available, with System V Release 4.0 and 4.4BSD systems under development. The System V Release 3.2 implementation performs comparably with well-established monolithic-kernel systems on the same hardware, and better in some respects. As a testament to its commercial viability, the system has been adopted for use in commercial products ranging from X terminals and telecommunication systems to mainframe UNIX machines.
In this paper we examine aspects of the current CHORUS system in terms of its evolution from the previous version. Our focus is on pragmatic issues such as performance and compatibility, as well as considerations of modularity and software engineering.
We review and evaluate the previous CHORUS version below, and discuss how the lessons learned from its implementation led to the main design decisions for the current version. The subsequent sections focus on specific aspects of the current design.
`
`CHORUS V2 OVERVIEW
`
The CHORUS project, while at INRIA, began researching distributed operating systems with CHORUS V0 and V1. These versions proved the viability of a modular, message-based distributed operating system, examined its potential performance, and explored its impact on distributed applications programming.
Based on this experience, CHORUS V2 was developed. It represented the first intrusion of UNIX into the peaceful CHORUS landscape. The goals of this third implementation of CHORUS were:

1 To add UNIX emulation to the distributed system technology of CHORUS V1.
2 To explore the outer limits of ‘kernelization’; to demonstrate the feasibility of a UNIX implementation with a minimal kernel and semi-autonomous servers.
3 To explore the distribution of UNIX services.
4 To integrate support for a distributed environment into the UNIX interface.
`
Figure 1. CHORUS-V2 architecture (figure showing the File Manager and other servers in user space above the CHORUS Nucleus in system space)
`
Since its birth, the CHORUS architecture has always consisted of a modular set of servers running on top of a microkernel (the nucleus) which included all of the necessary support for distribution.
The basic execution entities supported by the V2 nucleus were mono-threaded actors running in user mode and isolated in protected address spaces. Execution of actors consisted of a sequence of ‘processing-steps’ which mimicked atomic transactions: ports represented operations to be performed; messages would trigger their invocation and provide arguments. The execution of remote operations was synchronised at explicit ‘commit’ points. An ever-present concern in the design of CHORUS was that fault-tolerance and distribution are tightly coupled; hardware redundancy both increases the probability of faults and gives a better chance to recover from these faults.
Communication in CHORUS V2 was, as in many current systems, based upon the exchange of messages through ports. Ports were attached to actors, and had the ability to migrate from one actor to another. Furthermore, ports could be gathered into port groups, which allowed message broadcasting as well as functional addressing. For example, a message could be directed to all members of a port group or to a single member port which resided on a specified site. The port group mechanism provided a flexible set of client-server mapping semantics including dynamic reconfiguration of servers.
Ports, port groups and actors were given globally unique names, constructed in a distributed fashion by each nucleus for use only by the nucleus and system servers. Private, context-dependent names were exported to user actors. These port descriptors were inherited in the same fashion as UNIX file descriptors.
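As an illustration of functional addressing, the following C sketch shows a client directing a request either to every member of a port group or to the member residing on a given site. All of the types and calls shown (UI, Msg, grpBroadcast, grpSendOnSite) are illustrative assumptions, not the actual V2 interface.

    /* Hedged sketch of V2-style port-group addressing; every name
     * here is an illustrative assumption. */
    typedef struct { unsigned long high, low; } UI;   /* global unique name */
    typedef struct { void *data; unsigned len; } Msg;

    extern int grpBroadcast(UI group, Msg *m);            /* all members        */
    extern int grpSendOnSite(UI group, int site, Msg *m); /* member on one site */

    /* A client addresses 'the name service' as a group, without knowing
     * which server instance will answer, or pins the request to a site. */
    void query_name_service(UI nameServiceGroup, Msg *req, int preferredSite)
    {
        if (preferredSite >= 0)
            grpSendOnSite(nameServiceGroup, preferredSite, req);
        else
            grpBroadcast(nameServiceGroup, req);
    }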
`
UNIX

On top of this architecture, a full UNIX System V was built. In V2, the whole of UNIX was split into three servers: a Process Manager, dedicated to process management; a File Manager for block device and file system management; and a Device Manager for character device management. In addition, the nucleus was complemented with two servers, one which managed ports and port groups, and another which managed remote communications (see Figure 1). UNIX network facilities (sockets) were not implemented at this time.
A UNIX process was implemented as a CHORUS actor. All interactions of the process with its environment (i.e. all system calls) were performed as exchanges of messages between the process and system servers. Signals were also implemented as messages.
This ‘modularization’ impacted UNIX in the following ways:

1 UNIX data structures were split between the nucleus and several servers. Splitting the data structures, rather than replicating them, was done to avoid consistency problems. Messages between these servers contained the information managed by one server and required by another in order to provide its service. Careful thought was given to how UNIX data structures were split between servers to minimize communication costs.
2 Most UNIX objects, files in particular, were designated by network-wide capabilities which could be exchanged freely between subsystem servers and sites. The context of a process contained a set of capabilities representing the objects accessed by the process.

As many of the UNIX system calls as possible were implemented by a process-level library. The process context was stored in process-specific library data at a fixed, read-only location within the process address space. The library invoked the servers, when necessary, using an RPC facility. For example, the Process Manager was invoked to handle a fork(2) system call and the File Manager for a read(2) system call on a file.
This library offered only source-level compatibility with UNIX, but was acceptable because binary compatibility was not a project goal. The library resided at a predefined user virtual address in a write-protected area. Library data holding the process context information was not completely secure from malicious or unintentional modification by the user. Thus, errant programs could experience new, unexpected error behaviour. In addition, programs that depended upon the standard UNIX address space layout could cease to function because of the additional address space contents.
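To make the mechanism concrete, a V2-style library stub for read(2) might look like the following C sketch. The RPC primitive, message layouts and fm_capability are illustrative assumptions; only the structure (marshal a request, RPC to the File Manager, copy the result) follows the description above.

    /* Hedged sketch of a process-level system-call stub (V2 style). */
    #include <stddef.h>
    #include <string.h>

    typedef struct { unsigned long high, low; } Capability; /* network-wide name */

    struct read_request { int op; Capability file; size_t nbytes; };
    struct read_reply   { long result; char data[4096]; };

    extern int rpcCall(Capability server, const void *req, size_t reqlen,
                       void *rep, size_t replen);   /* assumed RPC facility */
    extern Capability fm_capability;  /* File Manager, from the process context */

    /* read(2): marshal a request, RPC to the File Manager, copy the data out. */
    long lib_read(Capability file, void *buf, size_t nbytes)
    {
        struct read_request req;
        struct read_reply   rep;

        req.op     = 1;  /* OP_READ, an illustrative operation code */
        req.file   = file;
        req.nbytes = nbytes < sizeof rep.data ? nbytes : sizeof rep.data;

        if (rpcCall(fm_capability, &req, sizeof req, &rep, sizeof rep) < 0)
            return -1;
        if (rep.result > 0)
            memcpy(buf, rep.data, (size_t)rep.result);
        return rep.result;  /* bytes read, or a UNIX error */
    }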
`
`
`ANALYSIS OF CHORUS V2
`
`Extended UNIX services
`
CHORUS V2 extended UNIX services in two ways:

• by allowing their distribution while retaining their original interface (e.g. remote process creation and remote file access);
• by providing access to new services without breaking existing UNIX semantics (e.g. CHORUS IPC).
`
`Distribution of UNIX services
`
Access to files and processes extended naturally to the remote case due to the modularity of CHORUS’s UNIX and its inherent protocols. Files and processes, whether local or remote, were manipulated using CHORUS IPC through the use of location-transparent capabilities.
In addition, CHORUS V2 extended UNIX file semantics with port nodes. A port node was an entry in the file system which had a CHORUS port associated with it. When a port node was encountered during path-name analysis, a message containing the remainder of the path to be analyzed was sent to the associated port. Port nodes were used to automatically interconnect file trees.
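The shape of a server sitting behind a port node is sketched below in C: it receives the unresolved remainder of a path and replies with a capability for the object it resolves. The message layout and the calls (ipcReceive, ipcReply, resolve_locally) are illustrative assumptions; only the forwarding behaviour comes from the text above.

    /* Hedged sketch: a file-tree server attached to a V2 port node. */
    typedef struct { unsigned long high, low; } UI;
    typedef struct { unsigned long high, low; } Capability;

    struct lookup_request { char rest[256]; /* path remainder */ };

    extern int ipcReceive(UI port, void *msg, unsigned len);
    extern int ipcReply(UI port, const void *msg, unsigned len);
    extern Capability resolve_locally(const char *path); /* server's own tree */

    void serve_port_node(UI myPort)
    {
        struct lookup_request req;

        for (;;) {
            ipcReceive(myPort, &req, sizeof req);  /* path-name analysis hit us */
            Capability file = resolve_locally(req.rest);
            ipcReply(myPort, &file, sizeof file);  /* caller continues with it */
        }
    }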
For processes, new protocols between Process Managers were developed in order to distribute fork and exec operations. Remote fork and exec were facilitated because:

• the management of a process context was not distributed: each process context was managed entirely by only one system server (the Process Manager);
• a process context contained only global references to resources (capabilities).

Therefore, creating a remote process could be done almost entirely by transferring the process context from one Process Manager to another.
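A minimal sketch of that transfer, assuming an illustrative context layout and pm_rpc() primitive (neither is taken from the V2 interface):

    /* Hedged sketch: remote fork as a context transfer. Because every
     * field is a location-transparent capability, the context remains
     * meaningful on any site. */
    typedef struct { unsigned long high, low; } UI;
    typedef struct { unsigned long high, low; } Capability;

    struct proc_context {
        Capability root, cwd;    /* file-system references          */
        Capability files[20];    /* open objects, valid on any site */
        Capability self;         /* the process itself              */
        /* no pointers into local memory: global references only    */
    };

    enum { OP_REMOTE_FORK = 1 }; /* illustrative operation code */

    extern int pm_rpc(UI remotePM, int op, const void *arg, unsigned len);

    int remote_fork(UI remotePM, const struct proc_context *ctx)
    {
        /* shipping the context is almost all there is to it */
        return pm_rpc(remotePM, OP_REMOTE_FORK, ctx, sizeof *ctx);
    }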
Since signals were implemented as messages, their distribution was trivial due to the location transparency of CHORUS IPC.
`
`Introduction of new services
`
CHORUS IPC was introduced at user-level. Its UNIX interface was designed in the standard UNIX style:

1 Ports and port groups were known, from within processes, by local identifiers. Access to a port was controlled in a fashion analogous to the access to a file.
2 Ports and port groups were protected in a similar fashion to files (with uids and gids).
3 Port and port group access rights were inherited on fork and exec exactly as are file descriptors.
`
Experience developing and using CHORUS V2 gave us valuable insight into the basic operating system services that a microkernel must provide to implement a rich operating system environment such as UNIX. CHORUS V2 was our third reimplementation of the CHORUS nucleus, but represented our first attempt at integrating an existing, complex operating system interface with microkernel technology. This research exercise was not without faults. However, it demonstrated that we did a number of things correctly. The CHORUS V2 basic IPC abstractions — location transparency, untyped messages, asynchronous and RPC protocols, ports, and port groups — have proven to be very well suited to the implementation of distributed operating systems and applications. These abstractions have been entirely retained for CHORUS V3; only their interface has been enriched to make their use more efficient.
The basic modular architecture of the UNIX subsystem has also been retained in the implementation of CHORUS V3 UNIX subsystems. Some new servers, such as a BSD Socket Manager, have been added to provide new function that was not included in CHORUS V2.
Version 3 of the CHORUS nucleus has been completely redesigned and reimplemented around a new set of project goals. These goals were put in place as a direct result of our experience implementing our first distributed UNIX system.
In the following subsections we briefly state our new goals and then explain how these new goals affected the design of CHORUS V3.
`
`CHORUS V3 goals
`
The design of the CHORUS V3 system has been strongly influenced by a new major goal: to design a microkernel technology suitable for the implementation of commercial operating systems. CHORUS V2 was a UNIX-compatible distributed operating system. The CHORUS V3 microkernel is able to support operating system standards while meeting the new needs of commercial systems builders.
These new goals determined new guidelines for the design of the CHORUS V3 technology:

• Portability: the CHORUS V3 microkernel must be highly portable to many machine architectures. In particular, this guideline motivated the design of an architecture-independent memory management system, taking the place of the hardware-specific CHORUS V2 memory management.
`
`
• Generality: the CHORUS V3 microkernel must provide a set of functions that are sufficiently generic to allow the implementation of many different sets of operating system semantics; some UNIX-related features had to be removed from the CHORUS V2 nucleus. The nucleus must maintain its simplicity and efficiency for users or subsystems which do not require high-level services.
• Compatibility: UNIX source compatibility in CHORUS V2 had to be extended to binary compatibility in V3, both for user applications and device drivers. In particular, the CHORUS V3 nucleus had to provide tools to allow subsystems to build binary compatible interfaces.
• Real-time: process control and telecommunication systems comprise important targets for distributed systems. In this area, the responsiveness of the system is of prime importance. The CHORUS V3 nucleus is, first and foremost, a distributed real-time executive. The real-time features may be used by any subsystem, allowing, for example, a UNIX subsystem to be naturally extended to suit the needs of real-time applications.
• Performance: for commercial viability, good performance is essential in an operating system. While offering the base for building modular, well-structured operating systems, the nucleus interface must allow these operating systems to reach at least the same performance as conventional, monolithic implementations.

These new goals forced us to reconsider CHORUS V2 design choices. In most cases, the architectural elements were retained in CHORUS V3; only their interface evolved. Whenever possible, the V3 interface reflects our desire to leave it to the subsystem designer to negotiate the tradeoffs between simplicity and efficiency, on the one hand, and more sophisticated function, on the other.
`
CHORUS processing model
`
Problems arose with the CHORUS V2 processing model when UNIX signals were first implemented. To treat asynchronous signals in V2 mono-threaded actors, it was necessary to introduce the concept of priorities within messages to expedite the invocation of a signalling operation. Even so, the priorities went into effect only at fixed synchronization points, making it impossible to perfectly emulate UNIX signal behaviour. Further work has shown that signals are one of the major stumbling blocks for building fault-tolerant UNIX systems.

Figure 2. CHORUS V3 nucleus abstractions (figure showing messages exchanged over a communication medium)
`
Lesson: we found the processing-step model of computation to be a poor fit with the asynchronous signal model of exception handling. In order to provide full UNIX emulation, a more general computational model was necessary for CHORUS V3.
`
The solution to this problem gave rise to the V3 multi-threaded processing model. A CHORUS V3 actor is merely a resource container, offering, in particular, an address space in which multiple threads may execute. Threads are scheduled as independent entities, allowing real parallelism on a multiprocessor architecture. In addition, multiple threads allow the simplification of the control structure of server-based applications. New nucleus services, such as thread execution control and synchronization, have been introduced.
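A minimal C sketch of the resulting server structure follows; threadCreate(), ipcReceive() and the rest are assumed names, since the text introduces the services but not their signatures.

    /* Hedged sketch: a multi-threaded V3-style server. One actor (the
     * resource container) holds a port and several worker threads. */
    typedef struct { unsigned long high, low; } UI;

    extern int  threadCreate(void (*entry)(void *), void *arg);
    extern int  ipcReceive(UI port, void *msg, unsigned len);
    extern void handle_request(const char *req);  /* subsystem-specific work */

    static UI servicePort;

    /* Each worker blocks on the shared port; the nucleus schedules the
     * threads independently, so requests are served in parallel on a
     * multiprocessor without an explicit event loop. */
    static void worker(void *unused)
    {
        char req[256];
        (void)unused;
        for (;;) {
            ipcReceive(servicePort, req, sizeof req);
            handle_request(req);
        }
    }

    void start_server(int nworkers)
    {
        for (int i = 0; i < nworkers; i++)
            threadCreate(worker, 0);
    }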
`
CHORUS inter-process communication
`
As a consequence of the change to the basic processing model, the inter-process communication model also evolved. In the V2 processing-step model, IPC and execution were tightly bound, yielding a mechanism that resembled atomic transactions.
This tight binding of communication to execution did not necessarily make sense in a multi-threaded CHORUS V3 system. Thus, the atomic transactions of V2 have been replaced, in V3, by the RPC paradigm, which has since evolved into a very efficient lightweight RPC protocol.
One aspect of the IPC mechanism that has not changed in CHORUS V3 is that messages remain untyped. The CHORUS IPC mechanism is simple and efficient when communicating among homogeneous sites. When communicating between heterogeneous sites, higher-level protocols are used, as needed. A guideline in the design of CHORUS V2, retained in V3, was to allow the construction of simple and efficient applications without forcing them to pay a penalty for sophisticated mechanisms which were required only by specific classes of programs.
`
CHORUS ports
`
A number of enhancements concerning CHORUS ports have been made to provide more generality and efficiency in the most common cases.
`
`Port naming
Recall that in V2 context-dependent port names were exported to the user level while global port names were used by the nucleus and system servers. The user-level context-dependent port names of V2 were intended to provide security and ease of use. It was difficult, however, for applications to exchange port names, since it required intervention by the nucleus and posed bootstrapping problems. As a result, context-dependent names were inconvenient for distributed applications, such as name servers. In addition, many applications had no need of the added security the context-dependent names provided.

Lesson: CHORUS V3 makes global names of ports and port groups (Unique Identifiers) visible to the user, discarding the UNIX-like CHORUS V2 contextual naming scheme. Contextual identifiers turned out not to be an effective paradigm.
`
The first consequence of using Unique Identifiers is simplicity: port and port group names may be freely exchanged by nucleus users, avoiding the need for the nucleus to maintain complex actor context. The second consequence is a lower level of protection: the CHORUS V3 philosophy is to provide subsystems with the means for implementing their own level and style of protection rather than enforcing protection directly in the microkernel. For example, if the security of V2 context-dependent names is desired, a subsystem can easily and efficiently export a protected name-space server. V3 Unique Identifiers have proven to be key to providing distributed UNIX services in an efficient manner.
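Such a subsystem-level name server might look like the following C sketch, under the assumption of an illustrative table layout and access rule (none of these names come from the V3 interface):

    /* Hedged sketch: protection implemented by a subsystem, not the
     * nucleus. A name server hands out a port's Unique Identifier
     * only after its own access check. */
    #include <string.h>

    typedef struct { unsigned long high, low; } UI;

    struct binding {
        const char *name;   /* subsystem-level name          */
        UI          port;   /* the globally valid identifier */
        int         owner;  /* uid allowed to resolve it     */
    };

    /* Returns 0 and fills *out on success; -1 if unknown or denied.
     * The nucleus is not involved in this policy at all. */
    int ns_resolve(const struct binding *tab, int n,
                   const char *name, int uid, UI *out)
    {
        for (int i = 0; i < n; i++) {
            if (strcmp(tab[i].name, name) == 0) {
                if (tab[i].owner != uid)
                    return -1;  /* the subsystem's own protection rule */
                *out = tab[i].port;
                return 0;
            }
        }
        return -1;
    }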
`
`Port implementation
A goal of the V2 project was to determine the minimal set of functions that a microkernel should have in order to support a robust base of computing. To that end, the management of ports and port groups was put into a server external to the nucleus. Providing the ability to replace a portion of the IPC did not prove to be useful, however, since IPC was a fundamental and critical element of all nucleus operations. Maintaining it in a separate server rendered it more expensive to use.

Lesson: we found that actors, ports and port groups are basic nucleus abstractions. Splitting their management did not provide significant benefit, but did impact system performance. Actor, port and port group management has been moved back into the nucleus for V3.
`
`UNIX port extensions
When extending the UNIX interface to give access to CHORUS IPC, we maintained normal UNIX-style semantics. Employing the same form as the UNIX file descriptor for port descriptors was intended to provide uniformity of model. The semantics of ports were sufficiently different from the semantics of files to negate this advantage. In operations such as fork, for example, it did not make sense to share port descriptors in the same fashion as file descriptors. Attempting to force ports into the UNIX model resulted in confusion.

Lesson: a user-level IPC interface was important, but giving it UNIX semantics was cumbersome and unnecessary. This lesson is an example of a larger principle: the nucleus abstractions should be primitive and generally applicable — they should not be coerced into the framework of a specific operating system.
`
V3 avoids this issue by, as previously mentioned, exporting global names. Since the V3 nucleus no longer manages the sharing of global port and port group names, it is up to the individual subsystem servers to do so. In particular, if counting the number of references to a given port is important to a subsystem, it is the subsystem itself that must maintain the reference count. On the other hand, a subsystem that has no need for reference counting is not penalized by the nucleus.
Using V2 port nodes to interconnect file systems was a simple, but extremely powerful, extension to UNIX. Since all access to files was via CHORUS messages, port nodes provided network-transparent access to regular files as well as to device nodes. They also, however, introduced a new file type into the file system. This caused many system utilities, such as ls and find, not to function properly. Thus, all such utilities had to be modified to take the new file type into account.
Port nodes have been maintained in CHORUS V3 (however, they are now called symbolic ports). In future CHORUS UNIX systems, the file type ‘symbolic port’ may be eliminated by inserting the port into the file system hierarchy using the mount system call. These port mount points would carry the same semantics as a normal mounted file system.
`
`Virtual memory
`
The virtual memory subsystem has undergone significant change. The machine-dependent virtual memory system of CHORUS V2 has been replaced in V3 by a highly portable VM system. The VM abstractions presented by the V3 nucleus include ‘segments’ and ‘regions’. Segments encapsulate data within a CHORUS system and typically represent some form of backing store, such as a swap area on a disc. A region is a contiguous range of virtual addresses within an actor that maps a portion of a segment into its address space. Requests to read or to modify data within a region are converted by the virtual memory system into read or modify requests within the segment. ‘External mappers’ interact with the virtual memory system using a nucleus-to-mapper protocol to manage data represented by segments. Mappers also provide the needed synchronization to implement distributed shared memory. (For more details on the CHORUS V3 virtual memory system see Abrossimov et al.)
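In C terms, the relationship might be pictured as below; rgnMap() and its parameters are assumed names for the map-a-portion-of-a-segment operation described above, not the actual nucleus call.

    /* Hedged sketch of the segment/region abstractions. */
    typedef struct { unsigned long high, low; } UI;

    enum { PROT_READ = 1, PROT_WRITE = 2 };

    /* Map 'length' bytes of 'segment', starting at 'offset', into the
     * address space of 'actor'; returns the base of the new region. */
    extern void *rgnMap(UI actor, UI segment, unsigned long offset,
                        unsigned long length, int prot);

    void example(UI self, UI swapSegment)
    {
        /* Faults on addresses in [base, base + 64K) become read or
         * modify requests on the segment, served by its external mapper. */
        void *base = rgnMap(self, swapSegment, 0, 64 * 1024,
                            PROT_READ | PROT_WRITE);
        (void)base;
    }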
`
`Actor context
`
CHORUS V2 was built around a ‘pure’ message-passing model, in which strict protection was incorporated at the lowest level; all servers were implemented in protected user address spaces. This distinct separation enforced a clean, modular design of a subsystem. However, it also led to several problems:

• a UNIX subsystem based on CHORUS V2 required the use of user-level system call stubs and altered the memory layout of a process and, therefore, could never provide 100% binary compatibility;
• all device drivers were required to reside within the nucleus;
• context switching expense was prohibitively high.

The most fundamental enhancement made between CHORUS V2 and V3 was the introduction of the Supervisor Actor. Supervisor actors share the supervisor address space and their threads execute in a privileged machine state. Although they reside within the supervisor address space, supervisor actors are truly separate entities; they are compiled, link edited, and loaded independently of the nucleus and of each other.
The introduction of supervisor actors creates several opportunities for system enhancement in the areas of compatibility and performance. The ramifications of supervisor actors are discussed in depth below.
`
`
UNIX subsystem

As a consequence of these nucleus evolutions, the UNIX subsystem implementation has also evolved. In particular, full UNIX binary compatibility has been achieved. Internally, the UNIX subsystem makes use of new nucleus services, such as multi-threading and supervisor actors. The CHORUS V2 user-level UNIX system-call library has been moved inside the Process Manager and is now invoked directly by system-call traps.
Experience with the decomposition of UNIX System V for V2 showed, not surprisingly, that performing this modularization is difficult. Care must be taken to decompose the data structures and function along meaningful boundaries. Performing this decomposition is an iterative process. The system is first decomposed along broad functional lines. The data structures are then split accordingly, possibly impacting the functional decomposition.

Figure 3. CHORUS/MiX-V3 architecture (figure showing applications and utilities (sh, cc, ed, ...) and UNIX applications over a binary compatible interface, transparently distributed UNIX subsystem servers, and the generic microkernel)
`
EVOLUTION IN NUCLEUS SUPPORT FOR SUBSYSTEMS: SUPERVISOR ACTORS

Supervisor actors, as mentioned above, share the supervisor address space and their threads execute in a privileged machine state, usually implying, among other things, the ability to execute privileged instructions. Otherwise, supervisor actors are fundamentally very similar to regular user actors. They may create multiple ports and threads, and their threads access the same nucleus interface. Any user program can be run as a supervisor actor, and any supervisor actor which does not make use of privileged instructions or connected handlers (see below) can be run as a user actor. In both cases a recompilation of the program is not needed (a relink is required, however). Although they share the supervisor address space, supervisor actors are paged just as user actors are, and may be dynamically loaded and deleted.
`
`
`
`
Supervisor actors alone are granted direct access to the hardware event facilities. Using a standard nucleus interface, any supervisor actor may dynamically establish a handler for any particular hardware interrupt, system call trap or program exception. A connected handler executes as an ordinary subroutine, called directly from the corresponding low-level handler in the nucleus. Several arguments are passed, including the interrupt/trap/exception number and the processor context of the executing thread. The handler routine may take various actions, such as processing an event and/or awakening a thread in the actor. The handler routine then returns to the nucleus.
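The pattern is sketched below in C; intrConnect() and the helper calls are assumed names for the standard nucleus interface mentioned above, and only the handler-as-subroutine shape follows the text.

    /* Hedged sketch: a supervisor actor connecting an interrupt handler. */
    typedef struct { int number; void *frame; } EventContext;

    enum { DISK_IRQ = 14, K_HANDLED = 0 };  /* illustrative values */

    extern int  intrConnect(int irq, int (*handler)(EventContext *));
    extern void threadWakeUp(void *thread);
    extern void acknowledge_controller(void);

    static void *driver_thread;  /* created elsewhere in the actor */

    /* Called directly from the nucleus's low-level interrupt handler. */
    static int disk_handler(EventContext *ctx)
    {
        (void)ctx;                    /* processor context, if needed    */
        acknowledge_controller();     /* minimal work at interrupt level */
        threadWakeUp(driver_thread);  /* defer the rest to a thread      */
        return K_HANDLED;             /* then return to the nucleus      */
    }

    void driver_init(void)
    {
        intrConnect(DISK_IRQ, disk_handler);  /* dynamic: no nucleus rebuild */
    }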
`
Connected interrupt handlers allow device drivers to exist entirely outside of the nucleus, and to be dynamically loaded and deleted, with no loss in interrupt response or overall performance. For example, to demonstrate the power and flexibility of the CHORUS V3 nucleus we have constructed a user-mode File Manager that communicates using CHORUS IPC with a supervisor actor which manages a physical disc. Both the supervisor actor and the user-mode File Ma