`
`
`
`
`
`
`
`Computation Structures
`
`
`
`
`The MIT Electrical Engineering and Computer Science Series
`Harold Abelson and Gerald Jay Sussman with Julie Sussman, Structure and Interpretation
`of Computer Programs, 1985.
William McC. Siebert, Circuits, Signals, and Systems, 1986.
Berthold Klaus Paul Horn, Robot Vision, 1986.
Barbara Liskov and John Guttag, Abstraction and Specification in Program Development, 1986.
`Thomas H. Cormen, Charles E. Leiserson, and Ronald L. Rivest, Introduction to Algorithms,
`1990.
`
`Stephen A. Ward and Robert H. Halstead, Jr., Computation Structures, 1990.
`
`
`
`
`
`
`
`
`
`
`Stephen A. Ward
`Robert H. Halstead, Jr.
`
`
`
`Computation Structures
`
`
`The MIT Press
`Cambridge, Massachusetts London, England
`
`McGraw-Hill Book Company
`New York St. Louis San Francisco Montreal Toronto
`
`
`
`
`
`
`
`Second printing, 1991
This book is one of a series of texts written by faculty of the Electrical Engineering and
Computer Science Department at the Massachusetts Institute of Technology. It was edited
and produced by The MIT Press under a joint production-distribution arrangement with the
McGraw-Hill Book Company.
`
`Ordering Information:
`North America
`Text orders should be addressed to the McGraw-Hill Book Company. All other orders should
`be addressed to The MIT Press.
`
`Outside North America
All orders should be addressed to The MIT Press or its local distributor.
`
`©1990 by The Massachusetts Institute of Technology
All rights reserved. No part of this book may be reproduced in any form or by any elec-
tronic or mechanical means (including photocopying, recording, or information storage and
retrieval) without permission in writing from the publisher.
`This book was printed and bound by Halliday Lithograph in the United States of America.
`Library of Congress Cataloging-in-Publication Data
`
`Ward, Stephen A.
`Computation structures / Stephen A. Ward, Robert H. Halstead, Jr.
p. cm.—(The MIT electrical engineering and computer science series)
ISBN 0-262-23139-5 (MIT Press).—ISBN 0-07-068147-3 (McGraw-Hill)
1. Computers—Circuits. 2. Logic design. 3. Computer
architecture. I. Halstead, Robert H. II. Title. III. Series.
`
`TK7888.4.W37
`1989
`621.39’2—dc20
`
`89-12961
`CIP
`
`
`
`
`
`
`
`
`
`Contents
`
`Preface xv
`
`
`
1  The Digital Abstraction  1
1.1  Information and the Digital Abstraction  2
1.2  Representation of Discrete Variables  3
1.3  Combinational Devices  4
1.4  The Static Discipline: Logic Levels  5
1.5  Rate of Information Flow  7
1.6  Transitions and Validity  8
1.7  Logic Families  8
1.8  Digital-Circuit Implementation  9
1.9  Summary  24
1.10  Context  24
1.11  Problems  25
`
`
`
2  Binary Representations and Notation  33
2.1  Representation of Numbers  34
2.2  Floating-Point Representations  39
2.3  Other Representations  39
2.4  Hexadecimal Notation  40
2.5  *Unrepresentable Values  41
2.6  Error Detection and Correction  42
2.7  Context  44
2.8  Problems  44
`
`
`
`
`
`
`
3  Combinational Devices and Circuits  47
3.1  Boolean Functions and Truth Tables  47
3.2  Elementary Gate Circuits  51
3.3  Synthesis of Logic Circuits  54
3.4  Temporal Considerations in Combinational Circuits  61
3.5  Context  65
3.6  Problems  66
`
`
`
4  Sequences and State  75
4.1  Feedback and State  76
4.2  Latches, Flip-Flops, and Registers  78
4.3  Edge-Triggered Flip-Flops and Registers  83
4.4  Register Timing  85
4.5  Models of Sequential Circuits  86
4.6  Synchronization and State  93
4.7  Context  100
4.8  Problems  100
`
`
`
5  Synthesis of Digital Systems  105
5.1  Building Blocks for Logic Design  105
5.2  Regular Structures  119
5.3  Design Example: A Combinational Multiplier  130
5.4  Context  134
5.5  Problems  134
`
`
`
`
`
`
6  Finite-State Machines  143
6.1  Synthesis of Finite-State Machines  144
6.2  Synchronous FSM Circuit Models  146
6.3  States and Bits  147
6.4  *Equivalence of FSMs  148
6.5  *Regular Expressions and Nondeterministic FSMs  151
6.6  Context  154
6.7  Problems  155
`
`
`
`
`
7  Control Structures and Disciplines  171
7.1  Timing Disciplines  172
7.2  Degrees of Synchrony  173
7.3  Constraints on Control Structure  177
7.4  Synchronous Globally Timed Control  187
7.5  Synchronous Locally Timed Control  190
7.6  Asynchronous Locally Timed Control  191
7.7  Context  196
7.8  Problems  196
`
`
`
`
`
8  Performance Measures and Trade-offs  201
8.1  Pipelining  204
8.2  Systematic Pipeline Construction  210
8.3  Cost-Performance Trade-offs and Options  212
8.4  Implementations of Bubble Sort  213
8.5  More Efficient Sorting Algorithms  221
8.6  Summary  224
8.7  Context  225
8.8  Problems  226
`
`
`
`
`
`
9  Communication: Issues and Structures  237
9.1  Physical Limits and Constraints  237
9.2  Communication Buses  245
9.3  Serial Communication  259
9.4  Context  264
9.5  Problems  265
`
`
`
`
`
10  Interpretation  269
10.1  Turing Machines and Computability  269
10.2  Universality  272
10.3  Uncomputable Functions  273
10.4  Interpretation versus Compilation  275
10.5  Context  277
10.6  Problems  277
`
`
`
11  Microinterpreter Architecture  281
11.1  Data Paths versus Control  281
11.2  A Basic Data-Path Architecture  283
11.3  Typical Data-Path Subsystems and Uses  287
11.4  Control Subsystem  293
11.5  The Control Machine as an Interpreter  298
11.6  Context  299
11.7  Problems  299
`
`
`
`
`
`
12  Microprogramming and Microcode  303
12.1  Microcode Semantics  303
12.2  Symbolic Microprogramming  312
12.3  Microcoding Examples  319
12.4  Summary  325
12.5  Context  325
12.6  Problems  325
`
`
`
13  Single-Sequence Machines  335
13.1  Machine Language as an Abstraction  335
13.2  Gross Organization of the Single-Sequence Machine  336
13.3  Influences on Machine-Language Design  337
13.4  Implementation Considerations  353
13.5  The von Neumann Machine  356
13.6  Perspectives and Trends  362
13.7  Context  364
13.8  Problems  366
`
`
`
14  Stack Architectures: The S Machine  367
14.1  Basic Instructions  368
14.2  S-Machine Instruction Coding  371
14.3  *MAYBE Implementation  372
14.4  Compilation Techniques for Stack Machines  381
14.5  Flow of Control on the S Machine  385
14.6  Relative Addressing and Position-Independent Code  389
14.7  Stack Frames and Procedure Linkage  392
14.8  *Lexical-Scoping Support  400
14.9  Traps  408
14.10  Summary  411
14.11  Context  412
14.12  Problems  412
`
`
`
`
`
`
15  Register Architectures: The G Machine  427
15.1  Addressing Modes  428
15.2  The G Machine  434
15.3  *MAYBE Implementation  441
15.4  Other General-Register Architectures  450
15.5  Procedure Linkage  452
15.6  Register Allocation by Compilers  457
15.7  Traps  458
15.8  High-Performance Implementation Considerations  459
15.9  Summary  459
15.10  Context  460
15.11  Problems  461
`
16  Memory Architectures  471
16.1  Locality of Reference and Other Regularities of Memory Access  473
16.2  Interleaved Memory Modules  475
16.3  Instruction Prefetch  478
16.4  Top-of-Stack Cache  478
16.5  Multilevel Memory  479
16.6  Cache Memory  480
16.7  Paging and Virtual Memory  486
16.8  Summary  496
16.9  Context  496
16.10  Problems  497
`
`
`
17  Reduced-Instruction-Set Computers  513
17.1  Basic Data Pipeline  514
17.2  Pipeline Timing  516
17.3  Interface Issues  519
17.4  Instruction-Stream Constants  520
17.5  Instruction Fetch and Branch Control  522
17.6  Main-Memory Timing Conflict  525
17.7  Impact of Lengthened Pipeline  528
17.8  Alternatives and Issues  529
17.9  Summary  530
17.10  Context  530
17.11  Problems  531
`
`
`
18  Processes and Processor Multiplexing  533
18.1  The Process Abstraction  534
18.2  Process Management  537
18.3  Operating-System Services  539
18.4  Memory Mapping  544
18.5  Protection  548
18.6  Summary  549
18.7  Context  550
18.8  Problems  550
`
`
`
19  Process Synchronization  559
19.1  Process-Synchronization Constraints  559
19.2  Semaphores and Precedence Constraints  560
19.3  Semaphores for Precedence  560
19.4  Semaphores for Mutual Exclusion  561
19.5  Producer-Consumer Synchronization  562
19.6  Binary Semaphores  565
19.7  Implementing Semaphores  565
19.8  Deadlock  573
19.9  Summary  576
19.10  Context  576
19.11  Problems  577
`
`
`
20  Interrupts, Priorities, and Real Time  587
20.1  Mutual-Exclusion Requirements  590
20.2  Enable/Disable Mechanism  591
20.3  Interrupts and Stack Discipline  592
20.4  Implementation of Interrupts  593
20.5  Weak Priorities  594
20.6  Processor Priorities  595
20.7  Scheduling and Priority Assignments  596
20.8  Summary  599
20.9  Context  600
20.10  Problems  601
`
`
`
21  Architectural Horizons  605
21.1  Models of Computation  606
21.2  The Multiprocessor Challenge  608
21.3  Taxonomy of Multiprocessors  612
21.4  Hiding Parallelism: Concurrent SSM Execution  613
21.5  Data Parallelism  614
21.6  Other SIMD Approaches  615
21.7  Shared-Memory Multiprocessors  616
21.8  Distribution of Subcomputations  619
21.9  Summary  621
21.10  Context  622
`
`
`
A1  The C Language: A Brief Overview  625
A1.1  Simple Data Types and Declarations  625
A1.2  Expressions  626
A1.3  Statements and Programs  628
A1.4  Arrays and Pointers  631
A1.5  Structures  633
`
`
`
A2  MAYBE Microarchitecture Summary  637
A2.1  Control-ROM Bit Fields  637
A2.2  Circuit Details  640
A2.3  Microinstruction Set  647
A2.4  Control-ROM Listing  652
A2.5  Macro Definitions for Microinstructions  667
`
`
`
`
`
`
A3  MAYBE Microcode Support for Interpreters  671
A3.1  Switch Console  671
A3.2  Microsubroutines  686
`
`
`
A4  S Machine Summary  695
A4.1  Instruction-Set Details  695
A4.2  Language Definition  700
A4.3  Sample S-Machine Program  703
A4.4  Complete MAYBE S-Machine Microcode  704
`
`
`
A5  G Machine Summary  723
A5.1  Instruction-Set Details  723
A5.2  Language Definition  725
A5.3  Sample G-Machine Program  728
A5.4  Complete MAYBE G-Machine Microcode  728
`
`Bibliography 749
`
`
`Index 761
`
`
`
`
`
`
`Preface
`
`This text approaches the field of digital systems architecture with a particular bias,
one kindled during years of observation of M.I.T. students and speculation about the
`factors that distinguish outstanding engineers from the merely very competent ma-
`jority. We bring to the project a conviction that the distinction reflects attitudes and
`perspectives that are at most subtle by-products of the usual technical education:
`subliminal lessons occasionally administered to students by virtue of accidental
`discovery rather than deliberate pedagogy. The organization and approach of Com-
`putation Structures is geared to the explicit cultivation of these characteristics in
`every student and reader.
`
`Scope and Mission
As computer systems become increasingly complex, their study inevitably re-
lies upon more specific areas of specialization. We compartmentalize specific
`technologies—logic elements, processors, compilers, operating systems—as sub-
`disciplines to be treated by separate courses, texts, and intellectual tools. Interfaces
`between the subdisciplines are powerful engineering abstractions that allow, for ex-
`ample, the designer of a processor to deal entirely in the domain of digital logic,
`naively exploiting the electronic circuitry in the abstraction below and providing
`interpretive services to software in the abstraction above.
`The abstractions work sufficiently well that our curricula (and, for that matter,
`our job descriptions) accommodate specialists whose understanding of computer
`systems is cripplingly incomplete. We confer computer science degrees on theo-
`rists who may be helpless when programming is required, on hardware designers
to whom the high-level structure of the system to which they contribute is a mystery,
and on programmers who lack a clear understanding of how or why their programs
`make real things happen. Often what passes for technical breadth takes the form
`of multiple specialties: Students exhibit pockets of local facility, unable to connect
`them into an effective understanding of the system as a whole. Such people can
`and do perform useful functions, but their potential for creativity is limited by their
`acceptance of the boundaries of their specialties. They shy from the challenge to
`venture beyond familiar approaches, to reshape problems, to develop new inter-
`faces and revise existing ones;
`they accept the mysterious rather than unveiling
`it, oversensitized to their own limitations. They are, in effect, imprisoned by the
`abstractions they have been taught.
Outstanding students, in contrast, somehow develop a perspective that illuminates
`the interfaces between technologies, rather than the technologies themselves, as the
`
most important structural elements of their discipline. They view the interfaces with
respect but not reverence; they examine both sides of an abstraction, treating it as
`an object of study rather than a boundary between the known and the intractably
`mystical. They have discovered the power of their own universality; typically they
`have built complete systems, often with meager resources, and made them work.
`They have the sophistication, and most importantly the self-assurance, to explore
`new abstractions and to appreciate the role and costs of existing ones.
Computation Structures is intended to produce renaissance engineers of this lat-
`ter category. To this end, it deliberately telescopes much of computer science into
`a bottom-up progression that builds simple electronics into representative computer
`systems. The student is catapulted through one layer after another of implemen-
`tation technology, an experience that accentuates technological interfaces and the
`structure induced by them. The breadth of coverage of the book reflects our com-
`pulsion to demystify its subject matter. We are unwilling to ask students to accept a
`technology on faith, or to postpone its penetration to some more advanced course.
`The spectrum of implementation technology is laid before them as a challenge to
`sample, to uncover, to discover their capacity to master its entirety.
The ultimate mission of Computation Structures is not so much to convey facts as
`to catalyze personal discoveries by each student: the ability to master a wide range
`of technical details, the self-assurance to dive through a technological boundary,
the power to grab a wire-wrap gun or CAD tool or body of code and create new
`structures. The considerable volume of technical detail is present largely to serve
`this goal; it is not the subject of the book, but rather a tool by which we convey
`its more important lesson.
`
`The Text
`
`Computation Structures does not presume to replace the many excellent texts that
focus on established subtopics such as logic design or operating systems; nor is it
`intended for an elite audience capable of digesting four semesters of course work
`in a single term.
`It covers topics that every engineer—certainly every computer
scientist—should be aware of, in sufficient detail to establish a concrete connection
`between each topic and practical reality. In many cases that connection is provided
`by the MAYBE computer, a simple microarchitecture that we use as a running
example throughout the text. The MAYBE is presented in sufficient detail to allow
its actual construction and operation, an activity required of M.I.T. students; even
`in the absence of such experience, it provides a subject for close scrutiny and a
`context for concrete discussions and questions.
`The text is introductory, presuming no formal background in digital systems;
`however, it generally assumes a level of technical sophistication consistent with
`that of an undergraduate engineering student. The text makes repeated connec-
`tions to related technical areas. These should enrich the presentation for students
`with appropriate backgrounds, but are inessential to the sequel; for example, early
`chapters include a few circuit diagrams whose appreciation requires an Ohm’s-law
`level of circuit sophistication. Programming ideas and constructs are introduced
`with a lack of fanfare that presumes some previous exposure to computers and
`programming on the part of the student. A subset of the C language, used for sam-
`ple programs, is presented in a terse appendix; we have found that students easily
`attain a level of C fluency that allows them to read and understand the examples.
`The text does not make extensive use of real-world case studies, relying instead
`on the more coherent framework designed into our real but parochial example
`machines. Connections to the technical literature and to industrial practice are
`identified primarily in sections entitled Context; these provide pointers for deeper
investigation of technical issues, links to adjacent fields, and occasional historical
`perspective.
`
`Role at M.I.T.
`
Computation Structures is used at M.I.T. as the text for 6.004, a one-term, 15-
`hour-per-week sophomore “core” course required of all electrical engineering and
`computer science undergraduates. Three of the fifteen student hours are allocated
`to laboratory activities; the remainder are spent in classes and homework. Typical
`students have previously taken 6.001, a LISP-based programming course that uses
`Abelson and Sussman [1985] as a text, and 6.002, an introductory circuits course.
`The role of 6.002 is primarily to prepare the student for the 6.004 laboratory
`component, in which (for example) familiarity with an oscilloscope is assumed.
6.001 provides a first exposure to many ideas that recur in 6.004—programs, stacks,
`memory locations—albeit in a quite different context and often following a different
`idiom.
6.004 is a fast-moving course, stretching to accommodate the subject in its tightly
`packed single-term syllabus. Major topics are dealt with in about 25 lectures,
`which generally follow the order of the text and average one or two lectures per
`chapter. Substantial pruning is required to provide coherent if somewhat superficial
`coverage in lectures: While the major topics of the text are each dealt with in some
`depth, many of their ramifications and issues raised in the text are not addressed
`explicitly. Many of the optional (starred) sections are omitted, although the mix
`varies somewhat from one term to the next.
`The big picture emerging from the lectures is embellished by smaller section
`meetings held twice weekly, by homework, and by laboratory assignments. These
`components provide a vehicle for sampling underlying technical detail, stimulating
`each student to relate the lecture topic with at least one example of practical
`reality. The cultivation of this connection is a key element of the course and
`this text. Rather than presenting a stack frame in abstract terms, for example, we
encourage each student to come to grips with the entire set of nested implementation
`technologies that make it real—machine language, microcode, circuit diagrams,
`logic gates. Frequent probes to concrete reality reinforce the two major lessons
`of the course:
`first, the value of abstractions as a tool for structuring complex
`systems; and second, the capacity of the individual student to master any aspect of
`a system he or she cares to focus on.
`
`Laboratory Work
`These key lessons of Computation Structures are reinforced by the laboratory com-
`ponent of the course. Each student is required to construct, debug, and program
a working MAYBE computer from simple components (at the level of ROMs and
`registers). Several machine architectures—stack and general-register machines—
`are implemented and programmed using a common microarchitecture.
`Students’ computers are constructed from reusable take-home kits. The kits
`have a selection of logic components as well as integral prototyping board, power
`supply, and very primitive input/output provisions. Workstations (with a variety of
`support programs) and oscilloscopes are available in the laboratory. Although the
wiring and much of the debugging can be performed by students at home, final
`checkout and subsequent program development require the use of facilities in the
`lab.
`
`The laboratory is structured as about eight assignments, each directed at a fairly
`tightly constrained set of goals. The directedness of the assignments differentiates
the activity from more advanced “project” laboratories, which emphasize initiative
and creativity; however, each assignment contains some nugget of creative chal-
lenge, such as the design of a machine instruction. A major creative outlet takes the
`form of an optional design contest, held at the end of each term, in which students
`are given a relatively free hand to modify their machines to improve performance
on a benchmark program that is not revealed until the day of the contest. Each en-
`trant is constrained to use only parts from the lab kit, although we allow n-student
`teams to combine n lab kits, normalizing the performance of such entries by a
factor of n. (The best performance by this metric has come from two-kit entries.)
`In order to make the student computer construction tractable and reasonably
`fail-safe, we have developed a set of printed-circuit modules (compatible with pro-
`toboard construction techniques) and a staged approach to construction whose early
`phases involve the use of a kernel subset of the machine to debug its incremental
`improvements. This methodology, involving the use of bootstrapping and scaffold-
`ing to develop complex systems, is among the major lessons of the course; it is
`one we have found no way to teach without hands-on experience.
`The integration of a “complete” laboratory with a broad-spectrum architecture
course makes an ambitious package, but one that produces students whose technical
`maturity and self-assurance is based on that ultimate educator: They have each built
`a computer and made it work. We enthusiastically recommend the approach.
While we value the omniware breadth of the laboratory, much of its pedagogy
can be captured by somewhat less extreme programs. The majority of the labo-
ratory activity involves coding at some level, and can be carried out effectively
`using simulation software rather than hardware implementations. Several simula-
`tors and related system software (available both for UNIX workstations and for
`PCs) provide a convenient development test bed for microcode and software, even
`where the alternative of a hardware MAYBE machine exists. Since the architec-
`tural personality of our machine is largely dictated by the contents of a control
`ROM,the range of architectural options that can be explored using reprogramming
`and simulation techniques is relatively unconstrained. An effective laboratory can
`thus be fashioned using only commonly available computing resources.
`In the absence of such resources, the complete implementations presented here
`can still be explored in a kind of ersatz laboratory whose medium is paper rather
`than machinery, that is, in assignments that probe, modify, and analyze the given
structures. This approach was used at M.I.T. in years prior to the 6.004 laboratory;
`while the latter has clear advantages, a paper laboratory devoted to all levels of a
`single coherent example accomplishes many of the demystification and perspective-
`building goals of our current course.
`
`Alternative Paths through the Text
`The subject of the text lends itself well to subsetting by suppression of detail, and
`we expect individual courses and readers to deemphasize or omit selected subtopics
`to suit their needs and backgrounds. The (widely varying) dependencies among
`topics are mapped crudely by the following graph:
`
[Figure: chapter dependency graph]
`Relatively inessential dependencies are shown as dotted edges. The level of detail
`shown in the diagram ignores certain subtleties, often bundling essential and op-
tional issues together as single nodes. Some additional guidance to topic selection
`is provided by observing sections marked with an asterisk, indicating digressions
`that can be skipped without compromising subsequent (nonasterisked) coverage.
`Topic selection and path through the text may be adjusted to accommodate a
`number of different course needs. A two-term undergraduate sequence based on
the text would allow careful and relaxed treatment of its entire coverage and, given
`sufficient student time, is an attractive alternative to the single-term firehose-style
course offered at M.I.T. An introductory graduate course might augment the text by
`selected readings from the literature; candidates appear in the Context sections and
`in the bibliography. Certain of the exercises appearing at the end of each chapter
are marked with one or two asterisks to indicate that they may be challenging for
`an introductory undergraduate course. A single asterisk suggests that the marked
`problem requires more than routine application of the principles of the chapter;
`double asterisks identify problems that should challenge graduate students.
`
`Acknowledgments
`Computation Structures reflects the ideas, influence, and efforts of literally dozens
`of people who have contributed to 6.004 and its ancestors over the years. This
`lineage begins with the seminal course 6.032, which was developed in the late
`1960s by Jack Dennis and subsequently evolved under the auspices of Jon Allen
and Bob Gallager. The book owes its name and much of its sense of direction
`to these early roots; it conveys, to a considerable extent, lessons that have been
`taught its authors by their predecessors.
`The current course began to take shape during the spring of 1980, when the
`authors collaborated with Dave Reed in an effort to modernize 6.032 and its course
`notes. Subsequent years saw a succession of significant revisions to the course, each
`accompanied by a rewrite of the notes, exploring a variety of pedagogic formulae.
`The birth of the MAYBE in 1984 marks the point at which the book’s current
`content and focus began to jell; the efforts of Dan Nussbaum and Chris Terman
`played a significant role during this formative period. Subsequent maturation of
`the laboratory work included substantial contributions to the MAYBE by Ziggy
`Blair, Ken Mackenzie, Joe Morgan, and Henry Houh.
`During this long history, the course and its notes were scrutinized by dozens of
`course staff and thousands of students, whose contributions to the surviving text
`are real but generally so diffused as to be unidentifiable. Exceptions, whose impact
`is more conspicuous, include Tom Anderson, Andy Ayers, Dave Goddeau, Mark
`Johnson, Rob Kassel, Ken Mackenzie, Rishiyur Nikhil, Dan Nussbaum, Jon Powell,
`Gill Pratt, Jerry Saltzer, Chris Terman, and John Wolfe. Valuable discussions,
`critique, and feedback on the text have been provided by Anant Agarwal, Bill
`Dally, Jacob Katzenelson, Jim Kirtley, Al Mok, Bruce Musicus, Steve Seda, Gerry
Sussman, Rich Zippel, and many others.
`Particular credit is due John Wolfe for the massive effort he devoted to conversion
of the source manuscript to TeX, to Eva Tervo and Sharon Thomas for the help
`they lent and the aggravation they sustained in preparing the manuscript, to Larry
`Cohen for his skilled and tireless editing, and to the Harris Corporation, whose
`HCX-9 was used to format the book with blazing speed. And to Don Knuth,
`that premier computer scientist whose contributions have been varied, prolific, and
`influential, we are indebted for TeX, which deserves simultaneous recognition as the
`text formatter of choice and the most idiosyncratic programming language known
`to us. It is impossible to account for the prenatal debt incurred by the book to the
authors’ wives, Debbie and Louise, whose loving encouragement nursed it through
`its long gestation.
The authors are grateful to M.I.T. for providing an environment that led to
`Computation Structures, to the Department of Electrical Engineering and Computer
`Science for support of the course development surrounding the text, and to Hewlett-
`Packard and Tektronix for their donations to the 6.004 laboratory. Finally, a decade
`of 6.004 and 6.032 students deserve sympathy and thanks for their role as subjects
`of a long succession of educational experiments as well as credit for the result.
`They, after all, are more than an influence; they are the reason.
`
`
`
`
`
`
`18
`
`Processes and
`
`Processor Multiplexing
`
In this chapter we move beyond the single-sequence machine to consider one further
`level of abstraction from which we can view computing systems. As with all our
`levels of abstraction, it is motivated by the twin desires to provide a more congenial
`environment for the user of the abstraction (the programmer) and to allow a greater
`degree of flexibility to the implementor of the abstraction.
`The single-sequence machine is an abstraction powerful enough to enable the
expression of all manner of computational algorithms, and its emphasis on strict
`sequentiality of operations avoids a potential source of complexity and confusion
`for the programmer. Nevertheless, when a digital system must interact with the
`physical world, when improved performance may be achievable by carrying out
`several operations simultaneously, and for other organizational reasons, it is useful
`to be able to deal conveniently with concurrency.
`For example, a digital system might be called upon to serve as part of a tele-
`phone switching system, connected to a multitude of individual customers. At any
`particular time, several customer lines could be in various states of activity. For
`another example, a system could be connected to several printers and be required
`to keep each of them busy printing the appropriate data.
`In each of these cases,
`an ad hoc program could be written for a single-sequence machine to do the job.
`Such a program would have to record explicitly the status of the various devices
or activities to be managed and shift its attention from one to another as required.
`Writing such a program for a single-sequence machine has much in common with
`trying to express an inherently recursive algorithm in FORTRAN: It is possible,
but it requires explicit management of information that would be handled implicitly
`by a more appropriate abstraction.
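As a purely illustrative sketch, such an ad hoc program might amount to a polling loop that carries each device's status in explicit variables; the device names, status codes, and helper routines below are invented for this example and are not drawn from any machine described in this text.

    /* Illustrative sketch only: an ad hoc single-sequence polling loop.
       Device names, status codes, and helper routines are hypothetical. */

    #define NPRINTERS 3

    enum printer_status { IDLE, PRINTING, OUT_OF_PAPER };

    enum printer_status status[NPRINTERS];  /* explicit record of each device's state */
    int next_line[NPRINTERS];               /* explicit record of each job's progress */

    extern int  printer_ready(int p);                /* assumed hardware primitives */
    extern void printer_send_line(int p, int line);

    void service_printers(void)
    {
        int p;
        for (;;) {                                   /* shift attention from one    */
            for (p = 0; p < NPRINTERS; p++) {        /* device to the next, forever */
                if (status[p] == PRINTING && printer_ready(p)) {
                    printer_send_line(p, next_line[p]);
                    next_line[p] += 1;
                }
            }
        }
    }

All of the bookkeeping that a process abstraction would keep implicit (which device the program is attending to, and how far each job has progressed) appears here as explicit program variables.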
An alternative to employing a single-sequence machine for one of these appli-
`cations would be to use several processors, one for each task to be performed (for
`example, one for each telephone line or each printer). Then a relatively simple
`single-sequence program on each processor would equip the system to perform its
`function. Given the previously mentioned advantages of expressing algorithms as
single-sequence programs, this approach has much to recommend it; in fact, it may
`be viewed as the most conservative extension of the single-sequence abstraction to
situations that the unadorned single-sequence paradigm does not handle well.
`However well it may work and however pleasing its structure,
`though, such
`a multiprocessor system is rarely the most cost-effective solution to a problem.
`Individual processors may be idle much of the time, waiting for external events.
`Moreover, dedicating a separate processor to each task leads to inflexibility: A
`change in the system’s specification that adds new functionality, for example, may
`require additional processors as well as revised software.
`Some way is needed to take advantage of the conceptual simplification afforded
`by the multiprocessor solution without our being forced to dedicate a physical
`processor to each task. This is accomplished by the process abstraction. We use
`the term process to describe the action of a virtual processor, which executes a
program as though that program had a physical processor entirely to itself. Instead
`of actually devoting a physical processor to each process, though, we can program a
`single physical processor to multiplex itself among several processes, allowing the
`implementation of systems comprising fewer physical processors than the number
`of processes they contain.
An important special case, and a basis for building more complex systems, is
the implementation of a group of processes on just one physical processor. This
`chapter explores the implementation of processes on single-sequence machines. As
`it happens, processes are useful in many more situations than the rather elementary
`scenarios just discussed; some of these applications are described, along with the
`additional mechanisms needed to support them.
`
`18.1 The Process Abstraction
`
`Along with its program and data in memory, a physical processor has a certain
`amount of internal state information, such as a stack pointer, program counter, and
PSW. In order to implement a virtual processor, copies of this information must
`be supplied, and an interpreter must emulate the effects of the virtual machine in-
structions on the contents of these registers.
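To make the bookkeeping concrete, here is a minimal sketch in C of one way such per-process state might be recorded and switched. The names (ProcState, switch_process, and the two state-transfer helpers) are invented for this illustration; they are not part of the MAYBE microarchitecture or of the mechanisms developed later in the chapter.

    /* Illustrative sketch only: a per-process record of saved processor
       state and a simple round-robin switch between such records.  All
       names here are hypothetical, not drawn from the MAYBE machine.   */

    #define NPROCS 3

    typedef struct {
        unsigned int pc;        /* saved program counter       */
        unsigned int sp;        /* saved stack pointer         */
        unsigned int psw;       /* saved processor status word */
    } ProcState;

    ProcState proc[NPROCS];     /* one saved state per virtual processor    */
    int current = 0;            /* process now using the physical processor */

    /* Stand-ins for whatever machine-level mechanism actually reads
       and writes the real registers (assumed to exist elsewhere).   */
    extern void save_physical_state(ProcState *p);
    extern void load_physical_state(ProcState *p);

    void switch_process(void)
    {
        save_physical_state(&proc[current]);  /* remember where this process was    */
        current = (current + 1) % NPROCS;     /* pick the next process, round robin */
        load_physical_state(&proc[current]);  /* resume it where it left off        */
    }

Each call to switch_process thus lets one physical processor carry forward several virtual processors, each unaware that it does not own the machine.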