OPERATING
SYSTEM
CONCEPTS

Fifth Edition

Abraham Silberschatz
Bell Labs

Peter Baer Galvin
Corporate Technologies, Inc.

ADDISON-WESLEY
An imprint of Addison Wesley Longman, Inc.

Reading, Massachusetts * Harlow, England * Menlo Park, California
Berkeley, California * Don Mills, Ontario * Sydney
Bonn * Amsterdam * Tokyo * Mexico City

Editor-in-Chief: Lynne Doran Cote
Acquisitions Editor: Maite Suarez-Rivas
Associate Editor: Deborah Lafferty
Production Editor: Patricia A. O. Unubun
Design Editor: Alwyn R. Velasquez
Manufacturing Coordinator: Judy Sullivan
Cover Illustration: Susan Cyr

Access the latest information about Addison-Wesley titles from our World Wide Web site: http://www.awl.com/cseng

Many of the designations used by manufacturers and sellers to distinguish their products are claimed as trademarks. Where those designations appear in this book, and the publisher was aware of a trademark claim, the designations have been printed in initial caps or in all caps.

The programs and applications presented in this book have been included for their instructional value. They have been tested with care but are not guaranteed for any particular purpose. Neither the publisher nor the author offers any warranties or representations, nor do they accept any liabilities with respect to the programs or applications.

Reprinted with corrections, February 1998.

Copyright © 1998 by Addison Wesley Longman, Inc.

All rights reserved. No part of this publication may be reproduced, or stored in a database or retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, or any other media embodiments now known or hereafter to become known, without the prior written permission of the publisher. Printed in the United States of America.

Library of Congress Cataloging-in-Publication Data

Silberschatz, Abraham.
  Operating system concepts / Abraham Silberschatz, Peter Galvin. — 5th ed.
    p. cm.
  Includes bibliographical references and index.
  ISBN 0-201-59113-8
  1. Operating systems (Computers)  I. Galvin, Peter B.  II. Title.
  QA76.76.O63S5583  1998
  005.4'3—dc21                                               97-28556
                                                                  CIP

ISBN 0-201-59113-8
3 4 5 6 7 8 9 10 MA 01 00 99 98

Chapter 4

PROCESSES

Early computer systems allowed only one program to be executed at a time. This program had complete control of the system, and had access to all of the system's resources. Current-day computer systems allow multiple programs to be loaded into memory and to be executed concurrently. This evolution required firmer control and more compartmentalization of the various programs. These needs resulted in the notion of a process, which is a program in execution. A process is the unit of work in a modern time-sharing system.

The more complex the operating system, the more it is expected to do on behalf of its users. Although its main concern is the execution of user programs, it also needs to take care of various system tasks that are better left outside the kernel itself. A system therefore consists of a collection of processes: operating-system processes executing system code, and user processes executing user code. All these processes can potentially execute concurrently, with the CPU (or CPUs) multiplexed among them. By switching the CPU between processes, the operating system can make the computer more productive.

4.1 ■ Process Concept

One hindrance to the discussion of operating systems is the question of what to call all the CPU activities. A batch system executes jobs, whereas a time-shared system has user programs, or tasks. Even on a single-user system, such as MS-DOS and Macintosh OS, a user may be able to run several programs at one time: one interactive and several batch programs. Even if the user can execute only one program at a time, the operating system may need to support its own internal programmed activities, such as spooling. In many respects, all of these activities are similar, so we call all of them processes.

The terms job and process are used almost interchangeably in this text. Although we personally prefer the term process, much of operating-system theory and terminology was developed during a time when the major activity of operating systems was job processing. It would be misleading to avoid the use of commonly accepted terms that include the word job (such as job scheduling) simply because the term process has superseded it.

4.1.1 The Process

Informally, a process is a program in execution. The execution of a process must progress in a sequential fashion. That is, at any time, at most one instruction is executed on behalf of the process.

A process is more than the program code (sometimes known as the text section). It also includes the current activity, as represented by the value of the program counter and the contents of the processor's registers. A process generally also includes the process stack, containing temporary data (such as subroutine parameters, return addresses, and temporary variables), and a data section containing global variables.

We emphasize that a program by itself is not a process; a program is a passive entity, such as the contents of a file stored on disk, whereas a process is an active entity, with a program counter specifying the next instruction to execute and a set of associated resources.

Although two processes may be associated with the same program, they are nevertheless considered two separate execution sequences. For instance, several users may be running copies of the mail program, or the same user may invoke many copies of the editor program. Each of these is a separate process, and, although the text sections are equivalent, the data sections will vary. It is also common to have a process that spawns many processes as it runs. This issue will be further discussed in Section 4.4.

Figure 4.1   Diagram of process state.

4.1.2 Process State

As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. Each process may be in one of the following states:

• New: The process is being created.
• Running: Instructions are being executed.
• Waiting: The process is waiting for some event to occur (such as an I/O completion or reception of a signal).
• Ready: The process is waiting to be assigned to a processor.
• Terminated: The process has finished execution.

These names are arbitrary, and vary between operating systems. The states that they represent are found on all systems, however. Certain operating systems also distinguish among more finely delineated process states. It is important to realize that only one process can be running on any processor at any instant. Many processes may be ready and waiting, however. The state diagram corresponding to these states is presented in Figure 4.1.

4.1.3 Process Control Block

Each process is represented in the operating system by a process control block (PCB)—also called a task control block. A PCB is shown in Figure 4.2.

Figure 4.2   Process control block.

A PCB contains many pieces of information associated with a specific process, including these:

• Process state: The state may be new, ready, running, waiting, halted, and so on.

• Program counter: The counter indicates the address of the next instruction to be executed for this process.

• CPU registers: The registers vary in number and type, depending on the computer architecture. They include accumulators, index registers, stack pointers, and general-purpose registers, plus any condition-code information. Along with the program counter, this state information must be saved when an interrupt occurs, to allow the process to be continued correctly afterward (Figure 4.3).

• CPU scheduling information: This information includes a process priority, pointers to scheduling queues, and any other scheduling parameters. (Chapter 5 describes process scheduling.)

• Memory-management information: This information may include such information as the value of the base and limit registers, the page tables, or the segment tables, depending on the memory system used by the operating system (Chapter 8).

• Accounting information: This information includes the amount of CPU and real time used, time limits, account numbers, job or process numbers, and so on.

• I/O status information: The information includes the list of I/O devices (such as tape drives) allocated to this process, a list of open files, and so on.

Figure 4.3   Diagram showing CPU switch from process to process.

The PCB simply serves as the repository for any information that may vary from process to process.

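In concrete terms, a PCB is just a record holding such fields. The following C sketch is only an illustration of the idea; the field names, types, and sizes are assumptions chosen for brevity, and a real operating system would store many additional, system-specific fields.

    enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

    #define MAX_OPEN_FILES 20   /* illustrative limit */

    struct pcb {
        enum proc_state state;          /* process state                        */
        int             pid;            /* process number                       */
        unsigned long   pc;             /* saved program counter                */
        unsigned long   regs[16];       /* saved CPU registers (count varies)   */
        int             priority;       /* CPU-scheduling information           */
        unsigned long   base, limit;    /* memory-management information        */
        unsigned long   cpu_time_used;  /* accounting information               */
        int             open_files[MAX_OPEN_FILES];  /* I/O status information  */
        struct pcb     *next;           /* link field used by scheduling queues */
    };
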
4.2 ■ Process Scheduling

The objective of multiprogramming is to have some process running at all times, to maximize CPU utilization. The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. For a uniprocessor system, there will never be more than one running process. If there are more processes, the rest will have to wait until the CPU is free and can be rescheduled.

4.2.1 Scheduling Queues

As processes enter the system, they are put into a job queue. This queue consists of all processes in the system. The processes that are residing in main memory and are ready and waiting to execute are kept on a list called the ready queue. This queue is generally stored as a linked list. A ready-queue header will contain pointers to the first and last PCBs in the list. Each PCB has a pointer field that points to the next process in the ready queue.

There are also other queues in the system. When a process is allocated the CPU, it executes for a while and eventually quits, is interrupted, or waits for the occurrence of a particular event, such as the completion of an I/O request. In the case of an I/O request, such a request may be to a dedicated tape drive, or to a shared device, such as a disk. Since there are many processes in the system, the disk may be busy with the I/O request of some other process. The process therefore may have to wait for the disk. The list of processes waiting for a particular I/O device is called a device queue. Each device has its own device queue (Figure 4.4).

Figure 4.4   The ready queue and various I/O device queues.

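Both the ready queue and the device queues can be maintained with this simple linked-list scheme. The following C sketch, which assumes the illustrative struct pcb shown earlier (with its next field), shows how a queue header with head and tail pointers might be manipulated; it is not the code of any particular kernel.

    struct queue {
        struct pcb *head;               /* first PCB in the queue */
        struct pcb *tail;               /* last PCB in the queue  */
    };

    /* Append a PCB to the tail of a queue. */
    void enqueue(struct queue *q, struct pcb *p)
    {
        p->next = NULL;
        if (q->tail == NULL)            /* queue was empty */
            q->head = q->tail = p;
        else {
            q->tail->next = p;
            q->tail = p;
        }
    }

    /* Remove and return the PCB at the head, or NULL if the queue is empty. */
    struct pcb *dequeue(struct queue *q)
    {
        struct pcb *p = q->head;
        if (p != NULL) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }
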
A common representation for a discussion of process scheduling is a queueing diagram, such as that in Figure 4.5. Each rectangular box represents a queue. Two types of queues are present: the ready queue and a set of device queues. The circles represent the resources that serve the queues, and the arrows indicate the flow of processes in the system.

A new process is initially put in the ready queue. It waits in the ready queue until it is selected for execution (or dispatched) and is given the CPU. Once the process is allocated the CPU and is executing, one of several events could occur:

• The process could issue an I/O request, and then be placed in an I/O queue.
• The process could create a new subprocess and wait for its termination.
• The process could be removed forcibly from the CPU, as a result of an interrupt, and be put back in the ready queue.

In the first two cases, the process eventually switches from the waiting state to the ready state, and is then put back in the ready queue. A process continues this cycle until it terminates, at which time it is removed from all queues and has its PCB and resources deallocated.

Figure 4.5   Queueing-diagram representation of process scheduling.

4.2.2 Schedulers

A process migrates between the various scheduling queues throughout its lifetime. The operating system must select, for scheduling purposes, processes from these queues in some fashion. The selection process is carried out by the appropriate scheduler.

In a batch system, there are often more processes submitted than can be executed immediately. These processes are spooled to a mass-storage device (typically a disk), where they are kept for later execution. The long-term scheduler (or job scheduler) selects processes from this pool and loads them into memory for execution. The short-term scheduler (or CPU scheduler) selects from among the processes that are ready to execute, and allocates the CPU to one of them.

The primary distinction between these two schedulers is the frequency of their execution. The short-term scheduler must select a new process for the CPU quite frequently. A process may execute for only a few milliseconds before waiting for an I/O request. Often, the short-term scheduler executes at least once every 100 milliseconds. Because of the short duration of time between executions, the short-term scheduler must be very fast. If it takes 10 milliseconds to decide to execute a process for 100 milliseconds, then 10/(100 + 10) = 9 percent of the CPU is being used (wasted) simply for scheduling the work.

The long-term scheduler, on the other hand, executes much less frequently. There may be minutes between the creation of new processes in the system. The long-term scheduler controls the degree of multiprogramming (the number of processes in memory). If the degree of multiprogramming is stable, then the average rate of process creation must be equal to the average departure rate of processes leaving the system. Thus, the long-term scheduler may need to be invoked only when a process leaves the system. Because of the longer interval between executions, the long-term scheduler can afford to take more time to decide which process should be selected for execution.

It is important that the long-term scheduler make a careful selection. In general, most processes can be described as either I/O bound or CPU bound. An I/O-bound process is one that spends more of its time doing I/O than it spends doing computations. A CPU-bound process, on the other hand, is one that generates I/O requests infrequently, using more of its time doing computation than an I/O-bound process uses. It is important that the long-term scheduler select a good process mix of I/O-bound and CPU-bound processes. If all processes are I/O bound, the ready queue will almost always be empty, and the short-term scheduler will have little to do. If all processes are CPU bound, the I/O waiting queue will almost always be empty, devices will go unused, and again the system will be unbalanced. The system with the best performance will have a combination of CPU-bound and I/O-bound processes.

On some systems, the long-term scheduler may be absent or minimal. For example, time-sharing systems often have no long-term scheduler, but simply put every new process in memory for the short-term scheduler. The stability of these systems depends either on a physical limitation (such as the number of available terminals) or on the self-adjusting nature of human users. If the performance declines to unacceptable levels, some users will simply quit, and will do something else.

Some operating systems, such as time-sharing systems, may introduce an additional, intermediate level of scheduling. This medium-term scheduler is diagrammed in Figure 4.6. The key idea behind a medium-term scheduler is that sometimes it can be advantageous to remove processes from memory (and from active contention for the CPU), and thus to reduce the degree of multiprogramming. At some later time, the process can be reintroduced into memory and its execution can be continued where it left off. This scheme is called swapping. The process is swapped out and swapped in later by the medium-term scheduler. Swapping may be necessary to improve the process mix, or because a change in memory requirements has overcommitted available memory, requiring memory to be freed up. Swapping is discussed in more detail in Chapter 8.

Figure 4.6   Addition of medium-term scheduling to the queueing diagram.

4.2.3 Context Switch

Switching the CPU to another process requires saving the state of the old process and loading the saved state for the new process. This task is known as a context switch. Context-switch time is pure overhead, because the system does no useful work while switching. Its speed varies from machine to machine, depending on the memory speed, the number of registers that must be copied, and the existence of special instructions (such as a single instruction to load or store all registers). Typically, the speed ranges from 1 to 1000 microseconds.

Context-switch times are highly dependent on hardware support. For instance, some processors (such as the DECSYSTEM-20) provide multiple sets of registers. A context switch simply includes changing the pointer to the current register set. Of course, if there are more active processes than there are register sets, the system resorts to copying register data to and from memory, as before. Also, the more complex the operating system, the more work must be done during a context switch. As we shall see in Chapter 8, advanced memory-management techniques may require extra data to be switched with each context. For instance, the address space of the current process must be preserved as the space of the next task is prepared for use. How the address space is preserved, and the amount of work needed to do it, depend on the memory-management method of the operating system. As we shall see in Section 4.5, context switching has become such a performance bottleneck that new structures (threads) are being used to avoid it whenever possible.

4.3 ■ Operation on Processes

The processes in the system can execute concurrently, and must be created and deleted dynamically. Thus, the operating system must provide a mechanism for process creation and termination.

4.3.1 Process Creation

A process may create several new processes, via a create-process system call, during the course of execution. The creating process is called a parent process, whereas the new processes are called the children of that process. Each of these new processes may in turn create other processes, forming a tree of processes (Figure 4.7).

In general, a process will need certain resources (CPU time, memory, files, I/O devices) to accomplish its task. When a process creates a subprocess, the subprocess may be able to obtain its resources directly from the operating system, or it may be constrained to a subset of the resources of the parent process. The parent may have to partition its resources among its children, or it may be able to share some resources (such as memory or files) among several of its children. Restricting a child process to a subset of the parent's resources prevents any process from overloading the system by creating too many subprocesses.

Figure 4.7   A tree of processes on a typical UNIX system.

In addition to the various physical and logical resources that a process obtains when it is created, initialization data (input) may be passed along by the parent process to the child process. For example, consider a process whose function is to display the status of a file, say F1, on the screen of a terminal. When it is created, it will get, as an input from its parent process, the name of the file F1, and it will execute using that datum to obtain the desired information. It may also get the name of the output device. Some operating systems pass resources to child processes. On such a system, the new process may get two open files, F1 and the terminal device, and may just need to transfer the datum between the two.

When a process creates a new process, two possibilities exist in terms of execution:

• The parent continues to execute concurrently with its children.
• The parent waits until some or all of its children have terminated.

There are also two possibilities in terms of the address space of the new process:

• The child process is a duplicate of the parent process.
• The child process has a program loaded into it.

To illustrate these different implementations, let us consider the UNIX operating system. In UNIX, each process is identified by its process identifier, which is a unique integer. A new process is created by the fork system call. The new process consists of a copy of the address space of the original process. This mechanism allows the parent process to communicate easily with its child process. Both processes (the parent and the child) continue execution at the instruction after the fork with one difference: The return code for the fork is zero for the new (child) process, whereas the (nonzero) process identifier of the child is returned to the parent.

Typically, the execve system call is used after a fork by one of the two processes to replace the process' memory space with a new program. The execve system call loads a binary file into memory (destroying the memory image of the program containing the execve system call) and starts its execution. In this manner, the two processes are able to communicate, and then to go their separate ways. The parent can then create more children, or, if it has nothing else to do while the child runs, it can issue a wait system call to move itself off the ready queue until the termination of the child.

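As an illustration, the following short C program follows this pattern on a POSIX-style UNIX system. The program run by the child (/bin/ls) is an arbitrary choice, and execlp is used here merely as a convenient library wrapper around the execve mechanism described above.

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        pid_t pid = fork();                 /* create a copy of this process */

        if (pid < 0) {                      /* fork failed */
            perror("fork");
            exit(1);
        } else if (pid == 0) {              /* child: fork returned zero */
            execlp("/bin/ls", "ls", (char *) NULL);  /* load a new program image */
            perror("execlp");               /* reached only if the exec failed */
            exit(1);
        } else {                            /* parent: fork returned the child's pid */
            wait(NULL);                     /* wait for the child to terminate */
            printf("Child %d has terminated.\n", (int) pid);
        }
        return 0;
    }
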
The DEC VMS operating system, in contrast, creates a new process, loads a specified program into that process, and starts it running. The Microsoft Windows NT operating system supports both models: the parent's address space may be duplicated, or the parent may specify the name of a program for the operating system to load into the address space of the new process.

4.3.2 Process Termination

A process terminates when it finishes executing its last statement and asks the operating system to delete it by using the exit system call. At that point, the process may return data (output) to its parent process (via the wait system call). All of the resources of the process, including physical and virtual memory, open files, and I/O buffers, are deallocated by the operating system.

There are additional circumstances when termination occurs. A process can cause the termination of another process via an appropriate system call (for example, abort). Usually, such a system call can be invoked by only the parent of the process that is to be terminated. Otherwise, users could arbitrarily kill each other's jobs. Note that a parent needs to know the identities of its children. Thus, when one process creates a new process, the identity of the newly created process is passed to the parent.

A parent may terminate the execution of one of its children for a variety of reasons, such as:

• The child has exceeded its usage of some of the resources it has been allocated.
• The task assigned to the child is no longer required.
• The parent is exiting, and the operating system does not allow a child to continue if its parent terminates.

To determine the first case, the parent must have a mechanism to inspect the state of its children.

Many systems, including VMS, do not allow a child to exist if its parent has terminated. In such systems, if a process terminates (either normally or abnormally), then all its children must also be terminated. This phenomenon is referred to as cascading termination and is normally initiated by the operating system.

To illustrate process execution and termination, let us consider again the UNIX system. In UNIX, a process may terminate by using the exit system call, and its parent process may wait for that event by using the wait system call. The wait system call returns the process identifier of a terminated child, so that the parent can tell which of the possibly many children has terminated. If the parent terminates, however, all the children are terminated by the operating system. Without a parent, UNIX does not know to whom to report the activities of a child.

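As a small sketch of this use of wait (again assuming a POSIX-style system, with the number of children and their work purely illustrative), a parent that creates several children can learn which one finished from the value wait returns:

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void)
    {
        int i;
        for (i = 0; i < 3; i++) {           /* create three children */
            pid_t pid = fork();
            if (pid == 0) {                 /* child: do some work, then exit */
                sleep(i + 1);
                exit(0);
            }
        }
        for (i = 0; i < 3; i++) {           /* parent: collect each child as it terminates */
            pid_t done = wait(NULL);        /* wait returns the pid of a terminated child */
            printf("Child %d has terminated.\n", (int) done);
        }
        return 0;
    }
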
4.4 ■ Cooperating Processes

The concurrent processes executing in the operating system may be either independent processes or cooperating processes. A process is independent if it cannot affect or be affected by the other processes executing in the system. Clearly, any process that does not share any data (temporary or persistent) with any other process is independent. On the other hand, a process is cooperating if it can affect or be affected by the other processes executing in the system. Clearly, any process that shares data with other processes is a cooperating process.

There are several reasons for providing an environment that allows process cooperation:

• Information sharing: Since several users may be interested in the same piece of information (for instance, a shared file), we must provide an environment to allow concurrent access to these types of resources.

• Computation speedup: If we want a particular task to run faster, we must break it into subtasks, each of which will be executing in parallel with the others. Notice that such a speedup can be achieved only if the computer has multiple processing elements (such as CPUs or I/O channels).

• Modularity: We may want to construct the system in a modular fashion, dividing the system functions into separate processes, as was discussed in Chapter 3.

• Convenience: Even an individual user may have many tasks to work on at one time. For instance, a user may be editing, printing, and compiling in parallel.

Concurrent execution that requires cooperation among the processes requires mechanisms to allow processes to communicate with each other (Section 4.6), and to synchronize their actions (Chapter 6).

To illustrate the concept of cooperating processes, let us consider the producer-consumer problem, which is a common paradigm for cooperating processes. A producer process produces information that is consumed by a consumer process. For example, a print program produces characters that are consumed by the printer driver. A compiler may produce assembly code, which is consumed by an assembler. The assembler, in turn, may produce object modules, which are consumed by the loader.

To allow producer and consumer processes to run concurrently, we must have available a buffer of items that can be filled by the producer and emptied by the consumer. A producer can produce one item while the consumer is consuming another item. The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced. In this situation, the consumer must wait until an item is produced.

The unbounded-buffer producer-consumer problem places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items. The bounded-buffer producer-consumer problem assumes that there is a fixed buffer size. In this case, the consumer must wait if the buffer is empty and the producer must wait if the buffer is full.

The buffer may be either provided by the operating system through the use of IPC (Section 4.6), or explicitly coded by the application programmer with the use of shared memory. Let us illustrate a shared-memory solution to the bounded-buffer problem. The producer and consumer processes share the following variables:

    var n;
    type item = ... ;
    var buffer: array [0..n-1] of item;
        in, out: 0..n-1;

with the variables in and out initialized to the value 0. The shared buffer is implemented as a circular array with two logical pointers: in and out. The variable in points to the next free position in the buffer; out points to the first full position in the buffer. The buffer is empty when in = out; the buffer is full when in+1 mod n = out.

The code for the producer and consumer processes follows. The no-op is a do-nothing instruction. Thus, while condition do no-op simply tests the condition repetitively until it becomes false.

The producer process has a local variable nextp, in which the new item to be produced is stored:

    repeat
        ...
        produce an item in nextp
        ...
        while in+1 mod n = out do no-op;
        buffer[in] := nextp;
        in := in+1 mod n;
    until false;

The consumer process has a local variable nextc, in which the item to be consumed is stored:

    repeat
        while in = out do no-op;
        nextc := buffer[out];
        out := out+1 mod n;
        ...
        consume the item in nextc
        ...
    until false;

This scheme allows at most n-1 items in the buffer at the same time. We leave it as an exercise for you to provide a solution where n items can be in the buffer at the same time.

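To see how such shared variables might actually be arranged between two UNIX processes, the following C sketch places the buffer in memory shared by a parent (the producer) and its child (the consumer). The use of mmap, the buffer size, the item count, and the busy-waiting loops are illustrative assumptions only, not part of the pseudocode above.

    #include <stdio.h>
    #include <sys/mman.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N 8                             /* illustrative buffer size */

    struct shared {
        int buffer[N];
        volatile int in, out;               /* circular-buffer pointers */
    };

    int main(void)
    {
        /* Create an anonymous memory region shared by parent and child. */
        struct shared *s = mmap(NULL, sizeof *s, PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        s->in = s->out = 0;

        if (fork() == 0) {                  /* child acts as the consumer */
            int i, nextc;
            for (i = 0; i < 20; i++) {
                while (s->in == s->out)     /* buffer empty: busy-wait */
                    ;
                nextc = s->buffer[s->out];
                s->out = (s->out + 1) % N;
                printf("consumed %d\n", nextc);
            }
            _exit(0);
        } else {                            /* parent acts as the producer */
            int i;
            for (i = 0; i < 20; i++) {
                while ((s->in + 1) % N == s->out)   /* buffer full: busy-wait */
                    ;
                s->buffer[s->in] = i;
                s->in = (s->in + 1) % N;
            }
            wait(NULL);                     /* wait for the consumer to finish */
        }
        return 0;
    }
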
In Chapter 6, we shall discuss in great detail how synchronization among cooperating processes can be implemented effectively in a shared-memory environment.

4.5 ■ Threads

Recall that a process is defined by the resources it uses and by the location at which it is executing. There are many instances, however, in which it would be useful for resources to be shared and accessed concurrently. This situation is similar to the case where a fork system call is invoked with a new program counter, or thread of control, executing within the same address space. This concept is so useful that several new operating systems are providing a mechanism to support it through a thread facility.

4.5.1 Thread Structure

A thread, sometimes called a lightweight process (LWP), is a basic unit of CPU utilization, and consists of a program counter, a register set, and a stack space. It shares with peer threads its code section, data section, and operating-system resources such as open files and signals, collectively known as a task. A traditional or heavyweight process is equal to a task with one thread. A task does nothing if no threads are in it, and a thread must be in exactly one task. The extensive sharing makes CPU switching among peer threads and the creation of threads inexpensive, compared with context switches among heavyweight processes. Although a thread context switch still requires a register set switch, no memory-management-related work need be done. Like any parallel processing environment, multithreading a process may introduce concurrency control problems that require the use of critical sections or locks.

Also, some systems implement user-level threads in user-level libraries, rather than via system calls, so thread switching does not need to call the operating system, and to cause an interrupt to the kernel. Switching between user-level threads can be done independently of the operating system and, therefore, very quickly. Thus, blocking a thread and switching to another thread is a reasonable solution to the problem of how a server can handle many requests efficiently. User-level threads do have disadvantages, however. For instance, if the kernel is single-threaded, then any user-level thread executing a system call will cause the entire task to wait until the system call returns.

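As a concrete illustration of peer threads sharing the data section of a single task, the following C sketch uses the POSIX threads library, which is only one of several thread facilities and is not the one assumed by the text. Both threads increment the same global counter, something two separate heavyweight processes could not do without explicitly shared memory; the mutex around the counter illustrates the kind of lock mentioned above.

    #include <pthread.h>
    #include <stdio.h>

    int counter = 0;                        /* shared data section of the task */
    pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    void *worker(void *arg)
    {
        int i;
        for (i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);      /* critical section around the shared counter */
            counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);   /* two threads in one task */
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %d\n", counter);  /* 200000: both threads updated the same data */
        return 0;
    }
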
We can grasp the functionality of threads by comparing multiple-thread control with m