Don Fowley, Executive Publisher
Dan Sayre, Associate Publisher
Catherine Shultz, Acquisitions Editor
Gladys Soto, Project Editor
Chelsee Pengal, Editorial Assistant
Chris Ruel, Marketing Manager
Lea Radick, Production Editor
Michael St. Martine, Cover Designer
©Megumi Takamura/Dex Image/Getty Images, Cover Image
Richard J. Pacifico, Bicentennial Logo Design

This book was set in Times Ten by Preparé and printed and bound by Hamilton Printing. The cover was printed by Phoenix Color.
Preface

Less than 150 years ago, shipping a new product, petroleum, down the Mississippi in barges was viewed with skepticism and fear of possible explosion. Fifty years later, electricity and electric lights were viewed as marvels of modern technology available only to a few. Another 50 years subsequent, someone suggested that the world would need at most three to four computers. Our views continue to change. Today we ship petroleum (still with concern) all over the world. Electricity has become so common that we are surprised if a switch is not available to turn on a light when we enter a room. The need for three to four computers has grown to hundreds of millions, perhaps billions, of installed computers worldwide.
This book presents a contemporary approach to the design and development of a kind of computer system that most of us will never see—those that we call embedded systems. The approach brings together a solid theoretical hardware and software foundation with real-world applications. Why do we need such a thing? A good question, let's take a look.

Today we interact with an embedded computer in virtually every aspect of our everyday life. From operating our car to riding an elevator to our office to doing our laundry or cooking our dinner, a computer is there, quietly, silently doing its job. We find the microprocessor—microcomputer—microcontroller—everywhere. Today these machines are ubiquitous. Like the electric light, without thought, we expect the antilock braking system in our car to work when we use it. We expect our mobile phone to operate like the stationary one in our home. We carry a computer in our pocket that is more powerful than the ones the original astronauts took into space.

Today we have the ability to put an increasingly larger number of hardware pieces into diminishingly smaller spaces. Software is no longer relegated to a giant machine in an air-conditioned room; our computer and its software go where we go. This ability gives engineers a new freedom to creatively put together substantially more complex systems with titillating functionality, systems that only science fiction writers thought of a few years ago. Such an ability also gives us the opportunity to solve bigger and more complex problems than we have ever imagined in the past—and to put those designs into smaller and smaller packages. These are definitely the most fun problems, the exciting kinds of things that we are challenged to work on. Okay, where do we begin?
The embedded field started almost by accident not too many years ago. In the early 70s, Federico Faggin and many others at Intel and Motorola introduced the 4004, 8008, and 6800 microprocessors to the engineering world. Originally intended for use in calculators and in calculator-like applications, today, driven by evangelists like Faggin, the microprocessor has become a fundamental component of virtually everything we touch. With such widespread application, the ensured safety and reliability of such systems are absolutely essential.

The embedded systems field has grown virtually overnight from nonexistent several
Copyright © 2008 John Wiley & Sons, Inc. All rights reserved.

No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Sections 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 646-8600. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030-5774, (201) 748-6011, fax (201) 748-6008.
Library of Congress Cataloging-in-Publication Data

Peckol, James K.
  Embedded systems: a contemporary design tool / James K. Peckol.
      p. cm.
  ISBN 978-0-471-72180-2 (cloth)
  1. Embedded computer systems.  2. Object-oriented methods (Computer science).
  TK7895.E42 P43 2008
  004.16—dc22        2007017870

To order books or for customer service, please call 1-800-CALL WILEY (225-5945).

Printed in the United States of America
10 9 8 7 6 5 4 3 2 1
Chapter 11  Real-Time Kernels and Operating Systems
11.7 Consider implementing an embedded system to control a traffic light as a foreground/background system. Each direction supports a left turn (right turn if traffic normally drives on the left-hand side) and pedestrian-activated crosswalk control.
(a) Which tasks are foreground tasks?
(b) Which tasks are background tasks?
(c) Give a UML state diagram illustrating the behavior of the system during a change from north-south green to east-west green. Be certain to consider the operation with and without a left (right) turn and with and without a pedestrian.
(d) Give a UML sequence diagram for the events in part (c).

11.8 Repeat Problem 11.7 for a microwave cooker.

11.9 Repeat Problem 11.7 for a washing machine.

11.10 Repeat Problem 11.7 for a video-on-demand entertainment system for a large hotel.

11.11 Consider implementing an embedded system to control a traffic light as an RTOS-based system. Each direction supports a left turn (right turn if traffic normally drives on the left-hand side) and pedestrian-activated crosswalk control.
(a) Which tasks are the major tasks?
(b) Give a UML state diagram illustrating the behavior of the system during a change from north-south green to east-west green. Be certain to consider the operation with and without a left (right) turn and with and without a pedestrian.
(c) Give a UML sequence diagram for the events in part (b).

11.12 Repeat Problem 11.11 for a microwave cooker.

11.13 Repeat Problem 11.11 for a washing machine.

11.14 Repeat Problem 11.11 for a video-on-demand entertainment system for a large hotel.

11.15 Provide a UML class diagram for a task control block (TCB). Implement the design using a C struct data structure.

11.16 Design a method that would enable the dynamic allocation and deallocation of TCBs as tasks are created or terminated.

11.17 Modify the design in Example 11.1 to support a dynamic number of tasks in the task queue without using malloc and free (C) or new and delete (C++) while retaining the array as the queue container.

11.18 Provide a UML class diagram for a task queue that supports the dynamic insertion and deletion of tasks.

11.19 Implement the task queue specified in Example 11.1 to use a doubly linked list as the underlying data type for the queue container.

11.20 Combine the subsystems in Problem 11.16 and Problem 11.19.

11.21 Modify the design of the TCB in Problem 11.15 to support a task priority number in the range of {0-9}. Assume 0 is the highest and 9 the lowest priority. Incorporate the modified TCB design into the task queue design in Problem 11.19. Modify the access method to always return the highest priority task.

11.22 Modify the design of the TCB in Problem 11.15 to support the inclusion of an estimate of execution time in the range of {0-99}. Incorporate the modified TCB design into the task queue design in Problem 11.19. Modify the access method to always return the shortest task.

11.23 Give a high-level description of how the system in Figure P11.22 works. You should not need more than 10 lines.

Figure P11.22 (figure not reproduced in this excerpt)

11.24 Write a C program to implement the design given in the data/control flow diagram in Problem 11.23.
`
Chapter 12

Tasks and Task Management
`
THINGS TO LOOK FOR...

• The role of time in embedded designs.
• The definitions of reactive and time-based systems.
• The differences between preemptive and nonpreemptive systems.
• The need for effectively scheduling the use of the system CPU(s).
• The criteria for making scheduling decisions.
• Common scheduling algorithms.
• Real-time scheduling considerations.
• How scheduling algorithms might be evaluated.
• Methods for intertask communication.
• The critical section problem and several solutions.
• Methods for task synchronization.
`
12.0 INTRODUCTION

In the previous chapter we introduced some of the basic concepts and methods involved in controlling multitasking systems. We learned that foreground/background systems can be effective under real-time constraints and that the basic responsibilities of the operating system comprise task scheduling, intertask communication, and task dispatch. In addition, we introduced some of the issues associated with the context switch in preemptable systems.

In this chapter, we will examine the scheduling problem and intertask communication in greater detail. The resource management aspects of task scheduling and dispatch will be covered in the following chapter. We will open by continuing the discussion of time and the critical role it plays in the design of embedded applications by introducing the concepts of reactive and time-based systems. We will present and discuss various metrics for specifying and assessing a task schedule. We will then investigate several different scheduling algorithms and analyze task synchronization and intertask communication in some detail. The focus will be primarily from the perspective of either a kernel-based or more complete operating system-based control strategy.
12.1 TIME, TIME-BASED SYSTEMS, AND REACTIVE SYSTEMS

12.1.1 Time
`
We have already briefly encountered time and the important role it plays in the design and execution of embedded applications. We will now explore that role in greater detail.

We define two different measures of time: absolute and relative, based on what the measurement is referenced to. Absolute time is based on real-world time; relative time is measured with respect to some reference. Time is further qualified as either an interval or a duration: these are distinct. An interval is marked by specific start and end times; a duration is a relative time measure. Equal intervals must have the same start times and the same stop times; nonequal intervals can have the same duration. This difference is captured in Figure 12.0.

Figure 12.0  Equal Intervals and Equal Durations (figure not reproduced)
`
12.1.2 Reactive and Time-Based Systems

Embedded systems are classified into two broad categories: reactive and time based. Reactive systems, as the name suggests, contain tasks that are initiated by some event that may be either internal or external to the system. An internal event may be an elapsed time or a temporal bound on data that has been exceeded. An external event is the recognition of a switch that has been activated or an external response to an internally generated command, for example. Typically, the initiating events are asynchronous to the normal activity of the system. Foreground/background systems are a good example of those classed as reactive.

Time-based systems are those systems whose behavior is controlled by time. Such a relationship can be absolute—an action must occur at a specific time; relative—an action must occur after or before some reference; or following an interval—an action must occur at a specified time with respect to some reference. The behavior in time-based systems is generally synchronous with a timing element of one form or another. Time-shared systems are a good example of those classed as time based.
The relevance of time in embedded applications becomes clear when trying to schedule tasks and threads, that is, deciding when and how often each is executed. Tasks or threads that are initiated with repeating duration between invocations are called periodic; otherwise they are designated as aperiodic. A repeating duration is called the period. The time to complete a task is called the execution time.

In a periodic system, variation in the evoking event is called jitter. The time between the evoking event and the intended action is called the delay. When designing a system, each context in which it is anticipated that the system will be operating must be examined to determine the significance of jitter and delay with respect to specified time constraints.

An action that must occur by a specified time is defined as hard or is said to have a hard deadline. A missed deadline in such cases is considered to be a partial or total system failure. A system is defined as hard real-time if it contains one or more tasks containing such constraints. Such systems may have other tasks that do not have temporal deadlines. The major focus, however, is on the hard deadlines.

Systems with relaxed time constraints are defined as soft real-time. Such systems may meet their deadlines on average. Soft real-time systems may be soft in several ways:
`
• Relaxation of the constraint that missing the deadline constitutes system failure. Such a system may tolerate missing the specific deadline provided some other deadline or timeliness constraint is met—the average throughput, for example.
• Evaluating the correctness of timeliness as a gradation of values rather than pass or fail.
`
Systems with tasks that have some relaxed constraints as well as hard deadlines are defined as firm real-time.

Real-time systems are those in which correctness demands timeliness. Most such systems carefully manage resources with respect to maintaining the predictability of timeliness constraints. Such predictability gives us a measure of accuracy with which one can state in advance when and how an action will occur. We elaborate by annotating the durations, events, jitter, and actions. Figure 12.1 illustrates a periodic system typical of a time-based design.
`
Figure 12.1  Task Activity in a Periodic Time-Based System (figure not reproduced)
`
In the figure, the period of the recurrence of the tasks is defined. The evoking event occurs with respect to the start of the period. The first rectangle expresses the variation in the actual invocation with respect to the intended. Such jitter may arise from variations in the system's ability to respond to a timer expiring, for example. Once the event occurs, the second rectangle captures the delay in getting the task started. When the task begins to execute, the third rectangle accounts for any initialization or similar operations that must occur before the intended action takes place. The intended action occurs during the time indicated by the fourth rectangle. After the action completes, the fifth rectangle mirrors the entry actions with any necessary cleanup before the task completes. The sixth rectangle accounts for variation in exiting the task.

The diagram also marks the latest time at which the intended action could complete and still meet the time constraints on the period. The duration between the completion deadline and the start of the next cycle is equal to that between the end of the action and the end of the exit jitter.
`
`
Figure 12.2  Task Activity in an Aperiodic Foreground/Background Design (figure not reproduced)

Figure 12.2 illustrates an aperiodic system that is typical of a foreground/background design. Notice how the minimum and maximum times are specified.

The invocation of aperiodic tasks is not fixed in time—they are asynchronous to the operation of the core system. Thus, there can be no jitter because there is no expected time for the initiating event. The duration between such tasks is called the interarrival time. Such a time is critical when one needs to determine how to schedule real-time tasks. Under such circumstances, the lower bound on interarrival time must be identified. Such things as the maximum number of events occurring within a given time interval may also need to be considered.
Table 12.0 captures timeliness constraints with respect to whether the task is soft or hard real-time.

Table 12.0  Hard and Soft Real-Time Timeliness Constraints

Property                                Nonreal-time   Soft Real-time                                Hard Real-time
Deterministic                           No             Possibly                                      Yes
Predictable                             No             Possibly                                      Yes
Consequences of late computation        No effect      Degraded performance                          Failure
Critical reliability                    No             Yes                                           Yes
Response dictated by external events    No             Yes                                           Yes
Timing analysis possible                No             Analytic (sometimes), stochastic simulation   Analytic, stochastic simulation
`
At this point, we should be sufficiently comfortable with some of the terminology that we can start to investigate the control of embedded systems in greater detail. We will begin with the problem of task scheduling.
`
12.2 TASK SCHEDULING

How efficiently and effectively a task moves through the various queues along the control path following its arrival, and how effectively and efficiently the CPU is utilized during such a movement, establish the quality of the embedded design. An essential component of that control strategy is the algorithm used to schedule the allocation of the CPU.
In a multitasking system, the main objective is to have some process using the CPU at all times. Such a scheme maximizes the usage of that resource. Which task is running at any specific time is based on a number of criteria. It is the scheduler's responsibility to ensure that the CPU is efficiently utilized and that the various jobs are executed in such an order as to meet any required constraints.
When working with a scheduling algorithm, one must also consider the priority of the task. Priority is assigned by the designer and is based on a variety of different criteria. We will examine these shortly. Such criteria are used to resolve which task to execute when more than one is waiting and ready to execute. Tasks with higher priority execute preferentially over those with lower priority.

In a real-time context, a task that can be determined to always meet its timeliness constraints is said to be schedulable. A task that can be guaranteed to always meet all deadlines is said to be deterministically schedulable. Such a situation occurs when an event's worst-case response time is less than or equal to the task's deadline. When all tasks can be scheduled, the overall system can be scheduled.

Scheduling decisions must be made during the design phase of the system development since such decisions involve trade-offs that affect and optimize the overall performance of the system. When the system specification stipulates hard deadlines, one must ensure that the implementing tasks and their associated actions can meet every deadline. Soft deadlines naturally give more flexibility.
`
12.2.1 CPU Utilization

In addition to satisfying time constraints, a goal in formulating a task schedule is to keep the CPU as busy as possible, ideally close to 100%, but with some margin for additional tasks. Such a metric is referred to as CPU utilization. In a practical system, utilization should range between 40% for a lightly loaded system and 90% for one that is heavily loaded.
For a single periodic task, CPU utilization is given as

    $u_i = e_i / p_i$                                                  (12.0)

where $u_i$ is the fraction of time task $i$ keeps the CPU busy, $e_i$ is the execution time, and $p_i$ is the period of the periodic task.

One can express a similar relationship for aperiodic tasks. CPU utilization information can be used in conjunction with a sequence diagram to aid in assessing when each of the tasks can and needs to run.
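As a concrete illustration of Eq. 12.0 summed over a task set, the short C sketch below adds up e_i / p_i for a handful of periodic tasks. The task table, the millisecond units, and the 90% warning threshold are illustrative assumptions, not taken from the text.

#include <stdio.h>

/* Hypothetical periodic task descriptor: execution time and period in ms. */
typedef struct {
    const char *name;
    double exec_ms;    /* e_i */
    double period_ms;  /* p_i */
} task_t;

/* Total utilization: sum of e_i / p_i over all tasks (Eq. 12.0 per task). */
static double total_utilization(const task_t *tasks, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += tasks[i].exec_ms / tasks[i].period_ms;
    return u;
}

int main(void)
{
    /* Example task set (made-up numbers for illustration). */
    task_t tasks[] = {
        { "sensor_read", 2.0,  10.0 },
        { "control_law", 5.0,  20.0 },
        { "telemetry",  10.0, 100.0 },
    };
    int n = (int)(sizeof tasks / sizeof tasks[0]);

    double u = total_utilization(tasks, n);
    printf("CPU utilization: %.1f%%\n", 100.0 * u);
    if (u > 0.90)
        printf("warning: above the ~90%% heavily loaded guideline\n");
    return 0;
}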
`
12.2.2 Scheduling Decisions

Two key elements of real-time design, repeatability and predictability, are absolutely essential in the context of hard deadlines. To ensure predictability, one must completely understand and define the timing characteristics of each task and properly schedule those tasks using a predictable scheduling algorithm. The first step in developing a robust schedule is knowing when a scheduling decision must be made.

Scheduling decisions are made under the following four conditions:

1. A process switches from the running to the waiting state—initiated by an I/O request.
2. A process switches from the running to the ready state—when an interrupt occurs.
3. A process switches from the waiting to the ready state—the completion of I/O activity.
4. A process terminates.
Asynchronous Interrupt Event Driven

One of the simplest scheduling schemes is asynchronous interrupt event driven. Certainly, the asynchronous nature of the scheme calls into question the use of the word "schedule." Under such an approach, the system is constrained to operate in a basic one-line infinite loop until an interrupting event occurs, as is illustrated in the code fragment shown in Figure 12.3. As such, the design is a special case of the foreground/background model. In this case, the design has no background tasks. The design can also be considered to be reactive.

Figure 12.3  An Event-Driven Schedule Algorithm
`
global variable declarations
ISR set up
function prototypes

void main (void)
{
    local variable declarations

    while(1);      // task loop
}

ISRs
function definitions
`
When an interrupting event occurs, flow of control jumps to the associated ISR where the designated task is executed; flow then resumes in the infinite loop. Generally, the event originates from some external source. We will look at an extension to the event-driven approach in which the event derives from a system timer.

The overall behavior of such a system can be difficult to analyze because of the nondeterministic nature of asynchronous interrupts. However, it is rather straightforward to determine the postevent behavior for systems with a single interrupt or the behavior of the highest priority interrupt in systems with more than one interrupt.
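To make the schematic of Figure 12.3 concrete, here is a minimal C sketch of the same idea, assuming a single external interrupt source. The ISR name, the shared counter, and the way the ISR is attached are hypothetical; a real toolchain supplies its own vector or attribute syntax.

#include <stdint.h>

/* Shared between the ISR and the idle loop; volatile because it is
   modified asynchronously by the interrupt. */
static volatile uint8_t event_count = 0;

/* Hypothetical ISR hook; a real part would attach this to the external
   interrupt vector using the vendor's mechanism. */
void external_event_isr(void)
{
    event_count++;      /* record the event */
    /* the designated task runs here (or is flagged to run later) */
}

void main(void)
{
    /* one-line infinite loop: the system does nothing until an
       interrupting event transfers control to the ISR */
    while (1)
        ;               /* task loop */
}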
`
`Polled and Polled with a Timing Element
`
`Thebasic polled algorithm is amongthe simplest and fastest algorithms. The system con-
`tinually loops, waiting for an eventto occur. The difference between thepolled algorithm
`andthe eventdrivenis that the polled algorithm is continually testing the valueofthe polled
`signal looking for a state change. Theinterrupt-driven design, on the other hand, does noth-
`ing until the event occurs. Only then does it respond. Schematically, the algorithmisgiven
`as shownin Figure 12.4.
`Such a scheme workswell fora single task. It is completely deterministic. The time to
`respondto the event is computable and bounded. In the worstcase, let’s assume the event
`occurs immediately after the test instruction. Under such a circumstance,the response time
`is the length of the loop. Polled with a timing eventis a simple extension. The scheme uses
`a timing element to ensure a delay action after a polled event is true. Such a technique
`deskewsthe incoming signals.
`The polled modelis also a special case of the foreground/background model.In con-
`trast to the event-driven schedule, the polled model has no foreground tasks. The design
`implementsa reactive system.
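A minimal sketch of the polled scheme with a timing element might look like the following. The memory-mapped port address, the active-high signal polarity, the 5 ms settling delay, and the delay_ms() routine are assumptions made purely for illustration.

#include <stdint.h>
#include <stdbool.h>

/* Hypothetical memory-mapped input port; a real design would use the
   device's actual register address and bit assignment. */
#define INPUT_PORT   (*(volatile uint8_t *)0x4000u)
#define EVENT_BIT    0x01u

static void delay_ms(unsigned ms)       /* assumed timing element */
{
    (void)ms;   /* busy-wait or hardware timer; target specific */
}

static void service_event(void) { /* the single task */ }

void main(void)
{
    bool last = false;

    while (1) {                          /* task loop */
        bool now = (INPUT_PORT & EVENT_BIT) != 0;

        /* test the state of the polled signal, looking for a change */
        if (now && !last) {
            delay_ms(5);                 /* deskew/settle after the event */
            if (INPUT_PORT & EVENT_BIT)  /* still asserted? */
                service_event();
        }
        last = now;
    }
}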
`
Figure 12.4  A Polling-Based Schedule Algorithm
`
global variable declarations
function prototypes

void main (void)
{
    local variable declarations

    while(1)
    {
        // task loop
        // test state of each signal in polled set
        if-then construct
            or
        switch statement
    }
}

function definitions
`
12.3.3 State Based

The next approach implements the flow of control through the task set as a finite automaton or state machine. The two basic implementations of the finite-state machine (FSM), Mealy and Moore, are distinguished by the implementation of the output function: in Mealy the output is a function of the current state and the input, and in Moore the output is a function of the current state only. The basic machine can be expressed as illustrated in Figure 12.5.

Figure 12.5  A Basic State Machine Model (figure not reproduced; it shows an input queue feeding a next-state and output function)

The state machine can easily be implemented as either a set of case statements, as an if-then, or if-then-else construct.

Some of the limitations of such an approach begin with the theoretical limit on the computational power of the finite-state machine. Using states is not efficient, and the state space explosion for large problems makes the approach impractical for systems with large numbers of inputs. There is a rich set of variations on the basic FSM, however, some of which address the various limitations of the basic implementation. A state-based design is reactive in nature.
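As a sketch of the case-statement implementation mentioned above, the fragment below encodes a small two-state, Moore-style machine in C. The states, events, and helper functions are invented for illustration and are not from the text.

/* Hypothetical two-state machine: IDLE until a start event arrives,
   RUN until a stop event arrives.  Moore style: the action performed
   depends only on the current state. */
typedef enum { STATE_IDLE, STATE_RUN } state_t;
typedef enum { EV_NONE, EV_START, EV_STOP } event_t;

static event_t get_event(void) { return EV_NONE; }   /* stub: dequeue next input */
static void do_idle_work(void) {}                    /* stub output actions */
static void do_run_work(void)  {}

void state_machine_step(state_t *state)
{
    event_t ev = get_event();

    switch (*state) {               /* next-state and output function */
    case STATE_IDLE:
        do_idle_work();
        if (ev == EV_START)
            *state = STATE_RUN;
        break;

    case STATE_RUN:
        do_run_work();
        if (ev == EV_STOP)
            *state = STATE_IDLE;
        break;
    }
}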
`
12.3.4 Synchronous Interrupt Event Driven

The next level of sophistication entails constraining the asynchronous event used in the opening algorithm to one that is synchronous, based on a timer. Such a system continually loops until interrupted by a timing signal (which is typically internally generated). The timing/interrupt event triggers a context switch to an ISR that manages it. A schedule based on a periodic event is defined as fixed rate. In contrast, an aperiodic schedule is defined as sporadic. Such a synchronous interrupt-based scheme can work with multiple tasks and is the basis for time-sharing systems. The design is an example of a time-based system, although it is reacting to a special interrupt.
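A minimal sketch of a fixed-rate, timer-driven loop follows, assuming a periodic timer interrupt that advances a tick counter. The ISR hookup, the 10 ms tick, and the task names are hypothetical.

#include <stdint.h>

static volatile uint32_t tick = 0;     /* advanced by the timer ISR */

/* Hypothetical timer ISR: on a real part this would be attached to the
   periodic timer interrupt vector. */
void timer_isr(void)
{
    tick++;
}

static void task_10ms(void)  { /* fixed-rate work */ }
static void task_100ms(void) { /* slower fixed-rate work */ }

void main(void)
{
    uint32_t last = 0;

    while (1) {                         /* loop until the timing interrupt */
        if (tick != last) {
            last = tick;
            task_10ms();                /* every tick (assumed 10 ms) */
            if (last % 10 == 0)
                task_100ms();           /* every 10th tick */
        }
    }
}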
`
12.3.5 Combined Interrupt Event Driven

A simple variation on the two interrupt event-driven designs is to permit both synchronous and asynchronous interrupts. In such a system, priority is used to select among tasks that are ready when the timing interrupt occurs. If multiple tasks are permitted to have the same priority, then selection from among ready tasks proceeds in a round-robin fashion. Naturally, higher priority tasks will be given preference at any time.
`
12.3.6 Foreground-Background

A system utilizing a foreground-background flow of control strategy implements a combination of interrupt and noninterrupt-driven tasks. The former are designated the foreground tasks and the latter the background tasks. The background tasks can be interrupted at any time by any of the foreground tasks and are thus operating at the lowest priority. The interrupt-driven processes implement the real-time aspects of the application; the interrupt events may be either synchronous or asynchronous. All of the previous algorithms are special cases of foreground/background designs in which either the foreground (polled systems) or the background (interrupt based) component is missing.
`
12.3.7 Time-Shared Systems

In a time-shared system, tasks may or may not all be equally important. When all are given the same amount of time, the schedule is periodic, and when the allocation is based on priority, the schedule is aperiodic. Several of the more common algorithms are examined in the ensuing paragraphs.
`
12.3.7.1 First-Come First-Served

A very simple algorithm is first-come first-served, and it is easily managed with a FIFO queue. When a process enters the ready queue, the task control block is linked to the tail of the queue. When the CPU becomes free, it is allocated to the process at the head of the queue. The currently running process is removed from the queue. Such an approach is nonpreemptive and can be troublesome in a system with real-time constraints.
`
12.3.7.2 Shortest Job First

The shortest job first schedule assumes that the CPU is used in bursts of activity. Each task has associated with it an estimate of how much time the job will need when next given the CPU. The estimate is based on measured lengths of previous CPU usage. The algorithm can be either preemptive or nonpreemptive. With a preemptive schedule, the currently running process can be interrupted by one with a shorter remaining time to completion.
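The text does not say how the burst estimate is formed; one common choice, shown here purely as an assumption, is an exponential average of measured burst lengths.

/* Exponential-average estimate of the next CPU burst (an assumed,
   commonly used scheme; not prescribed by the text):
   estimate_new = alpha * measured_last + (1 - alpha) * estimate_old */
double next_burst_estimate(double measured_last, double estimate_old,
                           double alpha /* 0..1, weight on recent history */)
{
    return alpha * measured_last + (1.0 - alpha) * estimate_old;
}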
`
12.3.7.3 Round Robin

The round robin algorithm is designed especially for time-shared systems. It is similar to first-come first-served, with preemption added to switch between processes. A small unit of time called the time quantum or slice is defined, and the ready queue is treated as a circular queue. The scheduler walks the queue, allocating the CPU to each process for one time slice. If a process completes in less than its allocated time, it releases the CPU; otherwise, the process is interrupted when its time expires and it is put at the end of the queue. New processes are added to the tail of the queue. Observe that if the time slice is increased to infinity, round robin becomes a first-come first-served scheduler.
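A minimal sketch of the round-robin policy over a circular ready queue follows, assuming a tick-based time slice and a made-up task record; a real kernel would preempt via a timer interrupt rather than counting ticks in line.

#include <stddef.h>
#include <stdbool.h>

#define MAX_TASKS 8
#define TIME_SLICE_TICKS 10            /* assumed quantum, in timer ticks */

/* Hypothetical ready-queue entry: a run hook plus a completion flag. */
typedef struct {
    void (*run_one_tick)(void);       /* runs the task for one tick */
    bool (*finished)(void);           /* true when the task has completed */
    bool in_use;
} task_slot_t;

static task_slot_t ready[MAX_TASKS];  /* treated as a circular queue */

/* Walk the circular queue, giving each ready task one time slice. */
void round_robin_scheduler(void)
{
    size_t current = 0;

    for (;;) {
        task_slot_t *t = &ready[current];

        if (t->in_use) {
            for (int tick = 0; tick < TIME_SLICE_TICKS; tick++) {
                t->run_one_tick();
                if (t->finished()) {   /* released the CPU early */
                    t->in_use = false;
                    break;
                }
            }
            /* if not finished, the task waits for its next turn, i.e. it is
               conceptually placed back at the end of the queue */
        }
        current = (current + 1) % MAX_TASKS;   /* next slot, circularly */
    }
}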
`
12.3.8 Priority Schedule

Shortest job first is a special case of the more general priority scheduling class of algorithms. A priority is associated with each process, and the CPU is allocated to the process with the highest priority. Equal priority jobs are scheduled first-come first-served or in round-robin fashion. The major problem with a priority schedule is the potential for indefinite blocking or starving—priority inversion. The algorithms can be either preemptive or nonpreemptive.
`
12.3.8.1 Rate-Monotonic

With a preemptive schedule, the currently running process can be interrupted by any other task with a higher priority. A special class of priority-driven algorithms called rate-monotonic was initially developed in 1973 and has been updated over the years. In the basic algorithm, priority is assigned based on execution period; the shorter the period, the higher the priority.

Priorities that are determined and assigned at design time and then remain fixed during execution are said to use a static or fixed scheduling policy. The ability to schedule a set of tasks is computed as a bound on utilization of the CPU as shown in Eq. 12.1.

    $\sum_{i=0}^{n-1} \frac{e_i}{p_i} \le n\left(2^{1/n} - 1\right)$                                   (12.1)

    $p_i$ = Period of the task
    $e_i$ = Execution time of the task

This approach makes the following assumptions:

• The deadline for each task is equal to its period.
• Any task can be preempted at any time.

The expression on the right-hand side gives a bound on CPU utilization; the bound is extreme, that is, worst case. If it cannot be met, a more detailed analysis must be performed to prove whether or not the task can be scheduled. The above equation sets a CPU utilization bound at 69%. Practically, the bound could be relaxed to around 88%, and the tasks can still be scheduled.
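A small C sketch of the Eq. 12.1 test under the two assumptions listed above; the exec[] and period[] arrays are hypothetical inputs expressed in a common time unit.

#include <math.h>
#include <stdbool.h>

/* Rate-monotonic schedulability test (Eq. 12.1): the task set passes
   if total utilization does not exceed n(2^(1/n) - 1). */
bool rm_bound_satisfied(const double *exec, const double *period, int n)
{
    double u = 0.0;
    for (int i = 0; i < n; i++)
        u += exec[i] / period[i];

    double bound = n * (pow(2.0, 1.0 / n) - 1.0);   /* ~0.69 for large n */
    return u <= bound;
}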
The basic algorithm given above simplifies system analysis. Scheduling is static, and the worst case occurs when all the jobs must be started simultaneously. Formal analysis that is beyond the scope of this text leads us to the rate-monotonic schedule, also known as the critical zone theorem.

Critical Zone Theorem
If the computed utilization is less than the utilization bound, then the system is guaranteed to meet all task deadlines in all task orderings.

It can be shown that rate-monotonic systems are the optimal fixed rate scheduling method. If a rate-monotonic schedule cannot be found, then no other fixed rate scheme will work. The algorithm is defined as stable, which means that as additional, lower priority tasks are added to the system, the higher priority tasks can still meet their deadlines even if lower priority tasks fail to do so. The initial algorithm bases assurance upon the assumption that there is no task blocking. The basic algorithm can be modified to include blocking as illustrated in Eq. 12.2.
`
    $\sum_{i=0}^{n-1} \frac{e_i}{p_i} + \frac{b_i}{p_i} \le n\left(2^{1/n} - 1\right)$                                   (12.2)

The terms $b_i$ give the maximum time task $i$ can be blocked by a lower priority task.
`
With a nonpreemptive schedule, a currently arriving higher priority process is placed at the head of the ready queue.
`
12.3.8.2 Earliest Deadline

A dynamic variation on the rate-monotonic algorithm is called earliest deadline. The earliest deadline schedule uses a dynamic algorithm with priority assigned based on the task with the closest deadline. The schedule must be established and modified during runtime, for only then can the deadline(s) be assessed.
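As a sketch of the selection rule, the fragment below picks the ready task with the nearest absolute deadline at each scheduling point; the task record and the tick-based deadlines are assumptions for illustration.

#include <stdint.h>
#include <stddef.h>

/* Hypothetical ready-task record: absolute deadline in timer ticks. */
typedef struct {
    uint32_t deadline;   /* absolute deadline */
    int      ready;      /* nonzero if eligible to run */
} edf_task_t;

/* Earliest-deadline selection: return the index of the ready task with
   the smallest deadline, or -1 if none is ready.  The choice is
   re-evaluated at runtime as tasks arrive and deadlines change. */
int edf_pick_next(const edf_task_t *tasks, size_t n)
{
    int best = -1;
    for (size_t i = 0; i < n; i++) {
        if (!tasks[i].ready)
            continue;
        if (best < 0 || tasks[i].deadline < tasks[(size_t)best].deadline)
            best = (int)i;
    }
    return best;
}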
A set of tasks is considered schedulable if the sum of the task loading is less than 100%. It is considered optimal in the sense that if a task can be scheduled by other algorithms, then it can be scheduled by the earliest deadline.

The algorithm is not considered stable. If the runtime task load rises above 100%, some task may miss its deadline.
