`Huang
`
US006212562B1
(10) Patent No.: US 6,212,562 B1
(45) Date of Patent: Apr. 3, 2001
`
`(54) CRITICALITY AND QUALITY OF SERVICE
`(QOS) BASED RESOURCE MANAGEMENT
`(75) Inventor: Jiandong Huang, Plymouth, MN (US)
`(73) Assignee: Honeywell International Inc.,
`Minneapolis, MN (US)
(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.
`
`(21) Appl. No.: 08/828,314
(22) Filed: Mar. 28, 1997
`(51) Int. Cl." ............................. G06F 15/16; G06F 15/17
`(52) U.S. Cl. ........................... 709/227; 709/228; 370/468
`(58) Field of Search ..................................... 370/230, 395,
`370/468; 709/228, 222, 226, 227
`
(56) References Cited

U.S. PATENT DOCUMENTS

5,689,508 * 11/1997 Lyles .................................... 370/391
5,898,668 * 4/1999 Shaffer ................................. 370/230
5,917,822 * 6/1999 Lyles et al. .......................... 370/395
OTHER PUBLICATIONS

Huang et al., "Integrated System Support for Continuous Multimedia Applications," Proceedings of the International Conference on Distributed Multimedia Systems and Applications, Hawaii (Aug. 1994).
Huang et al., "Resource Management for Continuous Multimedia Database Applications," Proceedings of the 15th IEEE Real-Time Systems Symposium, Puerto Rico (Dec. 1994).
Huang et al., "Presto-A Multimedia Data Management System for Mission-Critical Applications," Final Technical Report RL-TR-96-XXXX, Air Force Rome Laboratory, 525 Brooks Road, Griffiss AFB, NY 13441 (Dec. 1995), Draft.
Huang et al., "Presto-A Multimedia Data Management System for Mission-Critical Applications," Final Technical Report RL-TR-96-0183, Air Force Rome Laboratory, 525 Brooks Road, Griffiss AFB, NY 13441 (Aug. 1996).
Huang et al., "A Decentralized End-to-End Scheduling Approach for Continuous Multimedia Applications," Proceedings of the 6th International Workshop on Network and Operating System Support for Digital Audio and Video, Japan (Apr.).
Huang et al., "Presto-A System Environment for Mission-Critical Multimedia Applications," Proceedings of the Sixth Annual IEEE Dual-Use Technologies and Applications Conference (Jun. 1996).
Huang, "System Resource Management for Mission-Critical Multimedia Applications," Computer Science Colloquia, University of Minnesota, pp. 1-35 (Oct. 28, 1996).
`
`
Primary Examiner: Mark H. Rinehart
Assistant Examiner: Marc D. Thompson
(74) Attorney, Agent, or Firm: Marshall, O'Toole, Gerstein, Murray & Borun
`(57)
`ABSTRACT
A resource executes a plurality of executing sessions, the resource receives an arriving session, and the resource has a resource capacity constraint. Each of the executing sessions has a QoS, a timing requirement, and a criticality level, and the arriving session has a QoS, a timing requirement, and a criticality level. The arriving session is admitted if there is sufficient resource capacity. If the resource capacity is not sufficient, a resource manager shrinks the QoS of each executing session and the QoS of the arriving session. The resource manager then (i) admits the arriving session without preemption if the resource capacity constraint can be met without preemption, (ii) admits the arriving session with criticality-based preemption if the resource capacity constraint can be met only with preemption, and (iii) denies admission of the arriving session if the resource capacity constraint cannot be met even with criticality-based preemption. The resource manager expands the QoS's of appropriate sessions following (i), (ii), or (iii), as the case may be.
`
`56 Claims, 7 Drawing Sheets
`
[Representative drawing: flow chart of FIG. 6]
`
`IPR2021-00831
`
`Daedalus EX2022
`Page 1 of 20
`
`
`
Huang, "Tutorial: Real-Time Scheduling Technology for Continuous Multimedia Applications," Lecture Notes of the 3rd ACM Multimedia Conference, San Francisco (Nov. 1995).
Huang et al., "On Supporting Mission-Critical Multimedia Applications," Proceedings of the Third IEEE International Conference on Multimedia Computing and Systems, Japan (Jun. 1996).
Cota-Robles et al., "Schedulability Analysis for Desktop Multimedia Applications: Simple Ways to Handle General-Purpose Operating Systems and Open Environments," pp. 475-483, IEEE (1997).
Demers, "Analysis and Simulation of a Fair Queueing Algorithm," ACM SIGCOMM '89 (Sep. 19, 1989).*
Anthony Hung et al., "Bandwidth Scheduling for Wide-Area ATM Networks Using Virtual Finishing Times," IEEE/ACM Transactions on Networking (Feb. 1996).*
Geoffrey Xie et al., "Delay Guarantee of a Virtual Clock Server," IEEE/ACM Transactions on Networking (Dec. 1995).*
Jing-Fei Ren et al., "A Dynamic Priority Queuing Approach to Traffic Regulation and Scheduling in B-ISDN," Global Telecommunications Conference (1994).*
S. Golestani, "A Self-Clocked Fair Queuing Scheme for Broadband Applications," IEEE INFOCOM (1994).*
`
`* cited by examiner
`
`
`
`
[Sheet 1 of 7: FIG. 1, block diagram of the distributed data processing system 10]
`
`
`
[Sheet 2 of 7: FIG. 2, processing node block diagram: schedulability analysis and resource allocation; CPU scheduler; disk I/O scheduler; buffer manager; window/video manager; commercial (real-time) operating system. FIG. 3, timing diagram illustrating the consecutive loss factor. FIG. 4, coordinate system illustrating criticality, timing, and QoS]
`
`
`
[Sheet 3 of 7: FIG. 5, resource management mechanism: session arrival, QoS negotiation, dispatch, waiting sessions, executing sessions, system preemption, session departure]
`
`
`
`
`
[Sheet 4 of 7: FIG. 6, flow chart of local resource management: session request for execution 102; "is session schedulable as is?" 104 (yes: admit; end); shrink QoS of all executing streams and select most critical session 106, 108; "is session schedulable without preemption?" 110 (yes: admit session 112); "is session schedulable with some preemption?" 114 (no: do not admit session); preempt as few lower-criticality sessions as possible 116; expand QoS of all selected sessions 118; session departure]
`
`
`
[Sheet 5 of 7: FIG. 8, preemption diagram of criticality levels: highest criticality level, candidate session, schedulable criticality level, lowest criticality level. FIG. 7, flow chart of the SHRINK QoS block, stepping through criticality levels h+1, h, ..., m, ..., 1 (blocks 200-206). FIG. 9, flow chart fragment of the PREEMPT block: "sessions above level (a+b)/2 schedulable?"]
`
`
`
[Sheet 6 of 7: FIG. 10, flow chart of the EXPAND QoS block: sort all selected sessions and place them in a circular list 400; "will reducing CLF of Sb by 1 satisfy resource constraints?"; remove Sb from list (blocks 410-418)]
`
`
`
[Sheet 7 of 7: FIG. 11, state diagram showing three possible states of a session]
`
`
`
`CRITICALITY AND QUALITY OF SERVICE
`(QOS) BASED RESOURCE MANAGEMENT
`
`This invention was made with Government support
`under Contract F30602-93-C-0172 awarded by the Air
`Force. The Government has certain rights in this invention.
`
`RELATED APPLICATIONS
`The present invention is related to the invention disclosed
`in U.S. patent application Ser. No. 08/827,536, filed Mar. 28,
`1997 (H16-16415).
`TECHNICAL FIELD OF THE INVENTION
The present invention is directed to local resource management of local data processing resources, which may include, for example, a CPU, memory, disk I/O, a video CODEC unit, a display unit, and/or the like.
`
`
`BACKGROUND OF THE INVENTION
Continuous multimedia applications are being developed for entertainment (e.g., video-on-demand services), for office automation (e.g., video conferencing), for crisis management, for command and control, and the like. In these continuous multimedia applications, video, audio, and/or image streams are processed within a node and between nodes of a data processing system.

Some continuous multimedia applications are mission-critical and some are not. For example, the continuous multimedia applications being developed for entertainment (e.g., video-on-demand services), for office automation (e.g., video conferencing), and the like, are not particularly mission-critical. By contrast, the continuous multimedia applications being developed for crisis management, for command and control, and the like, are often mission-critical. Mission-critical continuous multimedia applications are becoming increasingly important.

Mission-critical continuous multimedia applications have at least three unique characteristics: they are criticality driven, they are dynamic, and they operate in real time. With respect to the first of these unique characteristics, media streams in mission-critical continuous multimedia applications may be associated with an attribute of criticality. Criticality is an indication of the importance of a particular application being executed at a given time, and is assigned to the application by a system administrator (or mediator) who reviews all applications to determine the criticality differences between them. For instance, an application which is performing periodic image-capturing and flaw detection in a process control can be more important than an application that monitors floor activities in a controlled plant. Consequently, the periodic image-capturing and flaw detection stream is assigned a higher criticality level by the system administrator than is the video stream relating to the monitored floor activities. In order to support different criticality levels, the data processing system which processes such media streams must be criticality cognitive and must be able to support plural critical multimedia data streams in the presence of multiple service requests.

With respect to the second of these unique characteristics, mission-critical continuous multimedia applications are often dynamic and may vary greatly in their demands on the local resources of the data processing system. In digital battlefield management, for example, detection of a mobile target may trigger a sequence of reactions, such as video monitoring, infrared tracking, image library retrieval for
target matching and recognition, media data fusion and filtering, and command and control. Such dynamic demands on the local resources of the data processing system are not predictable a priori and, therefore, require applications to negotiate on line for, and adapt to, the available local resources, which may include disk I/O bandwidth, CPU cycles, memory space, video compression/decompression capacity, and the like. Without sufficient resources and proper resource management, multimedia streams may lose their data or timeliness in a random fashion, causing application malfunction.

With respect to the third of these unique characteristics, mission-critical continuous multimedia applications must operate according to a guaranteed latency and data flow rate. Latency is the end-to-end delay from the time when the very first media unit is produced at a stream source to the time it reaches a stream destination. Rate is the number of media data units per second that are processed by a processing node.

The present invention is directed to a local resource management arrangement that manages a node's local resources which are required to execute one or more applications. In more detailed aspects of the present invention, a local resource management arrangement manages a node's local resources that are required to execute, in real time, criticality-driven and dynamic applications.
`SUMMARY OF THE INVENTION
According to one aspect of the present invention, a resource manager for managing a resource comprises QoS shrinking means, preempting means, and QoS expanding means. The resource executes executing sessions, and the resource receives an arriving session. The executing sessions have corresponding QoS's, and the arriving session has a QoS. The QoS shrinking means shrinks the QoS's of the executing sessions and the QoS of the arriving session. The preempting means preempts, as necessary, the executing sessions in order to execute the arriving session. The QoS expanding means expands the QoS's of the executing sessions remaining after preemption.
According to another aspect of the present invention, a method is provided to manage a resource. The resource executes executing sessions, the resource has a resource capacity constraint, and the resource receives an arriving session. Each of the executing sessions has a QoS and a criticality level, and the arriving session has a QoS and a criticality level. The method comprises the following steps: a) shrinking the QoS's of the executing sessions and the QoS of the arriving session; b1) admitting the arriving session without preemption if the executing sessions and the arriving session do not exceed the resource capacity constraint of the resource; b2) if the arriving session and the executing sessions having criticality levels higher than the criticality level of the arriving session do not exceed the resource capacity constraint of the resource, but the arriving session and all executing sessions exceed the resource capacity constraint of the resource, preempting, as necessary, the executing sessions which have criticality levels that are lower than the criticality level of the arriving session; b3) denying admission of the arriving session if the arriving session and all of the executing sessions having criticality levels higher than the criticality level of the arriving session exceed the resource capacity constraint of the resource; and c) expanding the QoS's of sessions remaining following steps b1), b2), and b3).
According to yet another aspect of the present invention, a system comprises a plurality of resources, an operating system, a plurality of resource schedulers, and a resource manager. The operating system is arranged to operate the resources. Each resource scheduler schedules access to a corresponding resource. The resource manager receives resource capacity constraints from the resource schedulers, the resource manager manages the resources in order to execute sessions within the resource capacity constraints, and the resource manager sits on top of the operating system.

According to still another aspect of the present invention, a resource manager manages a session according to a criticality level, a timing requirement, and a QoS of the session.

According to a further aspect of the present invention, a method of initiating on-line QoS negotiation and adaptation by a user comprises the following steps: a) specifying a criticality level for a session; b) specifying a timing requirement for the session; c) specifying a QoS for the session; and d) after partial execution of the session, re-specifying at least one of the criticality level, the timing requirement, and the QoS for the session.
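The session parameters named in steps a) through d) above can be sketched as a simple record. This is an illustrative sketch only: the class, field names, and method are our assumptions, not the patent's implementation.

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Illustrative session record (names are assumptions, not the patent's)."""
    criticality: int   # assigned by the system administrator
    rate: float        # media data units per second (lambda)
    latency: float     # tolerable end-to-end delay L, in seconds
    clf: int           # consecutive loss factor, 0 <= clf <= clf_max
    clf_max: int       # maximum tolerable CLF for this session

    def respecify(self, criticality=None, rate=None, latency=None, clf=None):
        """Step d): re-specify any subset of the parameters on line,
        after partial execution of the session."""
        if criticality is not None:
            self.criticality = criticality
        if rate is not None:
            self.rate = rate
        if latency is not None:
            self.latency = latency
        if clf is not None:
            # The CLF must stay within the user-specified range [0, CLF_max].
            if not 0 <= clf <= self.clf_max:
                raise ValueError("CLF must stay within [0, CLF_max]")
            self.clf = clf
```

A caller would construct the record with the administrator-assigned criticality and the user-specified timing and QoS, then call `respecify` during execution as resources change.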
`
`BRIEF DESCRIPTION OF THE DRAWINGS
These and other features and advantages of the present invention will become more apparent from a detailed consideration of the invention when taken in conjunction with the drawings in which:

FIG. 1 is a block diagram of a distributed data processing system having a plurality of processing nodes according to the present invention;

FIG. 2 is a block diagram of a typical processing node, such as a processing node 12, of the distributed data processing system illustrated in FIG. 1;

FIG. 3 is a timing diagram illustrating an example of a consecutive loss factor (CLF), which defines quality of service (QoS) and which is used, along with timing and criticality, by a node-wide resource manager of the processing node illustrated in FIG. 2;

FIG. 4 is a coordinate system illustrating the orthogonality of timing, criticality, and QoS;

FIG. 5 is a resource management mechanism illustrating the flow of sessions into and out of, and the execution of sessions within, the local data processing resources of the processing node illustrated in FIG. 2;

FIG. 6 is a flow chart representing the management of local resources by the node-wide resource manager for the local processing of applications by the processing node illustrated in FIG. 2;

FIG. 7 is a flow chart representing the SHRINK QoS block of FIG. 6;

FIG. 8 is a diagram useful in explaining preemption;

FIG. 9 is a flow chart representing a portion of the PREEMPT block of FIG. 6;

FIG. 10 is a flow chart representing the EXPAND QoS block of FIG. 6; and

FIG. 11 is a state diagram showing three possible states of a session.
`
`DETAILED DESCRIPTION
A distributed data processing system 10, which provides an exemplary environment for the present invention, is illustrated in FIG. 1. The distributed data processing system 10 includes a plurality of processing nodes. Each processing node of the distributed data processing system 10 manages its own local resources so that processing of applications may be distributed, as needed. For purposes of describing the present invention, it may be assumed that all processing nodes have a similar resource manager architecture, so that only one processing node, such as the processing node 12, is illustrated in detail in FIG. 2.

The processing node 12, as shown in FIG. 2, includes a node-wide resource manager 14, which accepts certain inputs, described below, from, and which interacts with, a CPU scheduler 16, a disk I/O scheduler 18, a buffer manager 20, and a window/video manager 22. The CPU scheduler 16, the disk I/O scheduler 18, the buffer manager 20, and the window/video manager 22 may be provided by, or operate on top of, such operating systems as Lynx(TM), Solaris(TM), or Windows NT(TM) in order to schedule access, respectively, to a CPU resource, a disk I/O resource, a buffer memory resource, and a window/video processing resource. These resources are managed by the node-wide resource manager 14.
The operating system 24 functions to provide system primitive services, such as setting the priority of threads and preempting and executing threads. For example, these services may be provided through the POSIX(TM) standard operating system interface. The node-wide resource manager 14 as described herein sits on top of the operating system 24. Also, the CPU scheduler 16, the disk I/O scheduler 18, the buffer manager 20, and the window/video manager 22 may sit on top of the operating system 24, or they may reside within the operating system 24. Accordingly, the node-wide resource manager 14 of the present invention does not require redesign of the operating system 24. Similarly, the node-wide resource manager 14 of the present invention does not require redesign of the CPU scheduler 16, the disk I/O scheduler 18, the buffer manager 20, or the window/video manager 22, but merely accepts certain inputs from them.
A mission-critical multimedia application to be processed by the processing node 12 can be characterized by three factors: timing, quality of service (QoS), and criticality. Timing may be characterized by rate (lambda) and latency (L). As described above, rate (lambda) is defined as the number of media data units per second that are processed by the processing node 12. For example, if the processing node 12 processes video data, a media data unit may be a video frame, and the rate (lambda) may be specified at thirty frames per second, which is standard for the transmission of conventional television signals in the United States. Latency (L) is the tolerable end-to-end delay from the time when the very first media unit is produced at a stream source to the time it reaches a stream destination. Rate (lambda) and latency (L) are specified by the application user. An application user is a user of the distributed data processing system 10 who desires execution of an application.

QoS specifies the degree of service quality expected for the application from the underlying computer system. QoS may be defined in terms of a consecutive loss factor (CLF). QoS and CLF are inversely related so that, as the consecutive loss factor CLF goes up, the quality of service QoS goes down, and so that, as the consecutive loss factor CLF goes down, the quality of service QoS goes up. CLF is the number of consecutive data units which may be dropped between every two processing units.

FIG. 3 illustrates an example of the consecutive loss factor (CLF). In this example, only one in three media data units (such as image frames) are being processed. Thus, two of every three data units are being dropped. Accordingly, the consecutive loss factor (CLF) as shown in FIG. 3 is 2.

The application user specifies the CLF of an application that the application user desires to be executed so that the specified CLF is in the range [0, CLF_max], where CLF_max is the maximum number of consecutive data units which may be dropped between every two processing units. At run time, the application being processed by the processing node 12 may adapt its CLF between 0 and CLF_max, depending on the availability of system resources. The application user, also, may re-specify, on line, the CLF within the range [0, CLF_max], depending on availability of system resources.
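The relation between CLF and the data rate actually processed can be sketched as follows. This is a minimal illustration; the function name is ours, not the patent's.

```python
def effective_rate(stream_rate, clf):
    """Media data units actually processed per second.

    With a consecutive loss factor of CLF, CLF consecutive units are
    dropped between every two processed units, so one unit out of every
    (1 + CLF) is processed.
    """
    return stream_rate / (1 + clf)

# FIG. 3 example: CLF = 2 means one of every three units is processed,
# so a 30-unit-per-second stream is processed at 10 units per second.
```

Shrinking QoS (raising CLF toward CLF_max) thus lowers a session's demand on every resource, which is what makes the QoS negotiation described below possible.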
Alternatively, QoS may be defined in terms other than the consecutive loss factor (CLF). For example, QoS may be defined in terms of a JPEG Quantization Factor (QFactor).

Criticality refers to the degree of application importance among concurrent applications. Importance may be throughput importance, economic importance, security importance, or the like. When not all applications can be processed by the processing node 12, applications having lower criticality levels are preempted in favor of applications having higher criticality levels. A criticality level is determined and assigned by a system administrator, who administrates the applications submitted to the processing node 12 for processing, and not by the application user who wishes to launch an application. If a criticality level were assigned by an application user launching an application, most applications would be given the highest possible criticality level by the application user, so that preemption would not be meaningful. After the criticality level is determined and assigned by the system administrator, the application user inputs the assigned criticality level.

FIG. 4 illustrates that timing, criticality, and QoS are orthogonal to each other, i.e., the application user and the system administrator may specify any of these independently of the others. For example, a high-rate application may have low criticality and/or a low QoS requirement, and so on. System resources should be allocated and scheduled so that the applications' timing constraints are met, so that the QoS's of the applications are maximized, and so that the number of high-criticality applications being processed is maximized.
A resource manager mechanism for concurrent sessions processed by the node-wide resource manager 14 is illustrated in FIG. 5. Concurrent sessions are all of the sessions in the processing node 12, whether they are in a state of execution or not. At any point in time, arriving sessions and/or preempted (blocked) sessions may be maintained in a criticality-ordered waiting queue 30. A session is an internal system activity defined by an execution behavior of a continuous multimedia application. As is known, a session consists of a set of system entities (e.g., producer threads, consumer threads, buffers) that form an execution path of the multimedia data flow between a producer process and a consumer process. Accordingly, a session may demand a certain amount of disk I/O bandwidth for storage access, memory space for buffering, CPU cycles for media data processing, and/or video processing bandwidth.

When a behavior which defines a session changes, a new session begins. For example, video which is transmitted at 30 frames per second may define one session behavior. When the transmission rate of the video is changed to 20 frames per second by an application user, a new session is started, and the old session ends. Accordingly, any one continuous multimedia application may include several sessions during its execution.
`
Sessions in the criticality-ordered waiting queue 30 are not being currently processed by the local resources of the processing node 12. These local resources include a CPU resource 32, a disk I/O resource 34, a buffer resource 36, and a window/video processing resource 38. The CPU resource 32 is scheduled by the CPU scheduler 16, the disk I/O resource 34 is scheduled by the disk I/O scheduler 18, the buffer resource 36 is scheduled by the buffer manager 20, and the window/video processing resource 38 is scheduled by the window/video manager 22.

If the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and/or the window/video processing resource 38 have excess capacity in order to execute an additional session, the node-wide resource manager 14 dispatches a session from the criticality-ordered waiting queue 30 to a CPU resource queue 40 corresponding to the CPU resource 32, to a disk I/O queue 42 corresponding to the disk I/O resource 34, to a buffer queue 44 corresponding to the buffer resource 36, and to a window/video queue 46 corresponding to the window/video processing resource 38, as appropriate. The sessions in the CPU resource queue 40, the disk I/O queue 42, the buffer queue 44, and the window/video queue 46 are to be executed by the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and the window/video processing resource 38.
If the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and/or the window/video processing resource 38 do not have excess capacity in order to execute an additional session, the node-wide resource manager 14 conducts an automatic QoS negotiation within the QoS range [0, CLF_max] of the executing sessions. During this QoS negotiation, the QoS's of all executing sessions and of the additional session are shrunk. If this QoS shrinking frees up sufficient capacity of the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and/or the window/video processing resource 38 to permit execution of the additional session, the additional session is admitted for execution. However, if there is still insufficient capacity after QoS shrinking, and if the additional session has a higher criticality level than one or more of the executing sessions (i.e., sessions being executed by the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and the window/video processing resource 38), enough executing sessions are preempted, if possible, until there is sufficient free capacity of the CPU resource 32, the disk I/O resource 34, the buffer resource 36, and/or the window/video processing resource 38 to permit execution of the additional session. The additional session is then admitted for execution. (Session preemption differs from thread (or process) preemption in traditional operating systems in that session preemption involves preemption of an executing session from multiple resources, whereas thread preemption involves preemption of a thread from a single resource.) On the other hand, if the additional session cannot be admitted even with preemption, then no executing sessions are preempted and the additional session is not admitted for execution.
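The admission decision just described (shrink QoS, then admit, preempt, or deny) can be sketched as follows. This is an illustrative sketch, not the patent's implementation: the session objects, their attribute names, and the fits() capacity predicate are all assumptions, and sessions of equal criticality are treated here as non-preemptable.

```python
def try_admit(arriving, executing, fits):
    """Return the set of sessions that remain admitted.

    `fits(sessions)` is an assumed predicate that returns True when the
    listed sessions satisfy every resource capacity constraint at their
    current (shrunk) QoS.
    """
    # Automatic QoS negotiation: shrink every session's QoS by raising
    # its CLF to CLF_max, minimizing each session's resource demand.
    for s in executing + [arriving]:
        s.clf = s.clf_max

    # Admit without preemption if everything now fits.
    if fits(executing + [arriving]):
        return executing + [arriving]

    # Feasibility check: if even the sessions at least as critical as
    # the arrival, plus the arrival, do not fit, deny admission and
    # leave the executing sessions untouched.
    protected = [s for s in executing if s.criticality >= arriving.criticality]
    if not fits(protected + [arriving]):
        return executing

    # Otherwise preempt as few lower-criticality sessions as possible,
    # least critical first; preempted sessions would rejoin the
    # criticality-ordered waiting queue 30.
    lower = sorted((s for s in executing if s.criticality < arriving.criticality),
                   key=lambda s: s.criticality, reverse=True)
    while not fits(lower + protected + [arriving]):
        lower.pop()  # drop the least critical remaining session
    return lower + protected + [arriving]
```

Whichever branch is taken, the QoS's of the sessions that remain would then be expanded again toward their user-specified levels (the EXPAND QoS step of FIG. 6), which is omitted from this sketch.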
When a session departs from the CPU resource queue 40, the disk I/O queue 42, the buffer queue 44, and/or the window/video queue 46 because of preemption, the preempted session is removed from the CPU resource queue 40, the disk I/O queue 42, the buffer queue 44, and the window/video queue 46, and is added to the criticality-ordered waiting queue 30. When a session departs from the CPU resource queue 40, the disk I/O queue 42, the buffer queue 44, and/or the window/video queue 46 because the session has been completely executed, the executed session is simply removed from the CPU resource queue 40, the disk I/O queue 42, the buffer queue 44, and the window/video queue 46.
The CPU scheduler 16, the disk I/O scheduler 18, the buffer manager 20, and the window/video manager 22 provide the following corresponding inputs: C_max, D_max, M_max, and V_max. These inputs describe the capacities of the resources that the corresponding schedulers/managers manage, and are used by the node-wide resource manager 14 in order to establish the criteria which the node-wide resource manager 14 uses to determine whether an arriving session is schedulable. Specifically, for the disk I/O scheduler 18, commercial disk subsystems usually provide I/O scheduling support, often with a SCAN algorithm, at the SCSI controller level. Thus, in order to reduce disk head movement overhead and guarantee a bounded access time, the node-wide resource manager 14 employs a simple interval-based I/O access policy. Accordingly, let

    L = Min(L_i), where 1 <= i <= n    (1)

where L_i is the latency tolerable by each of the executing sessions relating to all of the applications being executed by the processing node 12 at a given time. Accordingly, L is the time interval for scheduling of concurrent media streams. If it is assumed that the amount of contiguous data that the disk I/O resource 34 can transfer in one second is D_max, and that the average disk seek time for serving each I/O request within L is S, then, during the time interval L, the effective transfer time is L - nS. Therefore, the n sessions can be schedulable only if

    Sum(i=1 to n) x_i u_i <= (L - nS) D_max    (2)

where x_i = ceil(r_i L), where u_i is the size of one data unit, and where the quantity D_max and the average disk seek time S are constraints of the disk I/O resource 34. The quantity r_i for session i may be determined from the following equation:

    r_i = lambda_i / (1 + CLF_i)    (3)

where lambda_i and CLF_i are specified by the application user for an application corresponding to session i, and where CLF_i is in [0, CLF_max]. The equation (2) may be rewritten according to the following equation:

    Sum(i=1 to n) x_i u_i + n S D_max <= L D_max    (4)

For the CPU scheduler 16, all threads are periodic in nature. Furthermore, thread access to

    Sum(i=1 to n) (e_i r_i + e'_i x_i / L) <= C_max    (5)

where e_i is the execution time of the CPU for processing one unit of media data and e'_i is the execution time of the CPU for a disk I/O operation. C_max is the maximum cycle rate of the CPU.

With respect to the window/video manager 22, the n sessions being executed may deliver video frames at the aggregate rate defined by the following expression:

    Sum(i=1 to n) r_i    (6)

If V_max is the maximum video rate supportable by the window/video processing software of the window/video processing resource 38, then n sessions may be schedulable if the following expression is satisfied:

    Sum(i=1 to n) r_i <= V_max    (7)
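The disk I/O test of equations (1) through (4) and the window/video test of equations (6) and (7) can be sketched as follows. This is an illustrative sketch under the stated definitions; the per-session dictionary keys and function names are our assumptions.

```python
import math

def disk_schedulable(sessions, d_max, seek):
    """Equations (1)-(4): interval-based disk I/O admission test.

    sessions: list of dicts with keys "rate" (lambda_i), "clf" (CLF_i),
    "latency" (L_i, seconds), and "unit_size" (u_i, in the same data
    units as d_max per second). d_max is D_max; seek is the average
    disk seek time S per I/O request.
    """
    n = len(sessions)
    L = min(s["latency"] for s in sessions)           # equation (1)
    demand = 0.0
    for s in sessions:
        r = s["rate"] / (1 + s["clf"])                # equation (3)
        x = math.ceil(r * L)                          # x_i = ceil(r_i L)
        demand += x * s["unit_size"]
    # Equation (4): data moved plus seek overhead within interval L.
    return demand + n * seek * d_max <= L * d_max

def video_schedulable(sessions, v_max):
    """Equations (6)-(7): aggregate delivered frame rate within V_max."""
    return sum(s["rate"] / (1 + s["clf"]) for s in sessions) <= v_max
```

The node-wide resource manager would apply every such per-resource test; a session is schedulable only if all of them hold at its current CLF.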