(12) United States Patent
Bhat et al.

(10) Patent No.: US 6,279,039 B1
(45) Date of Patent: *Aug. 21, 2001

(54) RESOURCE MANAGEMENT METHOD AND APPARATUS FOR MAXIMIZING MULTIMEDIA PERFORMANCE OF OPEN SYSTEMS

(75) Inventors: Kabekode V. Bhat; Amalesh C. R. Sanku, both of Naperville, IL (US)

(73) Assignee: NCR Corporation, Dayton, OH (US)

(*) Notice: This patent issued on a continued prosecution application filed under 37 CFR 1.53(d), and is subject to the twenty year patent term provisions of 35 U.S.C. 154(a)(2).

Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 08/627,192

(22) Filed: Apr. 3, 1996

(51) Int. Cl.7 ..................... G06F 13/38; G06F 15/17
(52) U.S. Cl. ........................... 709/226; 709/229
(58) Field of Search ................. 395/200.33, 200.47, 200.48, 200.49, 200.53, 200.54, 200.56, 200.31, 200.3, 821, 827, 672, 674, 675, 676, 677, 184.01; 709/203, 229, 217, 218, 219, 223, 224, 226, 201, 200, 102, 104, 105, 106, 107; 710/1, 7

(56) References Cited

U.S. PATENT DOCUMENTS

5,321,605 * 6/1994 Chapman et al. ..................... 705/7
5,440,719 * 8/1995 Hanes et al. ..................... 395/500
5,461,611 * 10/1995 Drake et al. .................... 370/420
5,491,690 * 2/1996 Alfonsi et al. .................. 370/404
5,550,982 * 8/1996 Long et al. .................. 395/200.49
5,572,694 * 11/1996 Uchino ........................... 395/406
5,606,359 * 2/1997 Youden et al. ...................... 348/7
5,630,067 * 5/1997 Kindell et al. ............... 395/200.61

OTHER PUBLICATIONS

Dogac et al., "A Generalized Expert System for Database Design", IEEE, 1989.*
Chung et al., "A Relational Query Language Interface to a Hierarchical Database Management System," IEEE, 1989.*
Multimedia Laboratory Research Papers, http://www-cse.ucsd.edu/groups/multimedia/papers.html, Jan. 1, 1997.*
The Tenet Group Papers, http://tenet.berkeley.edu/tenet-papers.html, Jan. 1, 1996.*

(List continued on next page.)

Primary Examiner-Mark H. Rinehart
(74) Attorney, Agent, or Firm-Gates & Cooper

(57) ABSTRACT

A computer-implemented system performance model based resource management method and apparatus for dynamically guaranteeing delivery of specified constant bit rate multimedia data for a specified duration to each accepted multimedia client request, is disclosed. The method can be executed on any open system server or electronic device that selectively connects upstream multimedia information channels or storage subsystems over a data communication path to an arbitrary number of downstream NTSC ports or devices individually requiring data delivered at a specified bit rate for a specified duration. The method includes dynamically tracking the utilization and capacity of key resources of the server system as they are used by existing clients of the system, receiving and analyzing new requests for their impact if granted upon the performance of the server in servicing the existing clients, and granting the new request only if the analysis shows that such grant will guarantee that the server can simultaneously service the needs of both the new request and the existing clients. An open multimedia server system having an admission controller operating according to the described method is also disclosed.

14 Claims, 2 Drawing Sheets

OTHER PUBLICATIONS

Ferrari, "A New Admission Control Method for Real-time Communication in an Internetwork", in S. Son, Ed., Advances in Real-Time Systems, Ch. 5, pp. 105-116, Prentice-Hall, Englewood Cliffs, NJ, Jan. 1, 1995.*
Gemmell et al., "Multimedia Storage Servers: A Tutorial and Survey", IEEE Computer, Jan. 1, 1995.*
Haskin, R.L., "The Shark continuous-media file server," COMPCON Spring '93, Digest of Papers, IEEE Comput. Soc. Press, Feb. 1993.*
Vin, H.M. et al., "An Observation-Based Admission Control Algorithm for Multimedia Servers," Proceedings of the International Conference on Multimedia Computing and Systems, IEEE Comput. Soc. Press, pp. 234-243, Jan. 1994.*
Vin, H.M. et al., "A statistical admission control algorithm for multimedia servers," Multimedia 94, ACM, Oct. 1994.*
Chang et al., "An open-systems approach to video on demand," IEEE Communications Magazine, May 1994.*
Deloddere et al., "Interactive video on demand," IEEE Communications Magazine, May 1994.*
Liu, C.L. and Layland, J.W., "Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment", Journal of the Association for Computing Machinery, vol. 20, no. 1, Jan. 1973, pp. 46-61.*
Ferrari, D. and Verma, D.C., "A scheme for real-time channel establishment in wide-area networks," IEEE Journal on Selected Areas in Communications, vol. 8, issue 3, pp. 368-379, Apr. 1990.*
Lougher, P. and Shepherd, D., "The design of a storage server for continuous media," The Computer Journal, vol. 36, no. 1, Jan. 1993, pp. 32-42.*
Vin, H.M. and Rangan, P.V., "Designing a multiuser HDTV storage server," IEEE Journal on Selected Areas in Communications, vol. 11, issue 1, pp. 153-164, Jan. 1993.*

* cited by examiner

[FIG. 1 (Sheet 1 of 2): One embodiment of the multimedia server system. Labels visible in the drawing include a PENTIUM CPU and MEMORY on a 400 MB/s EXTENDED EXPRESS BUS, PCI bridges (PCIB) to PCI BUS 1, PCI BUS 2 and an EISA BUS, SCSI1-SCSI4 controllers, ATM interfaces, a KxK ATM SWITCH with ADDITIONAL ATM SWITCHES, clients (1,1)-(1,7) through (4,1)-(4,7), and reference numerals 10, 12-23 and 25.]

[FIG. 2 (Sheet 2 of 2): Dominant resource graph associated with the system of FIG. 1. Labels visible in the drawing include CPU, PCI1, PCI2, ATM, and vertices numbered 1 through 12.]

RESOURCE MANAGEMENT METHOD AND APPARATUS FOR MAXIMIZING MULTIMEDIA PERFORMANCE OF OPEN SYSTEMS

CROSS-REFERENCE TO RELATED APPLICATION

The disclosures of copending U.S. patent application Ser. No. 08/624,337, entitled "Predictable Diverse Data Delivery Enablement Method and Apparatus for ATM Based Computer System," filed on even date herewith, are herein fully incorporated by reference.

FIELD OF THE INVENTION

The present invention relates generally to multimedia communication systems, and more particularly to a system performance model based resource management system for use with a multimedia server that dynamically guarantees delivery of service to accepted multimedia clients.

DESCRIPTION OF RELATED ART

Multimedia (MM) communication is a high fidelity, high productivity technological means for people to confer or to gather information wherever they are and whenever they are in need, using the media of their choice. It is a technological attempt to emulate the bandwidth, fidelity and effectiveness that are present in face-to-face communication. The advent of high performance, low cost microprocessors, memory systems, redundant arrays of inexpensive disk storage technology and high bandwidth I/O buses, coupled with the demand for multimedia communication, is resulting in computers being an integral part of global communication systems. With the marriage of advanced technologies of computers and communication networks, people can get the information they need in any form, when and where they need it. These technologies facilitate activities such as watching a video of one's own choice on demand, or receiving interactive audiovisual instructions for repairing a broken machine from an expert located at a remote site.

The ability to provide a service to customers as agreed and to meet their expectations is vital for success in a competitive business such as communications and computers. In the communication arena, ATM technology by design provides quality of service (QOS). QOS here is defined by guarantees on the bandwidth, the loss of frames and the delay given to network customers. Although considerable research has been done in specific MM areas, the issue of how to provide "guaranteed" quality of service (GQOS) in MM communication involving both computers and communication networks is not yet completely understood. One method to achieve GQOS is to incorporate an admission control strategy where new jobs are turned down based on some criteria.

Computers typically are configured to accurately complete specific data processing tasks within an average response time acceptable to their customers. Given an understanding of the application processing scenario on the system, the performance, capacity and reliability characteristics of the major system components under that scenario are adequate to design a good configuration meeting those needs. However, in MM computing, where certain data types such as video or audio must be delivered to the clients at the rate they require, the traditional approaches are not adequate. In MM applications, accurate delivery of data from the computer to its client alone is not enough; it must be done at the rate needed by the client or meet the specified deadline.

Users of MM clients (such as desktop computers or other electronic or electromechanical devices) require uninterrupted delivery of MM information to these devices from open servers at a constant bit rate for a specific duration, as needed by the clients. Current open system servers do not have the capability to dynamically guarantee MM data delivery for new service requests. This leads either to observable failure of service to one or more MM clients of the MM server or to substantial underutilization of system resources. The former results in customer dissatisfaction, and the latter in a reduced performance/price ratio.

The present invention addresses the above shortcomings of prior art MM communication systems. The methodology presented by the present invention is generally applicable to different configurations of open computer systems and provides a basis for realizing GQOS in MM communication systems.

SUMMARY OF THE INVENTION

To overcome the limitations in the prior art described above, and to overcome other limitations that will become apparent upon reading and understanding the present specification, this invention discloses a computer-implemented system performance model based resource management algorithm and method that dynamically guarantees delivery of specified constant bit rate MM data for a specified duration to each accepted MM client request. The system of this invention executes on any open system server (or an electronic device) that is connected to either an MM information storage subsystem (such as disks, RAIDs, optical arrays, etc.) or to an upstream MM information channel, and whose output channels are connected to an arbitrary number of NTSC ports or other devices requiring a specified constant bit rate for a specified duration, by means of a data communication path such as a bus hierarchy or ATM connections.

According to one aspect of the invention, the algorithm embodied in this system enables each of the NTSC ports to be fed with constant bit rates of information for a specified time interval from the upstream information sources, as desired by the customers. The algorithm is configured to operate in response to an MM server system performance model that predicts the number of streams that the system can dynamically support. The system of this invention enables open system MM servers to guarantee delivery of information to clients dynamically with no buffer overflows or starvation.

According to one implementation of the algorithm of the inventive system, whenever a new service request arrives at the MM server, the system recognizes the constant bit rate needs (based on the characteristics of the client) and the duration for which the client needs the guaranteed continuous service. The system maintains a table or database that dynamically keeps track of the utilization and capacity of key system resources such as the CPU(s), disks, MM data, memory, system bus and data input/output path bandwidths.

According to one implementation of the algorithm of the present invention, the following steps are practiced:

(1) If the request is for terminating or cancellation of MM service, the algorithm responds to the request and updates the appropriate table entries (utilization of key resources) reflecting the release of the resources, and then continues to step (5). If it is a new service request, then proceed to step (2).

(2) From a table look-up, the algorithm checks if key resources are available for the duration. If they are, then proceed to step (3). Otherwise, deny this request as there is no way to meet it, and proceed to step (5).

(3) Determine if granting the request would bottleneck any of the key resources currently being used by existing clients. If the answer is yes, the new request is turned down (since to satisfy the request, one or more users would fail to get service from the server), and step (5) is executed. Otherwise, proceed to step (4).

(4) The algorithm invokes an MM performance prediction model logic using the current values for the model parameters to predict if the system can guarantee the constant bit rate delivery to the new request for the specified period while meeting all the existing performance guarantees for the current users. If the answer is yes, appropriate resources are granted to the new request, and the table entries are updated to reflect the addition of the new user. If the answer is no, the request is turned down (to avoid failure to meet the requirements of one or more users) and step (5) is executed.

(5) Wait for the next request arrival, or completion of service for one of the current users. If a service is successfully completed, then proceed to step (1). If a new request arrives, go to step (2).

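Viewed as control flow, steps (1) through (5) form a dispatch loop around an admission test. The Python sketch below is a minimal rendering of that loop, assuming a hypothetical table object whose methods (release, resources_available, would_bottleneck, model_guarantees, grant) stand in for the table look-ups and performance-model invocation described above; it is illustrative only and not the claimed implementation.

def serve(requests, table):
    """Dispatch loop corresponding to steps (1)-(5) above; 'table' stands in
    for the table of key-resource utilizations and capacities."""
    for req in requests:                             # step (5): wait for the next event
        if req["type"] in ("terminate", "cancel", "completed"):
            table.release(req)                       # step (1): release resources, update table entries
        elif not table.resources_available(req):     # step (2): table look-up for the requested duration
            deny(req, "key resources unavailable for the duration")
        elif table.would_bottleneck(req):            # step (3): would the grant bottleneck a key resource?
            deny(req, "grant would bottleneck a key resource")
        elif table.model_guarantees(req):            # step (4): performance-model prediction succeeds
            table.grant(req)                         # update table entries for the new user
        else:
            deny(req, "constant bit rate guarantee cannot be met for all users")

def deny(req, reason):
    """Placeholder rejection hook; a real server would notify the requesting client."""
    print("denied", req.get("client"), ":", reason)
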
BRIEF DESCRIPTION OF THE DRAWING

Referring to the Drawing, wherein like reference designations represent corresponding parts throughout the several views:

FIG. 1 diagrammatically illustrates one embodiment of a multimedia system to which the present invention can be applied; and

FIG. 2 diagrammatically illustrates the dominant resource graph associated with the system of FIG. 1.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

A meaningful definition for QOS in MM communications must encompass the entire solution architecture rather than be limited to the disk I/O, the server or the network. Taking the lead from the business computing and communication arena, it is reasonable for an MM customer to desire: (a) response time for continuous media requests to be bounded by a reasonable time; and (b) service for all admitted clients to be continuous for a period with the requested quality in terms of Mbit rates, delay and jitter.

The QOS for MM is determined by the response time and data delivery rate at the client. This is influenced by the processing scenario within the system incorporating the information source (e.g., a disk subsystem), network connections from other servers or clients, and the software. GQOS within a system is possible under the following conditions:

1. the system is configured properly, meeting the customer's stated needs;

2. admission control is based on current values of (monitored or model predicted) performance parameters for key components and on the impact of admitting a new request on the QOS of new and existing requests; and

3. there is protection against viruses and misbehaving applications whose resource demands hurt QOS for all.

An obvious solution is to design servers that provide GQOS on their part and couple them with an ATM network that has QOS built in by design, so that the entire system provides GQOS for MM customers. However, proprietary real time computers are more expensive than off-the-shelf open systems. Further, users might prefer emerging
open system computers that include symmetric multiprocessing, highly parallel architectures, operating systems, databases, high availability and other technologies. The present invention provides an approach that enables an open system server to support GQOS.

Admission control has received considerable attention in the art. One proposed technique provides an observation based method that assumes that the average amount of time spent in retrieving a media block from disk does not change significantly even after a new client is admitted by the server. Requiring a disk subsystem operating under this assumption, however, represents a major restriction. In a statistical admission control algorithm, distributions of access times of media blocks from disk and the play-back requirements of streams are used to provide a statistical service guarantee to each client. Such a method assumes that occasional loss of media information is acceptable and that a percentage of frames that may lead to brief distortions in play back can be discarded. The present invention resolves the shortcomings of such prior art approaches.

The present invention provides an algorithmic method for providing GQOS. A premise of this invention is that once a task is assigned resources, it must be allowed to continue securing GQOS. Each new user request is evaluated to assure that allocating resources to it will not affect the GQOS for the user requests currently being served. This is done by an open queuing network based performance model of the overall MM system. The performance modeling determines the maximum number of streams the entire system can support and the response time for a typical user command for such a system. For a discussion and analysis of such performance modeling for an ATM based diverse data enabled server of the general configuration that will be hereinafter described with respect to illustrating practice of the methodology of this invention, the reader is referred to my copending cross-referenced U.S. patent application, Ser. No. 08/624,337, filed on even date herewith and entitled "Predictable Diverse Data Delivery Enablement Method and Apparatus for ATM Based Computer System," which is herein incorporated by reference. The model, incorporated as a software driver, mimics the effect of allocating the resource to the new user request by updating the bandwidths and utilization of relevant components, and assures that the end-to-end data transfer delays experienced by each of the current users remain within bounds so as not to affect the QOS for all users. The present algorithm denies resources to the request if GQOS for the existing users would be compromised.

In the following description, the model and data structures needed by the algorithm of this invention are described using graph theory and queuing networks. The resource management algorithm of this invention is then developed and its complexity is determined. Application and illustration of the methodology of the algorithm, as used in middleware of an MM server system, is then described.

The Graph Theoretic Model

Consider a client-server architecture where a number of client computers are connected to the server via both ATM and local area networks (LAN). The server can be a uniprocessor or multiprocessor system that has a large disk farm and a number of ATM OC-3 interfaces that serve clients over the ATM network. The server ATM connections are assumed to be ordered (from 1 to n), as are the major components such as the CPU memory subsystem and I/O buses, disk drives, media content resources (or streams of data), etc. A client user, "client(ij)", is the j-th client associated with the i-th ATM connection to the server.

We associate a graph G={V, E} with the MM communication system, where V denotes the vertex set and E the edge set. The edge set E represents the set of all major components (ordered in some fashion) of the system where a significant transmission delay or utilization occurs during system operation. Relevant system components modeled include the CPUs (both server and clients), buses, disk subsystems, intelligent interfaces to buses, LAN and WAN network links and switches, and all software, middleware and firmware required for operating the entire system. An edge (u,v) ∈ E, u,v ∈ V, if and only if (u,v) is a component of the system.

For u,v ∈ V, a walk from u to v in G is a sequence of edges {e1, e2, . . . , ek} where e1 is incident to u, ek is incident to v, and consecutive edges have a vertex in common. A path is a walk where each edge appears only once. The client(ij) can be accessed by the server CPU via two alternate paths, namely Γij and Λij. The former is for the LAN connection from the client(ij) to the server and the latter is for the ATM network connection.

When a request (such as read data, exit, pause or resume) from client(ij) arrives at the server CPU via the path Γij, the path Λij from the CPU complex to the client and the path Πij from the CPU to the appropriate disk are uniquely defined from the overall system topology. The path information from clients to server and from the server to stream storage is kept in path tables constructed at system configuration time. An address translation table that provides the mapping of corresponding client addresses to be used for the ATM and the LAN drivers is also built at that time. Thus, for any client, the two drivers can uniquely figure out the data paths and components visited by the data transfers effected by the drivers.

Streams are numbered and mapped to the disks where they are stored. A stream is active if it is assigned to a client and the data stored on the disk is being moved to the client. An active stream is denoted by the pair (client(ij), stream_number).

A component can be of type-1, type-2 or type-3. Type-1 components are passive links such as the buses, type-2 are CPUs, and type-3 are storage devices within the system. Associated with each component (u,v) ∈ E, T(u,v) and C(u,v) respectively denote the type and capacity of the component. C(u,v) is measured in megabytes (MB) per second for type-1, in CPU seconds for type-2, and in gigabytes (GB) for type-3 components. U(u,v) denotes the utilization of component (u,v) based on the existing user workload. A new user service request, when granted, increases the utilization U(u,v) to U'(u,v) only if the new user data visits the component (u,v) at least once within the processing scenario, and the following holds true:

    U'(u,v) = U(u,v) + Δ(u,v)        (1)

where Δ(u,v) = (request service time in seconds at component (u,v))/C(u,v) for a type-2 component, and (MB/second data rate in (u,v))/C(u,v) for a type-1 component, as the case may be. For each (u,v) ∈ E that is not visited by the data within the processing scenario, Δ(u,v) = 0.
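As a worked illustration of equation (1), the sketch below applies the Δ(u,v) update along a hypothetical request path. The component names, capacities, utilizations and the 0.5 MB/s stream rate are invented for the example and are not taken from the specification.

# Toy illustration of the utilization update of equation (1).
# Component names, capacities (C), starting utilizations (U) and the data
# rate are hypothetical; type-1 components are links, type-2 are CPUs.
components = {
    ("cpu", "memory"): {"type": 2, "C": 1.0,   "U": 0.30},  # CPU complex, capacity in CPU seconds
    ("cpu", "pci1"):   {"type": 1, "C": 132.0, "U": 0.40},  # PCI bus, capacity in MB/s
    ("pci1", "scsi1"): {"type": 1, "C": 20.0,  "U": 0.25},  # SCSI channel, capacity in MB/s
}

def delta(comp, data_rate_mb_s, service_time_s):
    """Δ(u,v) per equation (1): service time / C for type-2, data rate / C for type-1."""
    if comp["type"] == 2:
        return service_time_s / comp["C"]
    return data_rate_mb_s / comp["C"]

def tentative_utilizations(path, data_rate_mb_s, service_time_s):
    """U'(u,v) = U(u,v) + Δ(u,v) for every component visited by the new request."""
    return {edge: components[edge]["U"] + delta(components[edge], data_rate_mb_s, service_time_s)
            for edge in path}

# A 0.5 MB/s stream whose data visits the CPU complex, PCI bus 1 and SCSI channel 1:
print(tentative_utilizations([("cpu", "memory"), ("cpu", "pci1"), ("pci1", "scsi1")], 0.5, 0.02))
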
The list La has all active streams at any time. The i-th entry in the array Avail[] is true if the stream numbered i is available for assignment, and false if it is already assigned to some client. Associated with E, we have three vectors U, U' and C, each of dimension |E|; the entries of the first two are initialized to 0 at system boot time. The vector C stores the capacities of each key component at system configuration time. These vectors are used to manipulate the utilization of the components in E to mimic the behavior of the system. Entries threshold(u,v) and deadline(ij) are used to initialize the maximum utilization permitted for component (u,v) and the maximum delay permitted by client(ij), respectively. Two lists Lc and Ls (initially empty) are used to keep track of the current list of active users and active streams. With these data structures, component utilization, end-to-end delays, and relevant response times are computed each time a user request is serviced.

The algorithm uses a function delay(active stream) that accepts an active stream (client(ij), stream_no) as its argument and returns the expected delay for the transfer of data of the requested size from disk storage to client(ij). This delay is obtained by summing up the latencies the data experiences at all components along the data path from the disk to the client. As described in the papers by K. V. Bhat, "Performance Modeling and Analysis for AT&T 3416 Based Multimedia Server Architecture", Proceedings of International Conference on Telecommunications 95, pp. 1-8, April 1995, and K. V. Bhat, "Performance and Guaranteed Quality of Service for AT&T Multimedia Communication Server", Proceedings of Symposium on Multimedia Communications and Video Coding, New York, Oct. 11-13, 1995, and as also described in my cross-referenced copending patent application, the following queuing network model (used extensively in analyzing computer system performance) computes the delay at each of the components:

    component (u,v) delay = (service time at (u,v)) / (1 - U(u,v))        (2)

The function delay(client(ij), stream_no) is implemented by the following:

    delay(client(ij), stream_no) = Σ over all (u,v) ∈ Πij, Λij of (service time at (u,v)) / (1 - U(u,v))        (3)
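Equations (2) and (3) compose directly: the delay at a component grows as its utilization approaches one, and the end-to-end delay is the sum of the component delays along the disk-to-client path. The sketch below shows that composition; the service times and utilizations are invented for illustration.

# Illustration of equations (2) and (3). Numbers are hypothetical.
def component_delay(service_time_s, utilization):
    """Equation (2): delay at a component = service time / (1 - U)."""
    if utilization >= 1.0:
        raise ValueError("component is saturated; the delay is unbounded")
    return service_time_s / (1.0 - utilization)

def stream_delay(path):
    """Equation (3): sum of the component delays along the disk-to-client path."""
    return sum(component_delay(s, u) for s, u in path)

# Hypothetical (service time, utilization) pairs for disk, SCSI, PCI, CPU and ATM link:
path = [(0.012, 0.55), (0.004, 0.30), (0.002, 0.45), (0.001, 0.35), (0.003, 0.40)]
print("end-to-end delay: %.4f s" % stream_delay(path))
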
Based on the component performance characteristics, the service time at each component for a specific data chunk is known (from laboratory measurements or vendor supplied design parameter values) for a given system. The utilization U(u,v) for each component is computed dynamically by the algorithm. deadline(ij) denotes the maximum delay tolerated by client(ij) before its QOS is violated, and threshold(u,v) denotes the maximum utilization permitted for component (u,v) from performance and delay considerations. These parameter values can be computed by the model for a specific processing scenario on an architecture, as indicated in [2].

Each request from client(ij) provides a request_vector (client(ij), type, stream_number, start, size, Mbit_rate) to the algorithm, indicating the client and the request type (e.g., get data from stream number xxx at offset yyy of size zzz MBytes at rate rrr Mbits/second, exit, pause or resume) being made. R is the response time cut-off for a request acceptable to the user. R is the sum of the time it takes the client request to appear at the server CPU (denoted by input_latency) and delay(client, stream_no). Input_latency can also be determined by a queuing model. For simplicity, it can be assumed that this delay is bounded by 0.1 seconds, which is reasonable.
The Algorithm and Its Complexity

The following description sets forth the algorithm incorporated by a preferred embodiment of the present invention. The algorithm uses the following three procedures: Wait(), Current_user() and Admission_control().

Algorithm:

Procedure Wait()  /* wait for the next request */
  While (true) do begin
    If (request=false) then do Wait()
    Else do begin
      Get request_vector (client(ij), type, stream_no, start, size, Mbit_rate)
      If (client(ij) ∈ Lc) then do Current_user()  /* execute the Current_user() module */
      Else do Admission_control()  /* A new client request requires admission control. */
    End
  End
End

Procedure Current_user()
  If (type=exit) then do begin  /* Reclaim all the resources used by the exiting user. */
    avail[stream_no]=true
    Delete client(ij) from Lc
    Delete (client(ij), stream_no) from La
    For each (u,v) ∈ Πij, Λij do begin  /* Update utilization reflecting the release of resources. */
      U(u,v)=U(u,v)-Δ(u,v)
    End
  End
  If (type=pause) then do begin  /* If it is for a pause, */
    For each (u,v) ∈ Πij, Λij do begin  /* update utilization reflecting the release of resources. */
      U(u,v)=U(u,v)-Δ(u,v)
    End
  End
  If (type=resume) then do begin  /* If resume, */
    Admission_control()  /* allocate the next data chunk through admission control. */
  End
  If (type=data) then do begin  /* For an active client data request, issue a disk read and wait. */
    Read (stream_no, start, size, buffer(ij))
    Wait()
  End
End  /* of Current_user() */

Procedure Admission_control()  /* Both new and resumed clients are admitted only via this module. */
  If ((avail[stream_no]=false) & (client(ij) ∉ Lc)) then do begin  /* If the stream_no is unavailable, */
    Error(1)  /* deny the request and send error condition (1) to the new client(ij). */
    Wait()
  End
  Else do begin  /* Assess the consequence of admission on bandwidth utilization and latencies. */
    For each (u,v) ∈ Πij, Λij do begin  /* Check if affected components bottleneck. */
      U'(u,v)=U(u,v)+Δ(u,v)
      If (U'(u,v)>threshold(u,v)) then do begin  /* If a component bottlenecks, */
        Error(2)  /* send error condition (2) to the client(ij) and deny the request */
        Wait()
      End
    End
    For each (u,v) ∈ Πij, Λij do begin  /* Assume admission and verify latency. */
      U(u,v)=U'(u,v)  /* Revise utilization of components. */
    End
    If ((client(ij), stream_no) ∉ La) then include client(ij) in La  /* Include the new client in La. */
  End
  For each (client(x,y), stream_no) ∈ La do begin
    If ((delay(client(x,y), stream_no)>deadline(x,y)) | (delay(client(x,y), stream_no)+input_latency>R))
    then do begin  /* If QOS for existing streams were to suffer on admission of client(ij), */
      Error(3)  /* send error condition (3) to client(ij) and deny the request */
      For each (u,v) ∈ Πij, Λij do begin
        U(u,v)=U(u,v)-Δ(u,v)  /* Reclaim respective bandwidth assignments. */
      End
      Remove client(ij) from La
      Wait()
    End
  End
  Do begin  /* Admit client(ij), update utilization, issue a disk read and wait for the next request. */
    If (client(ij) ∉ Lc) then do begin
      Include client(ij) in Lc  /* Include client(ij) in the current user list. */
    End
    Read (stream_no, start, size, buffer(ij))
    Wait()
  End
End  /* of Admission_control() */
End  /* end of Algorithm. */
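Read procedurally, Admission_control() tentatively applies the Δ(u,v) updates along the request's paths, rejects on any threshold violation, then verifies the deadline and response-time bounds for every active stream under the revised utilizations, leaving the utilizations unchanged on failure. The Python sketch below mirrors that flow against the data structures described earlier; the function signature and helper names are illustrative assumptions, not the patented implementation.

# Sketch of the Admission_control() flow above. U, delta, threshold, deadline
# and active_streams mirror the data structures described in the text; the
# names and calling convention are illustrative only.
def admission_control(req, U, delta, threshold, path_edges,
                      active_streams, deadline, delay, input_latency, R):
    """Return True to admit req, False to deny (the Error(1)/(2)/(3) cases)."""
    if not req["stream_available"]:                      # Error(1): requested stream unavailable
        return False

    tentative = dict(U)                                  # assume admission tentatively
    for edge in path_edges:                              # Πij and Λij for this request
        tentative[edge] = U[edge] + delta[edge]
        if tentative[edge] > threshold[edge]:            # Error(2): a component would bottleneck
            return False

    candidates = active_streams + [(req["client"], req["stream_number"])]
    for client, stream_no in candidates:
        d = delay(client, stream_no, tentative)          # equation (3) under the revised load
        if d > deadline[client] or d + input_latency > R:
            return False                                 # Error(3): QOS would suffer; keep U unchanged

    U.update(tentative)                                  # commit the revised utilizations
    return True
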
The following assertions are then made.

Assertion. The Algorithm works correctly and its complexity is linear in terms of the number of streams.

The procedure Wait() keeps looking for any input requests. If there is one, it secures the request_vector and executes the Current_user() module or Admission_control(), depending on whether the client is an existing one or a new one. Current_user() tests the request type. If type=exit, it accounts for the release of resources used by the exiting user, deletes the client from Lc and marks the stream_no as available. If type=pause, the utilizations of the appropriate components are updated to reflect the release of the resources used by the paused user. If type=resume, then Admission_control() is invoked so that QOS for existing users is not compromised. If type=data, the required data is read from the appropriate disks and sent to the client.

If Wait() encounters a request that is not from Lc, then Admission_control() is invoked. If the requested stream is unavailable, the module turns down the request and waits for the next request. Otherwise, it checks whether allocating resources to the user would push the bandwidth utilization of any resource beyond an acceptable threshold. If it would, admission is denied and Wait() is executed. Otherwise, the module verifies whether resource allocation to the user would violate any of the end-to-end latency and response time requirements for all existing users, including the current candidate. If there are no violations, the user is allocated resources and the data needed is read from the disks for the client. If there were a violation, the request is denied and all data st
