US005341477A

United States Patent [19]
Pitkin et al.

[11] Patent Number: 5,341,477
[45] Date of Patent: Aug. 23, 1994
`
4,858,120  8/1989  Samuelson .............. 364/200
4,897,781  1/1990  Chang et al. ........... 364/DIG. 1
4,914,571  4/1990  Baratz et al.
4,949,248  8/1990  Caro ................... 364/200
5,005,122  4/1991  Griffin et al. ......... 364/200
`
`
`
Primary Examiner—Eddie P. Chan
Attorney, Agent, or Firm—Kenyon & Kenyon

[57]                    ABSTRACT
`
`In a computer network, a broker mechanism allocates a
`plurality of servers, each having an available resource
`capacity, to a plurality of clients for delivering one of
`several services to the clients. The broker operates by
`monitoring a subset of all available servers capable of
`delivering the requested service. The allocation is based
`on developing a network policy for the plurality of
`servers by collecting a local policy for each of the serv-
`ers. The broker receives client requests for the services
`and based on the network policy and available resource
capacity suggests one of the servers, monitored in its
subset for that particular service, to one of the clients
`making a request. The server suggested enforces its
`local policy by not allowing any connections exceeding
`its available resource capacity.
`
`24 Claims, 10 Drawing Sheets
`
`[54] BROKER FOR COMPUTER NETWORK
`SERVER SELECTION
`
[75] Inventors: Richard P. Pitkin, Lowell; John P.
     Morency, Chelmsford, both of Mass.
`
[73] Assignee: Digital Equipment Corporation,
     Maynard, Mass.
`
[21] Appl. No.: 103,722

[22] Filed: Aug. 6, 1993
`
Related U.S. Application Data

[63] Continuation of Ser. No. 924,390, Aug. 3, 1992,
     abandoned, which is a continuation of Ser. No.
     314,853, Feb. 24, 1989, abandoned.
`
[51] Int. Cl.5 .................. G06F 13/00; G06F 15/16
[52] U.S. Cl. ................... 395/200; 395/325;
         364/DIG. 1; 364/281.6; 364/284.4; 364/264;
         364/940.61
[58] Field of Search ............ 395/200, 325, 725
`
`[56]
`
`References Cited
`U.S. PATENT DOCUMENTS
`
4,757,267  7/1988  Riskin ................. 379/113
4,780,821 10/1988  Crossley ............... 364/200
4,800,488  1/1989  Agrawal et al. ......... 364/200
`
[Front-page drawing; recovered labels: REPOSITORY, CLIENT CONNECTION, reference numerals 73, 75, 84]
`Samsung - Exhibit 1015 - Page 1
`
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 1 of 10        5,341,477

[FIG. 1 (drawing)]
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 2 of 10        5,341,477

[FIG. 2 (drawing)]
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 3 of 10        5,341,477

[FIG. 2A (drawing)]
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 4 of 10        5,341,477

FIG. 3

[Flow chart; recovered box text:]
SERVICE CHARACTERISTICS REQUIRED PER SERVICE
SERVER PARAMETERS
CAPABILITY OF SERVER TO DELIVER SERVICE
DOES CAPABILITY MATCH INTENDED POLICY?
VARY THE PARAMETERS OF THE SERVER AND/OR THE SERVICES
STORE LOCAL POLICY IN NAME SERVICE
IS SERVER LIST COMPLETED?
INPUT NEXT SERVER AND REPEAT
COLLECT LOCAL POLICIES FOR NETWORK POLICY
ASSIGN SCAN WEIGHT RATIO OF SERVERS TO OVERALL POLICY
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 5 of 10        5,341,477

[FIG. 4 (drawing)]
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 6 of 10        5,341,477

[FIG. 5 (drawing)]
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 7 of 10        5,341,477

[FIG. 5A (drawing)]
`
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 8 of 10        5,341,477

FIG. 6

[Flow chart; recovered box text:]
GET SERVICE NAME FROM DIRECTORY
GET SERVICE NAME INFORMATION (WINDOW SIZE, OTHER ATTRIBUTES)
ALLOCATE DATA STRUCTURE FOR NAME, WINDOWS, LOCATIONS
GET SERVERS INFORMATION
SERVER FOUND?
ALLOCATE DATA STRUCTURE FOR SERVER NAME, POLICY, SCAN WEIGHT
FILL IN VALUES FROM LOCATION INFORMATION
STARTUP CONNECTIONS TO SERVERS FOR WINDOW OF EACH SERVICE
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 9 of 10        5,341,477

FIG. 6A

[Flow chart; recovered box text:]
RECEIVE MESSAGES WITH SERVICE NAME FROM CLIENT
DOES SERVICE LIST EXIST?
FOUND NAME IN SERVICE NAME LIST?
WINDOW HAS ENTRIES?
GET POLICY
MARK ENTRY IN USE
REMOVE CONNECTION
DECREASE WINDOW SIZE
POLICY LIMIT REACHED?
MOVE TOP WINDOW POINTER DOWN
FOUND ENTRY: TEMPORARY MARK IN USE
REMOVE CONNECTION TO SERVER
DECREASE SCAN WEIGHT
DECREASE WINDOW SIZE
SEND MESSAGE WITH ERROR TO CLIENT
SEND MESSAGE WITH ENTRY TO CLIENT
FILL WINDOW IF NEEDED
`
`
`
U.S. Patent        Aug. 23, 1994        Sheet 10 of 10        5,341,477

[FIG. 7 (drawing)]
`
`
`
`
`BROKER FOR COMPUTER NETWORK SERVER
`SELECTION
`
This is a continuation of application Ser. No.
07/924,390, filed Aug. 3, 1992, abandoned, which in
turn is a continuation of application Ser. No.
07/314,853, filed Feb. 24, 1989, now abandoned.
`
`FIELD OF THE INVENTION
`
`This invention relates generally to the allocation of
`resources within a computer network architecture and,
`more particularly, to the use of a broker mechanism
`which responds to a client’s or end user’s request for
`some service and suggests a server to the client capable
`of supplying the requested service.
`
`BACKGROUND OF THE INVENTION
`
`Present day computer network architectures have
`become more and more complex. For example, a net-
`work may comprise multiple devices, such as several
`computer systems, terminals, peripherals, etc., all linked
`together in order to exchange information and share
`resources.
`
`Other more complex networks may comprise thou-
`sands of devices of varying technical capabilities. In
many applications, both local area networks (LANs)
and wide area networks (WANs) are connected to form
parts of a single network. For example, a network can
include a local area component, a wide area component
and a component involving communication between
computers of different vendors, such as a multi-vendor
communication network.
`
`The process for designing computer networks of this
type involves a complex analysis of the customer re-
`quirements, the price factor for the various products,
`the necessary communication lines, and other technical
`and business considerations. The allocation and planned
`use of the various resources to support the network
`needs is now part of the normal capacity planning cycle
`done by a network manager when setting up the net-
`work for a customer.
`
`Because of the potential to form complex networks, a
`computer network client has a wide range of resources
`available to supply requested service. This advantage,
`however, also adds to the problem of networks and
`resource management. It is therefore desirable to obtain
`delivery of these services through network service
`names. A service offers clients capacity to use resources
`accessible through the network.
`In these computer networks, a server, made up of
`both hardware and software, may be used as an agent
`between a client and one or more resources to deliver
`the requested service to the client. The server’s soft-
`ware program provides “actions” for a client using one
`or more resources. These actions, for example, may be
`transmission of packets over a communication line—a
`communication server; computations in a problem—a
`CPU server; information access in a data base—a data
`base server; network addressing information distributed
`throughout a network and associated with a requested
service—a name server; or other similar and multiple
`combinations of these functions.
`
`Because networks may include multiple servers gen-
`erally delivering the same action, a client may access
`any one of these servers with equal delivery of the
`action. One of the disadvantages in the prior art is the
`inability to group these servers providing the same
`
`action. This grouping of servers can be seen as a “net-
`work service”. To overcome shortcomings of the prior
`art an ideal system for a client to request a service in a
`complex network should fulfill two requirements. First,
`the request must include generic accessing of a particu-
`lar network service name to provide a higher level
`interface abstraction than the current practice where
`each client knows of all server possibilities. This has the
`advantage of substantially simplifying the decision pro-
`cess for the client in the case where multiple servers can
`deliver the requested service. The only requirement is
`the network service name rather than the name of each
`and every server offering the requested service. Sec-
`ond, the requested service must be guaranteed to exist
`for the client within the server for the duration of a
`connection.
`
`To supply a service to a client, it is known to use a
`broker mechanism coupled to the network for schedul-
`ing a server to a requesting client. The broker receives
`a request for access to a service from a client and trans-
`mits to the client the name of a server which can deliver
`the service to the client. One known type of broker
`operates by assigning an entire server to a client irre-
`spective of the capacity needed by the client.
`A problem with the above broker method is the inef-
`ficient use of network resources. Prior brokers must
`continuously monitor all of the servers coupled to the
network, increasing the broker's overhead and the net-
`work traffic. Because the resource capacities of the
`servers and the necessary performance level to be allo-
`cated to the client are dynamic factors, a broker mecha-
`nism which can dynamically allocate servers to clients
`using a predetermined policy is needed.
`
`SUMMARY OF THE INVENTION
`
`The present invention is a broker method and appara-
`tus which, on one hand monitors the dynamic status of
`a set of servers and, on the other hand, responds to
`requests from accessing clients concerning which mem-
`ber of that server set is capable of providing the re-
`quested service.
`The service limitations for the requested service are
`preferably established as a matter of policy during the
`network design and modelling process by a system or
`network manager. Based upon the policy, the broker
`thus suggests to the client a server which is best able to
`satisfy the client’s service request. The broker enables a
`client to use a service even though the client is unaware
`of the presence or absence of any individual servers
`within the network.
`
`According to one aspect of the invention, the client
`then requests the service from the recommended server,
`and the server is responsible for granting the request
`only if the server currently has the required capacity
`available for that service. Since the server makes the
`final determination of whether it has the available ca-
`pacity to provide the requested service, there is no risk
`of overloading a server because the broker has out of
`date status information—for example, as when a server
`becomes busy subsequent to its last status message trans-
`mitted to the broker.
`
`As indicated above, the suggestion of a server prefer-
`ably is based on the network policy developed by the
`network manager for the particular network. Because
`of the complex nature of present networks, a modelling
`process is used to develop a local policy for each server
`in the network to deliver a service, based on the individ-
ual customer requirements for the network. The collection
of local policies determines the overall network
`policy for a given service.
`In most cases, the network policy is based on the
`servers’ capacity to deliver a given service. The capac-
`ity of the given service, however, can be changed with
`the addition or subtraction of servers to the network. In
`either case, the changes are made known to the broker
`and are transparent to the clients in the network.
`Upon startup, the broker develops a linked list data
`structure containing sets of server entries for each of-
`fered service corresponding to the local policy obtained
`for each server. According to one aspect of the inven-
`tion, at any given time, the broker monitors a subset of
`those entries as a “preview window”. The broker regu-
`larly rotates different servers into and out of the pre-
`view window. The broker suggests an entry in the pre-
`view window, upon receiving a client request, provided
`the entry has the available resources for the client.
`An advantage of the preview window is to substan-
`tially decrease the number of servers which must be
`monitored by the broker in a network. Because the
`broker maintains current status information on only a
`subset of the servers during a given time interval, the
`number of status messages required to be transmitted
`over the network is reduced. This results in a reduction
`of computational overhead for the broker and commu-
`nications overhead on the network.
`In this embodiment, the broker assigns servers in a
`round robin fashion to ensure that the loss of a single
`server does not have a major impact upon clients that
`request access at similar times. The application of the
`round robin server distribution, however, is regulated
`by use of a “scan weight” parameter for each server.
`The scan weight is defined as the number of client re-
`quests which can be satisfied by a server before that
`server is removed from the preview window.
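The round-robin rotation governed by scan weights can be sketched as follows. This is a minimal illustration only, not code from the patent; the server names, the window contents, and the refresh weight of 3 are invented for the example.

```python
from collections import deque

def suggest(window: deque) -> str:
    """Return the server at the top of the preview window, rotating the
    server out once its scan weight (remaining requests) is exhausted."""
    name, remaining = window[0]
    remaining -= 1
    if remaining > 0:
        window[0] = (name, remaining)   # server keeps its place at the top
    else:
        window.popleft()                # scan weight used up: rotate out...
        window.append((name, 3))        # ...and back in later with a fresh weight
    return name

# Two servers with scan weights 2 and 1; four successive client requests.
window = deque([("server-A", 2), ("server-B", 1)])
suggestions = [suggest(window) for _ in range(4)]
print(suggestions)
```

Because each server answers at most scan-weight-many requests before rotating out, the loss of any one server affects only a bounded run of consecutive clients, which is the stated motivation for the round-robin distribution.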
`The advantages of the broker can be achieved with
`only a minimal increase in connect time from client to
`server. This is because the broker uses the preview
`window to monitor server status information prior to
`receiving any client requests. The broker also uses the
`scan weight value to reduce the frequency by which the
`preview window is changed thus further increasing
`accurate server suggestions by the broker. Therefore,
`the incremental client connect time, as opposed to a
`direct client-server connection, is only increased by the
`time needed to contact the broker and receive a sugges-
`tion.
`
`In a further embodiment, the broker suggests to a
`client both the top server entry and the next active
`server entry in the preview window. By suggesting two
possible servers, the broker ensures that a failure of
the first server can be overcome without the need to
recontact the broker, thus decreasing the
`load on the network.
`
`Another embodiment of the present invention makes
`use of multiple brokers for suggesting servers to multi-
`ple different clients. The ability to have multiple bro-
`kers running simultaneously provides a highly reliable
`service by alleviating any single point of failure.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 is a block diagram illustrating the environ-
`ment of the present invention.
`FIG. 2 is a block diagram of a network architecture
`including the present invention.
`FIG. 2A is a block diagram conceptually illustrating
`the broker mechanism of the present invention.
`
`
`FIG. 3 is a flow chart describing the modelling of a
`network policy according to the present invention.
`FIG. 4 is a conceptual block diagram of the broker
`mechanism in FIG. 2.
`
`FIGS. 5 and 5A are examples showing the operation
`of server data structures used in the present invention.
`FIGS. 6 and 6A are flow charts describing the opera-
`tion of the broker mechanism of the present invention.
`FIG. 7 is a conceptual block diagram of an embodi-
`ment of the present invention.
`
`DETAILED DESCRIPTION OF THE
`INVENTION
`
`I. Network Environment
`
`Referring to FIG. 1, there is shown a model block
`diagram of a system enhanced by the present invention.
`In a network 5, a plurality of users or clients, generally
`designated 10, each have access to a plurality of servers,
`generally designated 20. Some number of the servers 20
can provide access to services A1-An requested by one
`of the clients 10. When the client 10 requests access to
`a service via one of the servers 20, a connection is made
`from the selected client 10 through the server 20. As
`can be seen in FIG. 1, a plurality of servers 20 are avail-
`able to make the requested connection. Therefore, it
`becomes a problem to know which client 10 should
`connect to which server 20. Further, the resource ca-
`pacity of the server 20 must be known in order to make
`an appropriate connection. Because the server re-
`sources have various and different capabilities,
`i.e.,
`small CPU size, large CPU size, used and unused capac-
`ities, different serial line speeds, etc., different servers 20
`are more appropriate for a particular client request.
`Also, it is necessary to know the service level desired by
`the client 10.
`
`FIG. 2 is an example of a network architecture incor-
`porating the present invention. The network 5 includes
`a plurality of servers 21-27 and clients 11-19 coupled to
`a communications medium 6, e.g., an Ethernet. A bro-
`ker, conceptually shown as block 30, is also coupled to
`the communications medium 6. Further, a distributed
`repository 42 or 44 is illustratively coupled to the com-
`munication medium 6 for storing network policy. These
`repositories may be replications of each other distrib-
`uted throughout the network. Each server can provide
access to various services A1-An for a requesting client
`11-19. In operation, the broker 30 receives the local
`policies from the repository 42 or 44 for building data
`structures for each service supported by the servers,
`processes client requests for a service, and responds in
`accordance with network policy with one or more
`server suggestions to the client.
`FIG. 2A conceptually illustrates the architectural
`diagram of FIG. 2. A broker mechanism 30 is used to
`suggest to clients 11-19 an appropriate server 21-26 for
`delivering the requested service (service A1 or A2).
`Further, individual servers 23, 24 and 25 can provide
`more than one service. While FIG. 2A only illustrates
`two services, it is apparent that the broker 30 can sug-
`gest servers to clients requesting any number of differ-
`ent services. When a particular client, shown as client
13, requests access to service A1, the client 13 places a
`request with the broker 30 on path 54. The broker 30
`suggests an appropriate server from server set 21-24 to
`the client 13.
`
`
`
`
II. Modelling
`
`A modelling process is used to efficiently determine
`the allocation of resources when designing a network.
`The modelling is accomplished by the network man-
`ager both prior to implementing the network and upon
`making subsequent changes to the network. In the mod-
`elling process factors are taken into account such as
`whether data transmissions during the use of a particu-
`lar network service are of a bulk or interactive (bursty)
`nature and their effect on resource capacities. Because
bulk data transfers usually have a high transfer rate, i.e.,
an entire floppy diskette of data is transferred, and, on
the other hand, interactive data transfers generally are
`of a short, bursty nature, it is generally possible in a
`given server to support more interactive data transfers
`than bulk data transfers.
`
`FIG. 3 is a flow chart depicting the modelling pro-
`cess that occurs in the development of the network
`policy to be implemented by the broker mechanism for
`each service. The modelling begins by determining the
`service characteristics that are desired by the customer,
`i.e. the various possible actions, and the parameters of
`the server in the network,
`i.e. the resources of the
`server. Both the server parameters and required service
`characteristics are inputs to a modelling process such as
is described in "Processor-sharing queuing model for
time-shared systems with bulk arrivals," Kleinrock et al.,
Networks, v. 1, n. 1, pp. 1-13 (1971), or "Models for Computer
Networks," Kleinrock, IEEE Int. Conference on
Communications, Conf. Rec., Boulder, Colo., June 9-11,
Session 21, pp. 9-16 (1969).
`
`III. Network Policy
`
`The modelling process produces a prediction of the
`measured performance characteristics of each server
`based on the performance desired for each service of-
`fered by that server. This prediction is compared to the
`desired performance for each service to check if the
`server meets the performance limitations. If the predic-
`tion matches the desired performance, then the local
`policy for that particular server has been obtained.
`However, if the prediction does not match the desired
`performance, then the local policy parameters of the
`server and/or the service characteristics are varied
`before being modeled again. This process continues
`until the server prediction matches the desired perfor-
`mance thus developing the local policy for the server.
`Next, the modelling process checks whether a local
`policy has been established for each server in the net-
`work. If not, then the above process is repeated. Once
`all local policies have been determined, the local poli-
`cies are collected to form a network policy for the
`entire network. The network policy is stored in the
`distributed repository 42 or 44 as shown in FIG. 2.
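The model-compare-vary loop described above can be sketched as below. The `model` and `adjust` functions here are toy stand-ins, not the Kleinrock queuing models cited earlier; every name is invented for illustration.

```python
def develop_local_policy(params, desired, model, adjust, max_rounds=100):
    """Vary server parameters until the modelled performance matches the
    desired performance, yielding that server's local policy."""
    for _ in range(max_rounds):
        predicted = model(params)
        if predicted >= desired:       # prediction matches desired performance
            return params              # local policy obtained for this server
        params = adjust(params)        # vary parameters and model again
    raise RuntimeError("no parameter set met the desired performance")

# Toy stand-in: predicted capability simply equals the allotted capacity.
local_policy = develop_local_policy(
    params={"capacity": 1},
    desired=4,
    model=lambda p: p["capacity"],
    adjust=lambda p: {"capacity": p["capacity"] + 1},
)

# Repeating this per server and collecting the results forms the network policy.
network_policy = {"server-21": local_policy}
print(network_policy)
```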
`When customers set up their network, they therefore
`establish a network policy to allocate resources for each
`service. Policy decisions must be cognizant of all of the
`available factors made prior to implementing the sys-
`tem, i.e., based on the physical constraints of the hard-
`ware, the communication lines, the price factor for the
`product, the usage load, i.e., heavy, medium, light, fast
`CPUs, slow CPUs and other appropriate factors. A
`broker implements the network policy when suggesting
`an appropriate server to maximize the efficiency of the
`complex network.
`
IV. Broker
`
`An advantage of the present invention allows the
`broker 30 (FIG. 2A) to operate when there is: a net-
`work policy for a given service; a method to easily
`determine the current usage of the server relative to the
`established local policy; and servers which can enforce
their local policy, e.g., by rejecting any connection
attempted beyond their local policy limit. These factors
`are used by the broker mechanism 30 to suggest a server
`21-26 to a requesting client 13 based on the collection of
`local server policies in the broker.
`The data structures of the broker mechanism 30 are
`formed from an application of the distributed repository
`storing the network policy for each service from which
`the broker linked list data structures can be built. Many
`conventional means such as name server computer pro-
`grams are available to facilitate the creation of these
`broker linked list data structures. The distributed repos-
`itory is a non-volatile storage area which holds the
`collection of local policies and the attributes of how
`each supporting server supports a service.
`The repository is distributed throughout the network.
`As part of the broker startup procedures, the broker 30
extracts from the repository attribute information about
each of the services it implements, i.e., services A1-An as
shown in FIG. 1. The repository is necessary to store
`the network policy in the event of a broker failure as the
`broker functions to compare current server capacity to
`network policy.
`FIG. 4 is a conceptual diagram showing an example
`of the data structures produced by the code in the bro-
ker mechanism 30. The data structures shown inside the
broker 30 are linked lists of entries created by the broker
code.
`
`The broker obtains the entries to create the data
`structures from the distributed repository 42. A service
`list 71 is generated for those services found in the repos-
`itory 42. Further, a server list 74 is created for each
`service in the service list 71 to hold the local policy
`information for the particular servers 21-26 that sup-
`port the service. Lastly, the broker creates a “server
`status block” which contains a number of server con-
`nection entries 922, 923, 924 and 926, there being one
`connection entry in the server status block for each
`server 22, 23, 24 and 26 which currently is in the “pre-
`view window” of at least one service. (Preview status
`will be defined below.) Each connection entry stores
`the current status information for its corresponding
`server. The policy information contained in the server
`list and service lists remains static, while the server
`status block information is of a dynamic or volatile
`nature.
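The relationship between the static service and server lists and the volatile server status block can be sketched with ordinary Python containers standing in for the linked lists; the class and field names below are invented for illustration and do not come from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ServerEntry:
    """Static entry in a per-service server list (cf. list 74)."""
    name: str
    client_capacity: int     # local policy for this service
    scan_weight: int

@dataclass
class ConnectionEntry:
    """Volatile entry in the server status block (cf. entries 922-926)."""
    server: str
    available_capacity: int  # refreshed by polling the server

@dataclass
class Broker:
    # service name -> list of ServerEntry (static policy information)
    services: dict = field(default_factory=dict)
    # server name -> ConnectionEntry, one per server in any preview window
    status_block: dict = field(default_factory=dict)

broker = Broker()
broker.services["A1"] = [ServerEntry("server-22", 6, 2),
                         ServerEntry("server-23", 4, 1)]
broker.status_block["server-22"] = ConnectionEntry("server-22", 6)
```

A server appearing in the server lists of two services maps to a single connection entry, which mirrors the pooling of server connections described for section 75.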
`A server connection section 75 of the broker 30 estab-
`lishes communication paths 31-34 with the servers re-
`ferred to in the preceding paragraph. The communica-
`tion paths 31-34 allow the broker to poll each coupled
`server (22, 23, 24, 26) to receive its status. The status of
`the servers 22, 23, 24 and 26 is stored in connection
`entries 922, 923, 924 and 926, respectively, within the
`server status block. In response to subsequent client
`requests for service, the broker examines these connec-
`tion entries to determine each coupled server’s capacity
`or availability to deliver a requested service, as will be
`described below with reference to FIGS. 5 and 5A. The
`server connection section 75 therefore supports a list of
`connection entries 922-926 that map (80-84) to a select
`number of server entries in the server lists 74 for each of
`
the services in service list 71. As shown in FIG. 4, a
`server 24 can overlap between service offerings. The
`server connection section 75 may therefore have sev-
`eral server lists 74 with entries pointing to a single
`server connection. This pooling of server connections is
important for providing multiple service offerings from
`one broker location.
`
A. Preview Window
`
`During the broker’s startup, data structures are devel-
`oped in the broker. The data structures include a service
`list 71 and server lists 74 having sets of server entries
`(121-125) and (223-226) corresponding to the servers
`(21-25) and (23-26) for each service (A1 & A2) in the
`service list 71. The data structure information is pro-
`vided from the repository 42. The broker 30 then moni-
`tors, via couplings 80-84, subsets (22, 23 & 24) and (24
`& 26) of the servers from the above sets to form a pre-
`view window for each service. The size of the preview
`window is predetermined according to the network
`policy and stored as an attribute of the network policy
`in the repository 42. The preview windows 72 ensure
`that a sufficient number of servers, i.e., based upon the
`network policy for each service, are coupled and pro-
`viding status to the broker 30 for each service before a
`client request is made. The size of the preview window
`generally reflects the number of servers necessary to
`satisfy the average rate of client requests received by
`the broker. By using preview windows, the connection
`time from client to servers is reduced by accumulating
`status and policy information in the broker in advance
`of the client request.
Without using the preview windows 72, the preconnection
time for a client to receive data from a service
would be the accumulation of: (client connect time to
broker) + (broker connect time to server location) +
(server startup time) + (accumulation of connect times
needed to find a server location having unassigned
capacity). By monitoring the resources of the servers
`contained in the preview windows via communication
`paths 31-34 associated with each service, the broker 30
`can suggest the top entry in the preview window to a
`client requesting that particular service. The suggestion
`is made provided the entry has the available resources.
`If an entry does not have the available resources it is
`removed from the preview window. Servers are thus
`regularly rotated in and out of the preview window.
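The suggest-or-rotate step just described can be sketched as follows. The window contents and status values are invented for the example, and a real broker would refill the window from the full server list rather than leave it shorter.

```python
def suggest_from_window(window: list, status: dict):
    """Return the topmost windowed server whose polled status shows available
    resources, removing exhausted entries from the window along the way."""
    while window:
        server = window[0]
        if status.get(server, 0) > 0:   # entry has available resources
            return server
        window.pop(0)                   # no capacity left: rotate out of window
    return None                         # empty window: no suggestion possible

window = ["server-22", "server-23"]
status = {"server-22": 0, "server-23": 3}   # server-22 is fully loaded
choice = suggest_from_window(window, status)
print(choice)
```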
`FIGS. 5 and 5A illustrate by example the data struc-
`tures within each server (shown here as servers 1 and 2)
`to indicate server capacity and status to deliver a re-
`quested service. The status information is provided to
`the broker 30 for comparing the capacity to the local
`policy for each server as shown by communication
`paths 31-34 in FIG. 4.
`In FIG. 5, server 1 is shown supporting two services
`101 and 102 having service names A1 and A2, respec-
`tively. Similarly, server 2 supports services 103 and 104
`having service names A1 and A2. Associated with each
`service 101-104 is a client capacity number 111-114
`indicating the number of clients that can be processed
`by each server for the particular service. The client
`capacity number reflects the server’s local policy for
`each service. As shown in FIG. 5, the client capacity
`number can be a scalar value which is reduced as the
`server increases its load. The current value of the capac-
`ity number 111-114 is therefore checked by the broker
`whenever the broker tries to use the server.
`
`A second implementation of the database of server 1
`is shown in FIG. 5A wherein client slots 105-108 are
`assigned to each service A1 and A2 (101 and 102). For
`example, assuming each server has four client slots, two
`client slots (105, 106 and 107, 108) can be allotted for
`each service 101 and 102, respectively in server 1. This
`is equivalent to a client capacity number of two for each
`service 101 or 102. Client slots allow the broker to
`determine the availability of servers by checking a flag
`indicating whether particular client slots are used.
`However, a more powerful use of client slots is
`achieved in server 2 as shown in FIG. 5A by allowing
`certain client slots to provide multiple services. Again,
`only four client slots are allocated for server 2. The first
`client slot 115 is dedicated for use with service A1(103)
`and the second client slot 118 is provided for service A2
`(104). In this case, however, the third and fourth client
slots are made available to each service 116, 117, 119,
121. Therefore, if three requests are received for service
`A2 and only one request for service A1, then by making
`the client slots available for use with each service, all of
`the requests can be satisfied. Again, this is implemented
`by the server connection section 75 (FIG. 4) checking
`flags to determine the availability of each client slot.
`These flags indicate the status for the servers against
`which the local policies are checked in connection
`entries 922-926.
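The shared-slot arrangement of server 2 in FIG. 5A can be sketched with per-slot flags as below. The slot layout mirrors the description (one slot dedicated to each service, two shared between them), while the function name and data layout are invented for illustration.

```python
# Slot table for "server 2": one slot per dedicated service, two shared slots.
slots = [
    {"services": {"A1"}, "in_use": False},        # dedicated to A1 (cf. slot 115)
    {"services": {"A2"}, "in_use": False},        # dedicated to A2 (cf. slot 118)
    {"services": {"A1", "A2"}, "in_use": False},  # shared (cf. 116/117)
    {"services": {"A1", "A2"}, "in_use": False},  # shared (cf. 119/121)
]

def grant(service: str) -> bool:
    """Claim the first free slot usable by `service`; reject past the limit,
    which is how the server enforces its local policy."""
    for slot in slots:
        if service in slot["services"] and not slot["in_use"]:
            slot["in_use"] = True
            return True
    return False

# Three A2 requests and one A1 request all succeed; a fifth request is refused.
results = [grant("A2"), grant("A2"), grant("A2"), grant("A1"), grant("A1")]
print(results)
```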
`
`Through the use of the preview window and the
`overlapping of servers to provide different services, the
`broker mechanism 30 thus reduces the number of active
`network connections (31-34) necessary to poll the serv-
`ers to obtain their status. Otherwise, active connections
`would be required from all of the servers in the net-
`work. Therefore, only a reasonably managed subset of
`servers is maintained by the broker 30 as active connec-
`tions to reduce message traffic on the network and to
`simplify the computational overhead in the broker.
`
`B. Scan Weights
`
`Referring again to FIG. 4, because the capacity of
`each server to deliver a given service may not be equal
`depending upon the network policy, an additional pa-
`rameter termed scan weight 73 is added. The scan
`weight 73 is the number of client requests which can be
`satisfied from a server before the particular server is
`removed from the preview window 72. Each server is
`assigned a particular scan weight value for each service
by the network manager. The scan weight 73 allows the
`network manager to apply the network policy to more
`efficiently assign clients 10 to servers in a round-robin
`manner. The values chosen for the scan weight are
`developed as part of the capacity planning modelled in
`FIG. 3.
`
`An example of scan weight operation is given in the
`server list 74 for service A1 as shown in FIG. 4. Five
`servers 21, 22, 23, 24, and 25 having server entries
121-125, respectively, are determined to have the fol-
lowing capacities to access a service for a client: server
21, 2 clients (10); server 22, 6 clients