United States Patent [19]
Lee

[11] Patent Number: 5,301,333
[45] Date of Patent: Apr. 5, 1994

[54] TREE-STRUCTURED VARIABLE PRIORITY ARBITRATION IMPLEMENTING A ROUND-ROBIN SCHEDULING POLICY

[75] Inventor: Kuo-Chu Lee, Franklin, N.J.

[73] Assignee: Bell Communications Research, Inc., Livingston, N.J.

[21] Appl. No.: 113,588

[22] Filed: Aug. 27, 1993

Related U.S. Application Data

[63] Continuation of Ser. No. 537,683, Jun. 14, 1990, abandoned.

[51] Int. Cl.5 ................ G06F 13/37; H04Q 11/00
[52] U.S. Cl. ................ 395/725; 395/325; 364/242.6; 364/242.8; 364/242.9; 364/DIG. 1; 364/937.01; 364/940.92; 364/940.8; 364/DIG. 2; 340/825.5; 370/54; 370/63
[58] Field of Search ......... 395/725, 325; 370/54, 370/63

[56] References Cited

U.S. PATENT DOCUMENTS
3,353,160  6/1967  Lindquist ................. 364/200
4,314,335  2/1982  Pezzi ..................... 364/200
4,347,498  8/1982  Lee et al. ................ 395/325
4,621,342 11/1986  Capizzi et al. ............ 364/900
4,672,536  6/1987  Giroir et al. ............. 395/725
4,924,380  5/1990  McKinney et al. ........... 395/325
4,953,081  8/1990  Feal et al. ............... 364/200
4,980,854 12/1990  Donaldson et al. .......... 395/325
5,025,370  6/1991  Koegel et al. ............. 364/200
5,053,942 10/1991  Srini ..................... 395/325
5,060,139 10/1991  Theus ..................... 395/435
5,072,363 12/1991  Gallagher ................. 395/325
5,088,024  2/1992  Vernon et al. ............. 395/725
OTHER PUBLICATIONS

"The Design of Nectar: A Network Backplane for Heterogeneous Multicomputers", E. A. Arnould et al., ASPLOS-III Proc. 3rd Int'l Conf. on Archit. Support for Prog. Lang. and Oper. Systems, pp. 205-216, Boston, MA, Apr. 3-6, 1989.
"An O((log N)2) Control Algorithm", T-Y Feng et al., Proc. of Conf. on Parallel Processing, pp. 334-340, 1985.
"A Self-Routing Benes Network and Parallel Permutation Algorithms", D. Nassimi et al., IEEE Transactions on Computers, vol. C-30, No. 5, pp. 332-340, May 1981.
"Performance Measurements on a 128-Node Butterfly Parallel Processor", W. Crowther et al., Proc. 1985 Int. Conf. Parallel Processing, pp. 531-540, Aug. 1985.
"The IBM Research Parallel Processor Prototype (RP3): Introduction and Architecture", G. F. Pfister et al., Proc. of Int'l Conf. on Parallel Processing, pp. 764-771, 1985.
"How to Emulate Shared Memory", A. G. Ranade, IEEE Symposium on Foundations of Computer Science, pp. 185-194, 1987.
"Non-Von's Performance on Certain Database Benchmarks", B. K. Hillyer et al., IEEE Transactions on Software Engineering, vol. 12, No. 4, pp. 577-583, Apr. 1986.
"Multicomputers: Message-Passing Concurrent Computers", W. C. Athas et al., IEEE Computer, pp. 9-24, Aug. 1988.
"Multi-Level Shared Caching Techniques for Scalability in VMP-MC", D. R. Cheriton et al., ACM Symposium on Computer Architecture, pp. 16-24, 1989.
"GAMMA--A High Performance Dataflow Database Machine", D. DeWitt et al., Proc. of the VLDB Conf., Japan, pp. 228-237, Aug. 1986.
Primary Examiner-Robert B. Harrell
Assistant Examiner-Timothy L. Philipp
Attorney, Agent, or Firm-Leonard C. Suchyta; Loria B. Yeadon
[57] ABSTRACT
An inventive arbiter controls access to a resource in a high speed computer or telecommunications network. The arbiter is capable of performing round-robin scheduling for N requests with P possible priority levels with a sublinear time complexity. The high arbitration speed is achieved through use of a tree structure with a token distribution system for implementing the round-robin scheduling policy.
13 Claims, 3 Drawing Sheets
OTHER PUBLICATIONS
`"The Wisconsin Multicube: A New Larger-Scale
`Cache Coherent Multiprocessor", J. Goodman et al.,
`IEEE International Symposium on Computer Archi
`tecture Conference, pp. 422-432, 1988.
`"The Knockout Switch': A Simple Modular Architec
`ture for High-Performance Packet Switching, Y.S. Yeh
`et al., IEEE Journal on Selected Areas in Comm., vol.
`SAC-5, No. 8, pp. 1274-1282, Oct. 1987.
`"A Survey of Interconnection Networks", T-Y Feng,
`Computer, pp. 5-20, Dec. 1981.
`DBC/1012 Data Base Computer "Concepts and Facili
`ties", CO2-0001-05 Release 3.1, pp.2-1-2-
`“Parallel Processing the Cm' Experience", E. Gehr
`inger et al, Digital Press, pp. 11-13, 1987.
`"Applications of Self-Routing Switches to LATA
`Fiber Optic Networks", C. Day et al., Int'l Switching
`Symposium, Phoenix Arizona, Mar. 1987.
`"Starlite: A Wideband Digital Switch', A. Huang et al.,
`Proc. of Globecom '84, pp. 121-125.
`
`"Distributed Round-Robin and First-Come, Fir
`st-Serve Protocols and Their Application to Multipro
`cessor Bus Arbitration", M. K. Vernon et al, The ACM
`15th Ann. Intl. Symp. on Computer Arch, 1988.
`"Arbitration and Control Acquisition in the Proposed
`IEEE 896 Futurebus', D. M. Taub, IEEE Micro, vol.
`4, No. 4, pp. 28-41, Aug. 1984.
`"A Fully Distributed Arbiter for Multi-processor Sys
`tems', G. Ciofi et al, Microprocessor and Micropro
`gramming, vol. 11, pp. 15-22, 1983.
`"High-Speed Bus Arbiter for Multiprocessors", A. B.
`Kovaleski, IEE Proc. vol. 130, Pr, E, No. 2, pp. 49-56,
`Mar. 1983.
`"A Variable Priority Arbiter for Resource Allocation
`in Asynchronous Multiprocessor Systems", Bogdan
`Lent, Microprocessing and Microprogramming, vol. 9,
`pp. 299-307, 1982.
`"Arbiter Designs for Multiprocessor Interconnection
`Networks", Joseph K. Muppala et al, Microprocessing
`and Microprogramming, vol. 26, pp. 31-43, 1989.
[Drawing Sheet 1 of 3: FIG. 1 (output buffered crosspoint switch)]
[Drawing Sheet 2 of 3: FIG. 2A (contention phase), FIG. 2B (token distribution phase), FIG. 3 (non-leaf node with contention resolution logic and token redistribution logic)]
[Drawing Sheet 3 of 3: FIG. 4 (starvation detection circuit 60, priority promotion circuit 70, tree arbiter 30)]
TREE-STRUCTURED VARIABLE PRIORITY ARBITRATION IMPLEMENTING A ROUND-ROBIN SCHEDULING POLICY
This is a continuation of application Ser. No. 07/537,683 filed Jun. 14, 1990, now abandoned.

RELATED APPLICATION

An application entitled "Packet Parallel Interconnection Network" filed for K. C. Lee on even date herewith, bearing U.S. Pat. No. 5,191,578, issued Mar. 2, 1993, and assigned to the assignee hereof contains subject matter related to the subject matter of the present application. The contents of the related application are incorporated herein by reference.
FIELD OF THE INVENTION

The present invention relates to an arbiter and an arbitration process for controlling access to particular resources in a high speed computer or telecommunications network. More particularly, the present invention relates to a high speed variable priority round-robin arbiter with a sublinear time complexity, i.e., an arbiter for which the time duration of an arbitration cycle increases with the log of the number N of inputs rather than linearly with the number of inputs.
BACKGROUND OF THE INVENTION

Resource arbitration is a fundamental problem in computer and telecommunications network design. An arbiter controls the access of a plurality of competing inputs to a desired resource such as a bus or multiplexer in a computer or telecommunications network.

In particular, high speed arbitration among inputs with variable priorities is needed to support variable priority communications in a dynamic multi-tasking environment and to accommodate the increasing speed of highly pipelined data parallel buses and high speed statistical multiplexers. Preferably, the arbiter carries out a round-robin scheduling process to insure that all inputs of equal priority achieve fair access to the resource being arbitrated.
A variety of arbiters have been disclosed in the prior art. However, as is discussed below, only a few of the prior art arbiters support variable priority arbitration.

One prior art arbiter is disclosed in Bogdan Lent, "A Variable Priority Arbiter for Resource Allocation in Asynchronous Multiprocessor Systems," Microprocessing and Microprogramming, Vol. 9, 1982. This arbiter is based on a priority comparison matrix and can support variable priority arbitration or a first-come, first-served scheduling policy. However, this arbiter does not support round-robin scheduling on variable priority classes and, in addition, it has a space complexity of order N², where N is the number of inputs. Thus, this arbiter increases in size too rapidly with large N.
An arbiter with a round-robin scheduling policy and with a policy for handling urgent requests is disclosed in Mary K. Vernon et al., "Distributed Round-Robin and First-Come, First-Serve Protocols and Their Application to Multiprocessor Bus Arbitration," The ACM 15th Annual International Symposium on Computer Architecture, 1988 and in D. M. Taub, "Arbitration and Control Acquisition Scheme for the IEEE 896 Futurebus," IEEE Micro, Vol. 4, No. 4, August 1984, pp. 28-41. However, these arbiters do not support round-robin arbitration for multiple priority classes.
To match the speed of highly pipelined buses and multiplexers, an arbiter with a round-robin scheduling policy and with a sublinear time complexity is desired. The linear array based round-robin arbiters proposed in Joseph K. Muppala et al., "Arbiter Designs for Multiprocessor Interconnection Networks," Microprocessing and Microprogramming, Vol. 26, 1989 and in G. Ciofi et al., "A Fully Distributed Arbiter for Multi-processor Systems," Microprocessing and Microprogramming, Vol. 11, pp. 15-22, 1983, have O(N) time complexity. Since the arbitration time of these arbiters increases linearly with the number of input ports, they cannot match the data transfer rate in a highly pipelined communications system where N is large.
In the case of highly parallel and pipelined bus systems, arbitration needs to be performed in each system clock cycle. Arbitration schemes that utilize a table lookup method to implement rotating priority arbitration (see, e.g., A. B. Kovaleski, "High-Speed Bus Arbiter for Multiprocessors," IEE Proc., Vol. 130, Pt. E, No. 2, March 1983 and the Vernon et al. reference identified above) can reach higher speeds but their space complexity will increase exponentially with the number of inputs and the number of priority classes.
Distributed arbiters (see, e.g., the Taub, Ciofi et al., and Vernon et al. references identified above) have the advantage of modularity. However, their arbitration time is relatively long compared to that of a centralized arbiter. In order to support P priority levels and N input nodes, the arbitration process needs to step through log P priority bus lines and log N node identification bus lines to reach a distributed arbitration decision. Although the distributed arbitration scheme only requires on the order of (log P + log N) time steps, each step requires a long delay due to off-chip communications, bus propagation, wire-OR glitch elimination, and capacitive loads of bus drivers.
Briefly stated, none of the above-identified prior art arbiters is entirely satisfactory for use in high speed computer and telecommunications networks because they are incapable of handling multiple priority classes, because they do not utilize a round-robin scheduling policy, because the time or space complexity is too great, or because centralized arbitration is not utilized.
In view of the foregoing, it is an object of the present invention to provide an arbiter and associated arbitration process which overcomes the shortcomings of the above-described prior art arbiters. It is a further object of the invention to provide an arbiter and associated arbitration process suitable for use in high speed computer and communications systems.

More particularly, it is an object of the invention to provide an arbiter and associated arbitration process whose arbitration time increases sublinearly as the number N of inputs increases. It is also an object of the invention to provide an arbiter and associated arbitration process which implements a round-robin scheduling policy, which is capable of handling multiple priority classes and which is centralized.
SUMMARY OF THE INVENTION

The present invention is a centralized variable priority arbiter based on a tree structure. The tree-structured arbiter takes N contending requests belonging to P priority classes and utilizes an arbitration process to select only one of the contenders as a winner.

More particularly, the inventive arbiter comprises a plurality of leaf and non-leaf nodes arranged in a tree
structure. Each of the non-leaf nodes is connected to two child nodes below it and one parent node above it in the tree structure. In each arbitration cycle of the arbiter, each of the leaf nodes is associated with a contending request. A winner chosen from among the contending requests is determined at the root or uppermost node in the tree. To determine the winner in each arbitration cycle, each non-leaf node executes a contention resolution algorithm to choose a local winner from two requests transmitted to it from the two child nodes below it in the tree. Information identifying the local winning requests is passed up the tree to the nodes in the next higher level so that finally a global winner is determined at the root node.

A round-robin scheduling policy is implemented through use of a token distribution process. In particular, once a winner is determined at the root node, token information is distributed down the tree using a token distribution algorithm executed at each non-leaf node, so that in the next arbitration cycle, the winning leaf node has a lower priority than other leaf nodes in the same priority class. A winning leaf node maintains its lower priority status until all leaf nodes in its priority class have been served, thereby implementing the round-robin scheduling process.

In short, the inventive arbiter may be understood as follows. Leaf nodes of the tree-structured arbiter are connected to requests. Each arbitration cycle comprises a contention resolution phase and a token redistribution phase. In the contention resolution phase, priority information in the form of a local winning request is determined at each non-leaf node in the tree as a result of the execution of a contention resolution algorithm. The priority information flows upward to the root of the tree so that in each arbitration cycle a global winning request is identified at the root of the tree. In the token redistribution phase, token information determined at each non-leaf node using a token distribution algorithm flows down the tree so that the winning leaf node may receive a lower priority than other leaf nodes in the same priority class in the next arbitration cycle to implement a round-robin scheduling policy.

The tree-structured contention resolution scheme used by the centralized arbiter of the present invention has a sublinear time complexity on the order of log N, i.e., the arbitration time increases with the log of the number N of inputs rather than linearly with N. This represents a significant advantage over prior art variable priority arbiters.

BRIEF DESCRIPTION OF THE DRAWING

FIG. 1 illustrates an output buffered crosspoint switch which uses an arbiter in accordance with the present invention.

FIGS. 2A and 2B schematically illustrate an arbiter in accordance with an illustrative embodiment of the present invention.

FIG. 3 illustrates a node for use in the arbiter of FIGS. 2A and 2B.

FIG. 4 schematically illustrates a priority promotion circuit for use in the arbiter of FIG. 2A and FIG. 2B.

DETAILED DESCRIPTION OF THE INVENTION

Before describing the arbiter of the present invention in detail, it may be helpful to illustrate one application of such an arbiter.

Turning to FIG. 1, a crosspoint switch 10 is illustrated. The crosspoint switch 10 comprises the input buses 12-1, 12-2, ..., 12-J. Associated with each input bus 12-1, 12-2, ..., 12-J is a decoder 14-1, 14-2, ..., 14-J. The switch 10 also includes a plurality of output buses 16-1, 16-2, ..., 16-J. Associated with each output bus 16-1, 16-2, ..., 16-J is an output port 18-1, 18-2, ..., 18-J. Requests for particular output buses 16 arrive via the input buses 12. Each decoder 14 decodes routing information contained in arriving requests to determine to which output ports 18 the arriving requests are to be routed. Depending on whether a request is involved in a point-to-point transmission, a multicast transmission or a broadcast transmission, the request is routed via the crosspoint network 20, to one or all of the output ports 18.

Each output port, e.g., output port 18-2, includes a queue corresponding to each input bus. Thus the output port 18-2 contains the queues 22-1, 22-2, ..., 22-J corresponding to the input buses 12-1, 12-2, ..., 12-J, respectively. Requests arriving via the input buses 12-1, 12-2, ..., 12-J and routed to the output port 18-2 are stored in the corresponding buffers 22-1, 22-2, ..., 22-J. The arbiter 30 controls the access of the requests stored in the buffers 22-1, 22-2, ..., 22-J to the outgoing bus 16-2.

An arbiter 30 in accordance with an illustrative embodiment of the present invention is shown in FIG. 2A and FIG. 2B.

The arbiter 30 of FIG. 2A has a tree structure formed from the leaf nodes 32 and non-leaf nodes, e.g., 36, 38, 40. Each non-leaf node is connected to two child nodes below it in the tree and to one parent node at a higher level in the tree. The uppermost node in the tree is the root node 40. The leaf nodes 32 of the tree arbiter 30 are each associated with a queue or buffer 34 which contains requests. In general, the tree arbiter 30 has N leaf nodes 32 and the requests can belong to P priority classes. In the example of FIG. 2A, P=3 so that the priority classes 1, 2, and 3 are utilized, with class 3 having the highest priority. Each queue 34 has a number associated with it which indicates the priority class of the requests stored therein. The arbiter 30 serves to control the access of the requests stored in the queues 34 to a resource (e.g. a bus or multiplexer) associated with the root node 40 of the tree arbiter 30.

In an arbitration cycle, the arbiter takes N input requests as contenders (i.e. one from each input queue 34) and selects only one winner. The winner is deleted from its queue and a new contender from the winner's queue is considered in the next arbitration cycle.

The tree arbiter 30 utilizes a tree structured contention resolution scheme to achieve an order log N latency. This sublinear latency is a significant advantage of the inventive arbiter.
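Purely as an illustration of the behavior just summarized (this sketch is not part of the patent and abstracts away the gate-level tree of FIG. 3), one arbitration cycle can be modeled in a few lines of Python. The names pri, tok and pending are assumptions of the sketch, and the token-redistribution step anticipates the Left Ctree/Right Ctree rule developed in the following paragraphs.

    # Behavioral sketch only, not the patented hardware: one arbitration cycle
    # over N leaf requests carrying priority classes and token bits.
    def arbitrate(pri, tok, pending):
        """pri[i]  : priority class of the request at leaf i (larger = higher).
        tok[i]  : token bit of leaf i (True = not yet served in this round).
        pending : True for leaves that currently hold a request.
        Returns the winning leaf index (or None) and updates tok in place."""
        contenders = [i for i in range(len(pri)) if pending[i]]
        if not contenders:
            return None
        # Contention resolution phase: the tree's pairwise comparisons with
        # left preference amount to picking the leftmost leaf holding the
        # highest (priority class, token bit) key.
        winner = max(contenders, key=lambda i: (pri[i], tok[i], -i))
        # Token redistribution phase: the winner's contention tree ("Ctree")
        # is the set of contenders in the winning priority class.
        ctree = [i for i in contenders if pri[i] == pri[winner]]
        winner_is_last = winner == max(ctree)
        for i in ctree:
            in_left_ctree = i <= winner   # grant trace and leaves to its left
            # Normally clear Left Ctree tokens and set Right Ctree tokens;
            # when the rightmost Ctree leaf wins, restart the round instead.
            tok[i] = in_left_ctree if winner_is_last else not in_left_ctree
        return winner

Repeatedly calling arbitrate on a fixed set of same-class requests grants them strictly left to right before any leaf is granted twice, which is the round-robin property the token bits are intended to enforce.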
Each arbitration cycle is divided into a contention resolution phase and a token redistribution phase. The contention resolution phase is illustrated in FIG. 2A and the token redistribution phase is illustrated in FIG. 2B. During the contention resolution phase, each non-leaf node of the tree 30 (e.g. nodes 36, 38, 40) executes a contention resolution algorithm so that a single winner is ultimately determined at the root node 40. The token redistribution phase is utilized to implement a round-robin scheduling policy at the leaf nodes. In particular, in an arbitration cycle, after a winner is selected, token bits are redistributed among the leaf nodes to insure that the winning leaf node is not serviced again until the other leaf nodes belonging to the same priority class have been serviced.

Roughly speaking, the contention resolution phase operates as follows. Each non-leaf node serves as a comparator which compares the priorities of two requests received from the two child nodes below it in the tree. Of the two requests received at each non-leaf node, the request of the higher priority wins, and if the two priorities are equal, the request on the left wins. The priority of a request is determined by both its priority class and its token bit.

More particularly, associated with each request is a token bit which may be clear or set. Within a particular priority class, a request with a clear token bit has a lower priority than a request with a set token bit. In FIG. 2A, the set token bits are indicated by "dots" adjacent the corresponding leaf nodes and clear token bits are indicated by the absence of such dots. Note that in FIG. 2A, all requests of the highest priority class (i.e. class 3) have set token bits. Thus, in FIG. 2A, the winning request follows the path 50 to the root node 40 of the tree arbiter 30. During the token redistribution phase, tokens are distributed to the leaf nodes to insure that the winning leaf node (i.e. the third leaf node from the left) has a lower priority than the other leaf nodes of the same priority class until the other leaf nodes of the same priority class are served, thereby implementing a round-robin scheduling policy. As shown in FIG. 2B, the token bits have been redistributed so that the winning leaf node in the contention resolution phase now has a clear token bit and therefore a lower priority than other leaf nodes in the highest priority class.

The arbitration cycle may be understood in more detail as follows. A contention tree is a subtree that connects leaf nodes of equal priority value. A winner's contention tree, identified herein as "Ctree", is a subtree which connects the leaf nodes of the highest priority class (e.g. in FIG. 2A, priority class 3) to the root node of the tree. In FIG. 2A, the Ctree is identified in bold print. A grant trace of an arbitration cycle is a path from the root to the winning leaf node. Thus, in FIG. 2A, the grant trace is the path 50. A "Left Ctree" is the portion of the Ctree including the grant trace and to the left of the grant trace. A "Right Ctree" is the portion of the Ctree to the right of the grant trace.

In the contention resolution phase of an arbitration cycle, each non-leaf node records the state of the contention and the position of the winner in "Contention" and "WinL" bits (see FIG. 3). The Contention bit is set if the two input requests to a node have the same priority and WinL is set when the priority level from the left input is greater than or equal to the priority level from the right input.

After the winner is determined (e.g. in FIG. 2A the third leaf from the left), the token redistribution phase begins. In this phase, token bits of the leaves in the Left Ctree will be cleared and the token bits in the Right Ctree will be set to one to carry out the round-robin scheduling policy. In the example of FIG. 2A, the leaf nodes corresponding to the priority 1 and priority 2 requests do not contain the winner and thus are not affected by the token redistribution phase. The redistribution of token bits with clear token bits being distributed to the leaf nodes in the Left Ctree and set token bits being distributed to the leaf nodes in the Right Ctree is shown in FIG. 2B.

Once the token bit of the winning leaf node is cleared, a new request from the queue associated with the winning leaf node can join the next arbitration cycle. However, this new request will have a lower priority than a request from the same priority class at a leaf node that has a set token bit. Therefore in the example of FIG. 2A and FIG. 2B, the request to the right of the previously granted leaf node (i.e. the fourth leaf node from the left) will win the next arbitration cycle.

After the last leaf of the highest priority class (in this case, priority level 3) has been victorious in the contention resolution process of an arbitration cycle, the token bits of the leaves in the Left Ctree will all be set to one to restart the next arbitration. Thus, a round-robin scheduling policy is emulated within the Ctree from left to right. The token redistribution logic may be summarized as follows:

If the last leaf node in the Ctree is the winner
    then set the token bits at the leaf nodes on the Left Ctree and clear the token bits at the leaf nodes of the Right Ctree
    else clear the token bits at the leaf nodes in the Left Ctree and set the token bits at the leaf nodes in the Right Ctree.

The algorithms executed by the non-leaf nodes of the tree arbiter are now considered in more detail. FIG. 3 schematically illustrates a non-leaf node 38. The non-leaf node 38 executes a contention resolution algorithm using contention resolution logic 39 during the contention resolution phase of an arbitration cycle and a token redistribution algorithm using token redistribution logic 41 during a token redistribution phase of an arbitration cycle. The contention resolution algorithm sets or clears two bits stored in the non-leaf node 38. These bits are WinL and Contention as schematically illustrated in FIG. 3. The outputs of the contention resolution algorithm which are generated in the node 38 and propagated up the tree to the parent node are AddrOut (i.e. the address of the winning request), DataOut (i.e. the priority level and token value of the winning request), and LastOut (i.e. an indication that there is a contender to the right of the current winner). The input signals for the contention resolution algorithm are LastL and LastR (i.e. the LastOut signals from the left and right lower level child nodes connected to the node 38), AddrR and AddrL (i.e. the leaf addresses of the requests from the left and right lower level child nodes), TokenInL and TokenInR (i.e. the token values associated with the requests from the left and right child nodes) and DataL and DataR (i.e. the priority classes of the requests from the left and right lower level child nodes).

The following algorithm is executed at the node 38 by the contention resolution logic 39 during the contention resolution phase of an arbitration cycle:
WinL = ge(DataL@TokenInL, DataR@TokenInR)
Contention = equ(DataL, DataR)
LastOut = (¬Contention ∧ LastL ∧ WinL) ∨ (LastR ∧ ¬WinL)
AddrOut = AddrL, if WinL = 1; AddrR, if WinL = 0
DataOut = DataL@TokenInL, if WinL = 1; DataR@TokenInR, if WinL = 0
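A software rendering of this node-level step, offered only as an illustration (the real design is combinational logic, and the assumed encoding treats Data@TokenIn as the priority class shifted left by one bit with the token bit appended, per the concatenation notation explained below):

    # Illustrative sketch of one node's contention resolution step.
    def contention_step(data_l, token_l, addr_l, last_l,
                        data_r, token_r, addr_r, last_r):
        key_l = (data_l << 1) | int(token_l)    # DataL @ TokenInL
        key_r = (data_r << 1) | int(token_r)    # DataR @ TokenInR
        win_l = key_l >= key_r                  # ge(): the left side wins ties
        contention = data_l == data_r           # equ(): same priority class
        last_out = ((not contention) and last_l and win_l) or (last_r and not win_l)
        addr_out = addr_l if win_l else addr_r  # address of the local winner
        data_out = key_l if win_l else key_r    # its priority class and token
        # WinL and Contention are latched at the node for the token phase.
        return win_l, contention, last_out, addr_out, data_out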
It should be noted that X@Y means that the binary representation of Y is concatenated to the binary representation of X, equ(x,y) means the evaluation is true if x is equal to y, and ge(x,y) means the evaluation is true if
x is greater than or equal to y. According to these definitions, priority 1 is the lowest priority class.

As shown in FIG. 3, in the token redistribution phase, the node 38 receives the signals TokenIn, CtreeIn, and GrantIn from its parent node in the tree arbiter. As a result of executing a token redistribution algorithm by the token redistribution logic 41, the node 38 transmits TokenL, CtreeL, and GrantL signals to the left child node and the TokenR, CtreeR and GrantR signals to the right child node. The token distribution algorithm is as follows:

TokenL = TokenIn
TokenR = (Contention ∧ GrantIn ∧ WinL) ⊕ TokenIn
CtreeL = CtreeIn ∧ (Contention ∨ WinL)
CtreeR = CtreeIn ∧ (Contention ∨ ¬WinL)
GrantL = GrantIn ∧ WinL
GrantR = GrantIn ∧ ¬WinL
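As with the contention step, the downward token pass can be sketched in software for illustration; the Boolean forms simply mirror the six equations above (all arguments are assumed to be booleans), and the wiring of token_in at the root is described in the next paragraph.

    # Illustrative sketch of one node's token redistribution step.
    def token_step(token_in, ctree_in, grant_in, contention, win_l):
        """contention and win_l are the bits latched by this node during the
        contention resolution phase; token_in, ctree_in and grant_in arrive
        from the parent. Returns the signal triples for the two children."""
        token_l = token_in
        token_r = (contention and grant_in and win_l) ^ token_in
        ctree_l = ctree_in and (contention or win_l)
        ctree_r = ctree_in and (contention or not win_l)
        grant_l = grant_in and win_l
        grant_r = grant_in and not win_l
        return (token_l, ctree_l, grant_l), (token_r, ctree_r, grant_r)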
At the root node, the TokenIn signal is connected to the LastOut signal and both GrantIn and CtreeIn are asserted. As indicated previously, the LastOut signal for each node indicates whether there is a contender to the right of the current winner. If the current winner at the root node originates from the rightmost leaf node in its priority class, LastOut [. . .]

[. . .] of the highest priority class (i.e. the leaf nodes of the Ctree). In an alternative embodiment, fairness among the leaf nodes of the Ctree may be insured by using a random process to choose a winner when there is contention at the non-leaf nodes. In particular, when there is contention at a non-leaf node, a winner is chosen based on the state of a flipflop at the node rather than based on the state of two input token bits. The flipflops at all the non-leaf nodes in the Ctree are toggled in every arbitration cycle to achieve a random distribution of victories among the leaf nodes. The scheme does not require token distribution and there is no correlation between input position and queuing delay.
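For illustration only, the toggled-flipflop tie-breaking might be sketched as follows (toggling every node on every cycle for simplicity; the class FairNode and its fields are assumptions of this sketch, not details given in the description):

    # Sketch of the alternative, flipflop-based tie-breaking at a non-leaf node.
    class FairNode:
        def __init__(self):
            self.flip = False            # per-node flipflop, consulted only on contention

        def resolve(self, key_l, key_r):
            """Pick the local winner; when the two keys tie, the flipflop decides."""
            if key_l == key_r:
                win_l = self.flip        # contention: flipflop breaks the tie
            else:
                win_l = key_l > key_r
            self.flip = not self.flip    # toggled every arbitration cycle
            return win_l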
A variety of pipelining schemes may be used to improve the performance of the inventive tree-structured arbiter. In a cascading pipeline, the arbitration tree is partitioned into multiple subtrees of smaller size. Round-robin arbitration policies are carried out simultaneously within each subtree and the winners at the low level subtrees become contenders at the next level subtrees. This scheme permits a larger arbitration tree to be formed using pipeline buffers between tree levels but the tradeoff is that a global round-robin sequence is not guaranteed and a high priority request may be blocked behind a low priority request at the top of each subtree.
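A rough structural sketch of the cascading arrangement (illustrative only; the single pipeline register per subtree and the arbitrate_subtree callable are assumptions of the sketch rather than specifics of the description):

    # Illustrative cascading pipeline: low-level subtrees arbitrate in parallel
    # while the top-level subtree arbitrates among the winners latched on the
    # previous clock cycle.
    def cascade_cycle(subtree_requests, latched_winners, arbitrate_subtree):
        granted = arbitrate_subtree(latched_winners) if latched_winners else None
        next_latched = [arbitrate_subtree(reqs) for reqs in subtree_requests]
        return granted, next_latched   # winners advance one pipeline stage per cycle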
In an overlapping pipeline, at each clock cycle, a new batch of requests is accepted into the arbitration tree before the arbitration process is completed for the previous batch of requests. Each node records r copies of the WinL and Contention bits so that r overlapped Ctrees, which are separated by one clock cycle, are maintained in the tree arbiter. The overlapping pipeline technique utilizes an r level deep circulating buffer between the arbiter and each input queue. The winning request will be taken out of its circulating buffer and the losers will reenter the circulating buffers with higher priority than new input requests. However, since new requests are input into the arbiters before the winners of previous cycles are deleted, it is possible that later arriving requests will be granted before earlier arriving requests. Therefore, this scheme does not guarantee the sequential servicing of requests for each input.
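The circulating-buffer bookkeeping of the overlapping pipeline could be sketched as below (illustrative only; the depth r and the per-input deques are assumptions, and the r overlapped Ctrees inside the tree are not modeled):

    from collections import deque

    # Illustrative r-level circulating buffers: the granted request leaves its
    # buffer, and losers stay queued ahead of newly arriving requests.
    def recirculate(buffers, winner_input, new_requests, r=4):
        for i, buf in enumerate(buffers):
            if i == winner_input and buf:
                buf.popleft()                    # granted request is removed
            if new_requests[i] is not None and len(buf) < r:
                buf.append(new_requests[i])      # new request enters behind losers
        return buffers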
Finally, the above-described embodiments of the invention are intended to be illustrative only. Numerous alternative embodiments may be devised by those skilled in the art without departing from the spirit and scope of the following claims.
I claim:

1. A method implemented in a tree-structured arbiter circuit for controlling access to a resource in an electronic network comprising the steps of

during an arbitration cycle, receiving at a plurality of leaf nodes of the tree-structured arbiter circuit a plurality of requests, each request having a token bit and being from a plurality of priority classes for said resource,

in a contention resolution phase of said arbitration cycle, determining, at each of a plurality of non-leaf nodes of the tree-structured arbiter circuit, priority information by executing a contention resolution algorithm and transmitting said priority information up the tree-structured arbiter circuit so that a winning request is determined at a root node of the tree-structured arbiter circuit, and

in a token redistribution phase of said arbitration cycle, determining, at each non-leaf node, token redistribution information by executing a token redistribution algorithm and transmitting the token [. . .]
