Mansharamani et al.

(10) Patent No.: US 6,831,891 B2
(45) Date of Patent: Dec. 14, 2004

(54) SYSTEM FOR FABRIC PACKET CONTROL

(75) Inventors: Deepak Mansharamani, San Jose, CA (US); Erol Basturk, Cupertino, CA (US)

(73) Assignee: Pluris, Inc., Cupertino, CA (US)

(*) Notice: Subject to any disclaimer, the term of this patent is extended or adjusted under 35 U.S.C. 154(b) by 0 days.

(21) Appl. No.: 09/800,678
(22) Filed: Mar. 6, 2001

(65) Prior Publication Data
US 2002/0126634 A1 Sep. 12, 2002

(51) Int. Cl.7 .................... H04L 12/28; H04J 3/14
(52) U.S. Cl. ..................... 370/229; 370/413
(58) Field of Search ............. 370/229, 230.1, 232-234, 252, 253, 352, 395.21, 468, 477, 395.43, 412, 429, 395.71, 230, 235, 416, 418

(56) References Cited

U.S. PATENT DOCUMENTS

5,282,203 A *  1/1994  Oouchi .............. 370/232
5,442,624 A    8/1995  Bonomi et al. ....... 370/253
5,555,264 A *  9/1996  Sallberg et al. ..... 370/414
5,768,257 A *  6/1998  Khacherian et al. ... 370/414
5,768,258 A *  6/1998  Van As et al. ....... 370/236
5,777,984 A    7/1998  Gun et al.
5,793,747 A *  8/1998  Kline ............... 370/230
5,828,653 A * 10/1998  Goss ................ 370/230
5,949,757 A *  9/1999  Katoh et al. ........ 370/235
6,122,252 A    9/2000  Aimoto et al. ....... 370/235
6,144,640 A   11/2000  Simpson et al. ...... 370/236
6,167,027 A   12/2000  Aubert et al. ....... 370/230
6,320,845 B1 * 11/2001  Davie ............... 370/230
6,381,649 B1   4/2002  Carlson ............. 370/232

FOREIGN PATENT DOCUMENTS

0731611 A2  11/1996 ............ H04N/7/30

* cited by examiner

Primary Examiner: Chau Ba Nguyen
(74) Attorney, Agent, or Firm: Donald R. Boys, Central Coast Patent Agency, Inc.

(57) ABSTRACT

A method for managing data traffic at nodes in a fabric network, each node having internally-coupled ports, follows the steps of establishing a managed queuing system comprising one or more queues associated with each port, for managing incoming data traffic, and accepting or discarding data directed to a queue according to the quantity of data in the queue relative to queue capacity. In one preferred embodiment the managed system accepts all data directed to a queue less than full, and discards all data directed to a queue that is full. In some alternative embodiments the queue manager monitors quantity of data in a queue relative to queue capacity, and begins to discard data at a predetermined rate when the quantity of queued data reaches the threshold. In other cases the queue manager increases the rate of discarding as the quantity of queued data increases above the preset threshold, discarding all data traffic when the queue is full.

6 Claims, 3 Drawing Sheets

[Representative drawing: fabric card with external ports 205, optical interfaces 207, and queue managers 209]

Juniper Exhibit 1001
`
`
U.S. Patent    Dec. 14, 2004    Sheet 1 of 3    US 6,831,891 B2

[FIG. 1 (Prior Art): switching fabric 101; arrows represent flow control messages]
`
`
`
U.S. Patent    Dec. 14, 2004    Sheet 2 of 3    US 6,831,891 B2

[FIG. 2: fabric card 201 with external ports 205, optical interfaces 207, queue managers, and crossbar 203]
`
U.S. Patent    Dec. 14, 2004    Sheet 3 of 3    US 6,831,891 B2

[FIG. 3: fabric card 319 interconnected with neighbor cards 301, 303, 305, 307, 309, 311, 313, 315, and 317]
`
`
`
`1
`1
`SYSTEM FOR FABRIC PACKET CONTROL
`SYSTEM FOR FABRIC PACKET CONTROL
`
`FIELD OF THE INVENTION
`FIELD OF THE INVENTION
`The present invention is in the field of routing packets
`The present invention is in the field of routing packets
`through alternative paths between nodes in a routing fabric,
`through alternative paths between nodes in a routing fabric,
`and pertains in particular to methods by which back-ups in
`and pertains in particular to methods by which back-ups in
`a fabric may be avoided.
`a fabric may be avoided.
BACKGROUND OF THE INVENTION

With the advent and continued development of the well-known Internet network, and of similar data-packet networks, much attention has been paid to computing machines for receiving, processing, and forwarding data packets. Such machines, known as routers in the art, typically have multiple interfaces for receiving and sending packets, and circuitry at each interface, including typically a processor, for handling and processing packets. The circuitry at the interfaces is implemented on modules known as line cards in the art. In some routers the line cards are interconnected through what is known as the internal fabric, which comprises interconnected fabric cards handling transmissions through the fabric. Fabric interconnection has not always been a part of routers in the art, and is a fairly recent innovation and addition for packet routers.
FIG. 1, labeled prior art, illustrates a number of interconnected fabric nodes, labeled in this example A through J, each node of which may be fairly considered to comprise a fabric card in a switching fabric in a router. It will be apparent to the skilled artisan that FIG. 1 is an exemplary and partial representation of nodes and interconnections in a switching fabric, and that there are typically many more nodes and interconnections than those shown.
One purpose of FIG. 1 in this context is to illustrate that there are a wide variety of alternative paths that data may take within a switching fabric. For example, transmission from node E to node J may proceed either via path E-F-H-G-J, or alternatively via E-F-D-G-J. The skilled artisan will also recognize that the nodes and interconnections shown are but a tiny fraction of the nodes and interconnections that might be extant in a practical system.
In conventional switching fabric at the time of the present patent application, fabric nodes in such a structure are implemented on fabric cards or chips that do Flow Control. Such Flow Control is very well-known in the art, and comprises a process of monitoring ports for real or potential traffic overflow, and notifying an upstream port to stop or slow sending of further data. That is, if node G as shown in FIG. 1 becomes overloaded at a particular input port, for example the port from D, the Flow Control at G will notify D to restrict or suspend traffic to G. In this example, D may receive traffic from upstream neighbors that it cannot forward to G, and it may then have to notify these neighbors to suspend sending traffic to D. This example illustrates how Flow Control may cause traffic changes made by nodes as a result of an overflow condition at a downstream node to propagate further upstream, affecting further nodes and further stopping or diverting traffic. In FIG. 1 arrows between nodes are indicative of Flow Control indicators passed, and the skilled artisan will also understand that traffic may be in any direction, and that Flow Control indicators are therefore passed in both directions as well.
A serious problem with Flow Control as conventionally practiced is that the upstream notifications, inherent in flow control, propagate further upstream and hinder or stop traffic that there is no need to stop, partly because the interconnections of nodes may be quite complicated and the alternative paths quite numerous. Further, a node that has been informed of a downstream overload condition cannot select to stop or divert traffic just for that particular link, but only to stop or divert all traffic. These effects, because of the complexity and interconnection of nodes in a fabric, can result in complete stultification of parts of a system, or of an entire network.
There have been in the art several attempts to improve upon flow control, but all such solutions have been only partly successful, and still use upstream propagation of control indicators, which retains a good chance of causing unwanted difficulty.
What is clearly needed is a way to deal with temporary overloads at fabric nodes without resorting to problematic upstream messaging, and without impacting traffic that does not need to use the overloaded link.
SUMMARY OF THE INVENTION

In a preferred embodiment of the present invention a method is provided for managing data traffic at a switching element in a fabric network, each node having two or more internally-coupled ports, comprising the steps of (a) establishing a managed queuing system comprising one or more queues associated with each port, for managing incoming data traffic; and (b) accepting or discarding data directed to a queue according to the quantity of data in the queue relative to queue capacity.
In some embodiments all data is discarded for a full queue. In some other embodiments the queue manager monitors quantity of queued data in relation to a preset threshold, and begins to discard data at a predetermined rate when the quantity of queued data reaches the threshold. In still other embodiments the queue manager increases the rate of discarding as quantity of queued data increases above the preset threshold, discarding all data traffic when the queue is full.
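The simplest of the policies summarized above, accepting all data directed to a queue that is less than full and discarding data directed to a full queue, can be sketched as follows. This is an illustrative sketch only; the function name and signature are assumptions, not taken from the patent text.

```python
from collections import deque

def accept_or_discard(queue: deque, packet, capacity: int) -> bool:
    """Tail-drop policy: accept data while the queue is below capacity;
    silently discard data directed to a full queue. No flow-control
    message is sent upstream in either case."""
    if len(queue) >= capacity:
        return False              # queue full: data discarded
    queue.append(packet)
    return True                   # data accepted into the queue
```

Note that the discard is purely local: the decision depends only on the depth of the one queue the data is directed to, which is the property the specification contrasts with upstream-propagated Flow Control.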
In another aspect of the invention a switching element for a fabric network is provided, comprising two or more internally-coupled ports, and a managed queuing system comprising one or more queues associated with each port, for managing incoming data traffic. The switching element is characterized in that the queue manager accepts or discards data directed to a queue according to the quantity of data in the queue relative to queue capacity.
In some embodiments all data is discarded for a full queue. In some other embodiments the queue manager monitors quantity of queued data against a preset threshold, and begins to randomly discard data when the quantity of queued data exceeds the threshold. In still other embodiments the queue manager increases the rate of discarding as the quantity of queued data increases above the preset threshold.
In still another aspect of the invention a data router having external connections to other data routers is provided, comprising an internal fabric network, and a plurality of switching elements in the internal fabric network, each having internally-coupled ports, and a managed queuing system comprising one or more queues associated with each port, for managing incoming data traffic. The router is characterized in that the queue manager accepts or discards data directed to a queue according to the quantity of data in the queue relative to queue capacity.
In some embodiments all data is discarded for a full queue. In some other embodiments the queue manager monitors quantity of queued data against a preset threshold, and begins to randomly discard data when the quantity of queued data exceeds the threshold. In still other embodiments the queue manager increases the rate of discarding as the quantity of queued data increases above the preset threshold.
In various embodiments of the invention, taught below in enabling detail, for the first time a system is provided for routers that accomplishes the purposes of flow control without requiring upstream notification of problems, which can often result in extensive and unnecessary cessation or diversion of traffic.
BRIEF DESCRIPTIONS OF THE DRAWING FIGURES

FIG. 1 is a prior art diagram illustrating fabric node interconnections and upstream propagation of flow control indicators.

FIG. 2 is a diagram of a fabric card in an embodiment of the present invention.

FIG. 3 is a diagram of a fabric network of fabric cards in an embodiment of the present invention.
DESCRIPTION OF THE PREFERRED EMBODIMENTS

FIG. 2 is a plan view of a fabric card 201 in an embodiment of the present invention. In this embodiment there are nine (9) ports on each card, rather than four as indicated in the prior art diagram of FIG. 1. This is not meant to imply that the prior art is limited to four ports per node, as FIG. 1 was exemplary only.
In the fabric card of this embodiment, as shown in FIG. 2, there are nine queue managers 209, one for each external port 205, with each queue manager isolated from its connected external port by an optical interface 207. The inter-node communication in this embodiment is by optical links. Queue managers 209 interface with crossbar 203, which connects each of the nine ports with the other eight ports internally in this embodiment, although these internal connections are not shown in the interest of simplicity.
FIG. 3 is a diagram illustrating a fabric having interconnected fabric cards according to the embodiment described above with reference to FIG. 2. In this diagram one card 319 is shown connected to nine neighbor cards 301, 303, 305, 307, 309, 311, 313, 315, and 317. Each of the neighbor cards is illustrated as having eight additional ports for interconnecting to further neighbors in addition to the one port connecting the near neighbor with card 319. It will be clear to the skilled artisan from this diagram that interconnection complexity escalates at a very great rate as ports and cards (nodes) proliferate.
Referring now back to FIG. 2, each port on each card in this example passes through a queue management gate 209 as indicated in FIG. 2. Each queue manager comprises a set of virtual output queues (VOQ), with individual VOQs associated with individual ones of the available outputs on a card. This VOQ queuing system manages incoming flows based on the outputs to which incoming packets are directed. Data traffic coming in on any one port, for example, is directed to a first-in-first-out (FIFO) queue associated with an output port, and the queue manager is enabled to discard all traffic when the queue to which data is directed is full. There are, in this scheme, no Flow Control indications generated and propagated upstream as is done in the prior art.
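The virtual-output-queue arrangement just described, one bounded FIFO per output port at each input, with discard on a full queue and no upstream flow control, might be organized as in the following sketch. Class and method names, and the fixed queue depth, are illustrative assumptions rather than anything specified in the patent.

```python
from collections import deque

class QueueManager:
    """One per-port queue manager holding a set of virtual output
    queues (VOQs): a bounded FIFO for each output port on the card."""

    def __init__(self, num_outputs: int, depth: int):
        self.depth = depth                          # capacity of each VOQ
        self.voqs = [deque() for _ in range(num_outputs)]

    def enqueue(self, output_port: int, packet) -> bool:
        """Direct an incoming packet to the VOQ for its output port;
        discard it (returning False) if that VOQ is full. No flow-control
        indication is generated upstream."""
        q = self.voqs[output_port]
        if len(q) >= self.depth:
            return False                            # full: packet discarded
        q.append(packet)
        return True

    def dequeue(self, output_port: int):
        """The crossbar pulls the next packet destined for output_port."""
        q = self.voqs[output_port]
        return q.popleft() if q else None
```

Because each input holds a separate queue per output, a full queue toward one congested output does not block traffic arriving on the same port for other outputs, which is the head-of-line-blocking property VOQ schemes are generally designed to avoid.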
In this unique arrangement the size of each queue is set to provide adequate flow under ordinary, and to some extent extraordinary, load conditions without data loss, but under extreme conditions, when a queue is full, data is simply discarded until the situation corrects, which the inventors have found to be less conducive to data loss than the problems associated with conventional Flow Control, which uses the previously described upstream-propagated Flow Control indicators.
In an alternative embodiment of the present invention each queue manager on a card has an ability to begin to drop packets at a pre-determined rate at some threshold in queue capacity short of a full queue. In certain further embodiments the queue manager may accelerate the rate of packet dropping as a queue continues to fill above the first threshold. In these embodiments the incidence of dropping packets is minimized and managed, and spread over more traffic than would be the case if dropping of packets were to begin only at a full queue, wherein all packets would be dropped until the queue were to begin to empty.
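The graduated behavior described above, dropping at a pre-determined rate once a first threshold is reached and accelerating as the queue fills further, resembles random-early-discard queue management and might be sketched as below. The linear ramp and the particular base rate are assumptions for illustration; the patent does not specify a rate function.

```python
import random

def drop_probability(depth: int, threshold: int, capacity: int,
                     base_rate: float = 0.1) -> float:
    """Probability of discarding an arriving packet: zero below the
    threshold, a pre-determined base_rate at the threshold, rising
    linearly to 1.0 (discard all traffic) at a full queue."""
    if depth < threshold:
        return 0.0
    if depth >= capacity:
        return 1.0
    fill = (depth - threshold) / (capacity - threshold)
    return base_rate + (1.0 - base_rate) * fill

def offer(queue: list, packet, threshold: int, capacity: int) -> bool:
    """Accept or randomly discard a packet based on current queue depth."""
    if random.random() < drop_probability(len(queue), threshold, capacity):
        return False              # discarded at the graduated rate
    queue.append(packet)
    return True
```

Spreading discards probabilistically over many flows, instead of dropping everything only once the queue is full, is what lets this scheme manage overload locally without any upstream signaling.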
A distinct advantage of the queue management scheme of the present invention is that the intelligence required is considerably lessened, and there is no addition to the traffic load by generating Flow Control indicators.
It will be apparent to the person with ordinary skill in the art that the embodiments of the invention described in this specification are exemplary, and may vary in a number of ways without departing from the spirit and scope of the present invention. For example, there may be more or fewer than nine ports and queue managers per fabric card, the system may be implemented on a chip or a set of chips, and the size of each queue may vary. There are many other alterations within the spirit and scope of the invention as well, and the scope of the invention is limited only by the claims which follow.
What is claimed is:
1. A method for managing data traffic at switching element nodes in a fabric network, each switching element node having a plurality of input and output ports, comprising the steps of:

(a) establishing at each input port, a number of virtual output queues equal to the number of output ports, each virtual output queue at each individual input port dedicated to an individual output port, storing only packets destined for the associated output port, for managing incoming data traffic; and

(b) accepting or discarding data at each virtual output queue directed to a queue according to a quantity of data in the queue relative to queue capacity by providing a queue manager for monitoring quantity of queued data in relation to a preset threshold, and discarding data from each virtual output queue at a predetermined rate, when the quantity of queued data reaches or exceeds the threshold;

wherein in step (b), the queue manager increases the rate of discarding as quantity of queued data increases above the preset threshold, discarding all data traffic when the queue is full.
2. The method of claim 1 wherein, in step (b), all data is discarded for a full queue.
3. A switching element node for a fabric network, comprising:

a plurality of input and output ports;

a number of virtual output queues at each input port equal to the number of output ports, each virtual output queue at each individual input port dedicated to an individual output port, storing only packets destined for the associated output port, for managing incoming data traffic; and
`
`
`
`5
`S
`characterized in that a queue manager accepts or discards
`characterized in that a queue manager accepts or discards
`data directed to a queue according to a quantity of data
`data directed to a queue according to a quantity of data
`in the queue relative to the queue capacity by moni
`in the queue relative to the queue capacity by moni-
`toring quantity of queued data against a preset
`toring quantity of queued data against a preset
`threshold, and discarding data from each virtual output
`threshold, and discarding data from each virtual output
`queue at a predetermined rate, when the quantity of
`queue at a predetermined rate, when the quantity of
`queued data reaches or exceeds the threshold;
`queued data reaches or exceeds the threshold;
`wherein the queue manager incr