US005414704A

United States Patent  [19]
Spinney

[11] Patent Number:  5,414,704
[45] Date of Patent:  May 9, 1995

[54] ADDRESS LOOKUP IN PACKET DATA COMMUNICATIONS LINK, USING HASHING AND CONTENT-ADDRESSABLE MEMORY

[75] Inventor:  Barry A. Spinney, Wayland, Mass.

[73] Assignee:  Digital Equipment Corporation, Maynard, Mass.

[21] Appl. No.:  223,379

[22] Filed:  Apr. 5, 1994

Related U.S. Application Data

[63] Continuation of Ser. No. 964,738, Oct. 22, 1992, abandoned.

[51] Int. Cl.6 ............ H04J 3/26; H04L 12/46; H04L 12/56
[52] U.S. Cl. ............ 370/60; 370/94.1; 395/400; 395/600
[58] Field of Search ............ 370/60, 85.13, 85.14, 94.1; 340/825.52; 395/400, 600

[56] References Cited

U.S. PATENT DOCUMENTS

4,587,610  5/1986  Rodman ............ 395/400
4,695,949  9/1987  Thatte et al. ............ 364/200
4,780,316 10/1988  Connell
4,922,417  5/1990  Churm et al. ............ 364/200
4,933,937  6/1990  Konishi ............ 370/85.13
5,109,336  4/1992  Guenther et al.
5,121,495  6/1992  Nemes ............ 395/600
5,136,580  8/1992  Videlock et al. ............ 370/94.1 X
5,197,002  3/1993  Spencer ............ 395/400 X
5,247,620  9/1993  Fukuzawa et al. ............ 395/325

FOREIGN PATENT DOCUMENTS

0522743  1/1993  European Pat. Off. ............ H04L 29/06
4023527  1/1991  Germany ............ H04L 12/46

Primary Examiner: Melvin Marcelo
Attorney, Agent, or Firm: A. Sidney Johnston; David A. Dagg

[57] ABSTRACT

A way of doing source address and destination address lookups is described, as may be used in a packet data communication system. A way of searching a relatively large database is described, using a combination of programmable hash algorithms, binary search algorithms, and a small content-addressable memory (CAM). The technique is efficient in space, time and cost, compared to prior methods. For example, prior methods using conventional binary reads may have used thirteen reads, whereas this technique requires on average two reads, with a worst case of four reads.

22 Claims, 8 Drawing Sheets
[Front-page drawing: block diagram of a controller with LCP, processor, crossbar switch, packet memory, queues, and CB switch control processor; remaining drawing labels not recoverable.]

`
U.S. Patent, May 9, 1995, Sheet 1 of 8, 5,414,704

[Sheet 1 drawing: network block diagram; rotated label text not recoverable.]

`
[Sheet 2 of 8 drawing: rotated label text not recoverable.]
`
U.S. Patent, May 9, 1995, Sheet 3 of 8, 5,414,704

[Sheet 3 drawing: recoverable labels include PACKET, OUTBOUND TRANS. (OT), and OUTBOUND REC. (OR).]
`
U.S. Patent, May 9, 1995, Sheet 4 of 8, 5,414,704

[Sheet 4 drawing (FIG. 4, frame formats): rotated label text largely not recoverable.]

`
U.S. Patent, May 9, 1995, Sheet 5 of 8, 5,414,704

[Sheet 5 drawing (FIG. 5, fields of the added header): rotated label text largely not recoverable.]

`
U.S. Patent, May 9, 1995, Sheet 6 of 8, 5,414,704

[Sheet 6 drawing (FIG. 6, hash table data structure): labels include HASH BUCKET 0, HASH BUCKET 1, HASH BUCKET 2, and INDEX= entries; remaining rotated text not recoverable.]

`
U.S. Patent, May 9, 1995, Sheet 7 of 8, 5,414,704

[Sheet 7 drawing (FIG. 7): table of tree SIZE, BALANCE TREE, and STORAGE ORDER for tree sizes including 7, 5, 4, and 3; the size-7 tree is rooted at D with children B (over A, C) and F (over E, G).]

`
U.S. Patent, May 9, 1995, Sheet 8 of 8, 5,414,704

[Sheet 8 drawing (FIG. 8, address lookup flow): a 48-bit INPUT ADDRESS is checked for a CAM MATCH and hashed by one of 2^48 - 1 possible programmable HASH FUNCTIONS; 16 bits of the hash index the HASH TABLE to a HASH BUCKET (1 of 64K) holding a 15-bit POINTER and a 3-bit SIZE, and the remaining 32 bits are matched in the TRANSLATION TABLE, whose records include TRANSLATION and SOURCE SEEN fields.]

`
ADDRESS LOOKUP IN PACKET DATA COMMUNICATIONS LINK, USING HASHING AND CONTENT-ADDRESSABLE MEMORY

This application is a continuation of application Ser. No. 07/964,738, filed Oct. 22, 1992, now abandoned.

RELATED CASES

The application discloses subject matter also disclosed in the following copending U.S. patent applications, all of which are assigned to Digital Equipment Corporation:

Ser. No. 07/964,791, filed Oct. 22, 1992, by Nigel Terence Poole, for "BACKPLANE WIRING FOR HUB IN PACKET DATA COMMUNICATIONS SYSTEM" (PD92-0558);

Ser. No. 07/964,792, filed Oct. 22, 1992, by Nigel Terence Poole, for "CROSSBAR SWITCH FOR SYNTHESISING MULTIPLE BACKPLANE INTERCONNECT TOPOLOGIES IN COMMUNICATIONS SYSTEM" (PD92-0559);

Ser. No. 07/965,651, filed Oct. 22, 1992, by Barry Alan Spinney, for "PACKET FORMAT IN HUB FOR PACKET DATA COMMUNICATIONS SYSTEM" (PD93-0012); and

Ser. No. 07/969,121, filed Oct. 22, 1992, by Martin Edward Griesmer et al., for "APPARATUS AND METHOD FOR MAINTAINING FORWARDING INFORMATION IN A BRIDGE OR ROUTER" (PD93-0013).

BACKGROUND OF THE INVENTION

This invention relates to address translation as used in packet data communications, and more particularly to a way of doing source and destination address lookups in such a system, using a combination of hashing, binary search, and CAM lookup.

In a packet data communication network of the Ethernet, token ring, or FDDI type, for example, there is usually the need to translate addresses. Some protocols or systems specify a 48-bit source and destination address so that a globally unique address is provided. However, for efficient use of resources at a local segment of a large network, it is advantageous to use smaller address fields instead of 48-bit addresses, for efficiency in bit-count of messages as well as efficiency in processing and storage. For this reason, while the 48-bit addresses are carried in the packet throughout its lifetime, shorter addresses are generated for local routing and processing. Thus, a translation mechanism must be provided to allow switching between global and local addresses. Examples of prior address translation methods used in packet data communications networks are disclosed in U.S. Pat. Nos. 4,933,937, 5,027,350, and 5,136,580.

A typical translation method employs a database of addresses in one format, with indexing into the database using the addresses of the other format. Hashing is often used in a method of this type to shorten the size of the addresses needed, and to provide more heavily populated data structures (reduce memory requirements). A binary search engine is also commonly used for such address lookups. Content addressable memories are a third technique for solving the search requirement.

An address database of, for example, 16K addresses (requiring a 14-bit address to enter) would require a worst case of fourteen reads in a straightforward binary search. This is prohibitive from a performance standpoint, because the device receiving the packet with the address must process packets at a very high rate to keep current with the traffic on a typical network. The memory accesses and processing time in devices made using commonly available semiconductor technology are not compatible with this method.

CAM technology, on the other hand, requires only one read operation to compare all stored addresses with an incoming address. However, the complexity and cost are prohibitive. Perhaps 100-times more transistor devices are required, compared to standard static RAM devices. An example of an address translation method using a CAM in a virtual addressing mechanism for computer memory is disclosed in U.S. Pat. No. 4,587,610.

Hashing algorithms have provided the most efficient solutions to the address lookup problem. The hashing solutions previously developed, however, exhibit inefficiencies in memory usage, low speed, and large worst-case delays, compared to the method provided herein.

There were several goals and constraints imposed in the development of this invention. Generally, it was desirable to reduce the space needed on the printed circuit board to implement the system, and it was to be of minimum cost and electrical power consumption. To this end, adding chips to the already-required chip population, or adding pins to these chips, was undesirable. That is, a goal is to provide address translation in a system without any additional overhead compared to a system without translation.

Generally, there was a need for a capability of about 16K address entries as a maximum, where ten to one hundred bridge handles were available. There was no need for more than this because of the limited addressing requirements of Ethernet. The address lookup should not require more than about 2.4 microsec per packet so that the smallest messages could be handled continuously in real time. The lookup should be completed before the next packet arrives so the inefficient use of packet memory will be avoided for packets that will be discarded, and no "yet-to-be-looked-up" queue need be maintained. Another burden is that address sets are not randomly assigned, so if a hash function is chosen randomly, the hash function will not be functional for some sets of addresses (hash will result in too many collisions) and a new hash function is required. Generally, it is desirable that there is a less than 1-in-100 chance of a new hash function being required for any given set of 16K addresses, meaning that probably fewer than 1-in-1000 customers will ever have to re-hash. A general requirement is that the memory needed for the translation function (e.g., hash tables, etc.) should not impose hardware burdens on the system, and the format of the memory (data path width, etc.) should match that needed for packet memory usage.

SUMMARY OF THE INVENTION

In accordance with one embodiment of the invention, a way of doing source address and destination address lookups is provided, as may be used in a packet data communication system. A way of searching a relatively large database uses a combination of programmable hash algorithms, binary search algorithms, and a small content-addressable memory (CAM). The technique is efficient in space, time and cost, compared to prior methods. For example, prior methods using conventional binary reads may have used thirteen reads, whereas this technique requires on average two reads, with a worst case of four reads.
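The combined strategy just summarized can be illustrated with a minimal Python sketch. All names (lookup, hash48, buckets, cam) and the table model are illustrative assumptions, not taken from the patent; the point is the read count: one read for the bucket header, at most three to walk the small tree, and a CAM fallback for overflow.

```python
# Illustrative sketch of the combined lookup: hash the 48-bit address,
# use 16 bits of the hash to pick one of 64K buckets, binary-search the
# bucket's small balanced tree stored in breadth-first order (children
# of slot i at 2i+1 and 2i+2), and fall back to a small CAM (modeled as
# a dict) for addresses that could not be placed in the hash table.

def lookup(addr48, hash48, buckets, cam):
    h = hash48(addr48)               # one-to-one 48-bit -> 48-bit hash
    records = buckets[h & 0xFFFF]    # low 16 bits select 1 of 64K buckets
    key = h >> 16                    # upper 32 bits distinguish entries
    i = 0
    reads = 1                        # reading the bucket header is one read
    while i < len(records):
        k, translation = records[i]
        reads += 1                   # one read per tree record visited
        if key == k:
            return translation, reads
        i = 2 * i + 1 if key < k else 2 * i + 2   # left or right child
    # not in the tree: consult the small content-addressable memory
    return cam.get(addr48), reads
```

With seven records per bucket the tree depth is three, so a hit at the root costs two reads and the deepest hit costs four, matching the averages claimed above.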
`
A feature is the use of two different techniques to perform the address lookup. The vast majority of lookups are obtained by hashing the address, using a programmable 48-bit to 48-bit linear hash function, and then using the low-order 16 bits to obtain one of 64K hash buckets, where each hash bucket contains both a pointer to a set of zero to seven lookup records, and the number of lookup records in this set. A set of lookup records is organized as a perfectly balanced, breadth-first binary tree. To obtain the required address, a binary search of this tree is done, and since the maximum depth of any tree is three, the maximum number of reads required is four: one to read the pointer from the hash bucket and the tree size, and three reads (maximum) to traverse a tree. A breadth-first storage representation is chosen because storage allocation for a breadth-first tree is never greater than the number of elements in the tree. For example, a hash bucket which points to five entries will take exactly five lookup records, with no lookup records required to be empty.
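The breadth-first layout described above can be sketched as follows. This is a hypothetical helper, assuming the conventional array layout in which the children of slot i sit at slots 2i+1 and 2i+2; filling the slots along an in-order walk of the complete-tree shape keeps binary-search comparisons valid while using exactly one slot per record.

```python
def breadth_first_layout(items):
    """Place a sorted list of records into a complete binary tree stored
    as an array (children of slot i at 2i+1 and 2i+2).  An in-order walk
    of the complete-tree shape visits slots in sorted-key order, so
    filling along that walk preserves the search ordering while using
    exactly len(items) slots, with no empty padding records."""
    n = len(items)
    out = [None] * n
    it = iter(items)

    def fill(i):
        if i >= n:
            return
        fill(2 * i + 1)        # left subtree first (smaller keys)
        out[i] = next(it)
        fill(2 * i + 2)        # then right subtree (larger keys)

    fill(0)
    return out
```

For seven sorted records A..G this produces the storage order D, B, F, A, C, E, G, which agrees with the size-7 tree shown in FIG. 7 (root D over B and F).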
The second technique used in the combination is to handle the reasonably-rare case when more than seven 48-bit addresses hash to the same bucket. To handle this case, one of the addresses is simply put in a CAM memory chip that is present anyway. Also, it is noted that a destination address is compared to the CAM contents anyway, for other purposes, so no new hardware or operation is needed. The result of the comparison is either (a) the entry is not found in the CAM, or (b) a number (0-to-255) supplying the index of the associated lookup record in a translation table. About thirty-two or so of these entries can be handled in a CAM with no significant burden, in one embodiment, with plenty of room left in the CAM for protocol filtering entries.
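The overflow rule can be sketched as below. The function name, the dict model of the CAM, and the sorted-list stand-in for the breadth-first tree are all illustrative assumptions.

```python
# Sketch of the overflow rule: a bucket holds at most seven records; a
# colliding address beyond that is spilled into the small CAM, modeled
# here as a dict keyed by the full 48-bit address.

MAX_BUCKET = 7

def insert(addr48, translation, hash48, buckets, cam):
    h = hash48(addr48)
    bucket = buckets.setdefault(h & 0xFFFF, [])
    if len(bucket) < MAX_BUCKET:
        bucket.append((h >> 16, translation))
        bucket.sort()        # kept sorted here; the real layout is the
                             # breadth-first tree order described above
        return "hash table"
    cam[addr48] = translation   # rare case: more than seven collisions
    return "cam"
```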
There is a third technique used in the combination, according to one embodiment. Rather than storing the full 48-bit address (along with the new shorter address) in the lookup record, to enable the binary search comparison matching (i.e., greater than, less than, or equal to) to be done, as was true in prior methods, a smaller value is stored. To avoid requiring two reads (in an embodiment using 56-bit data paths to memory), advantage is taken of the fact that all of the possible (2^48 - 1) programmable hash functions are mathematically one-to-one functions, and hence the comparison can just as easily be done with the full 48-bit hashed address as with the original incoming packet address. Since all of the hash entries associated with a particular hash bucket must have the same lower 16-bit hash value, it is only necessary to store the additional 32-bit part (the 48-bit hashed address minus the bottom 16-bit field) in the lookup record.

Another interesting feature in one embodiment is the use of a powerful set of programmable hash functions. Any hash function can be used of the form:
`
(M * addr) MOD (x^48 + x^36 + x^25 + x^10 + 1)

where M is any non-zero 48-bit polynomial, and where the arithmetic is performed in the field of polynomials of degree 48 over the Galois field of order 2 (the same mathematical structures as are used for network CRC, cyclic redundancy checks). Traditional techniques often used a fixed hash function, which can cause problems for certain address sets. With this invention, however, a new value of M is picked and a re-hash is done to replace the tables, whenever too many hash bucket collisions occur. Also, some conventional hash techniques used functions that weren't as "randomizing" and thus created too many collisions, or used functions which weren't both efficient and fast in both hardware and software.
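A minimal software sketch of this hash, assuming the modulus polynomial as reconstructed above; the multiplier value M in the example is an arbitrary illustrative choice, not a value from the patent. The carry-less (GF(2)) multiply and reduction mirror ordinary CRC arithmetic.

```python
# GF(2) ("carry-less") implementation of the programmable hash
# h(addr) = (M * addr) mod P, with P the degree-48 polynomial above.
# Multiplication by a fixed non-zero M modulo an irreducible P is
# invertible, which is why the hash is one-to-one and lookup records
# can store (part of) the hashed address instead of the raw address.

P = (1 << 48) | (1 << 36) | (1 << 25) | (1 << 10) | 1   # x^48+x^36+x^25+x^10+1

def gf2_hash(addr, M):
    # Carry-less multiply: XOR a shifted copy of M for each set bit of addr.
    prod = 0
    while addr:
        if addr & 1:
            prod ^= M
        M <<= 1
        addr >>= 1
    # Reduce modulo P: cancel each bit at degree 48 or above.
    for bit in range(prod.bit_length() - 1, 47, -1):
        if prod >> bit & 1:
            prod ^= P << (bit - 48)
    return prod   # 48-bit hashed address; low 16 bits pick the bucket
```

When too many addresses collide in one bucket, a different non-zero M is chosen and the tables are rebuilt with the new function, as described above.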
The technique of the invention is intended to be used whenever a large database (e.g., from less than 1K to greater than 64K entries) is to be searched. In the particular use described herein, the technique is employed to perform a hardware search of an address database, which could have up to 32K entries, for a matching "network" address, at a very high speed. In one embodiment, the technique is able to look up a 48-bit destination address and a 48-bit source address for each packet, at a rate of over 400,000 packets per second, as may be needed in an FDDI system. The worst case single address lookup requires at most four memory reads, and typically averages about two memory reads. This high address lookup rate is required to implement bridging on high speed network links operating by the IEEE 802.1D standard, in order to perform source and destination address lookups on minimum-sized packets; the systems which may use this method include FDDI (100 Mbits/sec), T3/E3 (45/34 Mbits/sec) and Sonet (155 Mbits/sec).

Accordingly, a technique is provided that is faster than binary searches, more deterministic than conventional software hash functions (i.e., worst case performance isn't much worse than typical performance), and cheaper than strictly CAM approaches. Thus, a less expensive approach is provided to perform very high speed lookups (e.g., more than 100,000 address searches per second on databases with greater than 8,000 addresses). This solution is needed in bridges and routers on high speed links in packet data communications networks.
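These rate claims can be sanity-checked arithmetically, assuming the 80-ns packet-memory read cycle described later in the specification; the check itself is the editor's illustration, not part of the patent.

```python
# Rough consistency check of the claimed lookup rates, assuming the
# 80-ns packet-memory cycle described in the detailed description.
CYCLE_NS = 80
WORST_READS = 4    # worst-case memory reads per single address lookup
AVG_READS = 2      # typical memory reads per lookup

# Source + destination lookups at worst case, per packet:
worst_per_packet_ns = 2 * WORST_READS * CYCLE_NS
assert worst_per_packet_ns == 640          # well under the 2.4-us budget
assert worst_per_packet_ns < 2400

# Packet rate if limited only by average-case lookup reads:
avg_rate_pps = 1e9 / (2 * AVG_READS * CYCLE_NS)
assert avg_rate_pps > 400_000              # exceeds 400,000 packets/sec
```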
`
BRIEF DESCRIPTION OF THE DRAWINGS

The novel features believed characteristic of the invention are set forth in the appended claims. The invention itself, however, as well as other features and advantages thereof, will be best understood by reference to the detailed description of a specific embodiment, when read in conjunction with the accompanying drawings, wherein:

FIG. 1 is a diagram in block form of a communications network which may use features according to one embodiment of the invention;

FIG. 1a is an electrical diagram in block form of a controller for the communications network of FIG. 1;

FIG. 2 is a diagram of the controller of FIGS. 1 and 1a showing processes executed in the controller;

FIG. 3 is a diagram of a controller of FIGS. 1 and 1a connected in a network;

FIG. 4 is a diagram of frame formats used in the network of FIGS. 1 or 3;

FIG. 5 is a diagram of the fields contained in an added header in the formats of FIG. 4;

FIG. 6 is a diagram of a data structure used for a hash table in the system of FIGS. 1 and 1a;

FIG. 7 is a diagram of breadth-first binary trees used in the method of the invention, to store translated addresses; and

FIG. 8 is a flow chart of address lookup procedures used in the embodiment of the invention of FIGS. 1 and 1a.
`
DETAILED DESCRIPTION OF SPECIFIC EMBODIMENT
`
Referring to FIG. 1, a packet data communications network which may use the features of the invention includes a controller 10 for interface between an FDDI link 11 and a crossbar switch device 12. The crossbar switch device 12 has a number of input/output ports 13, and each one of these ports 13 may be connected by another controller 10 to another network segment 11 such as an FDDI link or a token ring or Ethernet bus, for example. The crossbar switch 12 ordinarily makes a direct point-to-point interconnect between one port 13 and another port 13, so that the crossbar acts as a bridge or router in the network, linking one network segment to another. A station on a link 11 sends a packet onto its network segment with a destination address which is on another, different segment. The controller 10 for this segment detects the address as being that of a station on one of the remote segments, and generates local switching information to send to the crossbar 12 so that the appropriate interconnect can be made to send the packet to the proper port 13 and link 11, via another controller 10. As set forth in the above-mentioned copending applications, the crossbar switch device can function as a flexible interconnect device to create a ring or bus using the ports 13, as well as functioning as a point-to-point connector as is the usual case for crossbar switches.
Referring to a more detailed view of FIG. 1a, each port 13 of the crossbar has a data-in path 14 and a separate data-out path 15. The interface between the controller 10 and the FDDI link 11 is by way of a media access control (MAC) device 16, functioning to convert the serial light transmission on the incoming fiber optic cable 17 to electrical pulses, to recover the clock, convert the serial data on the optic loop to 6-bit parallel symbols, act as an elastic buffer to allow reclocking of data entering the controller 10, etc. Of course, all of these functions are reversed for outgoing data on the cable 18. The interface between the controller 10 and the MAC device 16 is by an incoming 8-bit parallel data path 19a (with additional parity and control lines) and an outgoing 8-bit parallel path 19b.
The controller 10 contains a processor or state machine 20 to execute various processes as will be described, and accesses a packet memory 21 via an interface 22, as well as a content addressable memory (CAM) 23 via interface 24. The packet memory 21 is addressed by a 20-bit address bus, and data is transferred by a 56-bit bidirectional data bus, included in the interface 22; various control lines are also in the interface 22. The CAM 23 is driven by a 14-bit bus and various control lines in the interface 24. The packet memory 21 is a RAM which stores a number of queues for incoming and outgoing data packets, as well as translation tables and hash tables as will be described. In addition, the packet memory stores certain data for which addresses are matched in the CAM 23.

The controller 10 also interfaces with a line card processor 25 by bus 26. The line card processor 25 is used to execute some diagnostic and initialization functions, and does not operate in routine packet transfer.

Logically, the processor 20 in the controller 10 executes six independent processes. There are two for inbound packet processing, two for outbound packet processing, one for interfacing to the external packet memory, and one for line card processor access. Packets inbound on FDDI line 17 and going through the controller 10 to the crossbar switch 12 are referred to as "inbound." Likewise, packets going in the direction of crossbar switch 12 through the controller 10 to the FDDI output line 18 are referred to as "outbound." By having independent processes which can operate in parallel, the controller 10 can process inbound and outbound packets at full speed. Distributed among the processes are control, parameter and status registers, that are used to define the operational modes and to determine the internal state of the controller 10; these registers are accessed through the node processor interface 26.
Referring to FIG. 2, the inbound receive (IR) process 27 executed on the processor 20 receives packets from the interface 19a to the FDDI ring via the media access control device 16. The IR process 27 parses and decodes each incoming packet from line 19a, places the packet data into an intermediate FIFO 28, and performs the necessary processing to determine if the packet is to be forwarded to the crossbar 12. For bridged packets, this decision is made by performing destination address, source address, and protocol field lookups, with the aid of the content addressable memory 23 and the internal hash function, according to the invention. The result of the lookup will determine where the packet is to go. The packet may have to be transmitted to another port 13 on the crossbar 12 to reach its destination, the packet may be destined for the line card processor 25, the packet may be destined for a processing engine 38 in the crossbar switch 12, or the packet may be filtered and discarded. When the IR process 27 decides a packet is to be forwarded to the crossbar 12, it uses the lookup results to generate the information in an added header as will be described. It will select an inbound queue to store the packet in the external packet memory 21 and will initiate memory requests to transfer the data from its intermediate FIFO 28 to the selected queue in the external packet memory 21. The processor 20 executing this IR process 27 performs this operation by generating IR requests that are sent to the packet memory 21.

The IR process 27 also handles a number of exception conditions. The forwarding of packets to processing engines in the switch 12 can be rate limited by software to prevent the traffic from a single port from overwhelming the shared processing resources of the switch 12. When these rate limits are exceeded the IR process 27 will discard these packets until the rate limit is replenished.
The inbound transmit (IT) process 29, again referring to FIG. 2, services the queues in packet memory 21 that contain packets to be transmitted to the crossbar switch 12. These packets are stored in queues in the external packet memory 21 of the controller 10. When an enabled queue has a packet count greater than one, the IT process 29 will begin processing of the packet by initiating packet memory requests to move data from the packet memory 21 to an internal FIFO 30. Part of the data transferred is the added header associated with the packet. The IT process 29 uses this added header field to request a connection on the crossbar 12 to the desired destination port 13; it requests this connection by sending information to the port interface 13 that indicates the destination port of the switch 12 and service class information describing how the connection is to be serviced. Prior to requesting the connection, the IT process 29 checks the timestamp stored with the packet and will discard the packet if it has expired. If the IT process 29 determines that a packet has expired after a connection has been requested but before the connection is made, it will discard the packet, abort the requested connection and continue servicing the enabled queues.

When a connection is established, the IT process 29 transfers the data from its intermediate FIFO 30 to the crossbar 12 via path 14 and performs the proper packet formatting during transmission.
The outbound receive (OR) process 31 executing on the processor 20 receives packets from the crossbar switch 12 interface 15, parses and decodes each packet, places the packet data into an intermediate FIFO 32 and performs the necessary validity checks of the packet's added header to determine if the packet should be received and placed at the tail of an enabled queue in packet memory 21. Packet reception is established when the controller 38 for the crossbar indicates that this controller 10 is the desired destination port for a connection. The OR process 31 indicates its willingness to take part in a connection by asserting a "next" control signal in the interface 15 that the crossbar controller 38 samples. When this "next" signal is asserted, the crossbar controller may return a signal indicating that a connection between a source port and this destination port has been made on the crossbar 12. The OR process 31 immediately monitors the outbound data received at the crossbar interface 15 to delimit a packet. It will also deassert the "next" signal and will not reassert it until it has determined that it is appropriate to establish a new connection.

The OR process 31 uses the service class field and one specific protocol class field value from the added header of the outbound packet to place the packet on a queue in external packet memory 21. The OR process 31 also handles multiple exception conditions on the crossbar including parity errors in the symbols, and packets with bad delimiters.
The outbound transmit (OT) process 33 services the queues in packet memory 21 that contain packets to be transmitted to the FDDI line 18 via path 19b and device 16. These packets are stored in the external packet memory 21 of the controller 10. When an enabled queue has a packet count greater than one, the OT process 33 will begin processing the packet by initiating packet memory requests to move the data from the external memory 21 to an internal FIFO 34. The OT process 33 will discard the packet if the time stamp stored with the packet has expired, otherwise it will request the MAC device 16 to begin a transmission to the FDDI line 18. If the OT process 33 is operating in a bridge link mode, it will decapsulate the packet by removing the added header that was stored with the packet, prior to transmission on the path 19b to the FDDI link. If the OT process 33 is attached to a switch link, it will keep and update the appropriate fields of the added header during transmission to the path 19b and FDDI link.

The packet memory (PM) access process 35 controls the interface 22 to the external packet memory 21. The packet memory 21 is used to hold the packet data of the multiple inbound and outbound queues and is also used to store the PC table, the hash table as described herein, and the translation database used for lookups. The PM process 35 controls access to the external packet memory 21 by arbitrating among all the processes requesting access to it. The PM process 35 will indicate to the requesting process when a cycle has been granted. The external packet memory is accessed via a 56-bit data bus
`
in interface 22. The PM process 35 can select up to 1-million 56-bit words over its 20-bit external packet memory address bus in interface 22. Read or write cycles can be performed every 80-ns to provide 600-Mbps of packet memory bandwidth.

When a process 27, 29, 31, or 33 requests a memory cycle via paths 36, 37, 38, or 39, respectively, it also indicates to the PM process 35 the desired queue that the cycle is associated with by driving the queue number. When the PM process 35 grants the cycle to the requesting process, it uses the queue number provided to simultaneously enable the pointers to the queue to the external memory 21. The PM process 35 will drive the corresponding tail (write cycle) or head (read cycle) pointer onto the external address pins of interface 22 and will perform the necessary pointer maintenance when the cycle completes. The maintenance includes wrapping the head or tail pointer when the boundary of a queue is reached. The PM process 35 will also monitor the queue pointers to detect and signal when a queue is approaching an overflow condition.
Each queue in the external packet memory 21 must be defined before the PM process 35 can perform an access to it. To define a queue, the boundaries for the queue must be specified by programming the external memory block addresses that point to the beginning and end of each queue. A queue is programmable in size in increments of 256 56-bit word units (1792 bytes) which constitute a block. In addition to defining the size of the queue, the owners of the queue head and tail must be specified, to control the direction of packet flow. A separate queue enable is provided for each queue in order to turn each one on and off independently.
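The queue-definition arithmetic above can be sketched as follows; the class and field names are illustrative assumptions, only the block size (256 words of 56 bits, i.e. 1792 bytes) and the wrap-at-boundary behavior come from the text.

```python
# Sketch of queue sizing and pointer wrap: queues occupy whole blocks
# of 256 56-bit words (1792 bytes), and head/tail pointers wrap at the
# programmed queue boundaries.

WORD_BYTES = 7        # a 56-bit word is 7 bytes
BLOCK_WORDS = 256
assert BLOCK_WORDS * WORD_BYTES == 1792   # one block, as in the text

class Queue:
    def __init__(self, start_block, n_blocks):
        self.base = start_block * BLOCK_WORDS          # first word address
        self.limit = self.base + n_blocks * BLOCK_WORDS  # one past the end
        self.head = self.tail = self.base

    def advance(self, ptr):
        ptr += 1
        return self.base if ptr == self.limit else ptr  # wrap at boundary
```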
The line card processor (LCP) access process 40 provides the interface between the processor 25 and the command and status registers 41 within the controller 10. Some of these registers are accessed directly by the processor 25 and some are accessed indirectly using the LCP process 40. The registers 41 that are indirectly accessible are shared among the processes 27, 29, 31, 33, and 35 of the controller 10 and must be arbitrated for. The indirect access interface of the LCP process 40 provides the mechanisms to select the desired indirect register and to trigger the desired read or write operation. Since the processor 25 does not have direct access to the external packet memory 21 of the controller 10, it must also use the indirect access interface of the LCP process 40 to perform read and write cycles to the external memory 21.
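The select-then-trigger sequence of the indirect access interface can be sketched as follows. This is a software analogy only: the lock stands in for the hardware arbitration among the sharing processes, and all names are illustrative rather than taken from the patent:

```python
# Sketch of the indirect register access mechanism: the processor first
# selects the target register, then triggers a read or write cycle.
import threading

class IndirectAccess:
    def __init__(self, registers):
        self._regs = registers          # shared register file (modeled as a dict)
        self._lock = threading.Lock()   # stands in for inter-process arbitration
        self._selected = None

    def select(self, name):
        # Step 1: choose which indirect register the next cycle targets
        self._selected = name

    def trigger_read(self):
        # Step 2: arbitrate for the shared registers, then perform the cycle
        with self._lock:
            return self._regs[self._selected]

    def trigger_write(self, value):
        with self._lock:
            self._regs[self._selected] = value
```

The same two-step interface would also carry the processor's reads and writes of the external packet memory, since it has no direct path to that memory.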
In addition to direct and indirect register 41 access, the LCP process 40 also maintains the traffic event counters and interrupt event registers. The LCP process 40 monitors the traffic events signaled by the processes 27, 29, 31, 33, and 35 to update the corresponding counters. In addition to the specified counters, the LCP process 40 provides two general-purpose counters that can be programmed to count a particular traffic event. The LCP process 40 monitors events in the controller 10 and will set the corresponding interrupts. These interrupts are categorized as associated with inbound traffic, with outbound traffic, or as fatal. All interrupts, except fatal interrupts, are maskable. The presence of a non-masked interrupt will result in the assertion of an interrupt signal.
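The masking behavior above can be captured in a few lines. The bit assignments and class name below are illustrative assumptions; the only properties taken from the text are that fatal interrupts cannot be masked and that any non-masked pending interrupt asserts the interrupt signal:

```python
# Sketch of interrupt categorization and masking: inbound, outbound, and
# fatal categories; fatal interrupts are never maskable.

INBOUND, OUTBOUND, FATAL = 0x1, 0x2, 0x4   # illustrative bit assignments

class InterruptUnit:
    def __init__(self):
        self.pending = 0   # interrupt event register
        self.mask = 0      # set bits suppress the corresponding interrupt

    def raise_event(self, category):
        self.pending |= category

    def set_mask(self, mask):
        # Fatal interrupts cannot be masked
        self.mask = mask & ~FATAL

    def irq_asserted(self):
        # A non-masked pending interrupt asserts the interrupt signal
        return bool(self.pending & ~self.mask)
```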
The operation of the processor 20 in controller 10 using the processes referred to includes a number of functions related to inbound and outbound packets.
These include filtering, address lookup, address matching, rate limiting, parity checking, etc.
The controller 10 performs input port filtering to prevent undesired traffic from entering the net, and this filtering is based on certain addresses contained in a packet. A 48-bit destination address and a 48-bit source address in the packet can be filtered for a number of addresses, up to the size of the translation table in memory 21. Certain addresses identified in the protocol as IEEE 802.2 LLC DSAP and IEEE 802.2 LLC SNAP can be filtered.
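At its simplest, the address filtering described above amounts to looking up both 48-bit addresses of an arriving packet in the translation table and admitting the packet only if both are permitted. The table layout and policy below are illustrative assumptions, not the patent's translation-table format:

```python
# Toy model of input port filtering on the 48-bit destination and source
# addresses. The real lookup uses the translation table in memory 21; here
# the table is modeled as a dict mapping address -> allowed flag.

def allow_packet(dst_addr, src_addr, table):
    """Admit a packet only if both its addresses are permitted."""
    return table.get(dst_addr, False) and table.get(src_addr, False)
```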
