United States Patent
Ebersole et al.

[11] Patent Number: 5,041,963
[45] Date of Patent: Aug. 20, 1991

[54] LOCAL AREA NETWORK WITH AN ACTIVE STAR TOPOLOGY COMPRISING RING CONTROLLERS HAVING RING MONITOR LOGIC FUNCTION

[75] Inventors: Ronald J. Ebersole, Beaverton; Frederick J. Pollack, Portland, both of Oreg.

[73] Assignee: Intel Corporation, Santa Clara, Calif.

[21] Appl. No.: 291,700

[22] Filed: Dec. 29, 1988

[51] Int. Cl.5 .................. G06F 15/16; G06F 11/30; G06F 13/36; G06F 13/00

[52] U.S. Cl. .................. 364/200; 364/221.6; 364/221.7; 364/229.3; 364/230; 364/240; 364/240.7; 364/241.1; 364/241.8; 364/242; 364/242.95; 370/60; 370/94.1

[58] Field of Search .................. 364/200, 900; 370/85, 86, 60, 94.1

[56] References Cited

U.S. PATENT DOCUMENTS

4,334,305  6/1982  Girardi ................... 370/86
4,482,980 11/1984  Korowitz et al. .......... 364/900
4,489,379 12/1984  Lanier et al. ............ 364/200
4,493,021  1/1985  Agrawal et al. ........... 364/200
4,539,655  9/1985  Trussell et al. .......... 364/900
4,754,395  6/1988  Weisshaar et al. ......... 364/200
4,771,391  9/1988  Blasbalg ................. 364/200
4,777,591 10/1988  Chang et al. ............. 364/200
4,835,673  5/1989  Rushby et al. ............ 364/200
4,888,726 12/1989  Struger et al. ........... 364/900

Primary Examiner—Stuart N. Hecker
Assistant Examiner—George C. Pappas
Attorney, Agent, or Firm—Owen L. Lamb
`
[57] ABSTRACT

A star local area network includes a ring bus hub (4) capable of being connected to a plurality of nodes (3, 5, . . . 9) geographically distant from the hub by means of low-speed serial links (18, 19, 21, 28). The nodes include processor means (2, 30, 31) for creating messages for transfer on the network. A plurality of duplex communication links (18, 19, 21, 28) connect the nodes to the ring bus hub (4). The hub (4) is comprised of a plurality of ring controllers (10, 12, 14, 16) driven by a common clock source (7). Each ring controller is connected by means of a number of parallel lines to other ring controllers in series to form a closed ring. Each one (3) of the plurality of nodes is geographically distant from the hub (4) and is connected to a corresponding one (10) of the ring controllers by means of one (18, 19) of the duplex communication links. The node controllers include node interface means (40) for transmitting the messages as a contiguous stream of words on the duplex communication link. The ring controllers include ring bus interface means (42) for forming the messages into discrete data packets for insertion onto the ring bus and means (32, 34) for buffering data messages received from the nodes and off the ring bus.
`
`10 Claims, 2 Drawing Sheets
`
[Front-page drawing: FIG. 1, a block diagram showing the ring hub (4) with ring controllers (node addresses), ring monitor, clock source, protocol processors, and node controllers.]
`Petitioner Riot Games, Inc. - Ex. 1031, p. 1
`
`
`
`
U.S. Patent    Aug. 20, 1991    Sheet 1 of 2    5,041,963

[FIG. 1: block diagram of the Local Area Network, showing the protocol processors, the ring hub (4), the ring controllers (with node addresses), the ring monitor, the clock source, and the node controllers.]

[FIG. 2: block diagram of the interface controller (CLIC), showing the input link interface (37) and output link interface (36) to the media, the input FIFO (34) and output FIFO (32), the shared pins, the ring bus interface (42), the node interface (40), and the mode select (43).]
`
`
`
`
U.S. Patent    Aug. 20, 1991    Sheet 2 of 2    5,041,963

[FIG. 3: block diagram of a micro-based subsystem, showing the system bus (62), the processor (Intel), the 386/380 DMA, and the connections to the Node Controller and node interface (FIG. 2).]

[FIG. 6b: IEEE 802.3 standard group message format.]

[FIG. 7a: individual Local address format, with fields (120-128) including I/G, CMMI, reserved, and LID.]

[FIG. 7b: group Local address format, with fields (130-136) including I/G, CMMI, and reserved.]
`
`
`
`
`
`LOCAL AREA NETWORK WITH AN ACTIVE STAR
`TOPOLOGY COMPRISING RING CONTROLLERS
`HAVING RING MONITOR LOGIC FUNCTION
`
`CROSS REFERENCES TO RELATED
`APPLICATIONS
`
`
This application is related to U.S. Pat. No. 4,939,724, granted on July 3, 1990, "Cluster Link Interface" of Ronald Ebersole; "Ring Bus Hub for a Star Local Area Network," Ser. No. 07/291,756 of Ronald Ebersole, now abandoned; and "Node Controller for a Local Area Network," Ser. No. 07/291,640, now abandoned, of Ronald Ebersole; all filed concurrently herewith and assigned to Intel Corporation.
`BACKGROUND OF THE INVENTION
`1. Field of the Invention
`The invention relates to data processing systems and
`more particularly to a method and apparatus for inter-
`connecting a plurality of computer workstations with
I/O devices and other workstations which allows the users to share I/O resources.
`2. Description of the Related Art
`A Local Area Network, or LAN, is a data communi-
`cations system which allows a number of independent
`devices to communicate with each other within a mod-
`erately-sized geographical area. The term LAN is used
`to describe networks in which most of the processing
`tasks are performed by a workstation such as a personal
`computer rather than by a shared resource such as a
`main frame computer system.
`With the advent of the inexpensive personal com-
`puter workstation, LANs equipped with various kinds
`of desktop computers are beginning to replace central-
`ized main frame computer installations. The economic
advantage of a LAN is that it permits a number of users
`to share the expensive resources, such as disk storage or
`laser printers, that are only needed occasionally.
`In a typical LAN network a desktop workstation
performs processing tasks and serves as the user's interface to the network. A wiring system connects the
`workstations together, and a software operating system
`handles the execution of tasks on the network. In addi-
`tion to the workstations, the LAN is usually connected
`to a number of devices which are shared among the
workstations, such as printers and disk storage devices.
`The entire system may also be connected to a larger
`computer to which users may occasionally need access.
`Personal computers are the most popular desktop work-
`stations used with LANs.
`The configuration of the various pieces of the net-
work is referred to as the topology. In a star topology a
`switching controller is located at the center or hub of
`the network with all of the attached devices, the indi-
`vidual workstations, shared peripherals, and storage
devices, on individual links directly connected to the central controller. In the star configuration, all of these
`devices communicate with each other through the cen-
`tral controller which receives signals and transmits
`them out to their appropriate destinations.
`A second kind of topology is the bus topology. In this
topology, wiring connects all of the devices on the
`LAN to a common bus with the communications signal
`sent from one end of the bus to the other. Each signal
has an address associated with it which identifies the
`
`
`particular device that is to be communicated with. Each
`device recognizes only its address.
`The third topology employs a circular bus route
`known as a ring. In a ring configuration, signals pass
`around the ring to which the devices are attached.
`Both bus and ring networks are flexible in that new
`devices can be easily added and taken away. But be-
`cause the signal is passed from end to end on the bus, the
`length of the network cable is limited. Star topologies
`have the advantage that the workstations can be placed
`at a considerable distance from the central controller at
`the center of the star. A drawback is that star topologies
`tend to be much slower than bus topologies because the
`central controller must intervene in every transmission.
In a star configuration, the signaling method is different than in bus or ring configurations. In the star configuration the central controller processes all of
`the communication signals. In a bus topology there is no
`central controller. Each device attempts to send signals
`and enter onto the bus when it needs to. If some other
device tries to enter at the same time, contention occurs.
`To avoid interference between two competing signals,
`bus networks have signaling protocols that allow access
`to the bus by only one device at a time. The more traffic
`a network has, the more likely a contention will occur.
Consequently, the performance of a bus network is degraded if it is overloaded with messages.
`Ring bus configurations have even more complex
`signaling protocols. The most widely accepted method
`in ring networks is known as the token ring, a standard
`used by IBM. An electronic signal, called a token, is
`passed around the circuit collecting and giving out
`message signals to the addressed devices on the ring.
`There is no contention between devices for access to
`the bus because a device does not signal to gain access
`to the ring bus; it waits to be polled by the token. The
`advantage is that heavy traffic does not slow down the
`network. However, it is possible that the token can be
`lost or it may become garbled or disabled by failure of
`a device on the network to pass the token on.
`The physical line which connects the components of
`a LAN is called the network medium. The most com-
`monly used media are wire, cable, and fiber optics.
`Coaxial cable is the traditional LAN medium and is
used by Ethernet™, the most widely recognized standard. The newest LAN transmission medium is fiber-optic cable, which exhibits a superior performance over
`any of the other media.
`The Fiber Distributed Data Interface (FDDI) is an-
other standard. FDDI is a token-ring implementation on fiber media that provides a 100 Mbit/second data rate.
There is an increasing need for high-performance internode communication, that is, broader I/O bandwidth. The mainframe computer is being extended or
`replaced by department computers, workstations, and
`file servers. This decentralization of computers in-
`creases the amount of information that needs to be
`transferred between computers on a LAN. As comput-
`ers get faster, they handle data at higher and higher
rates. The Ethernet™ standard is adequate for connecting 20-30 nodes, each with a performance in the range of 1 to 5 mips. Ethernet™ is inadequate when the performance of these nodes ranges from 5 to 50 mips.
An I/O connectivity problem also exists that concerns I/O fanout and I/O bandwidth. The bandwidth
`problem was discussed above with respect to internode
communication. The I/O fanout problem is related to
`
`
`
`
the fact that central processing systems are getting smaller and faster. As the computing speed increases, the system is capable of handling more and more I/O.
`However, as the systems get smaller, it becomes harder
to physically connect the I/O to the processors and
`memory. Even when enough I/O can be configured in
the system, the I/O connectivity cost can be prohibi-
`tive. The reason is that the core system (processors and
`memory) must be optimized for high-speed processors
`and memory interconnect. The cost of each high-speed
`I/O connection to the core is relatively expensive.
Thus, cost-effective I/O requires that the connection
`cost be spread over several I/O devices. On mainframe
`computers, the solution to the connectivity problem is
`solved by using a channel processor. A channel proces-
`sor is a sub-processor that controls the transfer of data
`between several I/O devices at a time by executing
`channel instructions supplied by the main processor.
`The main processor system is connected to several of
`these channel processors. Several channels can share
`one core connection.
`It is therefore an object of the present invention to
`provide an improved LAN that allows high perfor-
`mance interdevice communication and has the ability to
`connect a number of I/O devices to the network.
`
`SUMMARY OF THE INVENTION
`
`The above object is accomplished in accordance with
`the present invention by providing a LAN which com-
`bines the advantages of a star LAN with a ring bus
`LAN. The star configuration provides links to nodes at
the relatively slow bandwidth of the node link. The hub
`of the star uses the relatively high bandwidth of a ring
`bus.
`Nodes attach to the hub of the star through duplex
`communication links. Messages transferred between
`nodes are passed through the hub, which is responsible
`for arbitration and routing of messages. Unlike the prior
bus topology or ring topology, each node of the active
`star responds only to those messages that are intended
`for it. Routing of messages is accomplished by a destina-
`tion address in the header of the message. These ad-
`dresses are unique to each node and provide the means
`by which the hub keeps the communication between
`nodes independent.
`The active star configuration of the present invention
`has the advantage that it increases network bandwidth.
`In typical networks the performance of the node's
`means of attachment to the network is equivalent to the
`network bandwidth. This is because messages can be
`transferred only at the rate of the media, and only one
message can be transferred at a time. Ethernet, StarLAN, and FDDI all exhibit this characteristic, as they are
`essentially broadcast buses, in which every node sees
`every other node’s traffic.
`In the active star configuration of the present inven—
`tion, every data communication is an independent com—
munication between two nodes. Simultaneous, independent communication paths between pairs of nodes can
`be established at the same time. Each path can handle
data transfers at the link media transmission speed, providing a substantial increase in the total network bandwidth. When two nodes want to communicate with the
`same destination, the hub arbitrates between them and
`buffers the message from the node that is locked out.
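The arbitrate-and-buffer behavior described above can be sketched as follows. This is an illustrative software model only, not the patented hardware; the class and method names (`Hub`, `send`, `drain`) are hypothetical:

```python
from collections import deque

class Hub:
    """Toy model of the active-star hub: one message queue per destination.

    When two senders contend for the same destination, one message is
    delivered and the other is buffered (not lost) until the destination
    is free again.
    """
    def __init__(self):
        self.queues = {}  # destination address -> buffered (src, payload) pairs

    def send(self, src, dst, payload):
        # All contending messages enter the destination's queue; the hub
        # drains them one at a time, so the "locked out" sender's message
        # is simply buffered behind the winner's.
        self.queues.setdefault(dst, deque()).append((src, payload))

    def drain(self, dst):
        """Deliver all buffered messages for one destination, in arrival order."""
        q = self.queues.get(dst, deque())
        out = []
        while q:
            out.append(q.popleft())
        return out

hub = Hub()
hub.send(1, 9, b"hello")   # node 1 and node 2 contend
hub.send(2, 9, b"world")   # for destination node 9
delivered = hub.drain(9)   # both arrive, in order
```

Because each destination has its own queue, independent node pairs never interfere, which mirrors the simultaneous-paths property claimed above.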
`An addressing mechanism maintains a consistent ad-
`dress environment across the complete network that
`facilitates routing. Each node address is composed of
`
`two fields, one field providing a node address relative to
the hub it is attached to, and the other field, a hub ad-
`dress relative to the other hubs in the network. The
`combination of these two fields is a unique network
`address that is used to route any message to its ultimate
`destination.
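For illustration only, the two-field address composition can be sketched in Python. The 8-bit field widths are taken from the Cluster address structure described later in the specification; the function names are hypothetical:

```python
def make_network_address(hub_id, link_id):
    """Combine the hub-relative field and the node-relative field into
    one unique network address (8-bit hub ID, 8-bit link ID, per the
    Cluster address structure described later in this patent)."""
    assert 0 <= hub_id < 256 and 0 <= link_id < 256
    return (hub_id << 8) | link_id

def split_network_address(addr):
    """Recover the (hub, link) pair used to route a message."""
    return (addr >> 8) & 0xFF, addr & 0xFF

addr = make_network_address(0x12, 0x34)
```

The combined value is unique across the network, so any hub can route on the hub field and deliver on the link field.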
`The foregoing and other objects, features, and advan-
`tages of the invention will be apparent from the follow-
`ing more particular description of a preferred embodi-
`ment of the invention as illustrated in the accompanying
`drawings.
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`FIG. 1 is a functional block diagram of the Local
`Area Network of the present invention;
`FIG. 2 is a functional block diagram of the interface
`controller shown in FIG. 1 used in one mode of opera-
`tion as a node controller and in another mode of opera-
`tion as a ring controller;
`FIG. 3 is a diagram of a micro based subsystem;
`FIG. 4 is a block diagram of a cluster I/O subsystem;
`FIG. 5 illustrates the message format;
FIGS. 6a and 6b illustrate the IEEE 802.3 standard
message format; and
FIGS. 7a and 7b show the structure of the Local
`address field for a native Cluster node address.
`
`DESCRIPTION OF THE PREFERRED
`EMBODIMENT
`
`The interconnect architecture shown in FIG. 1 is
`designed to transfer blocks of data, called messages,
`between nodes attached to it. The interconnect function
`is implemented in a single VLSI component known as
`the Cluster Interface Controller (CLIC), shown in FIG.
`2. The transfer of messages between nodes attached to
`the Cluster is controlled by a protocol running in each
`of the nodes, which maintains the orderly access and
`use of the network.
`The present application defines the Cluster architec-
`ture. The CLIC component is described in application
`Ser. No. 07/291,640. The link between the node con-
`troller and the ring controller is more fully described in
U.S. Pat. No. 4,939,724.
`The LAN architecture is based on an active star
topology, as illustrated in FIG. 1. Nodes attach to the
`hub (4) of the star through duplex communication links.
`Messages transferred between nodes all pass through
`the hub, which is responsible for arbitration and routing
of messages. Unlike Ethernet™ or token rings, each
`node sees only those messages that are intended for it.
`Routing of messages is determined by a destination
`address in the header of the message. These addresses
`are unique to each node and provide the means for the
hub to keep the communication between nodes independent.
`The active star configuration increases network
`bandwidth. In typical networks, the performance of the
`node’s attachment to the network is equivalent to the
`network bandwidth. This is because messages can be
`transferred only at the rate of the media, and only one
can be transferred at a time. Ethernet™, StarLAN™, FDDI, etc. all exhibit this characteristic, as they are essentially broadcast buses, in which every node sees
`every other node's traffic.
`Hub-to-hub connectivity extends the capability of the
`network, providing for a much wider variety of config-
`urations. The network addressing mechanism maintains
`a consistent address environment across the complete
`
`
`
`
`MESSAGE FORMAT
`
`The message format is illustrated in FIG. 5. Starting
`(80) and ending (92) delimiters are shown separately to
`illustrate the dependency on the actual link between the
`node and the hub. Framing bits are a function of the link
`and are removed/regenerated every time a message
crosses a link boundary. The fields are defined as follows:
`SD : Starting Delimiter (Link Dependent)
`DA : Destination Address (6 bytes)
`SA : Source Address (6 bytes)
`L : Length (2 bytes)
`INFO : Information (Up to 4.5 K bytes)
`FCS : Frame Check Sequence (4 bytes)
`ED : Ending Delimiter (Link Dependent)
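As a rough illustration of the field layout above, the link-independent part of a message can be assembled and parsed as follows. This is a sketch, not the patented implementation: the byte order of the L field and the use of zlib's CRC-32 (which uses the 802.3 polynomial) in place of the hardware FCS are assumptions:

```python
import struct
import zlib

def build_frame(da, sa, info):
    """Assemble DA (6 bytes), SA (6 bytes), L (2 bytes), INFO, and
    FCS (4 bytes).  The SD and ED delimiters are link dependent and
    are therefore omitted here."""
    assert len(da) == 6 and len(sa) == 6
    body = da + sa + struct.pack(">H", len(info)) + info
    fcs = struct.pack(">I", zlib.crc32(body) & 0xFFFFFFFF)
    return body + fcs

def parse_frame(frame):
    """Split a frame back into (DA, SA, INFO), verifying the FCS."""
    da, sa = frame[0:6], frame[6:12]
    (length,) = struct.unpack(">H", frame[12:14])
    info = frame[14:14 + length]
    (fcs,) = struct.unpack(">I", frame[-4:])
    assert fcs == zlib.crc32(frame[:-4]) & 0xFFFFFFFF, "FCS mismatch"
    return da, sa, info

frame = build_frame(b"\x00\x01\x02\x03\x04\x05",
                    b"\x10\x11\x12\x13\x14\x15", b"hello")
```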
`ADDRESS FIELDS
`
network that facilitates routing. Each node address is
`composed of two fields, one providing a node address
`relative to the hub it is attached to and the other a hub
`address relative to the other hubs in the network. The
combination is a unique network address that is used to
`route any message to its ultimate destination.
`A network is composed of only a few functional
`modules, as illustrated in FIG. 1. The interconnect func-
`tionality is contained in the Node controllers (6, 30) and
`Ring Controllers (10, 12, 14, 16), which are two sepa-
`rate personalities of one VLSI chip selected by a mode
`pin input to the CLIC component shown in FIG. 2. A
`significant amount of common logic exists in the CLIC
`for buffering and connecting to the media interface. The
`CLIC will be referred to as either a Node Controller or
`Ring Controller, depending on its function in the net-
`work.
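For illustration, the one-chip/two-personalities idea can be modeled as follows; the class and method names here are hypothetical, not taken from the patent:

```python
class CLIC:
    """Sketch of the mode-pin selection: a single component whose mode
    input selects which interface logic (node or ring) drives the
    shared buffering and media-interface logic."""
    NODE, RING = 0, 1

    def __init__(self, mode_pin):
        self.mode = mode_pin

    def personality(self):
        # The same silicon behaves as one of two controllers,
        # depending on its function in the network.
        return "Node Controller" if self.mode == CLIC.NODE else "Ring Controller"
```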
`Media interfaces provide a method of connecting the
`CLIC to different link media, such as twisted pair wires,
`coax cable, or fiberoptic cable. Media interfaces typi-
`cally consist of an interface component or components
designed for an existing network. For example, the combination of the 82501 Ethernet™ Manchester encoder-decoder component and a differential driver/receiver allows interfacing to twisted pair wires.
`The hub (4) is a “register insertion ring". The net-
`work uses the ring bus described in Ser. No. 07/291,756
`at the hub for arbitration and routing of messages. The
`hub is composed of the Ring Controllers (10, 12, 14, 16)
and their media interfaces (18, 19, 21, 28). Every node
`controller has a corresponding Ring Controller at-
`tached via the media interfaces and the link between
`them. The ring controllers are connected as a unidirec-
`tional ring bus that closes on itself.
`The node is composed of the protocol processor (2),
`Node Controller (6), and media interface (18, 19). Al-
`though the protocol processor is not part of the present
`invention, it is shown to illustrate how a workstation is
`interfaced to a node controller. The protocol processor
`is responsible for supplying messages (to be sent on the
`network from the node) to the node controller (6), and
`removing messages received by the node controller
`from the network. Protocol handlers running on the
`protocol processor control the flow of messages be-
`tween the node and other nodes on the network. The
`network hub is responsible for actual transfer of the
`message to its destination.
`
`CLUSTER INTERFACE CONTROLLER (CLIC)
`The CLIC provides both the node and hub interface
`capabilities of the Cluster. FIG. 2 illustrates the three
`major functional blocks of the CLIC and how they are
`related. The link interface (36, 37) is common to both
`the node and Ring Controller functions of the compo-
`nent. When used in a network, either the node interface
`(40) or ring bus interface (42) is selected (43), allowing
`the selected interface access to the link interface (36, 37)
and the I/O pins (44, 45) of the combined node/ring bus
`interface.
`
`NETWORK ADDRESSING
`
`
`The source address (SA) and destination address
`(DA) fields are 48-bits in length and have identical
`formats. The source address (84) identifies the node
`originating the message. The destination address identi-
`fies the node receiving the message. The IEEE 802.3
standard defines two address lengths, 16-bit and 48-bit.
`Only the 48-bit length is used in the Cluster. The IEEE
802.3 address format is shown in FIGS. 6a and 6b. The
fields are defined as follows. For FIG. 6a:
I/G : Individual (=0) or Group (=1) Address
U/L : Locally Administered Address (=1)
ADDR : Station Address (46 bits)
For FIG. 6b:
I/G : Individual (=0) or Group (=1) Address
U/L : Universal Address (=0)
VID : Vendor Identification (22 bits)
NA : Node Address (24 bits, assigned by Vendor)
`The I/G bit (94, 100) identifies the address as an
`individual or group address. Individual addresses are to
`a single node. Group can be to a subset of the total set
`of nodes on the network or all nodes (broadcast).
`Broadcast is defined as all address bits equal to 1. Indi-
`vidual and group addresses (other than broadcast) are
`further qualified by the U/L bit (96, 102). Universal
`addresses are unique addresses administered by the
`IEEE. Each manufacturer of 802.3 nodes, or control-
`lers, receives a vendor identification number from the
`IEEE. That manufacturer will then assign 24 bit node
`addresses to each product in sequence. The combina-
`tion of vendor ID and node ID creates a unique, univer-
`sal ID that can be used on any network.
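The vendor-ID-plus-node-number composition can be sketched as follows. The bit placement used here is illustrative only (the 802.3 standard defines the actual bit ordering and transmission order), and the function names are hypothetical:

```python
def make_universal_address(vendor_id, node_id):
    """Illustrative 48-bit universal address: a 22-bit vendor
    identification assigned by the IEEE plus a 24-bit node address
    assigned by the vendor, with I/G = 0 (individual) and U/L = 0
    (universal) in the two low-order bits."""
    assert vendor_id < (1 << 22) and node_id < (1 << 24)
    return (vendor_id << 26) | (node_id << 2)   # low bits U/L, I/G both 0

def is_group(addr):
    return addr & 0b01 != 0        # I/G bit set -> group address

def is_locally_administered(addr):
    return addr & 0b10 != 0        # U/L bit set -> local address

BROADCAST = (1 << 48) - 1          # broadcast: all address bits equal 1

addr = make_universal_address(vendor_id=5, node_id=9)
```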
`Locally administered addresses are defined within a
`single network and are independent. They allow special
`functions, grouping, etc. Cluster provides for both
`Local and Universal addresses. Native Cluster ad-
`dresses are encoded in the Local address and have an
architecturally defined structure. The structure facilitates efficient routing between interconnected hubs.
`Nodes interfacing to a Cluster network through a Node
Controller (CLIC) are identified only by Local addresses.
`Universal addresses are supported to allow attach-
`ment of 802.3 nodes without altering their software or
`hardware. The intent is to provide a migration path and
`performance increase
`to existing nodes with no
`changes.
`
`LENGTH FIELD
`
The length field (L) is two bytes or 16 bits in length. Its value indicates the length, in number of bytes, of the
`
`
`The IEEE 802.3 Standard message format and ad-
`dress model are used in the Cluster. The Cluster pro-
`vides a link interface mode that allows a node imple-
`mented in accordance with the IEEE 802.3 standard to
`connect directly to the Cluster without a gateway or
`bridge. The adaption to the Cluster architecture is pro-
`vided by the Ring Controller (CLIC) component.
`
`
`
`
`INFO field. The length field is not used by the Cluster
`Hardware.
`
`
`NODE INTERFACE
`
`
`Two methods of interfacing a node to the Cluster are
`provided. The IEEE Standard 802.3 compatible link
`interface allows an unmodified 802.3 node to be directly
`attached to a Cluster through one of several different
`available 802.3 media choices. High performance nodes
`use the Node Controller (CLIC) which is described in
`copending application Ser. No. 07/291,640. The Node
`Controller (CLIC) provides for high bandwidth links
`and supports low latency protocols.
`IEEE 802.3 LINK INTERFACE
`
`FRAME CHECK SEQUENCE FIELD
`
`A cyclic redundancy check (CRC) is used to generate
`a CRC value for the FCS field (90). The FCS is 32-bits
and is generated over the DA, SA, L, and INFO fields.
`The CRC is identical to the one used in the 802.3 stan-
`dard.
`
`MAXIMUM MESSAGE LENGTH
`
`Two maximum message lengths are enforced. The
`IEEE 802.3 standard has a maximum message length of
`1.5 K bytes, while Cluster will handle up to 4.5 K bytes.
`Native Cluster nodes (those with a Node Controller)
`have only Local addresses. Universal addresses are
`reserved for 802.3 nodes and imply that only 1.5 K byte
`messages may be sent to these nodes.
Group messages are restricted to 1.5 K bytes to enforce compatibility with 802.3 nodes. The smaller group message size also allows the use of a more efficient broadcast mechanism in the Cluster. Consequently, native Cluster group messages are restricted to 1.5 K bytes.
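The two enforced maxima can be summarized in a small sketch; the exact byte counts chosen for "1.5 K" and "4.5 K" are assumptions of this illustration, not values stated in the patent:

```python
MAX_8023_BYTES = 1536      # "1.5 K bytes" (exact byte count assumed here)
MAX_CLUSTER_BYTES = 4608   # "4.5 K bytes" (exact byte count assumed here)

def max_message_length(universal_destination, group_destination):
    """Universal (802.3) destinations and all group messages are held
    to the 802.3 maximum; only native Cluster individual messages may
    use the larger Cluster maximum."""
    if universal_destination or group_destination:
        return MAX_8023_BYTES
    return MAX_CLUSTER_BYTES
```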
`CLUSTER ADDRESS STRUCTURE
`
FIGS. 7a and 7b show the structure of the Local address field for a native Cluster node address. Both individual (FIG. 7a) and group (FIG. 7b) address structures are illustrated. The I/G field (120, 130) is the Individual or Group Address (1 bit). The CMMI field is the Cluster Management Message Identifier (6 bits). A 24-bit field is reserved. The HID field is the Hub Identifier (8 bits). The LID field is the Link Identifier (8 bits). The GID field is the Group Identifier (16 bits).
`LOCAL ADDRESS FORMAT
`The CMMI field is a Cluster defined field used to
`identify network control functions. A zero value in the
`field indicates that the message is to be handled nor-
`mally. A nonzero value identifies a special function to
`be performed by the CLIC selected by the HID and
`LID field.
Cluster Management Messages (CMMs) are addressed directly to a Ring Controller and are used to
`manage functions in the network that cannot be directly
`handled by the hardware. Examples are network map-
`ping, initialization of routing functions, diagnostic eval-
`uation, and performance monitoring. Ring Controllers
`recognize the message as a CMM and treat it as a nor-
`mal message unless it is addressed to them. If addressed
to the Ring Controller, the function defined by the CMMI field is performed. CMMs are described more
`fully in U.S. Pat. No. 4,939,724.
`All Cluster Hubs are given a unique Hub Identifier at
`initialization. All links attached to the hub are also as-
signed a unique identifier relative to the hub to which they are attached. Up to 256 hubs, and 256 links per hub,
`can be supported in a single Cluster network. A native
`Cluster node connected to a link takes on the address of
`the hub and link to which it is attached. 802.3 nodes
`may use either the Local address or a Universal address.
`The Group Identifier (GID) provides for multi-cast
`addressing. Local group addressing is not supported by
`Cluster hardware, deferring the interpretation to the
`node itself. All messages addressed to a group are
`broadcast to all nodes, where they can filter the address.
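The native Local address structure and the hub's HID/LID routing decision can be sketched as follows. Field positions within the 48 bits are assumptions of this illustration (FIG. 7a defines the actual layout), and the function names are hypothetical:

```python
def pack_local_individual(cmmi, hid, lid):
    """Illustrative packing of a native Cluster individual address:
    I/G = 0, U/L = 1 (locally administered), CMMI (6 bits), 24
    reserved bits, HID (8 bits), LID (8 bits)."""
    assert cmmi < 64 and hid < 256 and lid < 256
    return (1 << 46) | (cmmi << 40) | (hid << 8) | lid   # I/G bit (47) = 0

def route(addr, my_hid):
    """A hub delivers on the LID when the HID names itself; otherwise
    it forwards the message toward the addressed hub."""
    hid, lid = (addr >> 8) & 0xFF, addr & 0xFF
    return ("deliver", lid) if hid == my_hid else ("forward", hid)

addr = pack_local_individual(cmmi=0, hid=0x12, lid=0x34)
```

A zero CMMI means normal handling, as described above; a nonzero value would select a special function in the CLIC addressed by the HID and LID.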
`
`
`Two basic operational modes are incorporated in the
`CLIC shown in FIG. 2, the Node Controller mode and
`the Ring Controller mode. A common block of logic,
including the FIFO buffers (32, 34), output link interface (36) and input link interface (37), is shared by two
`independent logic blocks (40, 42) that implement the
operational modes. When the Node Controller interface
`logic (40) is selected, the internal interface (44) is dedi-
cated to it, along with the I/O Interface pins (46), and
`the Ring Controller hub interface logic (42) is disabled.
The I/O Interface pin functions and timing are deter-
`mined by the mode selected.
`The Node Controller provides a slave direct memory
`access (DMA) interface to a protocol processor respon-
`sible for controlling the Cluster connection. The slave
`DMA interface is capable of very high transfer rates
`and incorporates additional
`functionality to support
`low-latency protocol development. The interface is
`optimized for use with a protocol processor/high per-
`formance DMA controller combination, such as an
`Intel 80386 with an Intel 82380 DMA controller, an
`Intel 80960CA with integrated DMA, or a Channel
`Processor (64). Two Cluster controller subsystems are
`illustrated in FIGS. 3 and 4.
`As shown in FIG. 3, a microprocessor is coupled
`with memory, the Node Controller, and a system bus
`interface. This is a high performance subsystem for use
`in a department computer or mainframe. The local
memory is used for the protocol processor code and
`variables, with the actual messages transferred between
system memory and the Node Controller. The combination of large FIFO buffers and the low-latency protocol support makes this model practical and avoids extra copies.
`
`
NODE CONTROLLER SUBSYSTEM
`
`The following sequence describes the general model
`for message reception and transfer to system memory:
`1. As the header of an incoming message is received,
`it is transferred to local memory (54).
`2. Once the header has been transferred, the Node
`Controller interrupts the protocol processor (56).
`3. The processor (56) acknowledges the interrupt and
`examines the header.
`4. While the protocol processor examines the header,
`the remainder of the message is stored in the Node
`Controller Input FIFO (34), but is not transferred to the
`subsystem or memory.
`5. Once the disposition of the message has been deter-
`mined, the remainder of the message is transferred to
`the destination buffer in system memory (54).
`6. A final interrupt is generated, indicating the avail-
`ability of a status message, after the complete message
`has been transferred.
`
`
`
`
7. The processor acknowledges the interrupt and reads the status register.
8. The transfer is then completed based on the status received.
The above sequence allows processing of the message header to be overlapped with the receipt of the message. In many systems, the transfer to system memory is faster than the transfer rate of incoming messages. The time spent in processing the header at the beginning is gained back in the transfer of the message. Copying the message into a local buffer is unnecessary due to the large buffer in the Node Controller and the flow control on the link to the hub.

ALIGNMENT OF MESSAGE BUFFERS

Messages provided by a system for transmission on a network can begin on any byte boundary within system memory and be any number of bytes long within the maximum and minimum size boundaries. The message can also be composed of multiple buffers chained together, with each buffer having different lengths and beginning on different boundaries. The Node Controller is designed as a 32-bit device to achieve high transfer rates with minimum impact on system bandwidth.
The Node Controller does not provide for data alignment, leaving that task to the protocol processor and/or DMA controller. Available DMA controllers provide alignment and assembly/disassembly capabilities that eliminate the need for providing them in the Node Controller.

MESSAGE TRANSFER ON NODE CONTROLLER I/O BUS

The Node Controller expects all messages to be transferred as contiguous words (32-bits wide) starting with the first four bytes to be transferred. Messages less than an integral number of words long are designated in the control word initiating the message transfer. The last word of the message transferred to the Node Controller is truncated based on the length provided in the control word.
Input messages for the system are handled in a similar fashion. T
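The contiguous-word packing and last-word truncation described above can be sketched as follows. This is an illustrative model, not the patented hardware; big-endian byte order within a word is an assumption of the sketch:

```python
import struct

def pack_words(message):
    """Pack a byte message into contiguous 32-bit words, zero-padding
    the final partial word.  Returns (words, byte_length); byte_length
    plays the role of the control word's length, used by the receiver
    to truncate the last word."""
    padded = message + b"\x00" * (-len(message) % 4)
    words = [struct.unpack(">I", padded[i:i + 4])[0]
             for i in range(0, len(padded), 4)]
    return words, len(message)

def unpack_words(words, byte_length):
    """Rebuild the byte stream, truncating the last word based on the
    length provided in the control word."""
    raw = b"".join(struct.pack(">I", w) for w in words)
    return raw[:byte_length]

words, nbytes = pack_words(b"abcdefg")   # 7 bytes -> 2 words, 1 pad byte
```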