US005457681A

Gaddis et al.

[11] Patent Number: 5,457,681
[45] Date of Patent: Oct. 10, 1995
[54] ATM-ETHERNET PORTAL/CONCENTRATOR

[75] Inventors: Michael E. Gaddis, Richard G. Bubenik, both of St. Louis; Pierre Costa, Bridgeton; Noritaka Matsuura, St. Louis, all of Mo.

[73] Assignees: Washington University; SBC Technology Resources, Inc., both of St. Louis, Mo.

[21] Appl. No.: 894,445

[22] Filed: Jun. 5, 1992

[51] Int. Cl.: H04L 12/66
[52] U.S. Cl.: 370/56; 370/85.13; 370/94.1
[58] Field of Search: 370/56, 60, 60.1, 82, 84, 85.13, 85.14, 94.1, 94.2, 110.1, 108, 118
[56] References Cited

U.S. PATENT DOCUMENTS

5,088,090  2/1992  Yacoby ..................... 370/85.13
5,101,404  3/1992  Kunimoto et al. ........... 370/60
5,113,392  5/1992  Takiyasu et al. ........... 370/94.1 X
5,136,584  8/1992  Hedlund ................... 370/94.1
5,204,882  4/1993  Chao et al. ............... 370/94.1 X
5,208,811  5/1993  Kashio et al. ............. 370/94.1
5,210,748  5/1993  Onishi et al. ............. 370/60 X
5,214,642  5/1993  Kunimoto et al. ........... 370/79 X

Primary Examiner-Melvin Marcelo
Attorney, Agent, or Firm-Rogers, Howell & Haferkamp
[57] ABSTRACT

An ATM-Ethernet portal/concentrator permits a transparent interconnection between Ethernet segments over an ATM network, providing remote connectivity for Ethernet segments. The portal includes an Ethernet controller and an ATM cell processor, both of which receive and transmit data to and from a dual port shared memory under control of a direct memory access controller. A control microprocessor monitors and controls the shifting of data through the dual port memory. In this scheme, original data is written and read directly into and out of the dual port memory, eliminating any requirement for copying of data and thereby significantly increasing the data throughput capability of the portal. In the concentrator embodiment, a plurality of Ethernet controllers, each of which is connected to its own associated Ethernet segment, is multiplexed through the concentrator to an ATM network to provide remote connectivity for each of the Ethernet segments.

13 Claims, 16 Drawing Sheets
[Front-page drawing: representative block diagram of the ATM-Ethernet portal/concentrator, showing Ethernet segments 1 through N connected through shared interfaces to an ATM network]

Comcast, Ex. 1249
U.S. Patent    Oct. 10, 1995    Sheet 1 of 16    5,457,681

[FIG. 1: schematic of an extended Ethernet segment utilizing the ATM-Ethernet portal]

FIG. 2 legend (ATM UNI header format):
GFC - Generic Flow Control (4 bits)
VPI - Virtual Path Identifier (8 bits)
VCI - Virtual Channel Identifier (16 bits)
PT - Payload Type (3 bits)
CLP - Cell Loss Priority (1 bit)
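The legend above covers the first four octets of the five-octet ATM cell header (the fifth octet is the HEC). A minimal sketch of packing and unpacking these fields, assuming the standard UNI bit layout (the function names are illustrative only):

```python
def pack_uni_header(gfc, vpi, vci, pt, clp):
    """Pack the ATM UNI header fields (GFC/VPI/VCI/PT/CLP) into 4 bytes."""
    word = ((gfc & 0xF) << 28) | ((vpi & 0xFF) << 20) \
         | ((vci & 0xFFFF) << 4) | ((pt & 0x7) << 1) | (clp & 0x1)
    return word.to_bytes(4, "big")

def unpack_uni_header(data):
    """Recover the five fields from the first 4 header bytes."""
    word = int.from_bytes(data[:4], "big")
    return {"gfc": (word >> 28) & 0xF, "vpi": (word >> 20) & 0xFF,
            "vci": (word >> 4) & 0xFFFF, "pt": (word >> 1) & 0x7,
            "clp": word & 0x1}
```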
U.S. Patent    Oct. 10, 1995    Sheet 2 of 16    5,457,681

[FIG. 3: schematic block diagram of the ATM-Ethernet portal hardware architecture]
U.S. Patent    Oct. 10, 1995    Sheet 3 of 16    5,457,681

[FIG. 4: block diagram of the ATM cell processor architecture]
U.S. Patent    Oct. 10, 1995    Sheet 4 of 16    5,457,681

[FIG. 5: block diagram of Ethernet frame segmentation into ATM cells]
U.S. Patent    Oct. 10, 1995    Sheet 5 of 16    5,457,681

[FIG. 6: block diagram of the segmentation and reassembly (SAR) header]
U.S. Patent    Oct. 10, 1995    Sheet 6 of 16    5,457,681

[FIG. 7: block diagram of end-to-end CRC propagation]
U.S. Patent    Oct. 10, 1995    Sheet 7 of 16    5,457,681

FIG. 8. (Processing of incoming Ethernet frames.) The Ethernet controller deposits each frame directly into shared memory, where ATM cell headers and buffer descriptors have been preconfigured for the next Ethernet frame; a DMA transfer then moves the cells to the ATM cell processor. Because the controller directly segments the frame into ATM cells in place, no copying is required.
U.S. Patent    Oct. 10, 1995    Sheet 8 of 16    5,457,681

FIG. 9. (Processing of incoming ATM cells; cells arrive contiguously with no lost cells.) A "source synchronized" DMA transfer moves cells from the ATM cell processor into shared memory; Ethernet transmit buffer descriptors for the frame point directly into shared memory, so the Ethernet controller transmits the reassembled frame without copying.
U.S. Patent    Oct. 10, 1995    Sheet 9 of 16    5,457,681

FIG. 10. (Processing of incoming ATM cells; general case requiring reordering and multiple frame assembly.) "Source synchronized" DMA transfers from the ATM cell processor deposit cells into shared memory; separate Ethernet transmit buffer descriptor lists for frame 1 and frame 2 order the segments, and the Ethernet controller gathers the data directly from shared memory. No copying is required.
U.S. Patent    Oct. 10, 1995    Sheet 10 of 16    5,457,681

[FIG. 11: format of control cells addressed to the portal; fields include a destination address]
U.S. Patent    Oct. 10, 1995    Sheet 11 of 16    5,457,681

[FIG. 12: block diagram of an ATM-Ethernet concentrator utilizing components from the portal]
U.S. Patent    Oct. 10, 1995    Sheet 12 of 16    5,457,681

FIG. 13. Main processing loop:

1. If Ethernet frames have arrived and Xmit FIFOs are empty?
   Yes: Process incoming Ethernet frames.
2. If DMA Xfer from dual port to Xmit FIFOs has completed?
   Yes: Start transfer of cells in Xmit FIFO. Recycle ATM cell buffers used for received frame.
3. If ATM cells have arrived from ATM network (i.e., WP != RP)?
   Yes: Process incoming ATM cells.
4. If Ethernet controller's Xmit unit is idle?
   Yes: Recycle ATM cell buffers used for Xmitted frame(s) and advance SP (if possible). Xmit new frames in waiting queue.
5. If free dual port memory is running low (WP approaches SP)?
   Yes: Discard old, partially reassembled Ethernet frames and advance SP.
6. If all decisions above, 1-5, were negative?
   Yes: Fill in sequence numbers, set segment type identifier, and set size fields in next frame's cells.
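In Python-like form, the decision sequence of FIG. 13 might be modeled as below; the flag names on the hypothetical state object are illustrative, not the patent's identifiers:

```python
class PortalState:
    """Hypothetical stand-in for the portal's status flags (illustrative only)."""
    def __init__(self):
        self.frames_arrived = False   # Ethernet frames waiting, Xmit FIFOs empty
        self.dma_done = False         # DMA to Xmit FIFOs has completed
        self.wp = self.rp = 0         # cell buffer write/read pointers
        self.xmit_idle = False        # Ethernet controller's Xmit unit idle
        self.memory_low = False       # free dual port memory low (WP nears SP)

def poll_once(p):
    """One pass of the FIG. 13 loop; returns the actions taken this pass."""
    actions = []
    if p.frames_arrived:
        actions.append("process incoming Ethernet frames")
    if p.dma_done:
        actions.append("start cell transfer; recycle received-frame cell buffers")
    if p.wp != p.rp:
        actions.append("process incoming ATM cells")
    if p.xmit_idle:
        actions.append("recycle Xmitted-frame buffers; Xmit waiting frames")
    if p.memory_low:
        actions.append("discard old partial frames; advance SP")
    if not actions:  # all five decisions negative
        actions.append("preformat next frame's cells")
    return actions
```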
U.S. Patent    Oct. 10, 1995    Sheet 13 of 16    5,457,681

FIG. 14. Process incoming Ethernet frames:

1. If frame's destination address is that of the portal?
   Yes: Process Ethernet control frame, return.
   No: continue.
2. If ATM cells have been preformatted for next frame?
   Fill in frame identifiers, sequence numbers, set segment type identifier, and set size fields in frame's cells.
3. Fill in segment type identifier and size in ATM cell of last segment.
4. Initiate DMA transfer to move cells from dual port to Xmit FIFO; return.
U.S. Patent    Oct. 10, 1995    Sheet 14 of 16    5,457,681

FIG. 15. Process Ethernet control frame:

1. If frame contains signaling cells to be sent into network?
   No: Process command (e.g., set VPI/VCI, set portal address, etc.), return.
2. Wait for previously initiated DMA transfer to complete (if any is pending).
3. Compute address and size of next block of data to be transmitted.
4. Initiate DMA transfer to move block of data to Xmit FIFO.
5. If this block is the last block of data to be transferred?
   No: repeat from step 2.
   Yes: return.
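The block-transfer loop in FIG. 15 can be sketched as follows; the block size and function names are assumptions of this sketch, and a plain list stands in for the Xmit FIFO:

```python
def send_signaling_cells(data, fifo, block_size=53):
    """Move a control frame's signaling cells toward the Xmit FIFO one block
    at a time, as in FIG. 15: compute the next block's address and size,
    start its transfer, and stop after the last block."""
    offset = 0
    while offset < len(data):
        block = data[offset:offset + block_size]  # next block of data
        fifo.append(bytes(block))                 # stands in for the DMA transfer
        offset += len(block)
    return fifo
```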
U.S. Patent    Oct. 10, 1995    Sheet 15 of 16    5,457,681

FIG. 16. Process incoming ATM cells:

1. If cell is control cell?
   Yes: Process ATM control cell, increment RP, return.
   No: continue.
2. Look up reassembly record for this cell; create new record if nonexistent.
3. Increment packets-received.
4. If cell is the last segment of an Ethernet frame?
   Yes: Set frame-size from this cell.
5. If frame-size equals packets-received?
   Yes: Put reassembled Ethernet frame into waiting queue (i.e., schedule for transmission).
6. Increment RP, return.
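The reassembly bookkeeping of FIG. 16 can be sketched as below; the cell dictionary keys (frame id, last-segment flag, frame size in cells) are assumptions standing in for the SAR header fields:

```python
def process_cell(records, waiting_queue, cell):
    """Update the per-frame reassembly record for one data cell and schedule
    the frame for transmission once all of its cells have arrived."""
    rec = records.setdefault(cell["frame_id"],
                             {"packets_received": 0, "frame_size": None})
    rec["packets_received"] += 1
    if cell["last_segment"]:
        # the last segment carries the frame's total size in cells
        rec["frame_size"] = cell["frame_size"]
    if rec["frame_size"] == rec["packets_received"]:
        waiting_queue.append(cell["frame_id"])  # schedule for transmission
        del records[cell["frame_id"]]
```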
U.S. Patent    Oct. 10, 1995    Sheet 16 of 16    5,457,681

FIG. 17. Process ATM control cell:

1. Get reserved Ethernet control frame header (wait, checking Xmit unit, if none available).
2. Link ATM cell to the Ethernet control frame header.
3. Put control Ethernet frame into waiting queue, return.
ATM-ETHERNET PORTAL/CONCENTRATOR
BACKGROUND AND SUMMARY OF THE INVENTION

With the increasing need for the transfer of data over large distances, and the increasing use of Ethernet networks for local area networks (LANs), there has arisen a need for greater connectivity between LANs, with greater data transfer rates and lower overhead (i.e., processing operations) for connecting LANs. In the prior art, devices known as bridges, routers, and gateways are available and well known for connecting LANs such as Ethernet segments over wide area networks (WANs). However, these prior art devices all have shortcomings. For example, bridges generally connect two Ethernet segments and are frequently limited to relatively short distances therebetween. While gateways and routers offer greater connectivity of Ethernet segments, these systems utilize network or transport level protocols to route data cells over LANs to their intended destination and must explicitly copy data cells destined for multiple destinations, as is required in broadcast applications. Therefore, these prior art devices have limited usefulness and applicability.
In order to solve these and other problems in the prior art, the inventors herein have succeeded in designing and developing an ATM-Ethernet portal which conveniently connects disjoint Ethernet segments over an ATM/BISDN network (Asynchronous Transfer Mode/Broadband Integrated Services Digital Network), creating one large logical Ethernet segment. The portal of the present invention utilizes the ATM network transparently, with low overhead, and at speeds which exceed those of Ethernet segments, such that operator usability and data transfer rates are not limited. Each Ethernet frame transmitted on any one of the Ethernet segments is fragmented into a sequence of ATM cells, which are then transmitted by the local portal over the ATM network and delivered to the interconnected portals. When ATM cells are received at a portal, the cells are reassembled into Ethernet frames for transmission over their local Ethernet segments. The high level protocols used by the Ethernet hosts (that is, those protocols located above the data link layer in the ISO-OSI model) are not interpreted by the portal or by the ATM network. This contributes to the low overhead of the portal.

One of the significant advantages of the portal of the present invention is that it utilizes a dual port memory and a DMA transfer controller for moving either Ethernet or ATM data directly into this shared memory, where header data is appropriately associated or disassociated therewith, and then out again to its destination under control of a microprocessor. With this hardware and methodology, the need to copy data cells is eliminated, thereby dramatically decreasing the processing required by the portal and increasing the data throughput rate. As mentioned above, the rapid data throughput rate of the portal renders the ATM network connection transparent between Ethernet segments which may be separated by large distances. Of course this is a highly desirable feature and, in some applications, a requirement for the ATM network connection to be a useful interconnection scheme.
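The fragmentation and reassembly described above can be illustrated in miniature; the 48-byte payload matches the standard ATM cell payload, while the bare sequence-number scheme here is a simplification of the patent's SAR header:

```python
ATM_PAYLOAD = 48  # payload bytes per ATM cell

def fragment(frame):
    """Split an Ethernet frame into (sequence number, payload) segments."""
    return [(i // ATM_PAYLOAD, frame[i:i + ATM_PAYLOAD])
            for i in range(0, len(frame), ATM_PAYLOAD)]

def reassemble(cells):
    """Rebuild the frame, tolerating out-of-order arrival of cells."""
    return b"".join(payload for _, payload in sorted(cells))
```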
The hardware implementations of the present invention may be configured in either one of two contemplated arrangements as presently considered by the inventors herein. The first of these is a "standalone" implementation
where all of the components are integrated on one or more custom designed circuit boards to provide a custom portal device. Secondly, an "off the shelf" implementation may be utilized, where the commonly available subsystems are comprised of purchased parts which are then integrated with a custom designed ATM cell processor. This results in essentially a "PC" version which may be implemented through a commercially available PC with extra hardware added. The inventors have chosen the "off the shelf" strategy in implementing a prototype. However, for cost and size reduction, the "standalone" implementation would perhaps be more desirable in some applications.

With only minor modifications, the portal of the present invention may be extended to function as an ATM-Ethernet concentrator. As a concentrator, the device will multiplex a plurality of Ethernet controllers, each of which is associated with its own Ethernet segment, and provide connectivity between the plurality of Ethernet controllers and other Ethernet controllers/segments through an ATM network. Essentially, instead of the single Ethernet controller found in the portal, a common bus interconnects a plurality of Ethernet controllers to the concentrator, which multiplexes their output and demultiplexes data being input. For larger concentrators for use with more Ethernet controllers, a wider bus and faster control microprocessor are utilized. The concentrator of the present invention, as with the portal, permits a transparent interconnection between local and remote Ethernet controllers/segments, and its operation is enhanced through the use of a dual port shared memory, DMA controller, and control microprocessor as is included in the portal design.

While the principal advantages and features of the present invention have been described above, a more complete and thorough understanding of the invention may be attained by referring to the drawings and description of the preferred embodiment which follow.
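As a toy model of the concentrator's multiplexing and demultiplexing, where segment identifiers and round-robin service are assumptions of the sketch rather than the patent's mechanism:

```python
def multiplex(frames_by_segment):
    """Interleave frames from several Ethernet segments onto one ATM-bound
    stream, tagging each frame with its source segment (round-robin)."""
    queues = {seg: list(frames) for seg, frames in frames_by_segment.items()}
    stream = []
    while any(queues.values()):
        for seg in sorted(queues):
            if queues[seg]:
                stream.append((seg, queues[seg].pop(0)))
    return stream

def demultiplex(stream):
    """Route tagged frames back to their per-segment queues."""
    out = {}
    for seg, frame in stream:
        out.setdefault(seg, []).append(frame)
    return out
```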
BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a schematic diagram of an extended Ethernet segment utilizing the ATM-Ethernet portal;
FIG. 2 is a chart of an ATM user network interface (UNI) header format;
FIG. 3 is a schematic block diagram of the ATM-Ethernet portal hardware architecture;
FIG. 4 is a block diagram of the ATM cell processor architecture;
FIG. 5 is a block diagram of an Ethernet frame segmentation into ATM cells;
FIG. 6 is a block diagram demonstrating the segmentation and reassembly (SAR) header;
FIG. 7 is a block diagram illustrating the end-to-end CRC propagation;
FIG. 8 is a block diagram illustrating the processing of incoming Ethernet frames;
FIG. 9 is a block diagram illustrating the processing of incoming ATM cells wherein cells arrive contiguously with no lost cells;
FIG. 10 is a block diagram illustrating the processing of incoming ATM cells in the general case requiring reordering and multiple frame assembly;
FIG. 11 is a block diagram illustrating the format of control cells addressed to the portal;
FIG. 12 is a block diagram of an ATM-Ethernet concentrator utilizing components from the portal invention;
FIG. 13 is a flow chart illustrating the main processing loop;
FIG. 14 is a flow chart illustrating the processing of incoming Ethernet frames;
FIG. 15 is a flow chart illustrating the processing of Ethernet control frames;
FIG. 16 is a flow chart illustrating the processing of incoming ATM cells; and
FIG. 17 is a flow chart illustrating the processing of ATM control cells.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

The ATM-Ethernet portal (portal) of the present invention connects disjoint Ethernet segments over an ATM/BISDN network (Asynchronous Transfer Mode/Broadband Integrated Services Digital Network), creating one large logical Ethernet segment, as shown in FIG. 1. Each Ethernet frame transmitted on any of the Ethernet segments is fragmented into a sequence of ATM cells, which are then transmitted by the local portal over the ATM network and delivered to the interconnected portals. When ATM cells are received, the portals reassemble the cells into Ethernet frames, then transmit the frames over their Ethernet segments. The high level protocols used by the Ethernet hosts (that is, those protocols located above the data link layer in the ISO-OSI model) are not interpreted by the portal or by the ATM network.

A group of N portals can be interconnected over the ATM network via: 1) N(N-1)/2 point-to-point connections, where each connection interconnects exactly two portals, 2) N one-to-many connections, where each connection links the transmitter of one portal to the receivers of the other N-1 portals, or 3) one many-to-many multipoint connection, where the single connection links the transmitters and receivers of all portals. Other interconnection topologies are possible as well, such as N-to-one. One network supporting all three types of connections, for which the prototype version of the portal was designed, is the experimental fast packet network designed by Turner.

By exploiting the properties of ATM networks, our portal offers the following two primary benefits over existing technologies: 1) relatively low overhead connectivity over Wide Area Networks (WANs), and 2) data stream replication by the network in hardware (when one-to-many or many-to-many ATM connections are used). Existing bridges, routers, and gateways connect Ethernet segments over WANs. However, bridges generally connect only two Ethernet segments and are frequently limited to relatively short distances, whereas portals can connect many segments separated by thousands of miles. Gateways and routers provide the connectivity of portals, but use network or transport level protocols to route cells over WANs to their intended destination and must explicitly copy cells destined for multiple destinations. With portals, hosts communicate using the data link level protocols of Ethernet, without higher level protocol processing required at intermediate hosts. This improves latency since no routing is required at these hosts. When one-to-many or many-to-many connections are used, the ATM network routes and copies cells internally, in hardware, at the data link layer. This method is considerably more efficient than that used by gateways and routers, where packets must be explicitly copied. Additionally, ATM networks are based on fiber optic technology with much higher speeds than the links used in conventional WANs.

The ATM protocol standard is currently under development by CCITT for communicating over Broadband Integrated Services Digital Networks (BISDNs). Clients of ATM networks communicate by creating connections to one another, then exchanging data over these connections. As mentioned above, ATM connections can be either: 1) point-to-point, 2) one-to-many, or 3) many-to-many. Since some ATM networks may not support all three connection types, the portal has been designed to work with all three. However, the software in our prototype has been implemented to use many-to-many connections, since this type of connection is supported in the prototype ATM network and since it is considered that this type of connection is the best suited to the operation of the portal.

Using a many-to-many connection, several Ethernet segments are connected into one logical segment by creating a multipoint ATM connection and configuring each portal to be an endpoint of the connection. The ATM network connection can be set up by hand, via network management, or by signaling the network using a connection management protocol. When a connection management protocol is used, the protocol is not embedded in the portal. Rather, one (or more) hosts on the Ethernet contain the necessary software to do this in Connection Processors (CPs). The CPs send ATM signaling packets to the portal, which sends them, unaltered, into the ATM network. The signaling packets instruct the ATM network to establish the desired multipoint connection. The CPs must know the addresses (on the ATM network) of the portals participating in the connection. It is not specified how this information is obtained (it might, for example, be obtained through the ATM network via a routing service, or by network administrators exchanging information over the telephone). An endpoint is added to an existing multipoint ATM connection either by requesting to be added or by invitation from another endpoint. This means that one CP could configure the entire multipoint connection by first adding the portal on its local Ethernet segment, then adding other portals to this connection.

In addition to the interconnection pattern of connections (point-to-point, one-to-many, or many-to-many), the ATM standard also provides two routing mechanisms for connections: Virtual Path (VP) and Virtual Channel (VC). With a VP connection, clients set the VP identifier (VPI) field of the ATM header (FIG. 2) when sending cells. The network then uses the VPI for routing, possibly remapping this field at every switching node within the network, until the cells reach their destinations. With VC connections, clients set the VC identifier (VCI) and the VPI when sending cells, and the network uses both the VCI and VPI for routing. For VP connections, the VCI is preserved by the network: whatever value the sending client places in this field is delivered to the destination client(s) and, therefore, is available for use by the client as a multiplexing field. With VC connections, the VPI is not necessarily preserved by the network and may have to be set to a particular value (such as zero). Therefore, VC connections do not allow the client to use the ATM header for multiplexing. The portal has been designed so that it can use both VP and VC connections. However, in the prototype ATM network, only VC connections are implemented. Therefore, the software in the prototype portal has been implemented to use only VC connections. As shown in FIGS. 13 to 17, a flow chart for the software is illustrated which may be used to implement the microprocessor of the present invention.

FIG. 3 shows the basic hardware elements used in the portal. The main components are a control microprocessor
20, an Ethernet controller 22, an ATM cell processor, a dual-ported shared memory 24, and a DMA controller 26. All components, except for portions of the ATM cell processor, can be implemented using off the shelf, commercially available VLSI circuits. A variety of choices are available for each component. For example, the Intel 80486 microprocessor could be used with the Intel 82596CA Ethernet controller. Alternatively, the Motorola 68030 microprocessor could be used with the AMD 7990 Lance Ethernet controller. The design only requires that the chosen components have certain basic capabilities common to a variety of readily available devices. Additionally, while FIG. 3 shows 32-bit busses (implying 32-bit components), 8- or 16-bit components could also be used, with a loss of performance likely under peak loads.

The architecture of FIG. 3 suggests two implementation options: 1) a "standalone" implementation, where all components are integrated on one or more custom designed circuit boards, or 2) an "off the shelf" implementation, where the commonly available subsystems of FIG. 3 (the control microprocessor and Ethernet controller blocks) are purchased and integrated with a custom designed ATM cell processor. The latter strategy has been chosen in the prototype to reduce the development time and aid in debugging. However, for cost and size reduction, the former strategy would perhaps be preferable.
When a frame is transmitted on the Ethernet, the Ethernet controller, operating in promiscuous mode, deposits the frame into shared memory. If no errors occur in frame reception, the Ethernet controller interrupts the control microprocessor. The microprocessor then initiates a DMA transfer, prompting the DMA controller to move the ATM cells (containing segments of the frame) from shared memory to the ATM cell processor. Because of the manner in which the incoming Ethernet frames are stored in shared memory (see FIG. 8), no copying of data is required using the no copy segmentation and reassembly algorithms described below. The ATM cell processor serially transmits the ATM cells on the outgoing fiber optic link. Going the other direction, when a cell arrives from the fiber optic link, the ATM cell processor prompts the DMA controller to initiate a DMA transfer, moving the cell from its internal buffer into shared memory. The microprocessor periodically examines the DMA control registers to see if new cells have arrived. If the microprocessor recognizes the arrival of new cells, it reassembles the incoming ATM cells into an Ethernet frame (once again, without copying). Once all cells of a frame have arrived, the microprocessor instructs the Ethernet controller to transmit the frame.

The microprocessor acts primarily as a high level buffer manager, while the Ethernet controller performs the low level data transfers to and from buffers. In other words, the microprocessor decides where the controller should write incoming Ethernet frames and from where the controller should read outgoing frames.

Looking at the Ethernet-to-ATM path, the microprocessor initially partitions the shared memory into a collection of receive buffers so that incoming Ethernet frames will automatically be segmented into ATM cells. The microprocessor then formats and passes a list of receive buffer descriptors (which point to these buffers) to the controller. As frames are received, the controller places the data into the buffers. After a frame has been received, the controller passes the buffers back to the microprocessor. The microprocessor then initiates a DMA transfer of the ATM cells to the cell processor. Once the transfer is complete, the microprocessor instructs the cell processor to transmit the cells, then recycles the receive buffers, passing them back to the controller.
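A sketch of that receive-buffer layout, assuming 53-octet cell slots with 5-octet preformatted headers (the addresses and descriptor fields are illustrative, not the controller's actual descriptor format):

```python
CELL_SIZE = 53   # 5-octet header + 48-octet payload
HEADER = 5

def build_rx_descriptors(base_addr, ncells):
    """Build a receive-buffer descriptor list whose buffers are the payload
    areas of consecutive preformatted cell slots, so an Ethernet frame
    scattered across them lands in shared memory already segmented into
    ATM cells."""
    return [{"addr": base_addr + i * CELL_SIZE + HEADER,
             "len": CELL_SIZE - HEADER}
            for i in range(ncells)]
```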
Going the other direction, the microprocessor performs an analogous operation. As cells are received, the data is written into shared memory. The microprocessor analyzes the cells and determines how they need to be arranged and reordered so that a complete Ethernet frame can be correctly reconstructed. The microprocessor then prepares a list of transmit buffer descriptors which point to the segments of the Ethernet frame. The transmit descriptors are ordered such that, when the controller retrieves and transmits the data, it will go out in the correct order. The microprocessor passes the buffers to the controller and instructs the controller to begin the transmission. Once the controller has completed transmitting the frame, it passes the buffers back to the microprocessor. At this point, the microprocessor recycles the transmit buffers so that they can be reused to store subsequently arriving ATM cells.

As stated above, many commercially available Ethernet controllers provide the necessary functionality. The particular capabilities that are required for the prototype implementation described in this document (and that are not universally available) are the following:

A promiscuous mode, where all frames transmitted on the Ethernet are received by the controller.

Frame "scatter/gather", where incoming frames are automatically separated (scattered) into several noncontiguous buffers and outgoing frames are automatically assembled (gathered) from several noncontiguous buffers. These capabilities are used in the no copy segmentation and reassembly (SAR) algorithms.

Cyclic Redundancy Check (CRC) suppression on transmission and capture on reception. This capability is used to provide an end-to-end check on the Ethernet frame (from the originating Ethernet host to the destination Ethernet host). On Ethernet frame receipt, the CRC is captured and transmitted along with the rest of the frame over the ATM connection. When the ATM cells arrive at the destination portals, the original CRC is placed at the end of the frame and normal CRC generation is suppressed when the frame is transmitted onto the Ethernet. This causes the original CRC to be used by hosts that receive the Ethernet frame instead of a newly generated CRC.

The only one of these capabilities that is essential to the design is the promiscuous mode of operation. Without this capability, the portal would not be able to intercept all packets sent on a local Ethernet segment. Ethernet controllers without the other two capabilities could be used with a loss in performance. In particular, if frame scatter/gather is not provided, the no copy SAR algorithm would need to be replaced with an algorithm that does copy. If CRC suppression and capture is not provided, an end-to-end check at the data link level is not possible, but CRCs could be inserted on each ATM cell or on segmented Ethernet frames to detect errors that occur within the ATM network.
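The CRC capture/suppression scheme can be mimicked with the IEEE 802.3 CRC-32, which Python's zlib.crc32 computes; the little-endian byte order below reflects one common way the FCS is serialized and is an assumption of this sketch:

```python
import zlib

def capture_fcs(frame_with_fcs):
    """At the receiving portal: split off and keep the original 4-byte FCS."""
    return frame_with_fcs[:-4], frame_with_fcs[-4:]

def end_to_end_ok(data, captured_fcs):
    """At the destination portal: verify the captured FCS against the data,
    so corruption anywhere across the ATM path is detected end to end."""
    return zlib.crc32(data).to_bytes(4, "little") == captured_fcs
```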
The design uses a dual-port memory (FIG. 3) to reduce bus contention and thereby improve performance. Shared memory arbitration prevents the simultaneous access of more than one component to the same memory location, while allowing simultaneous access to different memory locations. With this memory, the Ethernet controller can write an incoming frame or read an outgoing frame, or the microprocessor can access data, concurrently with DMA transfers to and from the ATM cell processor. Since these operations occur on different busses, they do not interfere with one another. The design does not preclude use of a single-port memory in place of the dual-port memory. However, degraded performance may result under peak loads (due to bus contention) if such a component is chosen.
A block diagram of the ATM cell processor is shown in FIG. 4. At the top right, cells arrive from the fiber optic link and enter a fiber optic data link (FODL) device, which converts the light signal into an electrical signal. Next, the serial signal enters a Parallel/Serial Converter (PSC), where the serial signal is converted into an 8-bit wide signal. The 8-bit values are transferred to a Header Error Check (HEC) check circuit. If the HEC indicates an error, the cell is discarded. Next, the external fiber optic link rate is synchronized to the internal clock rate, then the bytes of the cell are transferred into the top four First In First Out buffers (FIFOs) in round-robin fashion, from top to bottom. Once each of the FIFOs has one byte, a Direct Memory Access (DMA) transfer request is issued, causing the DMA controller (FIG. 3) to move the 32-bit word into shared memory. Subsequent bytes of the incoming cell are transferred into the FIFOs and to shared memory in the same fashion. Several commercially available FODL, PSC, and FIFO components provide the required functionality. For example, the AT&T 1352C FODL receiver, the AMD Am7969 Transparent Asynchronous Xmitter/Receiver Interface (TAXI) receiver, and the IDT 7202A FIFO could be used for these functions.

Going the other direction, after the Ethernet controller has received a frame into shared memory, the microprocessor issues a DMA request so that the cells of the frame are transferred to the bottom set of FIFOs. After the transfer is complete, the ATM cell processor is signaled so that it
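The round-robin FIFO stage described for FIG. 4 can be modeled as below; the byte-to-word ordering and the zero-padding to a multiple of four are assumptions of this model, not details taken from the patent:

```python
def cells_to_words(cell_bytes, nfifos=4):
    """Distribute bytes round-robin into four FIFOs; each time every FIFO
    holds a byte, emit one 32-bit word (as the DMA transfer would)."""
    padded = cell_bytes + b"\x00" * (-len(cell_bytes) % nfifos)
    fifos = [padded[i::nfifos] for i in range(nfifos)]
    return [int.from_bytes(bytes(col), "big") for col in zip(*fifos)]
```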