ABSTRACT

To cope with the growth in the Internet and corporate IP networks, we require IP routers capable of much higher performance than is possible with existing architectures. This article examines two approaches to the design of a high-performance router, the gigabit router and the IP switch, and then provides some detail on the implementation of an IP switch and the protocols associated with IP switching.

Peter Newman, Greg Minshall, Tom Lyon, and Larry Huston, Ipsilon Networks Inc.
The Internet is growing rapidly. The number of hosts on the Internet has doubled approximately every 56 weeks since 1989 [1], and the number of web servers has doubled at least every 23 weeks for the last three years [2]. As such growth persists, and as common access line speeds increase, we require Internet Protocol (IP) routing capacity of many gigabits per second of aggregate traffic. Existing bus- and central-processor-based architectures can handle a maximum load in the region of 1 Gb/s and a few hundred thousand packets per second, but to get much beyond this requires alternative architectures. This article examines two such approaches, the gigabit router and the IP switch, and then provides some detail on the implementation of an IP switch and the protocols associated with IP switching.
A number of projects to provide high-speed routing are underway; of these, information is available for the Multigigabit Router [3], IP/ATM [4], the Cell Switch Router (CSR) [5], and IP switching [6]. Also, the NetStar GigaRouter is a commercial implementation of a gigabit router [7]. These will serve to illustrate the basic approaches to designing for high-speed routing. (For a more general discussion of routing and bridging see [8].)
All the designs use the same functional components, illustrated in Fig. 1. The line card contains the physical-layer components necessary to interface the external data link to the switch fabric. The switch fabric interconnects the various components of the gigabit router. The forwarding engine inspects packet headers, determines to which outgoing line card each packet should be sent, and rewrites the header. The network processor runs the routing protocols and computes the routing tables that are copied into each of the forwarding engines. It handles network management and housekeeping functions, and may also process unusual packets that require special handling.
A switch fabric is used for interconnection because it offers a much higher aggregate capacity than that available from the more conventional backplane bus. The Multigigabit Router will use a 15-port crossbar switch with each port operating at 3.3 Gb/s. The NetStar GigaRouter also uses a crossbar switch fabric, with 16 ports each operating at 1 Gb/s. The IP/ATM, CSR, and IP switch solutions use asynchronous transfer mode (ATM) for the switch fabric. In the case of the IP switch a complete ATM switch, not just a fabric, may be used. This allows use of more highly integrated switch solutions, which, for example, integrate line card and switch fabric functionality. The advantage of an ATM switch is that the hardware is standardized and available in many different sizes from different vendors with different cost/functionality trade-offs. Additionally, advanced features such as hardware quality of service (QoS) support and hardware multicast are typically available in ATM switches. The disadvantage of an ATM switch is that it is cell-oriented, not packet-oriented, and connection-oriented, unlike the connectionless network protocols that are the subject of high-speed routing.

Ipsilon Networks Inc. can be found at http://www.ipsilon.com.
The forwarding engine may be a physically separate component or integrated with either the line card or the network processor. If the forwarding engine is a separate component, the packet forwarding rate may be varied independently of the aggregate capacity by adjusting the ratio of forwarding engines to line cards. This is the approach taken in the Multigigabit Router and is an option in the IP/ATM solution. However, separating the line card and forwarding engine creates additional overhead across the switch fabric. The NetStar GigaRouter integrates a forwarding engine with each line card. In the current realization of an IP switch the forwarding engine is combined with the network processor, although combination with the line card or a separate implementation is not prohibited by the architecture.

A key difference between the router approach and the IP switching architecture is that IP switching allows most data between ATM ports to traverse the switch without being handled at all by a forwarding engine, whereas a router approach always requires use of at least one forwarding engine.
Measurements from the Internet indicate that the average packet size is now about 2000 bits [6]. This has increased from an average of 1000 bits just over five years ago because of the increase in large transfers due to web usage, which now represents almost 50 percent of the Internet's traffic. Thus, at present we need a forwarding rate of about 500,000 packets/s (500 kp/s) for each 1 Gb/s of traffic, although this may change as the traffic profile changes. Two approaches have been proposed to achieve
packet-forwarding rates of this magnitude: the silicon forwarding engine, and a high-speed general-purpose processor with destination address caching using an on-chip cache.
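As a quick sanity check on the required rate (a sketch only; the 1 Gb/s line rate is illustrative, the 2000-bit average is the measurement cited above):

```python
# Forwarding rate implied by the measured average packet size.
line_rate_bps = 1e9        # 1 Gb/s of aggregate traffic (illustrative)
avg_packet_bits = 2000     # average Internet packet size cited above

print(f"{line_rate_bps / avg_packet_bits:,.0f} packets/s")  # 500,000 = 500 kp/s
```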
DESIGN OF THE FORWARDING ENGINE

To build a forwarding engine in silicon we need a tree-structured routing table in memory and a tree-walking application-specific integrated circuit (ASIC) [9]. Each IPv4 route in the table requires a minimum of about 16 bytes; thus, for a large table of, say, 250,000 routes, we require about 4 Mbytes of memory. This is within the realm of possibility for current SRAM. The number of memory accesses per route lookup is about 1 + log N, where N is the total number of routes in the table. Thus, if we assume 10 ns SRAM, one full route lookup every 200 ns is possible. This gives us a forwarding engine capable of forwarding 5 million packets/s (Mp/s), enough for an average of about 10 Gb/s of traffic. In the worst case it will handle 1.6 Gb/s of 40-byte packets at wire speed. In addition, for large routing tables, techniques exist that can significantly reduce the number of memory references required.
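To make the tree-walking idea concrete, here is a minimal sketch of a longest-prefix-match lookup (an illustration only, not the ASIC design: the article's 1 + log N access count suggests a balanced tree over the routes, whereas this one-bit-per-step trie is a simpler stand-in in which each node visit corresponds roughly to one memory access):

```python
class TrieNode:
    __slots__ = ("children", "next_hop")
    def __init__(self):
        self.children = [None, None]  # 0-bit and 1-bit branches
        self.next_hop = None          # set if a route terminates at this node

class RouteTable:
    def __init__(self):
        self.root = TrieNode()

    def add_route(self, prefix, length, next_hop):
        node = self.root
        for i in range(length):
            bit = (prefix >> (31 - i)) & 1
            if node.children[bit] is None:
                node.children[bit] = TrieNode()
            node = node.children[bit]
        node.next_hop = next_hop

    def lookup(self, addr):
        # Longest-prefix match; visits at most ~32 nodes per lookup.
        node, best = self.root, None
        for i in range(32):
            if node.next_hop is not None:
                best = node.next_hop
            node = node.children[(addr >> (31 - i)) & 1]
            if node is None:
                break
        else:
            if node.next_hop is not None:
                best = node.next_hop
        return best

table = RouteTable()
table.add_route(0x0A000000, 8, "port3")   # 10.0.0.0/8 -> hypothetical port
print(table.lookup(0x0A010203))           # -> "port3"
```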
The same hardware can be extended to handle multicast forwarding and more complex policy-based forwarding if some flexibility is provided in the fields from the packet header used as the key for the lookup.
The forwarding engine of the Multigigabit Router uses a 415 MHz general-purpose processor with destination address caching using an internal (on-chip) cache. The internal cache is a least recently used cache of 9000 IPv4 destination addresses. An external memory of 8 Mbytes holds a complete routing table of several hundred thousand routes. This forwarding engine is capable of forwarding about 11 Mp/s if all the requested destinations are available in the cache. Multicast packets are handled by the full routing table rather than the cache, since they require additional processing: the forwarding decision is based on the source address as well as the destination (multicast) address. Additional processing is also required to offer firewall filtering or other policy-based forwarding decisions. Packets with unusual options are sent to the network processor.
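The destination cache can be sketched as an LRU map in front of the full table (illustrative only; `slow_path` is a hypothetical stand-in for the full routing-table lookup, and the real cache lives in the processor's on-chip memory):

```python
from collections import OrderedDict

class RouteCache:
    """Least-recently-used cache of destination address -> next hop."""
    def __init__(self, capacity=9000, slow_path=None):
        self.capacity = capacity      # ~9000 IPv4 destinations, as above
        self.slow_path = slow_path    # full routing-table lookup on a miss
        self.entries = OrderedDict()

    def lookup(self, dst_addr):
        if dst_addr in self.entries:
            self.entries.move_to_end(dst_addr)   # mark most recently used
            return self.entries[dst_addr]
        next_hop = self.slow_path(dst_addr)      # cache miss: full lookup
        self.entries[dst_addr] = next_hop
        if len(self.entries) > self.capacity:
            self.entries.popitem(last=False)     # evict least recently used
        return next_hop
```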
The design of the Multigigabit Router requires the forwarding engine to sustain 6.5 Mp/s at full speed for average traffic. It is estimated that the forwarding engine can perform at full speed with a minimum cache hit rate of about 80 percent. Under worst-case conditions, where every packet incurs a cache miss, the forwarding performance for average traffic degrades to about 50 percent of best-case performance.
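One simple way to see how the hit rate drives throughput is a two-cost model (the per-packet costs below are invented for illustration so that the endpoints roughly match the figures above; only the shape of the curve matters):

```python
def forwarding_rate(hit_rate, t_hit_ns=90, t_miss_ns=180):
    """Average packets/s when a fraction hit_rate of lookups hit the cache.
    t_hit_ns and t_miss_ns are hypothetical per-packet costs, not measured."""
    avg_ns = hit_rate * t_hit_ns + (1 - hit_rate) * t_miss_ns
    return 1e9 / avg_ns

for h in (1.0, 0.8, 0.0):
    print(f"hit rate {h:.0%}: {forwarding_rate(h) / 1e6:.1f} Mp/s")
# ~11.1 Mp/s all-hit, ~9.3 Mp/s at 80%, ~5.6 Mp/s (half of best) all-miss
```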
There is an ongoing debate in the research community on the use of caching in a forwarding engine designed for a gigabit router. The question is whether there is sufficient locality in a stream of packets in the Internet for caching with a moderate-sized cache to be useful. The silicon forwarding engine can maintain its maximum forwarding rate regardless of the past history of destination addresses in the traffic stream. For the caching solution in the Multigigabit Router to perform at full rate there must be at least a 60 percent chance that any packet destination has already been seen in the recent past and that the entry is still in the cache. A study of a recent traffic trace taken from the Internet gave a 95 percent cache hit rate with a 6000-entry cache [10]. However, this study was based on a packet trace from a 37 Mb/s traffic stream. It is debatable whether the same amount of locality would be observed in traffic streams in excess of 1 Gb/s. Also, the traffic profile of the Internet changes over time; so the debate continues.
Figure 1. General structure of a high-speed router.

The NetStar GigaRouter includes a forwarding engine on each line card with a 1 Gb/s connection to the switch fabric. The forwarding engine is based on a SPARC microprocessor with hardware-assisted route lookups, so no route caching is required. It supports a routing table of up to 150,000 routes. With a full routing table the route lookup takes 3 μs, but the highest packet forwarding performance of currently available line cards is 136 kp/s. This is well below the 500 kp/s target rate needed for a 1 Gb/s port.
The argument against a silicon-based forwarding engine is that a hardware solution is fixed. Applications within the Internet, and thus the traffic profile, change over time, and a fixed forwarding engine may not be able to track these changes. For example, multicast traffic may become much more important with the growth of multimedia applications, and a move to IPv6 may occur sooner than expected, both of which could invalidate a fixed implementation.
A forwarding engine designed to perform a high-speed destination-address-to-outbound-interface lookup is sufficient to offer a simple best-effort packet forwarding service, but additional functionality will be required of the next generation of routers. This includes multicast, QoS differentiation, firewall filtering, and more complex policy-based routing. To offer such functionality one must base the routing decision on more fields in the packet header than just the destination address.
IP SWITCHING

IP switching is an alternative to the gigabit router. An IP switch maps the forwarding functions onto a hardware switch, such as an ATM switch. A similar idea occurred independently, at about the same time, to three groups. The devices based on this idea are IP/ATM [4], the IP switch [6], and the CSR [5, 11]. Another mechanism for binding forwarding functions to an ATM virtual channel identifier (VCI) is also discussed in [12]. In addition, the Cisco Tag Switching proposal also appears to be similar to these earlier works [13].

Unlike some approaches, IP switching may be used with any higher-level IP functionality; it is not restricted to particular IP routing protocols or routing domains, and may be used, for example, between an Internet service provider (ISP) and its customers or between ISPs.
Each approach uses the concept of a flow. A flow is defined as a sequence of packets that are treated identically by the possibly complex routing function. An example of a flow is a sequence of packets sent from a particular source to a particular destination (unicast or multicast) that are forwarded through particular ports with a particular QoS. The forwarding and handling of each flow is determined by the first packets of the flow. Once the flow is classified, these decisions may be cached, and further packets on the flow may be processed according to the cache entry without requiring the full flow classification.
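In code, a cached flow decision amounts to a table keyed on the classifying header fields (a sketch: the five-tuple key, the `Decision` fields, and the `pkt` attributes are illustrative placeholders, not a specification):

```python
from dataclasses import dataclass

@dataclass
class Decision:
    out_port: int        # outgoing port chosen by the routing function
    qos_class: int       # handling assigned at classification time

flow_cache: dict[tuple, Decision] = {}

def flow_key(pkt) -> tuple:
    # Fields that must match for packets to be "treated identically".
    return (pkt.src_ip, pkt.dst_ip, pkt.proto, pkt.src_port, pkt.dst_port)

def handle(pkt, classify):
    key = flow_key(pkt)
    decision = flow_cache.get(key)
    if decision is None:
        decision = classify(pkt)      # full, possibly complex routing function
        flow_cache[key] = decision    # later packets skip classification
    return decision
```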
Each of the above three solutions uses an ATM switch as the switch fabric for a high-speed router. Incoming flows are mapped onto ATM virtual channels (VCs) established across the ATM switch. Only one or a few packets from each flow need be inspected to perform the mapping and establish an ATM virtual channel. Once the virtual channel is established for a flow, all further traffic on that flow can be switched directly through the ATM switch, greatly reducing the load on the forwarding engine(s).
The IP/ATM solution uses a pool of pre-established permanent virtual channels (PVCs) that are taken by active incoming flows. Packets on a new flow are not forwarded until a PVC has been activated. The IP switch uses a protocol, IFMP (RFC 1953), to propagate the mapping between flow and VCI upstream, and forwards packets using the forwarding engine until the cut-through connection is established across the ATM switch. The CSR attempts to be more general than the IP switch in permitting entire classical IP-over-ATM (RFC 1577) subnets between CSRs. It proposes using the RSVP protocol to propagate the mapping between flows and VCIs. To examine this class of device we will discuss the Ipsilon IP switch in detail.

FLOW CLASSIFICATION

An important function of the flow classification operation is to select those flows to be switched in the ATM switch and those that should be forwarded packet by packet in the forwarding engine. Clearly, one wishes to select long-duration flows with a lot of traffic for switching. Multimedia traffic (voice, image, videoconferencing, etc.) offers an example of long-duration flows with a good probability of high traffic volume. Many multimedia applications also require multicast, which makes them very suitable for switching across an ATM switch using ATM's hardware multicast capability. Short-duration flows consisting of a small number of packets should be handled directly by the forwarding engine. Name server queries and brief client/server transactions are examples of traffic probably not worth the effort of establishing a switched connection; a sketch of one possible selection policy follows below.

For the flows selected for switching, a VC must be established across the ATM switch. ATM switching requires that all arriving traffic be labeled with a VCI to indicate the VC to which it belongs, so IP switching requires a protocol to distribute the association of flow and VCI label upstream across each incoming link.

Every packet on a flow that is switched through a network of IP switches must be labeled with a VCI, but the task of cache lookup and packet labeling is propagated upstream to the edge of the network. The task of labeling each packet typically involves more effort than simple forwarding because it must examine more fields than the destination address. However, once a VC is established, this flow labeling need only be performed at a single location within the IP switch network; a traditional router network would need to perform the route lookup at every hop. Another advantage is that the rate of packet arrival is typically much lower at the edge of the network than in the center. Thus, for switched flows, per-packet work is offloaded from the forwarding engines in the center of the network at the cost of slightly increased per-flow work at the edge of the IP switch network. If the device on the edge of the network is a directly connected host, the classification and labeling operations can be trivially integrated into the host protocol software.
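A selection policy of the kind described above might be sketched as follows (the port list and packet-count trigger are invented for illustration and reuse the five-tuple key from the earlier sketch; the article does not specify Ipsilon's actual selection rules):

```python
DNS_PORT = 53                 # name-server queries: not worth switching
LONG_LIVED_PORTS = {20, 80}   # e.g., FTP data, HTTP bulk transfers (assumed)

def should_switch(flow_key, packets_seen):
    """Decide whether a flow goes to the ATM fabric or stays hop-by-hop."""
    src_port, dst_port = flow_key[3], flow_key[4]
    if DNS_PORT in (src_port, dst_port):
        return False                      # short query/response traffic
    if dst_port in LONG_LIVED_PORTS:
        return True                       # probable long-lived, high-volume flow
    # Fallback heuristic: switch any flow that keeps sending.
    return packets_seen >= 10             # hypothetical trigger threshold
```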
The set of VCs across the switch (or, equivalently, the VCI table on each link of the switch) may be regarded as a cache of flow forwarding decisions. In this sense it uses caching similar to the Multigigabit Router, in which the routing decision for every incoming packet is cached. However, in IP switches only selected flows are cached in the ATM hardware; the cache is explicitly managed.

The debate regarding whether caching is a good idea is equally applicable to IP switching; in fact, even more so, since it takes more work to establish a switched flow. However, since the forwarding of switched flows is implemented within the ATM switching hardware, the forwarding engine of an IP switch only has to deal with the classification and forwarding of new flows and the forwarding of packets belonging to flows that are not switched. This allows some flexibility in the dimensioning of the forwarding engine depending on the anticipated ratio of flows to be switched and packets to be forwarded, and the traffic characteristics of the switched flows.

In summary, the IP switch provides high-speed routing by low-level switching of flows (equivalent to cached routing decisions). It defines a protocol to indicate these flows, and to associate a link-layer label with each flow, to the upstream network node. This enables the switching. All flows are classified, and the forwarding engine is optimized for flow classification and for forwarding packets on those flows that it is decided should not be cached in the switch fabric.

IP SWITCH IMPLEMENTATION

We now turn our attention to the Ipsilon IP switch implementation and the two protocols required to support IP switching. The IP switch is constructed from two components, an ATM switch and the IP switch controller (Fig. 2). The IP switch controller is a high-end Pentium Pro machine running an operating system that continues to bear some resemblance to UNIX. One of the ports on the ATM switch is connected to an ATM interface on the IP switch controller and is used for both control and data transfer. The control protocol used between the switch and the controller is the General Switch Management Protocol, GSMP (RFC 1987) [14], which has been designed to give the IP switch controller full control of an ATM switch. The Ipsilon Flow Management Protocol, IFMP (RFC 1953) [15], is the flow-forwarding-cache distribution protocol. It runs between the IP switch controller and its peers across each external link. In comparison with Fig. 1, the line cards are part of the ATM switch, and the forwarding engine is implemented in software within the IP switch controller.

The IP switch is implemented in two components to allow a separation between hardware and software. Thus, any ATM switch that supports RFC 1987 may be used for the switching component. Different ATM switches are designed with different size, cost, and functionality trade-offs, so it makes sense to support a choice. This choice goes both ways. GSMP can also support a standard ATM Forum control protocol stack instead of the IP switch controller software. Thus, a choice of network control software is possible for the same hardware.

Figure 2. Structure of an IP switch.

GENERAL SWITCH MANAGEMENT PROTOCOL (GSMP)

The design goal for the GSMP interface is to be as close to the actual switch hardware as possible and yet capable of controlling all (reasonable) ATM switch designs. These are conflicting requirements.
GSMP is a simple master-slave, request-response protocol. The master (switch controller) sends requests, and the switch issues a positive or negative response when the operation is complete. Virtual paths and VCs are assumed to be unidirectional (a requirement of RFC 1953). Unreliable message transport is assumed between controller and switch for speed and simplicity. (The link between switch and controller will be either very reliable or broken, in which case the overhead of adding error detection and retransmission through a protocol like SSCOP is unnecessary. All GSMP messages are acknowledged, and the implementation handles its own retransmission.)
GSMP runs on a single, well-known VC (VPI 0, VCI 15). All messages use an ATM adaptation layer version 5 (AAL-5) link-layer control (LLC)/SNAP encapsulation, but the most frequent messages (connection management) are designed to be small enough to be single-cell AAL-5 packets (Fig. 3). The LLC/SNAP encapsulation was chosen to allow protocols other than GSMP to be multiplexed onto the link by using a different "Ethertype" in the SNAP header. For example, while GSMP offers some simple network management features, the Simple Network Management Protocol (SNMP) will be required between the controller and switch to offer full-service network management. (While SNMP can be used to establish connections in an ATM switch, it was considered far too heavyweight a protocol to meet the design goals of GSMP.)

Figure 3. GSMP message format.
An adjacency protocol is used to synchronize state across the control link, to discover the identity of the entity at the far end of the link, and to detect when it changes. No GSMP messages other than the adjacency protocol may be sent across a link until adjacency has been established. Once established, five types of message may be sent: configuration, connection management, port management, statistics, and events.
The configuration messages are used by the controller to discover the capabilities of the ATM switch. Beyond name, rank, and serial number, each ATM switch port can report: the incoming virtual path identifier (VPI) and VCI ranges it can support, its interface type and cell rate, its administrative and line status, and the number of priority levels it supports in its output queue. The current version of GSMP (RFC 1987) assumes simple strict-priority output queues, of which any number per port may be specified. (Queue structures other than output queuing may be mapped into this model.) The protocol will need to be extended to support the next generation of ATM queuing and scheduling hardware currently in development. Traffic policing (usage parameter control) is also not supported in this version. (It is unlikely to be required in IP switching until RSVP signaling is more widely deployed, and will be rendered unnecessary by implementations with per-VC queuing and scheduling.)
Once the configuration of the switch has been discovered, the controller can begin issuing connection management messages. These are the most common messages. They enable the controller to establish and remove connections across the switch. No distinction is made between unicast and multicast connections; the "Add Branch" and "Delete Branch" messages are used for both. The first Add Branch message on a new incoming VCI defines a new unicast connection; the second Add Branch message on an existing incoming VCI converts the connection to a point-to-multipoint connection with two branches; and so on. This was intentional, because no distinction is made in IFMP; however, in hindsight it would be better to give a hint when a multicast connection is being established, since many switches use completely different data structures to implement unicast and multicast connections. A Delete Tree message is available to delete an entire multicast connection. A Move Branch message allows a single output branch of a multicast connection to be moved from one output port and VCI to another. The Move Branch message is used in the cut-through operation where an IP flow is moved from connectionless forwarding to direct switching.
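The branch semantics just described can be sketched as a switch-side connection table (a model of the behavior in the text, not the RFC 1987 wire format; ports and VCIs are bare integers here):

```python
class ConnectionTable:
    """Models GSMP Add/Delete/Move Branch semantics as described above."""
    def __init__(self):
        # (in_port, in_vci) -> list of (out_port, out_vci) branches
        self.table = {}

    def add_branch(self, in_port, in_vci, out_port, out_vci):
        # First branch creates a unicast connection; the second and later
        # branches silently convert it to point-to-multipoint.
        branches = self.table.setdefault((in_port, in_vci), [])
        branches.append((out_port, out_vci))

    def delete_branch(self, in_port, in_vci, out_port, out_vci):
        branches = self.table[(in_port, in_vci)]
        branches.remove((out_port, out_vci))
        if not branches:
            del self.table[(in_port, in_vci)]   # last branch removes the connection

    def delete_tree(self, in_port, in_vci):
        del self.table[(in_port, in_vci)]       # remove an entire multicast connection

    def move_branch(self, in_port, in_vci, old_out, new_out):
        # Used in cut-through: retarget one output branch of a connection.
        branches = self.table[(in_port, in_vci)]
        branches[branches.index(old_out)] = new_out
```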
Of the remaining GSMP messages, the Port Management message is used to reset, bring up, take down, and loop back switch ports. The Statistics message permits various per-VC and per-port performance counts to be requested. The Event messages allow a switch to asynchronously alert the controller to significant events such as loss (or detection) of carrier on a port, loss (or detection) of port interfaces (so hot-swap hardware may be supported), and arrival of cells with invalid VPI/VCIs. A simple flow control protocol is applied to the event messages to prevent the controller from being flooded.
At the time of writing, RFC 1987 has been implemented on at least eight different ATM switches. The code for the GSMP slave is about 2000 lines. A reference implementation is available; it typically takes one or two weeks to get GSMP up and running on a new switch design. The measured performance of the GSMP slave on Ipsilon's IP switch is currently just under 1000 connection setups/s. This could be improved considerably if an ATM segmentation and reassembly (SAR) device were added to the switch to offload many of the packet-handling and AAL processing functions currently performed in software by the embedded processor on the ATM switch.
IPSILON FLOW MANAGEMENT PROTOCOL (IFMP)

IFMP runs independently across each link in a network of IP switches that connects IFMP peers: IP switches, directly attached hosts, or IFMP-capable edge routers. On ATM links it uses the default VC (VPI 0, VCI 15) [16]. The purpose of IFMP is to inform the transmitting end of a link of the VCI that should be associated with a particular IP flow. The VCI is selected by the receiving end of the link.

All packets belonging to flows that have not yet been switched are forwarded hop by hop between IP switch controllers using the default VC on each link. When a new flow arrives at an IP switch it is classified. One of the results of flow classification is a decision as to if or when the flow should be switched, and the granularity, or flow type, at which it should be switched. Currently we have defined two flow types: a host-pair flow type (flow type 2) and a port-pair flow type (flow type 1). The host-pair flow type is for traffic flowing between the same source and destination IP addresses. The port-pair flow type is for traffic flowing between the same source and destination Transmission Control/User Datagram Protocol (TCP/UDP) ports on the same source and destination IP addresses. The port-pair flow type allows QoS differentiation among flows between the same pair of hosts and also supports simple flow-based firewall security features.
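The two granularities correspond to two key-extraction functions (a sketch; RFC 1953 specifies the exact header fields in each flow identifier, which include more than shown here, and the `pkt` attributes are illustrative):

```python
def flow_id_type2(pkt):
    # Host-pair granularity: traffic between the same pair of IP addresses.
    return (pkt.src_ip, pkt.dst_ip)

def flow_id_type1(pkt):
    # Port-pair granularity: the same TCP/UDP ports on the same address pair,
    # allowing per-application QoS and simple flow-based firewall decisions.
    return (pkt.src_ip, pkt.dst_ip, pkt.proto, pkt.src_port, pkt.dst_port)
```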
Before a flow can be switched it must first be labeled. A free VCI is first selected by the receiver on the incoming link. An IFMP redirect message is then sent upstream to inform the transmitter at the other end of the link of the association between flow and VCI. The flow is identified by a flow identifier (Fig. 4). The flow identifier gives the values of the set of fields from the packet header that a packet must match to belong to this flow. The redirect message also contains a lifetime field that specifies the length of time for which this association of flow and VCI is valid. The flow redirection must be refreshed by another IFMP redirect message before the lifetime expires, or the association of flow and VCI is deleted.

Figure 4. IFMP flow identifiers.

This flow labeling process occurs independently and concurrently on each link in an IP switching network, but we may assume that the flow classification policy is consistent within an administrative domain. Thus, if one node decides to label a flow, its neighbors within the same domain will very likely make the same flow classification and switching decision. When an IP switch controller sends an IFMP redirect message it checks to see whether the flow is yet labeled on the downstream link. Also, when it receives a redirect request it checks to see whether the flow is yet labeled on the upstream link. When upstream and downstream links are both labeled for a given flow, that flow is switched directly through the ATM switch.

When an IP switch accepts a redirection message it also changes the encapsulation it uses for packets belonging to the redirected flow. The encapsulation used for IP packets on the default channel is the standard LLC/SNAP encapsulation over AAL-5. The encapsulation used for each IP packet on a flow redirected to a specific VC does not use an LLC/SNAP header, and removes from the header of each packet all the IP header fields specified by the flow identifier (Fig. 5). The IP packet with the resulting compressed header is then encapsulated in AAL-5 and transmitted on the specified VC. The IP switch issuing the redirection keeps a copy of the removed fields and associates them with the specified ATM VCI. The switch may reconstruct the complete header using the stored fields. This approach is taken for security reasons. It allows an IP switch to act as a simple flow-based firewall without having to inspect the contents of each packet. It prevents a user from establishing a switched flow to a permitted destination or service behind a firewall and then submitting packets with a different header to gain access to a prohibited destination.

Figure 5. IFMP packet encapsulation.

The Time to Live (TTL) field is among the header fields covered by the flow identifier, so only packets with the correct TTL may be included in a switched flow. Thus, at the end of a switched flow the TTL of packets on that flow must be correct, since the TTL field is not transmitted in the packet but is recovered from information stored at the destination. In order to preserve the value of the header checksum, the value of the TTL field is subtracted from the header checksum of packets at the origin of a switched flow. The value of the TTL field is added to the header checksum at the end of a switched flow when the packet header is reconstructed. This operation is necessary because the number of upstream IP switch nodes is unknown at the destination of a switched flow and may indeed change over time if, for example, more upstream IP switches decide to switch a particular flow.

Each IP switch controller periodically examines every flow. If a flow has received traffic since the last refresh period, the controller sends another redirect message upstream to refresh the flow. The upstream IP switch controller continues to send packets on the redirected VCI until a timeout interval expires during which it has not received any redirects. Once the flow has timed out, the upstream controller removes the associated state. The same is true for the controller that issued the redirect.

Alternatively, the downstream IP switch controller may reclaim the VCI by issuing an IFMP Reclaim message. The downstream flow state is deleted after the IFMP Reclaim Ack message is received. Normally flows are allowed to time out, but in some cases, such as routing changes or a lack of receive VCI resources, they need to be explicitly deleted.
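The checksum bookkeeping can be made concrete with ordinary one's-complement arithmetic (a sketch: the IFMP specifications give the normative procedure, and the ±0 corner case of one's-complement arithmetic is glossed over here):

```python
def oc_add(a, b):
    """16-bit one's-complement addition with end-around carry."""
    s = a + b
    return ((s & 0xFFFF) + (s >> 16)) & 0xFFFF

def oc_sub(a, b):
    """16-bit one's-complement subtraction."""
    return oc_add(a, ~b & 0xFFFF)

TTL_WORD_SHIFT = 8  # TTL occupies the high byte of its 16-bit header word

def checksum_at_origin(checksum, ttl):
    # Origin of a switched flow: subtract the TTL's contribution before
    # the field is stripped from the transmitted packet.
    return oc_sub(checksum, ttl << TTL_WORD_SHIFT)

def checksum_at_exit(checksum, ttl):
    # End of the switched flow: add the stored TTL's contribution back
    # when the full header is reconstructed, recovering the original value.
    return oc_add(checksum, ttl << TTL_WORD_SHIFT)

# Round trip: the reconstructed checksum equals the original.
c0 = 0xB1E6                       # arbitrary example checksum value
assert checksum_at_exit(checksum_at_origin(c0, ttl=63), ttl=63) == c0
```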
CONCLUSION

The IP switch is an alternative architecture to the gigabit router for providing high-speed routing. It uses low-level switching of flows (equivalent to cached routing decisions) and includes a cooperative protocol to allow explicit use and management of this cached information, on a link-by-link basis, throughout an IP switching network. We have presented an overview of the protocols developed to support Ipsilon's IP switch implementation. In an IP switch, all flows are classified. The flow classification process dynamically selects flows to be forwarded in the switch fabric
while the remaining flows are forwarded hop by hop. This flow classification allows an IP switch to intrinsically support multicast, QoS differentiation, simple firewall filtering, and complex policy-based routing decisions for each switched flow. These features can be difficult to support in a gigabit router with a fast forwarding path optimized only for destination address lookup. The forwarding engine in an IP switch is optimized for flow classification and for forwarding packets on those flows that it is decided should not be cut through the switch fabric.

ACKNOWLEDGMENT

We would like to thank Bob Hinden, Fang-Ching Liaw, and Eric Hoffman for their contributions.
