Reprogrammable Network Packet Processing on the Field Programmable Port Extender (FPX)
`
`∗
`, Naji Naufel, Jon S. Turner, David E. Taylor
`John W. Lockwood
`Department of Computer Science
`Applied Research Lab
`Washington University
`1 Brookings Drive
`Saint Louis, MO 63130
`http://www.arl.wustl.edu/arl/projects/fpx/
`
`ABSTRACT
`A prototype platform has been developed that allows pro-
`cessing of packets at the edge of a multi-gigabit-per-second
`network switch. This system, the Field Programmable Port
`Extender (FPX), enables packet processing functions to be
`implemented as modular components in reprogrammable
hardware. All logic on the FPX is implemented in two
`Field Programmable Gate Arrays (FPGAs). Packet process-
`ing functions in the system are implemented as dynamically-
`loadable modules.
`Core functionality of the FPX is implemented on an FPGA
`called the Networking Interface Device (NID). The NID con-
`tains the logic to transmit and receive packets over a net-
`work, dynamically reprogram hardware modules, and route
individual traffic flows. A full, non-blocking switch is imple-
`mented on the NID to route packets between the networking
`interfaces and the modular components. Modular compo-
`nents of the FPX are implemented on a second FPGA called
`the Reprogrammable Application Device (RAD). Modules
are loaded onto the RAD via reconfiguration and/or partial reconfiguration of the FPGA.
`Through the combination of the NID and the RAD, the
`FPX can individually reconfigure the packet processing func-
`tionality for one set of traffic flows, while the rest of the
`system continues to operate. The platform simplifies the
`development and deployment of new hardware-accelerated
`packet processing circuits. The modular nature of the sys-
`tem allows an active router to migrate functionality from
software plugins to hardware modules.
`
`∗
`This research is supported by NSF: ANI-0096052 and Xil-
`inx Corp.
`
`Permission to make digital or hard copies of all or part of this work for
`personal or classroom use is granted without fee provided that copies are
`not made or distributed for profit or commercial advantage and that copies
`bear this notice and the full citation on the first page. To copy otherwise, to
`republish, to post on servers or to redistribute to lists, requires prior specific
`permission and/or a fee.
`FPGA 2001 February 11-12, 2001, Monterey, CA USA
`Copyright 2001 ACM 1-58113-341-3/01/0002 ..$5.00
`
`Keywords
`FPGA, reconfiguration, hardware, modularity, network, rout-
`ing, packet, Internet, IP, ATM, processing
`
`Categories and Subject Descriptors
B.4.3 [Hardware]: Input/Output and Data Communications—Interconnections (Subsystems); C.2.1 [Computer Systems Organization]: Computer-Communication Networks—Network Architecture and Design
`
`1. BACKGROUND
`Internet routers and firewalls need high performance to
keep pace with the growing demands for bandwidth. At
`the same time, however, these devices need flexibility so that
`they can be reprogrammed to implement new features and
`functionality. Most of the functionality of a router occurs
`at the ingress and egress ports of a switch. Routers become
`bottlenecked at these locations when they are required to
`perform extensive packet processing operations.
`Hardware-based routers and firewalls provide high through-
`put by including optimized packet-processing pipelines and
`parallel computation circuits. By using Application-Specific
`Integrated Circuits (ASICs), traditional routers are able to
`implement the performance-critical features at line speed.
`The static nature of an ASIC circuit, however, limits the
`functionality of these performance-critical features to only a
`fixed set of the system’s functionality.
Software-based routers and firewalls excel in their
`ability to implement reprogrammable features. New fea-
`tures can be added or removed in an active router by load-
ing new software modules. The sequential nature of
`the microprocessor that executes that code, however, limits
`the throughput of the system. Routers that solely use soft-
ware to process packets typically achieve throughputs that
`are several orders of magnitude slower than their hardware-
`based counterparts.
`Routers and firewalls that utilize FPGAs can implement
`a desirable balance between performance and flexibility [1].
`They share the performance advantage of ASICs in that
`customized pipelines can be implemented and that parallel
logic functions can be performed over the area of the device.
`They also share the flexibility found in software systems to
`be reconfigured [2].
`
`87
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 1 of 7
`
`
`
`Figure 1: FPX Module
`
`
2. THE FPX SYSTEM
The Field Programmable Port Extender enables the rapid prototyping and deployment of hardware components for modern routers and firewalls [3]. The system is intended to allow researchers or hardware developers to quickly prototype new functionality in hardware, then download that functionality into one or more nodes in a network. The architecture of the FPX makes it well suited to implement applications like IP routing, per-flow queuing [4], and flow control algorithms [5] in hardware.
Components of the FPX include two FPGAs, five banks of memory, and two high-speed network interfaces. Networking interfaces on the FPX were optimized to enable the simultaneous arrival and departure of data cells at SONET OC48 rates. This is the equivalent bandwidth of multiple channels of Gigabit Ethernet.
A photograph of the FPX module is shown in Figure 1. The FPX circuit itself is implemented as a 12-layer printed circuit board with the dimensions 20 cm × 10.5 cm. The FPX has two Xilinx Virtex FPGAs: an XCV600Efg676 and an XCV1000Efg680. Future versions of the FPX will utilize a larger, pin-compatible XCV2000Efg680 to provide additional logic resources for user-defined functionality.
SRAM components on the FPX are mounted above and below the RAD. SDRAM memories are mounted on the back of the FPX in sockets. Reconfiguration data for the FPGAs are stored in non-volatile Flash memory for the NID and in SRAM memory for the RAD.
The FPX integrates with another open hardware platform called the Washington University Gigabit Switch (WUGS) [6]. Figure 2 shows an FPX mounted in one port of a WUGS. By inserting one or more FPX modules at each port of the switch, parallel FPX units can be used to simultaneously process packets as they pass through the network on all ports of the switch.

Figure 2: Photo of FPX mounted in WUGS Switch

Figure 3: NID and RAD Configuration

2.1 Logical Configuration
The FPX implements all logic using two FPGA devices: the Network Interface Device (NID) and the Reprogrammable Application Device (RAD). The interconnection of the RAD and NID to the network and memory components is shown in Figure 3.
The RAD contains the modules that implement the module-specific functionality. Each module on the RAD connects to one Static Random Access Memory (SRAM) and to one wide Synchronous Dynamic RAM (SDRAM). In total, the modules implemented on the RAD have full control over four independent banks of memory. The SRAM is typically used for applications that need to implement table lookup operations (like the Fast IP lookup algorithm), while the SDRAM interface is typically used for applications, such as packet queuing, that transfer bursts of data and can tolerate a higher memory latency.
The RAD communicates with the NID using a Utopia-like interface. Packets on this interface are segmented into a sequence of fixed-size cells that are formatted as IP over ATM. Each interface has a small amount of buffering and implements flow control. A Start of Cell (SOC) signal is asserted at the input of a module to indicate the arrival of data. The Transmit Cell Available (TCA) signal is asserted back towards an incoming data source to indicate downstream congestion.
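The following VHDL sketch illustrates this cell-level handshake with a minimal pass-through stage: the data word and the SOC marker are registered for one clock cycle, and the back-pressure indication from the downstream interface is propagated to the upstream source. The entity and signal names are illustrative and are not the published interface code.

library ieee;
use ieee.std_logic_1164.all;

entity cell_passthrough is
  port (
    clk     : in  std_logic;
    d_in    : in  std_logic_vector(31 downto 0);  -- incoming cell data
    soc_in  : in  std_logic;                      -- start of cell from the upstream source
    tca_out : out std_logic;                      -- back-pressure toward the upstream source
    d_out   : out std_logic_vector(31 downto 0);  -- outgoing cell data
    soc_out : out std_logic;
    tca_in  : in  std_logic                       -- back-pressure from the downstream interface
  );
end entity cell_passthrough;

architecture rtl of cell_passthrough is
begin
  -- Propagate the downstream congestion indication back toward the source.
  tca_out <= tca_in;

  -- Register the data word and start-of-cell marker for one clock cycle.
  process (clk)
  begin
    if rising_edge(clk) then
      d_out   <= d_in;
      soc_out <= soc_in;
    end if;
  end process;
end architecture rtl;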
`
`3. NETWORK INTERFACE DEVICE
`The Network Interface Device (NID) on the FPX controls
`how packet flows are routed to and from modules. It also
`provides mechanisms to dynamically load hardware modules
`over the network and into the router. The combination of
`these features allows these modules to be dynamically loaded
`and unloaded without affecting the switching of other traffic
`flows or the processing of packets by the other modules in
`the system.
As shown in Figure 4, the NID has several components,
`all of which are implemented in FPGA hardware. It contains
`a four-port switch to transfer data between ports; Virtual
`Circuit lookup tables (VC) on each port in order to selec-
tively route flows; a Control Cell Processor (CCP), which
`is used to process control cells that are transmitted and
`received over the network; logic to reprogram the FPGA
`hardware on the RAD; and synchronous and asynchronous
`interfaces to the four network ports that surround the NID.
`3.1 Per Flow Routing
`The NID routes flows among the modules on the RAD
`and the network interfaces to the switch and line card using
`a four-port switch. Each traffic flow that arrives on any
`incoming port can be forwarded to any destination port.
Each of the NID's four interfaces provides a small amount of buffering for short-term congestion. Buffers on the NID are implemented using on-chip memory. When packets contend for transmission to the same destination port, the NID performs arbitration. For longer-term congestion, the NID avoids data loss by sending a back-pressure signal to the previous module or network interface along that network flow's path. The design of the four-port switch and the scheduling algorithm used to arbitrate among flows is based on the design of the iPOINT switch [7] [8].

Figure 4: Network Interface Device Configuration

Figure 5: Per-flow routing on the NID (aggregate flow routing table indexed by VPI; per-flow routing table indexed by VCI; destinations d1–d4 per entry)

IP packets are routed through the FPX and switch based on the assignment of the cell headers that transport each packet.
`
`89
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 3 of 7
`
`
`
Figure 7: Modular Component of FPX (module interface signals: CLK, RESET_L, ENABLE_L, READY_L; data interface D_MOD_IN[31:0], SOC_MOD_IN, TCA_MOD_OUT and D_MOD_OUT[31:0], SOC_MOD_OUT, TCA_MOD_IN; SRAM interface SRAM_REQ, SRAM_GR, SRAM_ADDR[17:0], SRAM_RW, SRAM_D_IN[35:0], SRAM_D_OUT[35:0]; SDRAM interface SDRAM_REQ, SDRAM_GR, SDRAM_DATA[63:0])
`
`The NID supports forwarding for both aggregate traffic flows
`and individual traffic flows. The NID’s Virtual Circuit look-
`up table (VC) maps these flows into next-hop destinations
`at each of the four ports. As shown in Figure 5, the NID’s
`flow routing table contains entries for each of the four ports
`in the switch that identify the destination port (d1 . . . d4) of
`each flow. The table has sufficient entries to support 1024
`virtual paths for aggregated traffic and another 1024 virtual
`circuits for individual flows.
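A minimal VHDL sketch of this lookup is shown below. It assumes 10-bit VPI and VCI indices, a 2-bit destination-port encoding, and a select signal that chooses between the aggregate and per-flow tables; the NID's actual table format is not specified here, and the table-update path (driven by the Control Cell Processor) is omitted.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

entity nid_flow_lookup is
  port (
    clk      : in  std_logic;
    in_port  : in  unsigned(1 downto 0);   -- arrival port (one of four)
    vpi      : in  unsigned(9 downto 0);   -- 1024 virtual paths
    vci      : in  unsigned(9 downto 0);   -- 1024 virtual circuits
    use_vc   : in  std_logic;              -- '1' selects the per-flow table
    out_port : out unsigned(1 downto 0)    -- destination port for this flow
  );
end entity nid_flow_lookup;

architecture rtl of nid_flow_lookup is
  -- Each entry packs one 2-bit destination for each of the four arrival ports.
  type route_table is array (0 to 1023) of std_logic_vector(7 downto 0);
  signal vp_table : route_table := (others => (others => '0'));  -- aggregate flows
  signal vc_table : route_table := (others => (others => '0'));  -- individual flows
begin
  process (clk)
    variable entry : unsigned(7 downto 0);
  begin
    if rising_edge(clk) then
      if use_vc = '1' then
        entry := unsigned(vc_table(to_integer(vci)));
      else
        entry := unsigned(vp_table(to_integer(vpi)));
      end if;
      -- Shift the arrival port's 2-bit destination field down to the low bits.
      out_port <= resize(shift_right(entry, 2 * to_integer(in_port)), 2);
    end if;
  end process;
end architecture rtl;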
`Examples that illustrate the NID’s switching functionality
`are shown in Figure 6. By default, cells are simply passed
`between the line card interface and the switch. To imple-
`ment egress flow processing (i.e., process packets as they
`exit the router), the NID routes a flow from the switch,
`to a RAD module, then out to the line card. Likewise, to
`implement ingress cell processing, the NID routes a virtual
`circuit from the line card, to a RAD module, then out to the
`switch. Full RAD processing occurs when data is processed
`in both directions by both modules on the RAD. Loopback
`and partial loopback testing can be programmed on the NID
`to debug experimental modules. Modules can implement
`selective packet forwarding by reassignment of the headers
`that transport each packet.
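To make these switching patterns concrete, the following sketch encodes two of the configurations of Figure 6 as routing entries, one destination recorded per arrival port. The port numbering and encodings are hypothetical and serve only to show how the bypass and egress-processing patterns map onto per-port destinations.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package nid_route_examples is
  subtype port_id is unsigned(1 downto 0);
  constant P_SWITCH : port_id := "00";   -- switch fabric interface (assumed numbering)
  constant P_LC     : port_id := "01";   -- line card interface
  constant P_RAD_A  : port_id := "10";   -- first RAD module
  constant P_RAD_B  : port_id := "11";   -- second RAD module

  -- One destination per arrival port (switch, line card, RAD A, RAD B).
  type route_entry is array (0 to 3) of port_id;

  -- Default flow action (bypass): the line card and the switch exchange cells directly.
  constant BYPASS_ROUTE : route_entry :=
    (0 => P_LC, 1 => P_SWITCH, 2 => P_SWITCH, 3 => P_LC);

  -- Egress processing: cells from the switch visit a RAD module, then exit to the line card.
  constant EGRESS_ROUTE : route_entry :=
    (0 => P_RAD_A, 1 => P_SWITCH, 2 => P_LC, 3 => P_LC);
end package nid_route_examples;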
`
`3.2 Control functions
`The NID implements a Control Cell Processor (CCP) in
`hardware to manage the operation of the FPX and to com-
`municate over the network. On the ingress interface from the
`switch, the CCP listens and responds to commands that are
`sent on a specific virtual circuit. The NID processes com-
`mands that include: (1) modification of per-flow routing
entries; (2) reading and writing of hardware status registers; (3) reading and writing of configuration memory; and
`(4) commands that cause the logic on the RAD to be repro-
`grammed. After executing each command, the NID returns
`a response in a control cell.
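A VHDL sketch of this command dispatch is given below. The opcode field width and encodings are hypothetical, since the control cell format is not detailed here; the sketch only shows how the four command classes would be decoded into one-cycle strobes.

library ieee;
use ieee.std_logic_1164.all;

entity ccp_dispatch is
  port (
    clk       : in  std_logic;
    cmd_valid : in  std_logic;                     -- a control cell has been received
    opcode    : in  std_logic_vector(7 downto 0);  -- hypothetical command opcode field
    do_route  : out std_logic;  -- modify a per-flow routing entry
    do_status : out std_logic;  -- read or write a hardware status register
    do_config : out std_logic;  -- read or write configuration memory
    do_reprog : out std_logic   -- reprogram the logic on the RAD
  );
end entity ccp_dispatch;

architecture rtl of ccp_dispatch is
begin
  process (clk)
  begin
    if rising_edge(clk) then
      do_route  <= '0';
      do_status <= '0';
      do_config <= '0';
      do_reprog <= '0';
      if cmd_valid = '1' then
        case opcode is
          when x"01"  => do_route  <= '1';
          when x"02"  => do_status <= '1';
          when x"03"  => do_config <= '1';
          when x"04"  => do_reprog <= '1';
          when others => null;    -- unrecognized commands are ignored
        end case;
      end if;
    end if;
  end process;
end architecture rtl;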
`
`3.3 FPX Reprogrammability
`In order to reprogram the RAD over the network, the
`NID implements a reliable protocol to fill the contents of
`the on-board RAM with configuration data that is sent over
`the network. As each cell arrives, the NID uses the data
`and the sequence number in the cell to write data into the
RAD Program SRAM. Once the last cell has been correctly received and the FPX holds a complete image of the reconfiguration bytestream needed to reprogram the RAD, another control cell can be sent to the NID to initiate the reprogramming of the RAD using the contents of the RAD Program SRAM.

Figure 8: Larger Configuration of FPX Modules
The FPX supports partial reprogramming of the RAD by
`allowing configuration streams to contain commands that
`only program a portion of the logic on the RAD. Rather
`than issue a command to reinitialize the device, the NID
`just writes the frames of reconfiguration data to the RAD’s
`reprogramming port. This feature enables the other mod-
`ule on the RAD to continue processing packets during the
`partial reconfiguration. Similar techniques have been imple-
`mented in other systems using software-based controllers [9]
`[10].
`
`3.4 Modular Interface
`Application-specific functionality is implemented on the
`RAD as modules. A modular interface has been developed
`that provides a standard interface to access packet content
`and to interface with off-chip memory.
`Hardware plugin modules on the RAD consist of a re-
`gion of FPGA gates and internal memory, bounded by a
`well-defined interface to the network and external memory.
`Currently, those regions are defined as one half of an FPGA
`and a fixed set of I/O pins.
`The modular interface of an FPX component is shown in
`Figure 7. Data arrives at and departs from a module over
`a 32-bit wide, Utopia-like interface. Data passes through
`modules as complete ATM cells. Larger IP datagrams pass
`through the interface in multiple cells.
`The module provides two interfaces to off-chip memory.
`The SRAM interface supports transfer of 36-bit wide data to
`and from off-chip SRAM. The Synchronous Dynamic RAM
`(SDRAM) interface provides a 64-bit wide interface to off-
`chip memory. In the implementation of the IP lookup mod-
`ule, the off-chip SRAM is used to store the data structures
`of the fast IP Lookup algorithm [3].
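Based on the signal names shown in Figure 7, a module's boundary can be described by an entity declaration along the following lines. The port directions and the treatment of the shared SDRAM data bus are assumptions made for illustration; the published "Hello World" module [12] defines the authoritative interface.

library ieee;
use ieee.std_logic_1164.all;

entity fpx_module is
  port (
    clk         : in    std_logic;
    reset_l     : in    std_logic;
    enable_l    : in    std_logic;
    ready_l     : out   std_logic;

    -- 32-bit Utopia-like cell interface
    d_mod_in    : in    std_logic_vector(31 downto 0);
    soc_mod_in  : in    std_logic;
    tca_mod_out : out   std_logic;
    d_mod_out   : out   std_logic_vector(31 downto 0);
    soc_mod_out : out   std_logic;
    tca_mod_in  : in    std_logic;

    -- 36-bit off-chip SRAM interface (request/grant)
    sram_req    : out   std_logic;
    sram_gr     : in    std_logic;
    sram_addr   : out   std_logic_vector(17 downto 0);
    sram_rw     : out   std_logic;
    sram_d_out  : out   std_logic_vector(35 downto 0);
    sram_d_in   : in    std_logic_vector(35 downto 0);

    -- 64-bit off-chip SDRAM interface (request/grant); bus direction assumed
    sdram_req   : out   std_logic;
    sdram_gr    : in    std_logic;
    sdram_data  : inout std_logic_vector(63 downto 0)
  );
end entity fpx_module;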
`
`3.5 Larger Configurations
`As the capacity of FPGAs increases, it is possible to in-
tegrate more modules together onto the same FPGA.
`
`90
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 4 of 7
`
`
`
Figure 6: NID Switching Functionality (panels: Default Flow Action (Bypass); Egress (SW) Processing (Per-flow Output Queueing); Ingress (LC) Processing (IP Routing); Full RAD Processing (Packet Routing and Reassembly); Full Loopback Testing (System Test); Dual Egress Processing (Chained Modules))
`
`91
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 5 of 7
`
`
`
Figure 9: FPX/SPC Physical Configuration (line cards at OC3/OC12/OC48, SPC Smart Port Cards, and FPX modules stacked between the IPP/OPP ports of the WUGS gigabit switch fabric)
`
In the current system, each module occupies one-half of an
`XCV1000E. In future systems, a larger number of modules
`can fit into a higher-capacity FPGA.
`The interface that is used by hardware modules in the
`existing system is designed to remain unchanged in a larger
`configuration. Applications retain the same interfaces and
`timing relationships to the data bus, memory buses, and
`control lines. Hardware modules need only be resynthesized
`so that they fit into a fixed region of the new device.
A larger configuration of FPX modules is shown in Figure 8. Physically, each module occupies a fixed region of the FPGA device. Intra-chip routing interconnects the data paths between the modules. Common signal lines interconnect the modules with the SRAM and SDRAM.
`3.6 Combined Hardware/Software Modules
`Complex packet processing algorithms can require both
hardware and software components [14] [15]. Through an-
`other research project, an active processing node called the
`Smart Port Card (SPC) has been built that contains a rout-
`ing chip called the APIC and an embedded Pentium proces-
`sor that runs the NetBSD kernel [16] [17]. When combined
`with the FPX, interesting applications can be implemented
`which perform active processing in both hardware and soft-
`ware. The physical configuration of the combined system
`is shown in Figure 9. Physically, the FPX and SPC stack
`between the ports of the WUGS switch.
`The logical configuration of the combined system is shown
`in Figure 10. Packets enter and exit the combined system
`at the interfaces to the switch fabric and line card. Data
`flows can be selectively forwarded between the network in-
`terfaces, the hardware modules on the FPX, and the soft-
`ware modules on the SPC by configuration of virtual circuits
`and virtual paths on the NID and APIC.
`As an example of an application that can utilize both
`hardware and software modules to process packets, consider
`the implementation of an Internet router in the combined
`system. Packet flows would enter the line card and be for-
`warded by the APIC and NID to a hardware module on
`the FPX. This hardware module, in turn, performs a fast
`IP lookup on the packet, then forwards standard packets
`on a pre-established virtual circuit that leads through the
`NID and switch fabric to the appropriate egress port [3].
`For packets that require non-standard processing, such as
`those with IP options, the hardware module can forward
`the packet to a software module on the SPC for further pro-
`cessing. By using both hardware and software, the combined
system can provide high throughput for the majority of the
`packets and full processing functionality for exceptions.
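A hypothetical helper for this hardware/software split is sketched below: it inspects the IPv4 version/IHL byte and selects either the pre-established egress virtual circuit or the circuit leading to the SPC. The VCI values, and the assumption that the module has already extracted this byte from the first cell of the packet, are illustrative only.

library ieee;
use ieee.std_logic_1164.all;
use ieee.numeric_std.all;

package hw_sw_split is
  -- Illustrative VCI values; the real circuits are configured on the NID and APIC.
  constant VCI_FAST_PATH : unsigned(15 downto 0) := to_unsigned(64, 16);
  constant VCI_TO_SPC    : unsigned(15 downto 0) := to_unsigned(65, 16);
  function select_vci (version_ihl : std_logic_vector(7 downto 0)) return unsigned;
end package hw_sw_split;

package body hw_sw_split is
  function select_vci (version_ihl : std_logic_vector(7 downto 0)) return unsigned is
    constant ihl : unsigned(3 downto 0) := unsigned(version_ihl(3 downto 0));
  begin
    if ihl > 5 then
      return VCI_TO_SPC;     -- IP options present: hand off to software on the SPC
    else
      return VCI_FAST_PATH;  -- common case: stay on the hardware fast path
    end if;
  end function select_vci;
end package body hw_sw_split;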
`
`3.7 Networking Testbed
The FPX provides an open-platform environment that can be used for the rapid prototyping of a broad range of hardware-based networking components. Through a National Science Foundation grant, FPX hardware can be made available to researchers at universities interested in developing networking hardware components [11].
`The implementation of a sample FPX module, including
`the VHDL source code and I/O pin mappings on the RAD,
`can be downloaded from the web [12]. Other modules have
`been developed for the FPX to perform IP routing, packet
`buffering, and packet content modification. Details about
`the FPX are available on the project website [13].
`
`4. CONCLUSIONS
`The Field Programmable Port Extender enables customized
`packet processing functions to be implemented as modules
`which can be dynamically loaded into hardware over a net-
`work. The Networking Interface Device implements the core
`functionality of the FPX. It allows the FPX to individually
`reconfigure the packet processing functionality for a set of
`traffic flows, while not disrupting the packet processing func-
`tions for others. The modular design of the FPX has made
`the system of interest for active networking systems, as it
allows customized applications to achieve higher performance via hardware acceleration.
`
`5. ACKNOWLEDGMENTS
`The authors would like to thank Dave Parlour of Xilinx
`for his support of this project and for his research on the
`mechanisms to implement interfaces between on-chip mod-
`ules.
`
`6. REFERENCES
`[1] S. Hauck, “The roles of FPGAs in reprogrammable
`systems,” Proceedings of the IEEE, vol. 86,
`pp. 615–638, Apr. 1998.
`[2] W. Marcus, I. Hadzic, A. McAuley, and J. Smith,
`“Protocol boosters: Applying programmability to
`network infrastructures,” IEEE Communications
`Magazine, vol. 36, no. 10, pp. 79–83, 1998.
`[3] J. W. Lockwood, J. S. Turner, and D. E. Taylor,
`“Field programmable port extender (FPX) for
`distributed routing and queuing,” in FPGA’2000,
`(Monterey, CA), pp. 137–144, Feb. 2000.
`[4] H. Duan, J. W. Lockwood, S. M. Kang, and J. Will,
`“High-performance OC-12/OC-48 queue design
`prototype for input-buffered ATM switches,” in
`INFOCOM’97, (Kobe, Japan), pp. 20–28, Apr. 1997.
`[5] M. Bossardt, J. W. Lockwood, S. M. Kang, and S.-Y.
`Park, “Available bit rate architecture and simulation
`for an input-buffered and per-vc queued ATM switch,”
`in GLOBECOM’98, (Sydney, Australia),
`pp. 1817–1822, Nov. 1998.
`[6] J. S. Turner, T. Chaney, A. Fingerhut, and M. Flucke,
`“Design of a Gigabit ATM switch,” in INFOCOM’97,
`1997.
`[7] J. W. Lockwood, H. Duan, J. J. Morikuni, S. M.
`Kang, S. Akkineni, and R. H. Campbell, “Scalable
`optoelectronic ATM networks: The iPOINT fully
`functional testbed,” IEEE Journal of Lightwave
`Technology, pp. 1093–1103, June 1995.
`
`92
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 6 of 7
`
`
`
Figure 10: Combined FPX/SPC Logical configuration (hardware-based packet processing on the FPX and software-based packet processing on the SPC, connected by 3.2 Gbps links between the line card, APIC, NID, and switch fabric)
`
`[8] “Illinois Pulsar-based Optical Interconnect (iPOINT)
`Homepage.” http://ipoint.vlsi.uiuc.edu, Sept.
`1999.
`[9] W. Westfeldt, “Internet reconfigurable logic for
`creating web-enabled devices.” Xilinx Xcell, Q1 1999.
`[10] S. Kelem, “Virtex configuration architecture advanced
`user’s guide.” Xilinx XAPP151, Sept. 1999.
`[11] J. S. Turner, “Gigabit Technology Distribution
`Program.” http://www.arl.wustl.edu/
`gigabitkits/kits.html, Aug. 1999.
`[12] J. Lockwood and D. Lim, “Hello World: A simple
`application for the field programmable port extender
`(FPX),” tech. rep., WUCS-00-12, Washington
`University, Department of Computer Science, July 11,
`2000.
`[13] “Field Programmable Port Extender Homepage.”
`http://www.arl.wustl.edu/projects/fpx/, Aug.
`2000.
`
`[14] S. Choi, J. Dehart, R. Keller, J. Lockwood, J. Turner,
`and T. Wolf, “Design of a flexible open platform for
`high performance active networks,” in Allerton
`Conference, (Champaign, IL), 1999.
`[15] D. S. Alexander, M. W. Hicks, P. Kakkar, A. D.
`Keromytis, M. Shaw, J. T. Moore, C. A. Gunter,
J. Trevor, S. M. Nettles, and J. M. Smith, in The 1998
`ACM SIGPLAN Workshop on ML / International
`Conference on Functional Programming (ICFP), 1998.
`[16] J. D. DeHart, W. D. Richard, E. W. Spitznagel, and
`D. E. Taylor, “The smart port card: An embedded
`Unix processor architecture for network management
`and active networking,” IEEE/ACM Transactions on
`Networking, Submitted July 2000.
`[17] D. Decasper, G. Parulkar, S. Choi, J. DeHart, T. Wolf,
`and B. Plattner, “Design issues for high performance
`active routers,” IEEE Network, vol. 13, Jan. 1999.
`
`93
`
`Ex.1031
`CISCO SYSTEMS, INC. / Page 7 of 7
`
`