`Kalkunte et al.
`
(10) Patent No.: US 7,830,892 B2
(45) Date of Patent: Nov. 9, 2010
`
`US007830892B2
`
`(54) VLAN TRANSLATION IN A NETWORK
`DEVICE
`
`(75) Inventors: Mohan Kalkunte, Sunnyvale, CA (US);
`Venkateshwar Buduma, San Jose, CA
`(US); Song-Huo Yu, Saratoga, CA (US);
`Gurumurthy Yelewarapu, Santa Clara,
`CA (US)
`(73) Assignee: Broadcom Corporation, Irvine, CA
`(US)
`
`(*) Notice:
`
`Subject to any disclaimer, the term of this
`patent is extended or adjusted under 35
`U.S.C. 154(b) by 956 days.
`
`(21) Appl. No.: 11/289,370
`
`(22) Filed:
`
Nov. 30, 2005
`
`(65)
`
`Prior Publication Data
US 2006/0114915 A1
`Jun. 1, 2006
`
`Related U.S. Application Data
`(60) Provisional application No. 60/631,548, filed on Nov.
`30, 2004, provisional application No. 60/686,402,
`filed on Jun. 2, 2005.
`
(51) Int. Cl.
H04L 12/56 (2006.01)
G06F 15/177 (2006.01)
(52) U.S. Cl. ........................ 370/395.53; 709/220
(58) Field of Classification Search ................ None
`See application file for complete search history.
(56) References Cited

U.S. PATENT DOCUMENTS
6,041,042 A     3/2000  Bussiere
6,335,932 B2    1/2002  Kadambi et al.
6,496,502 B1   12/2002  Fite, Jr. et al.
6,535,510 B1    3/2003  Kalkunte et al.
6,674,743 B1    1/2004  Amara et al.
6,792,502 B1    9/2004  Pandya et al.
6,804,233 B1   10/2004  Congdon et al.
6,807,179 B1   10/2004  Kanuri et al.
6,963,921 B1   11/2005  Yang et al.
6,993,026 B1    1/2006  Baum et al.
7,020,139 B2    3/2006  Kalkunte et al.
7,031,304 B1    4/2006  Arberg et al.
7,054,315 B2    5/2006  Liao
7,089,240 B2    8/2006  Basso et al.
7,127,566 B2   10/2006  ...akrishnan et al.
7,139,753 B2   11/2006  Bass et al.
7,161,948 B2    1/2007  Sampath et al.
7,292,567 B2   11/2007  Terrell et al.
7,292,573 B2   11/2007  LaVigne et al.
7,313,135 B2   12/2007  Wyatt
7,327,748 B2    2/2008  Montalvo et al.
`
`(Continued)
`OTHER PUBLICATIONS
`
Non-Final Office Action received for U.S. Appl. No. 12/135,720, mailed on Mar. 19, 2009, 8 pages.
`(Continued)
Primary Examiner—Ricky Ngo
`Assistant Examiner—Clemence Han
`
`(57)
`
`ABSTRACT
`
`A network device for implementing VLAN translation on a
`packet. The network device includes a user network interface
`port for receiving and transmitting packets to customers of a
network. The network device also includes a network to network
interface port for communicating with a second network
`device in the network. A packet received at the user network
`interface port is classified, translated based on a predefined
`provider field associated with the packet, and encapsulated
`with a tag that is removed when the packet is transmitted from
`the user network interface port to a customer.
`
`19 Claims, 6 Drawing Sheets
`
[Representative figure: packet 700 with destination address 602, source address 604, OTPID 702, SPVID 704, ITPID 606 and CVID 608.]
`
`
`
`
`
U.S. PATENT DOCUMENTS

7,359,383 B2        4/2008  Wakumoto et al.
7,382,787 B1        6/2008  Barnes et al.
7,408,932 B2        8/2008  Kounavis et al.
7,408,936 B2        8/2008  Ge et al.
7,417,990 B2        8/2008  Ikeda et al.
7,499,456 B2        3/2009  De Silva et al.
7,515,610 B2        4/2009  Amagai et al.
7,525,919 B2        4/2009  Matsui et al.
7,570,639 B2        8/2009  Kalkunte et al.
7,680,107 B2        3/2010  Kalkunte
2002/0010791 A1     1/2002  Kalkunte et al.
2002/0126672 A1     9/2002  Chow et al.
2005/0008009 A1     1/2005  Chen et al. ............ 370/360
2005/0013306 A1*    1/2005  Albrecht ........... 370/395.53
2005/0018693 A1     1/2005  Dull ................... 370/396
2005/0083885 A1     4/2005  Ikeda et al.
2005/0129019 A1     6/2005  Cheriton
2005/0138149 A1*    6/2005  Bhatia ................. 709/220
2005/0163102 A1*    7/2005  Higashitaniguchi et al. . 370/351
2005/0180391 A1*    8/2005  Shimada ................ 370/351
2005/0190773 A1*    9/2005  Yang et al. ........ 370/395.53
2006/0002393 A1     1/2006  Lappin et al.
2006/0039383 A1*    2/2006  Ge et al. .............. 370/396
2006/0114876 A1     6/2006  Kalkunte
2006/0114901 A1     6/2006  Kalkunte et al.
2006/0114908 A1     6/2006  Kalkunte et al.
2006/0114938 A1     6/2006  Kalkunte et al.
2006/0140130 A1     6/2006  Kalkunte et al.
2006/0182034 A1     8/2006  Klinker et al.
2007/0110078 A1*    5/2007  De Silva et al. .... 370/395.53
2008/0095062 A1     4/2008  Shankar et al.
2008/0117913 A1     5/2008  Tatar et al.
`
`
`
OTHER PUBLICATIONS
Notice of Allowance received for U.S. Appl. No. 11/289,499, mailed on Apr. 3, 2009, 16 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,499, mailed on Oct. 15, 2008, 12 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,368, mailed on Mar. 19, 2009, 9 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,369, mailed on Mar. 18, 2009, 19 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,366, mailed on May 11, 2009, 9 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,366, mailed on Oct. 27, 2008, 11 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,497, mailed on Oct. 15, 2008, 13 pages.
Final Office Action received for U.S. Appl. No. 11/289,497, mailed on Mar. 18, 2009, 13 pages.
Notice of Allowance received for U.S. Appl. No. 11/289,497, mailed on Jun. 12, 2009, 12 pages.
Final Office Action received for U.S. Appl. No. 11/289,687, mailed on Jun. 30, 2009, 11 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,687, mailed on Dec. 24, 2008, 9 pages.
Non-Final Office Action received for U.S. Appl. No. 11/289,687, mailed on Aug. 5, 2008, 10 pages.
Notice of Allowance received for U.S. Appl. No. 11/289,368, mailed on Sep. 15, 2009, 17 pages.
Office Action received for U.S. Appl. No. 11/289,369, mailed on Oct. 13, 2009, 33 pages.
Office Action received for U.S. Appl. No. 11/289,497, mailed on Oct. 15, 2008, 13 pages.
Office Action received for U.S. Appl. No. 11/289,497, mailed on Mar. 18, 2009, 13 pages.
Supplemental Notice of Allowability received for U.S. Appl. No. 11/289,497, mailed on Sep. 21, 2009, 17 pages.
Supplemental Notice of Allowability received for U.S. Appl. No. 11/289,497, mailed on Dec. 24, 2009, 17 pages.
U.S. Appl. No. 11/289,366, Non-Final Office Action mailed Jul. 6, 2010, 18 pages.
* cited by examiner
`
`
`
`
[FIG. 1 (Sheet 1 of 6): network device 100, showing high speed ports 108a-108x, ingress module 102, memory management unit (MMU) 104 and CPU processing module 111.]
`
`
`
[FIG. 2 (Sheet 2 of 6): ingress pipeline 200, showing registers 202 and 204, arbiter 206 (main arbiter 207, auxiliary arbiter 209), configuration stage 208, discard stage 212, and switch stage 213 with first switch stage 214 and second switch stage 216.]
`
`
`
[FIG. 3 (Sheet 3 of 6): egress module 106, showing high speed ports 108a-108x, arbiter 302, initial packet buffer 304, decision stage 310 and modification stage 312.]
`
`
`
[FIG. 4 (Sheet 4 of 6): table lookup stage, showing L3 module 402 with Next Hop table 410 and Interface table 412, VLAN table 414, STG table 416, CAM 418, Data RAM 520, IP tunnel table 522 and VLAN translation stage 406.]
`
`
`
[FIG. 5 (Sheet 5 of 6): service provider network 500, showing network devices 502 and 504 with network to network interface ports 508a and 508b, user network interface ports 506a and 506b, and customers 510a-510e.]
`
`
`
[FIG. 6a (Sheet 6 of 6): packet 600 with destination address 602, source address 604, ITPID 606 and CVID 608. FIG. 6b: packet 700 with destination address 602, source address 604, OTPID 702, SPVID 704, ITPID 606 and CVID 608. FIG. 6c: packet 700 with destination address 602, source address 604, OTPID 702 and SPVID 704.]
`
`
`
VLAN TRANSLATION IN A NETWORK DEVICE
`
`CROSS-REFERENCE TO RELATED
`APPLICATIONS
`
`This application claims priority of U.S. Provisional Patent
Application Ser. No. 60/631,548, filed on Nov. 30, 2004, and
`U.S. Provisional Patent Application Ser. No. 60/686,402,
`filed on Jun. 2, 2005. The subject matter of these earlier filed
`applications is hereby incorporated by reference.
`
`BACKGROUND OF THE INVENTION
`
`Field of the Invention
`
`10
`
`15
`
`2
`inserting the service provider identifier, the packet that is
`forwarded to the destination port is indeed a modified packet.
`This can be a problem if a copy of the received packet is what
`is needed at a given destination port, such as the mirrored-to
`port.
`
`BRIEF DESCRIPTION OF THE DRAWINGS
`
`The accompanying drawings, which are included to pro
`vide a further understanding of the invention and are incor
`porated in and constitute a part of this specification, illustrate
`embodiments of the invention that together with the descrip
`tion serve to explain the principles of the invention, wherein:
`FIG. 1 illustrates a network device in which an embodi
`ment of the present invention may be implemented;
`FIG. 2 illustrates a centralized ingress pipeline architec
`ture, according to one embodiment of the present invention;
FIG. 3 illustrates a centralized egress pipeline architecture
`of an egress stage, according to one embodiment of the
`present invention;
`FIG. 4 illustrates a table lookup stage in the egress pipeline
`architecture;
`FIG. 5 illustrates an embodiment of the invention in which
`VLAN translation is implemented on at least two devices;
`FIG. 6a illustrates a packet that is transmitted between
`customers in a service provider network;
`FIG. 6b illustrates one embodiment of a packet that is
`translated in a service provider network; and
`FIG. 6c illustrates another embodiment of a packet that is
`translated in the service provider network.
`
`DETAILED DESCRIPTION OF PREFERRED
`EMBODIMENTS
`
`Reference will now be made to the preferred embodiments
`of the present invention, examples of which are illustrated in
`the accompanying drawings.
FIG. 1 illustrates a network device, such as a switching chip, in which an embodiment of the present invention may be implemented. Device 100 includes an ingress module 102, an MMU 104, and an egress module 106. Ingress module 102 is used for performing switching functionality on an incoming packet. MMU 104 is used for storing packets and performing resource checks on each packet. Egress module 106 is used for performing packet modification and transmitting the packet to an appropriate destination port. Each of ingress module 102, MMU 104 and egress module 106 implements multiple cycles for processing instructions generated by that module. Device 100 implements a pipelined approach to process incoming packets; according to one embodiment, the pipeline is able to process one packet every clock cycle. According to one embodiment of the invention, device 100 includes a 133.33 MHz core clock, which means that the device 100 architecture is capable of processing 133.33 M packets/sec.
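
As a back-of-the-envelope check (an illustration added here, not text from the specification), the quoted packet rate follows directly from the core clock when the pipeline retires one packet per cycle:

    # Illustrative arithmetic only: a pipeline that retires one packet per
    # clock cycle sustains a packet rate equal to its core clock frequency.
    CORE_CLOCK_HZ = 133.33e6       # 133.33 MHz core clock
    PACKETS_PER_CYCLE = 1          # one packet per clock cycle

    packet_rate = CORE_CLOCK_HZ * PACKETS_PER_CYCLE
    print(f"{packet_rate / 1e6:.2f} M packets/sec")  # 133.33 M packets/sec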
Device 100 may also include one or more internal fabric high speed ports, for example HiGig™ high speed ports 108a-108x, one or more external Ethernet ports 109a-109x, and a CPU port 110. High speed ports 108a-108x are used to interconnect various network devices in a system and thus form an internal switching fabric for transporting packets between external source ports and one or more external destination ports. As such, high speed ports 108a-108x are not externally visible outside of a system that includes multiple interconnected network devices. CPU port 110 is used to send and receive packets to and from external switching/routing control entities or CPUs. According to an embodiment of the
`
`25
`
`30
`
`35
`
`40
`
`45
`
`
`
`
`
`invention, CPU port 110 may be considered as one of external
`Ethernet ports 109a–109x. Device 100 interfaces with exter
`nal/off-chip CPUs through a CPU processing module 111,
`such as a CMIC, which interfaces with a PCI bus that con
`nects device 100 to an external CPU.
`Network traffic enters and exits device 100 through exter
`nal Ethernet ports 109a-109x. Specifically, traffic in device
`100 is routed from an external Ethernet source port to one or
`more unique destination Ethernet ports 109a–109x. In one
`embodiment of the invention, device 100 supports physical
`Ethernet ports and logical (trunk) ports. A physical Ethernet
`port is a physical port on device 100 that is globally identified
`by a global port identifier. In an embodiment, the global port
`identifier includes a module identifier and a local port number
`that uniquely identifies device 100 and a specific physical
`port. The trunk ports are a set of physical external Ethernet
`ports that act as a single link layer port. Each trunk port is
assigned a global trunk group identifier (TGID). According
`to an embodiment, device 100 can support up to 128 trunk
`ports, with up to 8 members per trunk port, and up to 29
`external physical ports. Destination ports 109a–109x on
`device 100 may be physical external Ethernet ports or trunk
`ports. If a destination port is a trunk port, device 100 dynami
`cally selects a physical external Ethernet port in the trunk by
`using a hash to select a member port. The dynamic selection
`enables device 100 to allow for dynamic load sharing
`between ports in a trunk.
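
A minimal sketch of this member selection is shown below. The specification does not name the hash function or its inputs, so the CRC-32 hash and the MAC-address flow key are illustrative assumptions only:

    import zlib

    def select_trunk_member(trunk_members, src_mac: bytes, dst_mac: bytes):
        # Hash a flow key so that one flow always maps to the same member
        # port while different flows spread across the trunk members.
        flow_key = src_mac + dst_mac                  # assumed hash inputs
        return trunk_members[zlib.crc32(flow_key) % len(trunk_members)]

    # Example: a trunk with 8 member ports (the embodiment allows up to 8).
    trunk = [3, 4, 5, 6, 7, 8, 9, 10]
    port = select_trunk_member(trunk, b"\x00\x11\x22\x33\x44\x55",
                                      b"\x66\x77\x88\x99\xaa\xbb")

Because the member is chosen by hashing flow fields rather than by round-robin, the packets of a single flow stay in order on one physical port while the aggregate load spreads across the trunk.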
Once a packet enters device 100 on a source port 109a-109x, the packet is transmitted to ingress module 102 for processing. Packets may enter device 100 from a XBOD or a GBOD. In this embodiment, the XBOD is a block that has one 10GE/12G MAC and supports packets from high speed ports 108a-108x. The GBOD is a block that has 12 10/100/1G MACs and supports packets from ports 109a-109x.
FIG. 2 illustrates a centralized ingress pipeline architecture 200 of ingress module 102. Ingress pipeline 200 processes incoming packets, primarily determines an egress bitmap and, in some cases, figures out which parts of the packet may be modified. Ingress pipeline 200 includes a data holding register 202, a module header holding register 204, an arbiter 206, a configuration stage 208, a parser stage 210, a discard stage 212 and a switch stage 213. Ingress pipeline 200 receives data from the XBOD, GBOD or CPU processing module 111 and stores cell data in data holding register 202. Arbiter 206 is responsible for scheduling requests from the GBOD, the XBOD and the CPU. Configuration stage 208 is used for setting up a table with all major port-specific fields that are required for switching. Parser stage 210 parses the incoming packet and a high speed module header, if present; it handles tunnelled packets through Layer 3 (L3) tunnel table lookups, generates user defined fields, verifies the Internet Protocol version 4 (IPv4) checksum on the outer IPv4 header, performs address checks and prepares relevant fields for downstream lookup processing. Discard stage 212 looks for various early discard conditions and either drops the packet and/or prevents it from being sent through pipeline 200. Switch stage 213 performs all switch processing in ingress pipeline 200, including address resolution.
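
The stage ordering just described can be summarized schematically as follows; the stage bodies are placeholders added for illustration, not the patent's implementation:

    def configuration_stage(cell):
        # set up the major port-specific fields required for switching
        return cell

    def parser_stage(cell):
        # parse the packet and any high speed module header; verify the
        # outer IPv4 checksum and prepare fields for downstream lookups
        return cell

    def early_discard(cell):
        # return True if an early discard condition applies
        return False

    def switch_stage(cell):
        # all switch processing, including address resolution
        return cell

    def ingress_pipeline(cell):
        cell = configuration_stage(cell)
        cell = parser_stage(cell)
        if early_discard(cell):
            return None                  # dropped before switching
        return switch_stage(cell)        # egress decision made here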
Ingress module 102 then transmits the packet to MMU 104, which applies all resource accounting and aging logic to the packet. Specifically, MMU 104 uses a source port number to perform resource accounting. Thereafter, MMU 104 forwards the packet to egress module 106.
Upon receiving the packet from MMU 104, egress module 106 supports multiple egress functions for a 72 gigabit port bandwidth and a CPU processing bandwidth. According to one embodiment, the egress module 106 is capable of handling more than 72 Gig of traffic, i.e., 24 one-GE ports, 4 high speed ports (12G) and a CPU processing port of 0.2 GE. The egress module 106 receives original packets, as inputted from Ethernet ports 109a-109x, from MMU 104, and may either transmit modified or unmodified packets to destination ports 109a-109x. According to one embodiment of the invention, all packet modifications within device 100 are made in egress module 106, and the core processing of egress module 106 is capable of running faster than the processing of destination ports 109a-109x. Therefore, egress module 106 provides a stall mechanism on a per-port basis to prevent ports 109a-109x from becoming overloaded, and thus services each port based on the speed of the port.
In an embodiment of the invention, the egress module 106 is connected to the MMU 104 by a 1024-bit data interface, and all packets transmitted from the MMU 104 pass through egress module 106. Specifically, the MMU 104 passes unmodified packet data and control information to egress module 106. The control information includes the results of table lookups and switching decisions made in ingress module 102. The data bus from MMU 104 is shared across all ports 108 and 109 and the CPU processing module 111. As such, the bus uses a "request based" Time Division Multiplexing (TDM) scheme, wherein each Gig port has a turn on the bus every 72 cycles and each high speed port 108 has a turn every 6 cycles. CPU processed packet data is transmitted over bubbles, i.e., free spaces occurring on the bus. Upon receiving the information from the MMU 104, the egress module 106 parses the packet data, performs table lookups, executes switch logic, modifies, aligns and further buffers the packet before the data is transmitted to the appropriate destination port 109a-109x.
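
One way to read these numbers (a hedged reconstruction; the patent does not give the slot layout) is as a 72-slot TDM calendar: 4 high speed ports each served every 6 cycles plus 24 Gig ports each served every 72 cycles exactly fills the wheel, since 4 x 12 + 24 = 72. The slot ordering below is an assumption:

    HIGH_SPEED_PORTS = [f"hs{i}" for i in range(4)]   # 4 high speed ports
    GIG_PORTS = [f"ge{i}" for i in range(24)]         # 24 Gig ports

    def build_tdm_wheel():
        # Each 6-cycle group carries all 4 high speed ports once (a turn
        # every 6 cycles) plus 2 of the 24 Gig ports (a turn every 72).
        wheel, gig = [], iter(GIG_PORTS)
        for _ in range(12):              # 12 groups x 6 slots = 72 cycles
            wheel.extend(HIGH_SPEED_PORTS)
            wheel.append(next(gig))
            wheel.append(next(gig))
        return wheel

    wheel = build_tdm_wheel()
    assert len(wheel) == 72
    # CPU-processed packet data has no slots of its own; it rides in
    # "bubbles", i.e. slots whose owner has nothing to send that turn.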
`In an embodiment of the invention, egress module 106 is
`connected to CPU processing module 111 through a 32 bit
`S-bus interface which the CPU uses to send requests to egress
`module 106. The requests are typically for reading the egress
`module’s resources, i.e., registers, memories and/or stat
`counters. Upon receiving a request, the egress module 106
`converts the request into a command and uses a mechanism,
`described in detail below, for storing and inserting CPU
`instructions into a pipeline wherever there is an available slot
`on the pipeline.
FIG. 3 illustrates a centralized egress pipeline architecture of egress stage 106. The egress pipeline includes an arbiter 302, a parser 306, a table lookup stage 308, a decision stage 310, a modification stage 312 and a data buffer 314. The arbiter 302 provides arbitration for accessing egress pipeline resources between packet data and control information from MMU 104 and information from the CPU. Parser 306 performs packet parsing for table lookups and modifications. Table lookup stage 308 performs table lookups for information transmitted from parser 306. Decision stage 310 is used for deciding whether to modify, drop or otherwise process the packet. Modification stage 312 makes modifications to the packet data based on outputs from previous stages of the ingress module.

All incoming packet data from MMU 104 is transmitted to an initial packet buffer 304. In an embodiment of the invention, the initial packet buffer is 1044 bits wide and 18 words deep. Egress pipeline 300 receives two inputs: packet data and control information from MMU 104, and CPU operations from the S-bus. Initial packet buffer 304 stores packet data and keeps track of any empty cycles coming from MMU 104. Initial packet buffer 304 outputs its write address, and parser 306 passes the latest write address with pipeline instructions to modification stage 312.
`
`10
`
`15
`
`25
`
`30
`
`35
`
`50
`
`55
`
`60
`
`65
`
`Ex.1013
`VERIZON / Page 10 of 15
`
`
`
Arbiter 302 collects packet data and control information from MMU 104, along with read/write requests to registers and memories from the CPU; it synchronizes the packet data and control information from MMU 104 and writes the CPU requests into a holding register. Based on the request type from the CPU, arbiter 302 generates pipeline register and memory access instructions and hardware table initialization instructions. After arbiter 302 collects packet data, CPU requests and hardware table initialization messages, it generates an appropriate instruction, which is transmitted to parser 306.
After receiving an instruction from arbiter 302, parser 306 parses the packet data using control information and a configuration register value transmitted from arbiter 302. According to an embodiment, the packet data is parsed to obtain L4 and L3 fields, which appear in the first 148 bytes of the packet.
Table lookup stage 308 then receives all packet fields and register values from parser 306. FIG. 4 further illustrates table lookup stage 308. Table lookup stage 308 includes an L3 module 402, a VLAN stage 404, a VLAN translation stage 406 and an IP tunneling lookup stage 408. In an embodiment of the invention, L3 module 402 includes an 8k deep Next Hop table 410 and a 4k deep Interface table 412. Next Hop table 410 is indexed based on a 13-bit wide next hop index from MMU 104, and Next Hop table 410 provides a MAC address and an interface number that is used, depending on the type of packet, to index Interface table 412. For all Memory Read Operation and Memory Write Operation instructions, table lookup stage 308 decodes the address and writes or reads data from the corresponding tables.
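
A minimal sketch of this two-step lookup follows; the patent fixes only the table depths, the 13-bit index and the MAC address/interface number outputs, so the entry layouts here are assumptions:

    NEXT_HOP_TABLE = [None] * 8192    # 8k deep, indexed by 13-bit value
    INTERFACE_TABLE = [None] * 4096   # 4k deep, indexed by interface no.

    # Illustrative entries: (MAC address, interface number), and an
    # interface record carrying a VLAN ID and a tunnel index.
    NEXT_HOP_TABLE[5] = (b"\x00\x0a\x0b\x0c\x0d\x0e", 17)
    INTERFACE_TABLE[17] = {"vlan_id": 100, "tunnel_index": 3}

    def l3_lookup(next_hop_index):
        assert 0 <= next_hop_index < 8192        # 13-bit index from MMU
        mac_address, interface_number = NEXT_HOP_TABLE[next_hop_index]
        return mac_address, INTERFACE_TABLE[interface_number]

    mac, interface_entry = l3_lookup(5)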
VLAN stage 404 is used to obtain VLAN related information and the spanning tree state of an outgoing port. VLAN stage 404 includes a VLAN table 414 and a spanning tree group (STG) table 416. VLAN table 414 is indexed based on the VLAN IDs from either the packet or Interface table 412. If a VLAN table lookup results in a "miss", i.e., an invalid VLAN, then the packet may be dropped. If the VLAN entry is valid but the outgoing port is not a member of the VLAN, then the packet may also be dropped. The VLAN table outputs a VLAN membership, an untagged bitmap, and an STG group number which is used to index STG table 416. STG table 416 outputs an STG vector which contains the spanning tree state of the outgoing ports. VLAN stage 404 also determines whether the packet should be modified in egress pipeline 300 for CPU and ingress mirroring cases.
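
The checks performed by the VLAN stage can be sketched as below; the entry layout (member set, untagged set, STG group number) is an assumed modeling of the tables, not the hardware format:

    def vlan_stage(vlan_id, egress_port, vlan_table, stg_table):
        entry = vlan_table.get(vlan_id)
        if entry is None:
            return "DROP"                      # "miss": invalid VLAN
        if egress_port not in entry["members"]:
            return "DROP"                      # port not a VLAN member
        stg_vector = stg_table[entry["stg_group"]]
        return {
            "untagged": egress_port in entry["untagged"],
            "stp_state": stg_vector[egress_port],  # spanning tree state
        }

    vlan_table = {100: {"members": {1, 2, 7}, "untagged": {2},
                        "stg_group": 0}}
    stg_table = [{1: "FORWARDING", 2: "FORWARDING", 7: "BLOCKING"}]
    result = vlan_stage(100, 7, vlan_table, stg_table)  # port 7: BLOCKING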
VLAN translation stage 406 translates the incoming VLAN to a new one and searches various tables. VLAN translation stage 406 includes a Content Addressable Memory (CAM) 418 and an associated Data Random Addressable Memory (RAM) 520. CAM 418 is searched with the VLAN ID and the destination port number and, if an associated entry is found, an address is obtained from CAM 418 to access the associated Data RAM 520.
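
Modeling the CAM as an associative map keyed by the (VLAN ID, destination port) search key gives a compact sketch of this lookup; the Data RAM entry contents are assumed:

    def vlan_translation_lookup(vlan_id, dest_port, cam, data_ram):
        address = cam.get((vlan_id, dest_port))   # CAM search
        if address is None:
            return None                           # miss: no entry found
        return data_ram[address]                  # associated data entry

    cam = {(100, 7): 0x2A}                        # key -> Data RAM address
    data_ram = {0x2A: {"new_vlan_id": 200}}
    entry = vlan_translation_lookup(100, 7, cam, data_ram)  # new VLAN 200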
IP tunneling lookup stage 408 obtains a partial Tunnel IP header from appropriate tables, registers and parsed packet fields. IP tunneling lookup stage 408 includes an IP tunnel table 522 that is indexed using a tunnel index from Interface table 412 and outputs a tunnel type, among other information, which is used to distinguish among the tunnel protocols that are implemented in egress pipeline 300.
Information from table lookup stage 308 is then transmitted to decision stage 310, where a decision is made as to whether to modify, drop or otherwise process the packet. For example, decision stage 310 first looks for flush bits at the beginning of the packet transmission and, if the flush bits are set, the packets are marked "dropped". In an embodiment of the invention, if a flush bit is set for a packet already in transmission, the packet is completely transmitted and the next packet is flushed. In another example, MMU 104 may mark packets as Purge, Aged or Cell Error, and decision stage 310 may either drop or transmit these packets but mark them as erroneous. In another example, if a VLAN translate feature is enabled but there was a miss in the CAM 418 lookup, decision stage 310 may drop the packet if certain fields are set. Decision stage 310 also determines if the packet needs to be L4 switched or L3 routed and the type of mirroring functions that need to be performed on the packets.
Modification stage 312 thereafter constructs a Tunnel IP header and a module header for the packet, makes replacement changes in the packet and computes an IP checksum for the outer and inner IP headers. Modification stage 312 receives a packet data interface from the initial buffer 304, which enables modification stage 312 to provide a read address to initial buffer 304 and in response obtain the packet data and basic control data. Modification stage 312 then generates Middle of Packet and End of Packet instructions based on the data received from initial buffer 304 and makes changes based on these commands. Modification stage 312 also receives all packet decisions and pipeline commands from decision stage 310 and uses this information to make further changes to the packet. Specifically, all fields of the tunnel IP header which need to be filled by incoming packet fields are filled. Furthermore, an IP checksum for the tunnel IP header is computed in parallel with the header construction. Modification stage 312 further reads back packets and control information from initial buffer 304 and performs all packet modifications and replacements of fields. It outputs CPU operations and hardware commands, together with the data and addresses associated with them, on one bus, and outputs packet data and control information on another bus. Additionally, modification stage 312 performs physical encapsulation and decapsulation of headers and tag removal and insertions. If a packet is going to a high speed port, modification stage 312 converts the packet from Ethernet format to high speed format. Modification stage 312 also aligns the packet by padding packets smaller than 64 bytes and removes holes by aligning data to a 1314 bit boundary. Thereafter, a 1314 bit "complete" data word is output from modification stage 312 to the data buffer 314.
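
The specification does not spell out the checksum algorithm; for a tunnel IP header it would ordinarily be the standard IPv4 header checksum of RFC 791 (the ones' complement of the ones' complement sum of the header's 16-bit words), sketched here for reference:

    def ipv4_checksum(header: bytes) -> int:
        # Standard IPv4 header checksum; the checksum field itself must
        # be zeroed in `header` before this is computed.
        if len(header) % 2:
            header += b"\x00"                # pad odd-length input
        total = 0
        for i in range(0, len(header), 2):
            total += (header[i] << 8) | header[i + 1]    # 16-bit words
            total = (total & 0xFFFF) + (total >> 16)     # fold the carry
        return ~total & 0xFFFF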
`Data buffer 314 stores completed data words from modi
`fication stage 312 in memory. Before the egress pipeline
`sends packets out to destination ports 109a–109x, the packet
`data are stored in the data buffer 314 for pipeline latency and
port speed matching. Data buffer 314 is capable of requesting
`data from MMU 104 whenever it has a free space.
FIG. 5 illustrates an embodiment of the invention in which VLAN translation is implemented on at least two network devices, as described above, used by a service provider network. The service provider network 500 includes a first device 502 and a second device 504, each of which includes a user network interface port 506a, 506b for receiving and/or transmitting packets to customers 510a-510e of service provider network 500. Each of first and second devices 502 and 504 also includes a network to network interface port 508a, 508b for communicating with the other. FIG. 6a illustrates a packet 600 that is transmitted between customers 510a-510e connected to service provider network 500. Packet 600 includes a destination address 602, a source address 604, an inner Internet type identifier (ITPID) 606 and a customer identifier (CVID) 608. According to FIG. 5, when packet 600 enters first device 502, packet 600, as shown in FIG. 6a, is translated and encapsulated with a tag that is stripped off when packet 600 is forwarded to a receiving customer. So if packet 600 is transmitted from customer 510a to customer 510e, upon receipt of packet 600 by first device 502, packet
600 is translated based on an associated service provider identifier, classified and transmitted to second device 504, where packet 600 is restored to its original state before it is transmitted out of user network interface port 506b to customer 510e. As such, neither customer 510a nor customer 510e knows anything about the translation.
`According to an embodiment of the invention, in order to
`properly classify the packet, a double tag mode must be
`enabled in the entire system. Thereafter, upon receipt of
`packet 600 on incoming user network interface port 506a,
`first device 502 obtains ITPID 606 from the incoming packet
`and compares ITPID 606 with a configured ITPID. If there is
a match, first device 502 provides predefined mapping or translation services to the packet.
Specifically, first device 502 indexes a VLAN Translation Table with CVID 608 and the ingress port number. First device 502 then obtains a service provider identifier and, depending on the action associated with the service provider identifier, an outer Internet type identifier (OTPID) 702 and service provider identifier (SPVID) 704 are either added to packet 600, as shown in FIG. 6b, or used to replace ITPID 606 and CVID 608, as shown in FIG. 6c. As shown in FIG. 6c, for those service providers that want to save bandwidth on packets that are transmitted in their service provider network 500, OTPID 702 and SPVID 704 may replace ITPID 606 and CVID 608, instead of being added to packet 600.
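
Putting the pieces together, the classify-and-translate decision can be sketched as follows. The table layout, the ADD/REPLACE action names and the dictionary packet model are assumptions for illustration (0x8100 and 0x88A8 are conventional tag protocol identifier values, used here only as example ITPID/OTPID contents):

    def translate_on_ingress(packet, ingress_port, configured_itpid, table):
        # Classify: compare the packet's ITPID against the configured one.
        if packet.get("itpid") != configured_itpid:
            return packet                     # no match: no translation
        # Index the VLAN Translation Table with (CVID, ingress port).
        entry = table[(packet["cvid"], ingress_port)]
        packet["otpid"] = entry["otpid"]      # outer identifier (702)
        packet["spvid"] = entry["spvid"]      # service provider id (704)
        if entry["action"] == "REPLACE":      # FIG. 6c: saves bandwidth
            del packet["itpid"], packet["cvid"]
        return packet                         # FIG. 6b form when "ADD"

    table = {(100, 1): {"otpid": 0x88A8, "spvid": 200, "action": "ADD"}}
    pkt = {"itpid": 0x8100, "cvid": 100}
    pkt = translate_on_ingress(pkt, 1, 0x8100, table)
    # pkt now carries OTPID/SPVID ahead of ITPID/CVID, as in FIG. 6b.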
It should be noted that each device 502 and 504 may include multiple service provider identifiers based on different parameters, for example based on protocols or ports, wherein each of the service provider iden