(19) Europäisches Patentamt
     European Patent Office
     Office européen des brevets

(11) EP 1 128 609 A2

(12) EUROPEAN PATENT APPLICATION

(43) Date of publication: 29.08.2001 Bulletin 2001/35

(51) Int. Cl.7: H04L 12/56

(21) Application number: 00310753

(22) Date of filing: 04.12.2000

(84) Designated Contracting States:
     AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR
     Designated Extension States:
     AL LT LV MK RO SI

(72) Inventors:
     - Hebb, Andrew T.
       Hudson, Massachusetts 01749 (US)
     - Cherian, Sanjay G.
       Brookline, New Hampshire 03033 (US)

(30) Priority: 13.12.1999 US 459448

(71) Applicant: ASCEND COMMUNICATIONS, INC.
     Alameda, CA 94502 (US)

(74) Representative: Watts, Christopher Malcolm Kelway, Dr.
     Lucent Technologies (UK) Ltd,
     5 Mornington Road
     Woodford Green, Essex IG8 0TU (GB)
(54) Packet classification engine

(57) Packet classification apparatus includes a rule memory and a criterion memory. One type of rule memory entry contains an operator and a pointer to a criterion memory entry. The operator defines a comparison operation to be performed, such as EQUAL (exact match) or LESS THAN. The criterion memory entry contains one or more values to be used as comparands on one side of the comparison, where corresponding values from a received packet appear on the other side of the comparison. Control logic responds to packet classification requests to retrieve a rule memory entry from the rule memory, retrieve the criterion memory entry identified by the criterion memory pointer in the rule memory entry, and perform the operation specified by the operator in the rule memory entry on the values in the criterion memory entry and corresponding values included in the classification request. This procedure is repeated for a sequence of rule memory entries until an ending condition is encountered, whereupon a packet classification result is generated reflecting the result of the classification operations. This result is provided to a packet processor to take the appropriate action based on the classification result.

[Front-page figure: block diagram of the classification engine, showing the criterion memory and its connections to/from the IOP 26, to/from the queues 70, 72, and from the request queue 70]

Printed by Jouve, 75001 PARIS (FR)
Description

BACKGROUND OF THE INVENTION
[0001] The present invention is related to the field of data communication networks.

[0002] In data communication networks, network devices such as switches are used to route packets through the network. Each switch typically has a number of line interfaces, each connected to a different network segment. When a packet is received at a given line interface, forwarding logic determines which line interface the packet should be transmitted from, and the packet is transferred to the appropriate outgoing line interface to be sent toward its destination in the network.

[0003] It is known to perform packet filtering in network devices such as switches. Packet filtering can be used to achieve various network management goals, such as traffic monitoring and security goals. Filtering criteria are established by network administrators and provided to the switches or other devices that carry out the filtering operation. Packets received by the switches are examined to determine whether their characteristics match the criteria for any of the established filters. For packets that satisfy the criteria for one or more filters, predetermined actions associated with those filters are carried out. For example, under certain circumstances it may be desirable that packets originating from a given network node be discarded rather than being forwarded in the network. A filter can be defined in which the criterion is that a packet source address exactly match a specific value, which is the address of the node whose packets are to be discarded. The action associated with the filter is the discarding of the packet. When a packet is received whose source address satisfies this criterion, it is discarded rather than being forwarded in the normal fashion.

[0004] There are a number of different kinds of criteria that may be used to filter packets. These criteria include exact matches as well as range checking, i.e., checking whether a value in a packet falls in some range of values. Numerous packet parameters can be used as criteria, such as source address, destination address, port identifiers, type of service, and others. To be useful, packet filtering processes must allow filters to be flexibly defined using different combinations of these and other criteria.

[0005] Because of this complexity inherent in packet filtering, it has traditionally been performed largely or exclusively in software within switches or other network devices supporting packet filtering. Software-based filtering, however, presents a bottleneck when high packet forwarding performance is required. Network administrators have had to make undesirable tradeoffs between network responsiveness and network security, for example, because previous systems have not been capable of robust packet filtering at line rates.
BRIEF SUMMARY OF THE INVENTION

[0006] In accordance with the present invention, packet processing logic in a network device is disclosed that provides high-speed packet classification for packet filtering purposes. The architecture of the classification apparatus provides substantial flexibility in the definition of complex filter criteria. Robust filtering can be performed at a sufficiently high rate to avoid degrading packet forwarding performance.

[0007] The packet classification apparatus includes a rule memory and a criterion memory. One type of rule memory entry contains an operator and a pointer to a criterion memory entry. The operator defines a comparison operation to be performed, such as EQUAL (exact match) or LESS THAN. The criterion memory entry contains one or more values to be used as comparands on one side of the comparison, where corresponding values from a received packet appear on the other side of the comparison. For example, one comparand from criterion memory may represent a source address. This value is compared with the value appearing in the source address field of received packets.

[0008] Control logic responds to packet classification requests to retrieve a rule memory entry from the rule memory, retrieve the criterion memory entry identified by the criterion memory pointer in the rule memory entry, and perform the operation specified by the operator in the rule memory entry on the values in the criterion memory entry and corresponding values included in the classification request. This procedure is repeated for a sequence of rule memory entries until a certain ending condition is encountered, whereupon a packet classification result is generated reflecting the result of the classification operations. This result is provided to a packet processor to take the appropriate action based on the classification result.
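For illustration, the retrieve-and-compare loop described above can be modeled in C roughly as follows. This is a sketch only: the type names, field widths, and the stop-at-first-match behavior are assumptions, each real criterion memory entry holds several comparands rather than one, and only the two operators named above are modeled.

    /* Minimal software model of the classification loop described above.
     * Illustrative sketch only: the patent describes a hardware engine;
     * names, widths, and stop-at-first-match behavior are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { OP_EQUAL, OP_LESS_THAN } op_t;  /* operators named above */

    typedef struct {
        uint32_t value;        /* comparand held in a criterion memory entry */
    } cm_entry_t;

    typedef struct {
        op_t     op;           /* comparison operation to perform            */
        uint16_t cm_index;     /* pointer to a criterion memory entry        */
        bool     done;         /* marks the last rule of the sequence        */
    } rm_entry_t;

    /* Walk the rules starting at 'root'; the request value is on the left
     * of each comparison and the criterion value on the right. */
    static bool classify(const rm_entry_t *rm, const cm_entry_t *cm,
                         uint16_t root, uint32_t request_value)
    {
        for (uint16_t i = root; ; i++) {
            uint32_t criterion = cm[rm[i].cm_index].value;
            bool match = (rm[i].op == OP_EQUAL) ? (request_value == criterion)
                                                : (request_value <  criterion);
            if (match)
                return true;        /* report a match to the packet processor */
            if (rm[i].done)
                return false;       /* ending condition reached with no match */
        }
    }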
[0009] Other aspects, features, and advantages of the present invention are disclosed in the detailed description that follows.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING

[0010] Figure 1 is a block diagram of a network switch incorporating a packet classification engine in accordance with the present invention;
Figure 2 is a block diagram of a line interface in the network switch of Figure 1;
Figure 3 is a block diagram of a packet forwarding engine on the line interface of Figure 2;
Figure 4 is a block diagram of a packet header distributor application-specific integrated circuit (ASIC) in the forwarding engine of Figure 3;
Figure 5 is a block diagram of a route and classification engine in the packet header distributor ASIC of Figure 4;
Figure 6 is a diagram of the structure of a route and classification request passed to the route and classification engine of Figure 5;
Figure 7 is a diagram of the structure of a route and classification result provided by the route and classification engine of Figure 5;
Figure 8 is a diagram of the structure of a status indication provided by the route and classification engine of Figure 5;
Figure 9 is a block diagram of a packet classification engine (PCE) in the route and classification engine of Figure 5;
Figure 10 is a diagram of the structure of entries in a rule memory in the packet classification engine of Figure 9;
Figure 11 is a diagram of the structure of entries in a first criterion memory in the packet classification engine of Figure 9;
Figure 12 is a diagram of the structure of entries in a second criterion memory in the packet classification engine of Figure 9;
Figure 13 is a diagram of the structure of entries in a third criterion memory in the packet classification engine of Figure 9;
Figure 14 is a block diagram of a comparison logic block for a bank of criterion memory in the packet classification engine of Figure 9;
Figure 15 is a block diagram of a comparator logic block used in the comparison logic block of Figure 14; and
Figure 16 is a diagram illustrating how packet filtering information is created, distributed, and used by different processing elements in the switch of Figure 1.
DETAILED DESCRIPTION OF THE INVENTION

[0011] In Figure 1, a network switch 10 is shown as including a number of line interfaces 12 connected to respective network segments 14. The line interfaces 12 are connected to a switch fabric 16 used to provide connections among the line interfaces 12 for packet forwarding. The overall operation of the switch 10, including the dynamic configuration of the switch fabric 16, is controlled by a switch control 18. In general, the various network segments 14 may be of different types. For example, certain of the network segments 14 may be optical links operating at any of a variety of standard signalling rates, such as OC-3/STM-1 and OC-12/STM-4. Others of the network segments 14 may be non-optical links employing coaxial cable, for example, and carrying signals of different formats.

[0012] Each line interface 12 is of course designed for operation with the specific type of network segment 14 to which it connects. The primary tasks of each line interface 12 are to transfer packets or frames received from an attached network segment 14 to another line interface 12 via the switch fabric 16 for forwarding on a network segment 14 attached to the other line interface 12, and to receive packets from the other line interfaces 12 via the switch fabric 16 for forwarding on the attached network segment 14.

[0013] Figure 2 shows the structure of one type of line interface 12. This interface contains four separate optical interface ports, each including physical input/output and framing circuitry 20 and a forwarding engine 22. The forwarding engines 22 are all connected to switch fabric interface logic 24, which interfaces with the switch fabric 16 of Figure 1. The forwarding engines also interface with a line interface I/O processor (IOP) 26. Timing control logic 28 and DC power circuitry 30 are also included.

[0014] Each forwarding engine 22 provides a bidirectional data path between a connected physical I/O block 20 and the switch fabric interface 24. Received packets are segmented into multiple fixed-size ATM-like cells for transfer through the switch fabric 16 of Figure 1 to another line interface 12. Cells received from the switch fabric 16 via the switch fabric interface 24 are reassembled into packets for outgoing transfer to the connected physical I/O block 20.

[0015] The IOP 26 is a general-purpose processor that performs background functions, i.e. functions that support the forwarding of packets but are not carried out on a per-packet basis. One function performed by the IOP 26 is receiving packet forwarding information and packet filtering information from the switch control 18 of Figure 1, and distributing the information to the forwarding engines 22. This process is described below.
[0016] Figure 3 shows a block diagram of a forwarding engine 22. An inbound segmentation-and-reassembly (SAR) logic block 40 provides a data path from a physical I/O block 20 to the switch fabric 16 of Figure 2, and an outbound SAR logic block 42 provides a data path from the switch fabric 16 to the respective physical I/O block 20. Each SAR 40, 42 is coupled to a respective control memory 44, 46 and packet memory 48, 50 used in performing the segmentation or reassembly function.

[0017] The SAR devices 40 and 42 are both connected to a packet header distributor (PHD) application-specific integrated circuit (ASIC) 52 via a 64-bit PCI bus 54. As described in more detail below, the PHD ASIC 52 provides FIFO queue interfaces between the PCI bus 54 and a separate 64-bit bus 56. The bus 56 connects the PHD ASIC 52 with a forwarding processor (FP) 58 and forwarding processor memory 60. The PHD ASIC 52 is also connected to the IOP 26 of Figure 2 by a separate bus 62.
[0018] Figure 4 shows the structure of the PHD 52 of Figure 3. A set of receive queues or RX queues 64 is used for temporary buffering of packet headers and other messages bound for the FP 58. As shown, there are four RX queues 64, two queues for high-priority traffic and two queues for low-priority traffic. An example of high-priority traffic is traffic having a high Quality of Service (QOS) guarantee, such as a committed rate. Low-priority traffic is traffic having a lower QOS or no QOS guarantee, such as best-efforts traffic. For each priority level, there is one queue (labeled "0") for traffic originating from the inbound SAR 40, and another queue (labeled "1") for traffic originating from the outbound SAR 42. A set of transmit queues or TX queues 66 is used for temporary buffering of packet headers and other messages bound for the SARs 40, 42 from the FP 58. A route and classification engine 68 performs a route lookup and various packet filtering checks on behalf of the FP 58. The packet filtering operation is described below. The route and classification engine 68 receives status information from the queues 64, 66 via signal lines 69, and makes this information available to the FP 58 in a manner described below.
[0019] The overall operation of a forwarding engine 22 will be described with reference to Figure 3 and Figure 4. Packets are received by the inbound SAR 40 from the associated physical-layer circuitry 20 of Figure 2, and are stored in the packet memory 48. The inbound SAR 40 transfers the packet headers to an appropriate one of the RX queues 64 in the PHD 52. The FP 58 polls the PHD 52 to determine queue status, and retrieves the packet headers from the RX queues 64 as appropriate. As part of the header processing, the FP 58 sends certain information elements from each header to the route and classification engine 68 in a route and classification request. The route and classification engine 68 performs a route lookup and various packet filtering checks against the header elements in the request, and places the results of these checks into a result queue (described below). The FP 58 obtains the route lookup and classification results from the result queue, and uses these results to create a new header for the packet. The new header is transferred back to the PHD 52 via one of the TX queues 66, along with information identifying the internal circuit on which the packet should be forwarded after segmentation. The inbound SAR 40 retrieves the new header, places it in the packet memory 48 with the payload portion of the received packet, segments the new packet and transfers the resulting cells to the switch fabric 16 of Figure 1 on the internal circuit specified by the FP 58.
[0020] In the outbound direction, the outbound SAR 42 receives packets from the switch fabric 16 of Figure 1, and reassembles these packets into the packet memory 50. Packet headers are sent to the PHD 52, and retrieved from the PHD 52 by the FP 58. For most packets, the route lookup and filtering checks will have already been performed during inbound processing, so these operations are not repeated. Some protocols, however, do require lookups and filtering for both inbound and outbound packets, and therefore these operations are optionally performed by the FP 58 in conjunction with the route and classification engine 68. If appropriate, the FP 58 formulates a new header for the packet, based in part on the identity of the internal circuit on which the segmented outbound packet is received. This new header is written to the PHD 52, along with transmit circuit information. The PHD 52 transfers the new header to the outbound SAR 42. The outbound SAR 42 places the new header in the packet memory 50 along with the packet payload, and transmits the packet to the associated physical layer interface 20 of Figure 2.
[0021] Figure 5 shows the structure of the route and classification engine 68. Requests from the FP 58 of Figure 3 are placed into a single request queue 70, and results are returned in a single result queue 72. Each queue 70 and 72 holds up to 16 request/result entries. A route lookup engine (RLE) 74 performs route lookups, typically based on a destination address (DA) included in the header. A packet classification engine (PCE) 76 performs packet filtering checks, based on specified information included in the packet header. The operation of the PCE 76 is described in more detail below. Input FIFO buffers 78 are placed between the request queue 70 and the RLE 74 and PCE 76, and output FIFO buffers 80 are placed between the RLE 74 and PCE 76 and the result queue 72. The FIFOs 78 and 80 provide a measure of decoupling between the processing performed by the RLE 74 and the processing performed by the PCE 76. A multiplexer 81 enables the FP 58 to read either the result queue 72, or status information including status from the request queue 70, the result queue 72, and the status appearing on the signal lines 69 of Figure 4. The structure of these entries is described below.
[0022] Figure 6 shows the structure of the route and classification request that is passed to the PCE 76 and RLE 74 via the request queue 70 of Figure 5. The size of the request is four 64-bit words. The various fields are defined as follows:
FIELD NAME              DESCRIPTION
                        RLE Entry type: 0 = Node, 1 = Leaf
Ind.                    RLE Indirect route: 1 = Indirect, 0 = Direct
Res.                    Unused reserved bit
Order                   No. of DA bits to add to RLE pointer
RLE Ptr.                Base address of RLE entry to which DA is added (based on Order)
PCE Root 0              Starting address for PCE rule 0
PCE Root 1              Starting address for PCE rule 1
                        Set to zero, used for alignment checking
Req. ID                 Request identifier, copied to result to enable matching with request
IP TOS                  The contents of the Type of Service (TOS) field of the received packet
IP Protocol             The contents of the Protocol field of the received packet
TCP Flags               The contents of the TCP Flags field of the received packet
IP Source Address       The IP Source Address of the received packet
IP Dest. Addr.          The IP Destination Address of the received packet
TCP/UDP Source Port     The identifier of the TCP/UDP port on which the packet was received
TCP/UDP Dest. Port      The identifier of the TCP/UDP port for which the received packet is destined
Reserved                Unused reserved bits
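For illustration, the fields listed above could be collected on the software side in a structure along the following lines. The exact bit packing within the four 64-bit words is defined by Figure 6 and is not reproduced here, so the field names, widths, and ordering below are assumptions made for this sketch.

    /* Sketch of a route and classification request. The patent defines the
     * request as four 64-bit words with the fields listed above; the
     * layout here is an illustrative assumption, not the Figure 6 format. */
    #include <stdint.h>

    struct rc_request {
        uint8_t  rle_entry_type;  /* 0 = Node, 1 = Leaf                          */
        uint8_t  ind;             /* RLE indirect route: 1 = Indirect, 0 = Direct */
        uint8_t  order;           /* number of DA bits to add to the RLE pointer  */
        uint32_t rle_ptr;         /* base address of the RLE entry                */
        uint16_t pce_root0;       /* starting address for PCE rule 0              */
        uint16_t pce_root1;       /* starting address for PCE rule 1              */
        uint16_t req_id;          /* copied into the result for matching          */
        uint8_t  ip_tos;          /* TOS field of the received packet             */
        uint8_t  ip_protocol;     /* Protocol field of the received packet        */
        uint8_t  tcp_flags;       /* TCP Flags field of the received packet       */
        uint32_t ip_src;          /* IP Source Address                            */
        uint32_t ip_dst;          /* IP Destination Address                       */
        uint16_t src_port;        /* TCP/UDP source port                          */
        uint16_t dst_port;        /* TCP/UDP destination port                     */
    };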
[0023] As shown in the above table, there is a provision for two separate sets of classification checks, one beginning at the address labeled "PCE Root 0" and the other at the address labeled "PCE Root 1". The significance of these separate starting addresses is described below.

[0024] As previously noted, the appropriate fields of the request are provided to the respective input FIFOs 78 for the RLE 74 and PCE 76 of Figure 5. Some of the fields, such as the Req. ID and the IP Dest. Addr., are provided to both the RLE 74 and the PCE 76. Other fields are provided to only one or the other. The use of the fields routed to the PCE in particular is described below.
[0025] Figure 7 and Figure 8 show the respective structures of the two different types of entries that are read from the route and classification engine 68 of Figure 4. Figure 7 shows a result entry, which is obtained from the result queue 72 of Figure 5 and conveys the result of a classification search. Figure 8 shows a status entry used to convey status information to the FP 58 of Figure 3.

[0026] The fields of the result entry shown in Figure 7 are defined as follows:
FIELD NAME              DESCRIPTION
                        Type: 0 = PCE Result, 1 = PCE Status
Req. ID                 Request Identifier (from the request)
                        PCE Match NOT Found: 0 = Match Found, 1 = Match NOT Found
                        RLE Indirect Route: 0 = Normal, 1 = Indirect
                        RLE Long Search: 0 = Short, 1 = Long
                        Error indicator: 0 = Normal, 1 = Error
                        Zero padding
                        Match in PCE Root 1 (valid only if P = 0): 0 = Match in root 0, 1 = Match in root 1
                        Depth of route lookup search
PCE Match Addr.         Address of last rule checked in PCE
RLE Flags               Flags from RLE table entry
RLE Next Hop Ptr.       Pointer from RLE table entry
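The information carried by a result entry can be sketched as a structure like the one below. The names and widths are assumptions for illustration; the actual bit layout is defined by Figure 7 and is not reproduced here.

    /* Sketch of the contents of a result entry (Figure 7). Names and
     * widths are illustrative assumptions, not the Figure 7 bit layout. */
    #include <stdbool.h>
    #include <stdint.h>

    struct rc_result {
        bool     is_status;        /* Type: 0 = PCE Result, 1 = PCE Status        */
        uint16_t req_id;           /* request identifier copied from the request  */
        bool     pce_no_match;     /* 1 = no rule matched                         */
        bool     rle_indirect;     /* 1 = indirect route                          */
        bool     rle_long_search;  /* 1 = long search                             */
        bool     error;            /* 1 = error                                   */
        bool     match_in_root1;   /* valid only when a match was found           */
        uint8_t  lookup_depth;     /* depth of the route lookup search            */
        uint16_t pce_match_addr;   /* address of last rule checked in the PCE     */
        uint8_t  rle_flags;        /* flags from the RLE table entry              */
        uint32_t rle_next_hop;     /* next-hop pointer from the RLE table entry   */
    };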
[0027] The fields of the status entry shown in Figure 8 are defined as follows:

FIELD NAME              DESCRIPTION
Zero                    Unused, set to zero
TX Message              Remaining space in forwarding-processor-to-IOP message queue
RCE Results             Number of pending entries in result queue 72. Normally zero, because status is inserted only when the queue is empty.
RCE Requests            Number of empty entries in request queue 70
Tx-0, Tx-1              Number of empty entries in TX queues 66
Hi-0, Hi-1, Lo-0, Lo-1  Number of empty entries in RX queues 64
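A corresponding sketch of the queue-status information carried by a status entry is shown below; the names and integer widths are assumptions made for illustration.

    /* Sketch of the queue-status information in a status entry (Figure 8).
     * Names and widths are illustrative assumptions. */
    #include <stdint.h>

    struct rc_status {
        uint16_t tx_message_space;  /* free space in the FP-to-IOP message queue        */
        uint8_t  rce_results;       /* pending entries in result queue 72               */
        uint8_t  rce_requests;      /* empty entries in request queue 70                */
        uint8_t  tx_free[2];        /* empty entries in TX queues 66 (Tx-0, Tx-1)       */
        uint8_t  rx_free[4];        /* empty entries in RX queues 64 (Hi-0, Hi-1, Lo-0, Lo-1) */
    };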
[0028] The general operation of the route and classification engine 68 will be described with reference to Figure 5 through Figure 8. The FP 58 of Figure 3 writes lookup and classification requests to the request queue 70. When a request reaches the front of the request queue 70, different information elements from the request are written simultaneously into the respective input FIFOs 78 for the RLE 74 and the PCE 76. The RLE 74 and PCE 76 operate on the separate pieces of each request independently, and in general finish their respective processing operations for a given request at different times. The results of these operations are written to the output FIFOs 80. When both sets of results for a given packet have reached the front of the output FIFOs 80, a single combined result is written to the result queue 72. The combined results are read by the FP 58 and used to formulate new packet headers and circuit information for the SARs 40 and 42 of Figure 3, as discussed above.
[0029] More particularly, the FP 58 uses the route and classification engine 68 in a batch fashion. When there is sufficient room in the request queue 70, a burst of requests is written to it. Results are then read from the result queue 72 in blocks of four entries, where each block contains one of the following combinations:

1. 4 result entries
2. 3 result entries followed by 1 status entry
3. 2 result entries followed by 2 status entries
4. 1 result entry followed by 3 status entries
5. 4 status entries

[0030] The FP 58 will generally issue read commands until the result queue 72 is empty, which is inferred whenever one or more status entries are included in the result block. The FP 58 then uses these results while the route and classification engine 68 processes the next batch of requests. The FP 58 uses the status information to manage the flow of requests, so that the RLE 74 and PCE 76 are kept busy and the queues 70 and 72 and FIFOs 78 and 80 are prevented from overflowing.
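The read-until-status discipline described above can be sketched as a small loop. The entry layout and the helper functions below are assumptions made for illustration; only the rule that a status entry marks the end of pending results comes from the text.

    /* Sketch of the FP read loop: result blocks of four entries are read
     * until a block contains a status entry, indicating that the result
     * queue is empty. Entry layout and helper names are assumptions. */
    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    struct rc_entry {
        bool     is_status;    /* Type bit: 0 = result, 1 = status   */
        uint64_t payload;      /* remaining result or status fields  */
    };

    /* Hypothetical helper: one burst read of a four-entry result block. */
    extern void read_result_block(struct rc_entry block[4]);

    /* Hypothetical helper: consume a single lookup/classification result. */
    extern void handle_result(const struct rc_entry *e);

    static void drain_results(void)
    {
        for (;;) {
            struct rc_entry block[4];
            read_result_block(block);
            for (size_t i = 0; i < 4; i++) {
                if (block[i].is_status)
                    return;            /* result queue is now empty */
                handle_result(&block[i]);
            }
        }
    }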
[0031] It will be noted that in the illustrated embodiment, there is only one status entry that can be read, and the multiple status entries in a result block represent multiple reads of this single entry. In alternative embodiments it may be useful to provide additional, lower-priority information in the second through fourth status entries, for example for statistics gathering purposes or other background processing.

[0032] One significant advantage of appending status information to results is improved efficiency in using the FP bus 56. Whenever the FP 58 issues a read for results, either useful results or useful status information is returned. Additionally, the result block is returned in burst fashion, so that overhead associated with reading is reduced. Also, the FP 58 obtains information about the queues around the RLE 74 and PCE 76, and about the RX queues 64 and TX queues 66, in a single read transaction.
[0033] Figure 9 shows the structure of the PCE 76 of Figure 5. Data representing filters and bindings (discussed below) are stored in a rule memory (RM) 82 and a criterion memory (CM) 84. The CM 84 includes three commonly addressed memories CM0 86, CM1 88 and CM2 90. Three comparison logic blocks 92, 94 and 96 are associated with respective ones of the criterion memories 86, 88 and 90. Addressing and control logic 98 decodes requests received from the request queue 70 of Figure 5, generates addresses for the RM 82 and the CM 84, sequences through multiple rules as required by each request, and generates results that are passed back to the result queue 72 of Figure 5. The addressing and control logic 98 also interfaces to the IOP 26 of Figure 2 to enable the reading and writing of the RM 82 and CM 84 by the IOP 26. Bus transceivers 100 provide the necessary data path between the IOP 26 and the RM 82 and CM 84. An AND gate 102 provides a single MATCH signal when corresponding MATCHn outputs from the comparison logic blocks 92, 94 and 96 are all true.
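As a rough software analogue of the AND gate 102, the sketch below combines the per-bank match outputs for a criterion row into a single MATCH indication; the function names are assumptions, and the hardware's parallel comparisons are modeled here as sequential calls.

    /* Sketch of the MATCH combination performed by AND gate 102: MATCH is
     * asserted only when the comparison blocks for CM0 86, CM1 88 and
     * CM2 90 all report a match. Function names are assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical per-bank comparison of the request fields routed to
     * bank n against the criterion entry at 'row'. */
    extern bool bank_match(int bank, uint16_t row);

    static bool criterion_row_matches(uint16_t row)
    {
        bool match = true;
        for (int bank = 0; bank < 3; bank++)   /* CM0, CM1, CM2 */
            match = match && bank_match(bank, row);
        return match;                          /* output of AND gate 102 */
    }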
[0034] Rule sets for packet filtering are typically originated by a Network Management Station (NMS), but can also be dynamically assigned by the FP 58 based on identified flows. Part or all of the following information is provided by the NMS or FP 58 for filters: IP Destination Address with mask; IP Source Address with mask; IP protocol identifier; TCP/UDP Source Port and Destination Port identifiers; IP Type of Service identifier and mask; and miscellaneous flags. The various information elements from a filter are compared with corresponding elements from each received packet in order to determine whether the packet matches the filter criteria. If so, some specific action for the filter is taken, such as intentionally discarding a packet. If not, some default action is typically taken, such as allowing the packet to proceed toward its destination.
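For illustration, the per-filter information listed above could be gathered in a structure such as the following; the struct and field names are assumptions, and the masks are simplified here to plain bit masks.

    /* Sketch of the per-filter information provided by the NMS or FP 58.
     * Names and widths are illustrative assumptions. */
    #include <stdint.h>

    struct filter_spec {
        uint32_t ip_dst, ip_dst_mask;  /* IP Destination Address with mask       */
        uint32_t ip_src, ip_src_mask;  /* IP Source Address with mask            */
        uint8_t  ip_protocol;          /* IP protocol identifier                 */
        uint16_t src_port, dst_port;   /* TCP/UDP Source and Destination Ports   */
        uint8_t  tos, tos_mask;        /* IP Type of Service identifier and mask */
        uint8_t  flags;                /* miscellaneous flags                    */
    };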
[0035] Traditionally, packet filters are represented as an ordered list of comparison sets that are searched linearly. In the PCE 76, the filter elements are divided into criteria (the comparison values) and rules (the list itself and the operators to be used for each comparison). This separation of rules and criteria is reflected in the use of separate rule memory (RM) 82 and criterion memory (CM) 84. The memories 82 and 84 are separately optimized for their respective functions, thus enhancing efficiency and performance. Also, entries within the CM 84 can be referred to by multiple rules in the RM 82, further enhancing storage efficiency.

[0036] The RM 82 contains an array of rule memory entries, each of which may be one of two types. A first type contains a set of operators and a pointer to a row of CM 84 that stores comparands for a corresponding filter. A second type contains a pointer to another rule memory entry. These entries are used to perform jumps between non-contiguous segments in a set of rules being searched sequentially. In the illustrated embodiment, the RM 82 can contain up to 16K entries.
[0038] The CM 84 is segmented into three separate memories CM0 86, CM1 88 and CM2 90, each of which can contain up to 4K entries in the illustrated embodiment. The organization of the CM 84 exploits a hierarchy that is inherent in IP packet classification. Because filtering on certain fields is usually accompanied by filtering based on other fields as well, it is reasonable to restrict which fields are stored in the separate memories CM0, CM1, and CM2. These restrictions further enhance storage efficiency. The most commonly filtered fields, Source Address and Destination Address, are supported in all three memories CM0 86, CM1 88 and CM2 90. As described below, other fields are supported only in CM1 88 and/or CM2 90. This architecture maximizes the flexibility with which space in the CM 84 can be allocated, while at the same time enabling powerful parallel searches. The structure and use of CM 84 are described in more detail below.
[0039] Figure 10 shows the structure of the entries in the RM 82 of Figure 9, which are also referred to as rule memory entries. Each 39-bit entry has a 1-bit Type field. If this field is 1, then bits 13-0 of the entry contain a pointer to another location in the RM 82, i.e., a pointer to another rule memory entry. If this field is 0, the entry contains information for performing a filter check. In this case, bits 11-0 contain an address of a row of CM 84 where operands for the check are to be found, and bits 35-12 contain encodings of operations to be performed on respective operands and fields from the request. These operations are described in more detail below. Bit 36 is a Carry bit used to form compound rules, for example to perform range checking. If the carry bit is zero, the rule is evaluated by itself. If the carry bit is one, the rule evaluates as true only if the next rule also evaluates as true. Bit 37 is a Done bit indicating that the last of a string of rules to be checked as part of a request has been reached.
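The bit layout just described can be illustrated with a small decode sketch. The bit positions for the pointer, CM address, operator, Carry and Done fields come from the text above; the position of the Type bit (taken here as bit 38 of the 39-bit entry) and all names are assumptions.

    /* Sketch decoding a 39-bit rule memory entry held in the low bits of a
     * uint64_t. Field positions follow the text above; the Type bit is
     * assumed to be bit 38, and the names are illustrative assumptions. */
    #include <stdbool.h>
    #include <stdint.h>

    struct rm_decoded {
        bool     is_pointer;  /* Type: 1 = pointer to another rule entry         */
        uint16_t next_rule;   /* bits 13-0 when Type = 1                         */
        uint16_t cm_row;      /* bits 11-0 when Type = 0: row of CM 84           */
        uint32_t operators;   /* bits 35-12: eight 3-bit criterion operator codes */
        bool     carry;       /* bit 36: Carry, combine with the next rule       */
        bool     done;        /* bit 37: Done, last rule of the string           */
    };

    static struct rm_decoded decode_rm_entry(uint64_t e)
    {
        struct rm_decoded d;
        d.is_pointer = (e >> 38) & 1;                     /* Type (assumed bit 38) */
        d.next_rule  = (uint16_t)(e & 0x3FFF);            /* bits 13-0             */
        d.cm_row     = (uint16_t)(e & 0x0FFF);            /* bits 11-0             */
        d.operators  = (uint32_t)((e >> 12) & 0xFFFFFF);  /* bits 35-12            */
        d.carry      = (e >> 36) & 1;                     /* bit 36                */
        d.done       = (e >> 37) & 1;                     /* bit 37                */
        return d;
    }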
[0040] The criterion operator field contains eight 3-bit logical operator codes. Each operator code specifies an operation to be performed on corresponding comparands selected from the request and the criterion memory entry. The fields of the criterion memory entry are described below. The assignment of criterion operator bits to comparands is as follows:

35-33    CM0 SA/DA field
32-30    CM1 Protocol field
29-27    CM1 Source Port field
26-24    CM1 SA/DA or DP field
23-21    CM2 Protocol field
20-18    CM2 TOS or TOS with mask field
17-15    CM2 Source port or Flags with mask field
14-12    CM2 SA/DA or SP or DP field
[0041] The operator code specifies a comparison to be performed, where the comparand from the request is on the left of the operator and the comparand from the criterion memory entry is on the right. For example, if the operator is ">", then the expression evaluated is (request data > criterion data). The operator codes are as follows:

Greater than
Less than
Not Equal
Don't care (i.e. force TRUE regardless of comparand values)
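The selection and application of an operator code can be sketched as follows. The enum constants are placeholders because the actual 3-bit encodings are not reproduced in the text above; only the slot-to-bit assignment and the left/right convention come from the text.

    /* Sketch of the eight 3-bit operator codes in bits 35-12 of a rule
     * memory entry and of applying one code: the request comparand is the
     * left operand, the criterion comparand the right. Enum values are
     * placeholders; the real 3-bit encodings are not reproduced here. */
    #include <stdbool.h>
    #include <stdint.h>

    typedef enum { OP_DONT_CARE, OP_EQUAL, OP_NOT_EQUAL,
                   OP_GREATER_THAN, OP_LESS_THAN } criterion_op;

    /* Slot 0 corresponds to bits 35-33 (CM0 SA/DA field) and slot 7 to
     * bits 14-12 (CM2 SA/DA or SP or DP field), per the table above.
     * 'operators_35_12' holds bits 35-12 right-aligned in 24 bits. */
    static criterion_op op_for_slot(uint32_t operators_35_12, int slot)
    {
        return (criterion_op)((operators_35_12 >> (21 - 3 * slot)) & 0x7);
    }

    static bool apply_op(criterion_op op, uint32_t request_data, uint32_t criterion_data)
    {
        switch (op) {
        case OP_EQUAL:        return request_data == criterion_data;
        case OP_NOT_EQUAL:    return request_data != criterion_data;
        case OP_GREATER_THAN: return request_data >  criterion_data;
        case OP_LESS_THAN:    return request_data <  criterion_data;
        default:              return true;   /* don't care: force TRUE */
        }
    }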
[0042] The criterion operators are used to configure logic within the comparison logic blocks 92, 94, and 96 in a manner described below.

[0043] Figure 11 shows the structure of the entries in CM0 86 of Figure 9. Each entry is 38 bits wide. A single bit, bit 37, is used to distinguish between two possible configurations for the entry, as either a 32-bit source address (SA) or a 32-bit destination address (DA). Bits 31-0 contain an SA or DA value as required by a corresponding filter. Bits 36-32 contain a 5-bit encoded mask value that is used to limit the extent of the comparison between the SA/DA in the entry and the SA/DA of the request. The use of the mask is described in more detail below.
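One way such a masked address comparison could look in software is sketched below. The interpretation of the 5-bit encoded mask as a prefix length, and the polarity of the SA/DA selector bit, are assumptions made only for this sketch; the actual encoding is described later in the specification.

    /* Sketch of a CM0-style masked SA/DA comparison. The reading of the
     * 5-bit encoded mask as a count of leading bits to compare, and the
     * meaning of the SA/DA bit, are assumptions for illustration. */
    #include <stdbool.h>
    #include <stdint.h>

    struct cm0_entry {
        bool     is_da;      /* bit 37: SA or DA configuration (polarity assumed) */
        uint8_t  enc_mask;   /* bits 36-32: 5-bit encoded mask                    */
        uint32_t addr;       /* bits 31-0: SA or DA comparand                     */
    };

    static bool cm0_addr_match(const struct cm0_entry *e, uint32_t request_addr)
    {
        /* Assumed interpretation: enc_mask gives the number of leading
         * bits (0..31) that take part in the comparison. */
        uint32_t mask = (e->enc_mask == 0)
                            ? 0
                            : ~((uint32_t)0) << (32 - e->enc_mask);
        return (request_addr & mask) == (e->addr & mask);
    }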
[0044] Figure 12 shows the structure of the entries in CM1 88 of Figure 9. Each entry is 47 bits wide. Four different configurations are possible, as indicated by bits 46-45. The PTCL field identifies an IP protocol in all four configurations. The 16-bit SP and DP fields in configurations 2 and 3 represent source port and destination port identifiers, respectively. The contents of bits 36-32 are undefined in configurations 2 and 3.

[0045] Figure 13 shows the structure of the entries in CM2 90 of Figure 9. Each entry is 51 bits wide. Eight different configurations are possible, as indicated by bits 50-48. The TOS field of configurations 2 through 7 identifies an IP Type of Service. In configurations 3 through 7, the TOS Mask field contains an 8-bit mask used to limit the extent of the TOS comparison, as described below. The 8-bit FLAGS field contains flag values to be compared against corresponding flag bits from TCP/UDP packets. The 8-bit FLGS MSK field is used to limit the extent of the FLAGS comparison, as described below.
[0046] Figure 14 shows the general structure of the comparison logic blocks 92, 94 and 96.
