`Defendant’s Preliminary Invalidity Contentions
`Orckit Corporation v. Cisco Systems, Inc., 2:22-cv-00276-JRG-RSP
`____________________________________________________________________________________________________________
`
`Chart for U.S. Patent 10,652,111 (“the ’111 Patent”)
`U.S. Patent Publication No. 2012/0300615 to Kempf et al. (“Kempf”)
`
As shown in the chart below, all Asserted Claims of the ’111 Patent are invalid under (1) AIA 35 U.S.C. § 102(a) because Kempf
`meets each element of those claims, and/or (2) 35 U.S.C. § 103 because Kempf renders those claims obvious either alone, or in
`combination with the knowledge of a person having ordinary skill in the art, and in further combination with the references
`specifically identified below and in the following claim chart and/or one or more references identified in Defendant’s Preliminary
Invalidity Contentions. The following quotations and diagrams come from Kempf, titled “Implementing EPC In A Cloud Computer
With OpenFlow Data Plane,” which was filed on June 28, 2012, and published on November 29, 2012.
`
`Motivations to combine the disclosures in Kempf with disclosures in other publications known in the art, as explained in this chart,
`include at least the similarity in subject matter between the references to the extent they concern methods relating to routing certain
`network traffic to entities for further analysis and inspection. Insofar as the references cite other patents or publications, or suggest
`additional changes, one of ordinary skill in the art would look beyond a single reference to other references in the field.
`
`These invalidity contentions are based on Defendant’s present understanding of the Asserted Claims, and Orckit’s apparent
`construction of the claims in its November 3, 2022 Disclosure of Asserted Claims and Infringement Contentions Pursuant to P.R. 3-1,
`and Orckit’s January 19, 2023 First Amended Disclosure of Asserted Claims and Infringement Contentions Pursuant to P.R. 3-1
(Orckit’s “Infringement Disclosures”), which are deficient at least insofar as they fail to cite any documents or identify accused
structures, acts, or materials in the Accused Products with particularity. Defendant does not agree with Orckit’s application of the
`claims, or that the claims satisfy the requirements of 35 U.S.C. § 112. Defendant’s contentions herein are not, and should in no way
`be seen as, admissions or adoptions as to any particular claim scope or construction, or as any admission that any particular element is
`met by any accused product in any particular way. Defendant objects to any attempt to imply claim construction from this chart.
`Defendant’s prior art invalidity contentions are made in a variety of alternatives and do not represent Defendant’s agreement or view
`as to the meaning, definiteness, written description support for, or enablement of any claim contained therein.
`
`1
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 1 of 1100
`
`
`
The following contentions are subject to revision and amendment pursuant to Federal Rule of Civil Procedure 26(e), the Local Rules,
and the Orders of record in this matter, and are subject to further investigation and discovery regarding the prior art and the Court’s
construction of the claims at issue.
`
`
No.    ʼ111 Patent Claim 1    Kempf

1[preamble]    A method for use with a packet network including a network node for transporting packets
between first and second entities under control of a controller that is external to the network
node, the method comprising:
`Kempf discloses a method for use with a packet network including a network node for
`transporting packets between first and second entities under control of a controller that is
`external to the network node, the method comprising.
`
`For example, Kempf discloses a method in which a network element such as a router,
`switch, or bridge communicatively interconnects other elements of a network for data
`packet transport. Kempf further discloses a method in which the network element is
`controlled by an external OpenFlow controller. Thus, at least under the apparent claim scope
`alleged by Orckit’s Infringement Disclosures, this limitation is met.
`
`Kempf at Abstract (“A method implements a control plane of an evolved packet core (EPC)
`of a long term evolution (LTE) network in a cloud computing system. A cloud manager
monitors resource utilization of each control plane module and the control plane traffic
handled by each control plane module. The cloud manager detects a threshold level of
`resource utilization or traffic load for one of the plurality of control plane modules of the
`EPC. A new control plane module is initialized as a separate virtual machine by the cloud
`manager in response to detecting the threshold level. The new control plane module signals
`the plurality of network elements in the data plane to establish flow rules and actions to
`establish differential routing of flows in the data plane using the control protocol, wherein
`flow matches are encoded using an extensible match structure in which the flow match is
`encoded as a type-length-value (TLV).”)
`
`Kempf at [0004] (“The GPRS tunneling protocol (GTP) is an important communication
protocol utilized within the GPRS core network. GTP enables end user devices (e.g.,
`cellular phones) in a GSM network to move from place to place while continuing to connect
`to the Internet. The end user devices are connected to other devices through a gateway
`GPRS support node (GGSN). The GGSN tracks the end user device's data from the end user
device's serving GPRS support node (SGSN) that is handling the session originating from
`the end user device.”)
`
`
`2
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 2 of 1100
`
`
`
Kempf at [0033] (“As used herein, a network element (e.g., a router, switch, bridge, etc.) is a
piece of networking equipment, including hardware and software, that communicatively
interconnects other equipment on the network (e.g., other network elements, end stations,
etc.). Some network elements are "multiple services network elements" that provide support
for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation,
session border control, multicasting, and/or subscriber management), and/or provide support
for multiple application services (e.g., data, voice, and video). Subscriber end stations (e.g.,
servers, workstations, laptops, palm tops, mobile phones, smart phones, multimedia phones,
Voice Over Internet Protocol (VOIP) phones, portable media players, GPS units, gaming
systems, set-top boxes (STBs), etc.) access content/services provided over the Internet and/or
content/services provided on virtual private networks (VPNs) overlaid on the Internet. The
content and/or services are typically provided by one or more end stations (e.g., server end
stations) belonging to a service or content provider or end stations participating in a peer to
peer service, and may include public web pages (free content, store fronts, search services,
etc.), private web pages (e.g., username/password accessed web pages providing email
services, etc.), corporate networks over VPNs, IPTV, etc. Typically, subscriber end stations
are coupled (e.g., through customer premise equipment coupled to an access network (wired
or wirelessly)) to edge network elements, which are coupled (e.g., through one or more core
network elements to other edge network elements) to other end stations (e.g., server end
stations).”)
`
`Kempf at [0044] (“FIG. 1 is a diagram of one embodiment of an example network with an
`OpenFlow switch, conforming to the OpenFlow 1.0 specification. The OpenFlow 1.0
`protocol enables a controller 101 to connect to an OpenFlow 1.0 enabled switch 109 using a
`secure channel 103 and control a single forwarding table 107 in the switch 109. The
`controller 101 is an external software component executed by a remote computing device
that enables a user to configure the OpenFlow 1.0 switch 109. The secure channel 103 can
`be provided by any type of network including a local area network (LAN) or a wide area
`network (WAN), such as the Internet.”)
`
`
`Kempf at Figure 1
`
`
`3
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 3 of 1100
`
`
`
`
`
`
`
`Kempf at Figure 2
`
`
`4
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 4 of 1100
`
`
`
`
1[a]    sending, by the controller to the network node over the packet network, an instruction
and a packet-applicable criterion;
`
`
`
`
`Kempf discloses sending, by the controller to the network node over the packet network, an
`instruction and a packet-applicable criterion.
`
`For example, Kempf discloses sending by the OpenFlow controller to the network element a
`rule defining matches for fields in packet headers. Thus, at least under the apparent claim
`scope alleged by Orckit’s Infringement Disclosures, this limitation is met.
`
`Kempf at [0044] (“FIG. 1 is a diagram of one embodiment of an example network with an
`OpenFlow switch, conforming to the OpenFlow 1.0 specification. The OpenFlow 1.0
`5
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 5 of 1100
`
`
`
`No.
`
`ʼ111 Patent Claim 1
`
`Kempf
`protocol enables a controller 101 to connect to an OpenFlow 1.0 enabled switch 109 using a
`secure channel 103 and control a single forwarding table 107 in the switch 109. The
`controller 101 is an external software component executed by a remote computing device
that enables a user to configure the OpenFlow 1.0 switch 109. The secure channel 103 can
`be provided by any type of network including a local area network (LAN) or a wide area
`network (WAN), such as the Internet.”)
`
`Kempf at [0045] (“FIG. 2 is a diagram illustrating one embodiment of the contents of a flow
`table entry. The forwarding table 107 is populated with entries consisting of a rule 201
`defining matches for fields in packet headers; an action 203 associated to the flow match;
`and a collection of statistics 205 on the flow. When an incoming packet is received a lookup
`for a matching rule is made in the flow table 107. If the incoming packet matches a
`particular rule, the associated action defined in that flow table entry is performed on the
`packet.”)
`
`Kempf at [0046] (“A rule 201 contains key fields from several headers in the protocol stack,
`for example source and destination Ethernet MAC addresses, source and destination IP
`addresses, IP protocol type number, incoming and outgoing TCP or UDP port numbers. To
`define a flow, all the available matching fields may be used. But it is also possible to restrict
`the matching rule to a subset of the available fields by using wildcards for the unwanted
`fields.”)
`
`Kempf at [0047] (“The actions that are defined by the specification of OpenFlow 1.0 are
`Drop, which drops the matching packets; Forward, which forwards the packet to one or all
`outgoing ports, the incoming physical port itself, the controller via the secure channel, or the
local networking stack (if it exists). OpenFlow 1.0 protocol data units (PDUs) are defined
with a set of structures specified using the C programming language. Some of the more
commonly used messages are: report switch configuration message; modify state messages
(including a modify flow entry message and port modification message); read state
`messages, where while the system is running, the datapath may be queried about its current
`state using this message; and send packet message, which is used when the controller wishes
`to send a packet out through the datapath.”)
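For illustration only, the flow-table lookup that Kempf describes at [0045]-[0046] (a rule of header-field matches with wildcards for unwanted fields, and an action performed when an incoming packet matches) may be sketched as follows; the field names and table layout are illustrative assumptions, not structures taken from Kempf or the OpenFlow specification.

```python
# Illustrative sketch of an OpenFlow 1.0-style flow-table lookup as
# described in Kempf [0045]-[0046]. Field names and the table layout
# are assumptions made for illustration, not Kempf's actual structures.

WILDCARD = None  # a wildcarded field matches any value ([0046])

def rule_matches(rule_fields, packet_fields):
    """A rule matches when every non-wildcarded field equals the packet's."""
    return all(value == WILDCARD or packet_fields.get(field) == value
               for field, value in rule_fields.items())

def lookup(flow_table, packet_fields):
    """Return the action of the first matching rule, else None ([0045])."""
    for rule_fields, action in flow_table:
        if rule_matches(rule_fields, packet_fields):
            return action
    return None

# Hypothetical table: forward GTP-U traffic to a port; send GTP-C traffic
# to the controller regardless of destination address.
flow_table = [
    ({"ip_dst": "10.0.0.5", "udp_dst": 2152}, "FORWARD:port2"),
    ({"ip_dst": WILDCARD, "udp_dst": 2123}, "CONTROLLER"),
]

packet = {"ip_src": "10.0.0.1", "ip_dst": "10.0.0.5", "udp_dst": 2152}
assert lookup(flow_table, packet) == "FORWARD:port2"
```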
`
`
`6
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 6 of 1100
`
`
`
`Kempf at [0050] (“FIG. 4 illustrates one embodiment of the processing of packets through
an OpenFlow 1.1 switched packet processing pipeline. A received packet is compared
`against each of the flow tables 401. After each flow table match, the actions are
`accumulated into an action set. If processing requires matching against another flow table,
`the actions in the matched rule include an action directing processing to the next table in the
`pipeline. Absent the inclusion of an action in the set to execute all accumulated actions
`immediately, the actions are executed at the end 403 of the packet processing pipeline. An
`action allows the writing of data to a metadata register, which is carried along in the packet
processing pipeline like the packet header.”)
`
`Kempf at [0051] (“FIG. 5 is a flowchart of one embodiment of the OpenFlow 1.1 rule
`matching process. OpenFlow 1.1 contains support for packet tagging. OpenFlow 1.1 allows
`matching based on header fields and multi-protocol label switching (MPLS) labels. One
`virtual LAN (VLAN) label and one MPLS label can be matched per table. The rule
matching process is initiated with the arrival of a packet to be processed (Block 501).
`Starting at the first table 0 a lookup is performed to determine a match with the received
`packet (Block 503). If there is no match in this table, then one of a set of default actions is
`taken (i.e., send packet to controller, drop the packet or continue to next table) (Block 509).
`If there is a match, then an update to the action set is made along with counters, packet or
`match set fields and meta data (Block 505). A check is made to determine the next table to
`process, which can be the next table sequentially or one specified by an action of a matching
`rule (Block 507). Once all of the tables have been processed, then the resulting action set is
`executed (Block 511). FIG. 6 is a diagram of the fields, which a matching process can
`utilize for identifying rules to apply to a packet.”)
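For illustration only, the OpenFlow 1.1 multi-table pipeline that Kempf describes at [0050]-[0051] (actions accumulate across table matches, a default action handles a table miss, and the accumulated action set executes at the end of the pipeline) may be sketched as follows; the table encoding below is an illustrative assumption.

```python
# Illustrative sketch of the OpenFlow 1.1 multi-table pipeline described
# in Kempf [0050]-[0051]. Table entries are (match_fn, actions, next_table)
# tuples; this encoding is an assumption made for illustration.

def process_pipeline(tables, packet):
    """Walk the tables in order, accumulating actions from matching rules."""
    action_set = []
    table_id = 0
    while table_id is not None and table_id < len(tables):
        matched = None
        for match_fn, actions, next_table in tables[table_id]:
            if match_fn(packet):
                matched = (actions, next_table)
                break
        if matched is None:
            # One of the default actions of [0051]: send packet to controller.
            return ["CONTROLLER"]
        actions, table_id = matched  # a next_table of None ends the pipeline
        action_set.extend(actions)
    # Accumulated actions are executed at the end of the pipeline ([0050]).
    return action_set

tables = [
    [(lambda p: p.get("vlan") == 10, ["push_mpls"], 1)],
    [(lambda p: True, ["output:3"], None)],
]
assert process_pipeline(tables, {"vlan": 10}) == ["push_mpls", "output:3"]
assert process_pipeline(tables, {"vlan": 99}) == ["CONTROLLER"]
```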
`
`Kempf at [0074] (“The operation of the EPC cloud computer system as follows. The UE
`1317, E-NodeB 1317, S-GW-C 1307, and P-GW-C signal 1307 to the MME, PCRF, and
`HSS 1307 using the standard EPC protocols, to establish, modify, and delete bearers and
GTP tunnels. This signaling triggers procedure calls with the OpenFlow controller to
`modify the routing in the EPC as requested. The OpenFlow controller configures the
`standard OpenFlow switches, the Openflow S-GW-D 1315, and P-GW-D 1311 with flow
`rules and actions to enable the routing requested by the control plane entities. Details of this
`configuration are described in further detail herein below.”)
`
`
`7
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 7 of 1100
`
`
`
Kempf at [0079] (“FIG. 16 is a diagram of one embodiment of a process for EPC peering
and differential routing for specialized service treatment. The OpenFlow signaling,
indicated by the solid lines and arrows 1601, sets up flow rules and actions on the switches
and gateways within the EPC for differential routing. These flow rules direct GTP flows to
particular locations. In this example, the operator in this case peers its EPC with two other
fixed operators. Routing through each peering point is handled by the respective P-GW-D1
and P-GW-D2 1603A, B. The dashed lines and arrows 1605 show traffic from a UE 1607
that needs to be routed to another peering operator. The flow rules and actions to distinguish
which peering point the traffic should traverse are installed in the OpenFlow switches 1609
and gateways 1603A, B by the OpenFlow controller 1611. The OpenFlow controller 1611
calculates these flow rules and actions based on the routing tables it maintains for outside
traffic, and the source and destination of the packets, as well as by any specialized
forwarding treatment required for DSCP marked packets.”)
`
Kempf at [0080] (“The long dash and dotted lines and arrows 1615 shows an example of a
UE 1617 that is obtaining content from an external source. The content is originally not
formulated for the UE's 1617 screen, so the OpenFlow controller 1611 has installed flow
rules and actions on the P-GW-D1 1603B, S-GW-D 1619 and the OpenFlow switches 1609
to route the flow through a transcoding application 1621 in the cloud computing facility.
The transcoding application 1621 reformats the content so that it will fit on the UE's 1617
screen. A PCRF requests the specialized treatment at the time the UE sets up its session with
the external content source via the IP Multimedia Subsystem (IMS) or another signaling
protocol.”)
`
Kempf at [0081] (“In one embodiment, OpenFlow is modified to provide rules for GTP
`TEID Routing. FIG. 17 is a diagram of one embodiment of the OpenFlow flow table
`modification for GTP TEID routing. An OpenFlow switch that supports TEID routing
`matches on the 2 byte (16 bit) collection of header fields and the 4 byte (32 bit) GTP TEID,
`in addition to other OpenFlow header fields, in at least one flow table ( e.g., the first flow
`table). The GTP TEID flag can be wildcarded (i.e. matches are "don't care"). In one
embodiment, the EPC protocols do not assign any meaning to TEIDs other than as an
`endpoint identifier for tunnels, like ports in standard UDP/ TCP transport protocols. In other
`embodiments, the TEIDs can have a correlated meaning or semantics. The GTP header flags
`field can also be wildcarded, this can be partially matched by combining the following
`8
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 8 of 1100
`
`
`
`No.
`
`ʼ111 Patent Claim 1
`
`Kempf
bitmasks: 0xFF00 - Match the Message Type field; 0xe0 - Match the Version field; 0x10 -
Match the PT field; 0x04 - Match the E field; 0x02 - Match the S field; and 0x01 - Match the
PN field.”)
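For illustration only, the wildcarded bitmask matching over the GTP header flags field described at [0081] may be sketched as a masked comparison; the helper and the example values below are illustrative assumptions.

```python
# Illustrative sketch of masked matching over the GTP header flags field
# per the bitmasks listed in Kempf [0081] (e.g., 0xe0 selects the Version
# bits). The helper and example values are assumptions for illustration.

VERSION_MASK = 0xE0  # "0xe0 - Match the Version field" ([0081])

def masked_match(field, mask, value):
    """A field matches when its masked bits equal the rule's masked bits."""
    return (field & mask) == (value & mask)

# A GTP flags byte of 0x30 carries version 1 in its top three bits, so it
# agrees with any rule value whose version bits are also 001.
assert masked_match(0x30, VERSION_MASK, 0x20)       # version bits agree
assert not masked_match(0x50, VERSION_MASK, 0x20)   # different version
```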
`
Kempf at [0085] (“The OpenFlow controller instantiates a virtual port for each physical port
that may transmit or receive packets routed through a GTP tunnel, prior to installing any
rules in the switch for GTP TEID routing.”)
`
Kempf at [0089] (“In one embodiment, the system implements a GTP fast path encapsulation
virtual port. When requested by the S-GW-C and P-GW-C control plane software running in
the cloud computing system, the OpenFlow controller programs the gateway switch to
install rules, actions, and TEID hash table entries for routing packets into GTP tunnels via a
fast path GTP encapsulation virtual port. The rules match the packet filter for the input side
of GTP tunnel's bearer. Typically this will be a 4 tuple of: IP source address; IP destination
address; UDP/TCP/SCTP source port; and UDP/TCP/SCTP destination port. The IP source
address and destination address are typically the addresses for user data plane traffic, i.e. a
UE or Internet service with which a UE is transacting, and similarly with the port numbers.
For a rule matching the GTP-U tunnel input side, the associated instructions are the
following:
`
Write-Metadata (GTP-TEID, 0xFFFFFFFF)
Apply-Actions (Set-Output-Port GTP-Encap-VP)”)
`
`Kempf at [0092] (“In one embodiment, the system implements a GTP fast path
`decapsulation virtual port. When requested by the S-GW and P-GW control plane software
`running in the cloud computing system, the gateway switch installs rules and actions for
`routing GTP encapsulated packets out of GTP tunnels. The rules match the GTP header
`flags and the GTP TEID for the packet, in the modified OpenFlow flow table shown in FIG.
`17 as follows: the IP destination address is an IP address on which the gateway is expecting
`GTP traffic; the IP protocol type is UDP (17); the UDP destination port is the GTP-U
`destination port (2152); and the header fields and message type field is wildcarded with the
flag 0xFFF0 and the upper two bytes of the field match the G-PDU message type (255)
`while the lower two bytes match 0x30, i.e. the packet is a GTP packet not a GTP' packet and
`the version number is 1.”)
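For illustration only, the decapsulation rule of [0092] may be encoded as a match specification; the dictionary layout, helper name, and example gateway address below are hypothetical, as Kempf does not give a concrete data structure for this rule.

```python
# Hypothetical encoding of the GTP-U decapsulation match that Kempf
# describes in [0092]. The dict layout, helper, and example IP address
# are illustrative assumptions, not structures from Kempf.

UDP_PROTO = 17      # the IP protocol type is UDP (17) per [0092]
GTP_U_PORT = 2152   # the GTP-U destination port per [0092]

def gtp_decap_match(gateway_ip):
    """Build the match for GTP-U traffic arriving at a gateway address."""
    return {
        "ip_dst": gateway_ip,   # an address on which the gateway expects GTP
        "ip_proto": UDP_PROTO,
        "udp_dst": GTP_U_PORT,
        # 16-bit flags/message-type pair compared under mask 0xFFF0, with
        # the G-PDU message type (255) in the upper byte and 0x30 in the
        # lower byte, per the literal description in [0092].
        "gtp_flags_msgtype": (0xFFF0, 0xFF30 & 0xFFF0),
    }

match = gtp_decap_match("192.0.2.1")
assert match["udp_dst"] == 2152
assert match["gtp_flags_msgtype"] == (0xFFF0, 0xFF30)
```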
`9
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 9 of 1100
`
`
`
`No.
`
`ʼ111 Patent Claim 1
`
`Kempf
`
`
Kempf at [0094] (“In one embodiment, the system implements handling of GTP-U control
`packets. The OpenFlow controller programs the gateway switch flow tables with 5 rules for
`each gateway switch IP address used for GTP traffic. These rules contain specified values
for the following fields: the IP destination address is an IP address on which the gateway is
`expecting GTP traffic; the IP protocol type is UDP (17); the UDP destination port is the
`GTP-U destination port (2152); the GTP header flags and message type field is wildcarded
`with 0xFFF0; the value of the header flags field is 0x30, i.e. the version number is 1 and the
`PT field is 1; and the value of the message type field is one of 1 (Echo Request), 2 (Echo
`Response), 26 (Error Indication), 31 (Support for Extension Headers Notification), or 254
`(End Marker).”)
`
Kempf at [0097] (“In one embodiment, the system implements handling of G-PDU packets
with extension headers, sequence numbers, and N-PDU numbers. G-PDU packets with
extension headers, sequence numbers, and N-PDU numbers need to be forwarded to the
`local switch software control plane for processing. The OpenFlow controller programs 3
`rules for this purpose. They have the following common header fields: the IP destination
`address is an IP address on which the gateway is expecting GTP traffic; and the IP protocol
`type is UDP (17); the UDP destination port is the GTP-U destination port (2152).”)
`
`Kempf at [0099] (“The instruction for these rules is the following:
`
Apply-Actions (Set-Output-Port LOCAL_GTP_U_DECAP)”)
`
Kempf at [0104] (“In one embodiment, the system implements handling of GTP-C and
GTP' control packets. Any GTP-C and GTP' control packets that are directed to IP addresses
on a gateway switch are in error. These packets need to be handled by the S-GW-C,
P-GW-C, and GTP' protocol entities in the cloud computing system, not the S-GW-D and
P-GW-D entities in the switches. To catch such packets, the OpenFlow controller must
program the switch with the following two rules: the IP destination address is an IP address
on which the gateway is expecting GTP traffic; the IP protocol type is UDP (17); for one
rule, the UDP destination port is the GTP-U destination port (2152), for the other, the UDP
destination port is the GTP-C destination port (2123); the GTP header flags and message
type fields are wildcarded.”)
`
`10
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 10 of 1100
`
`
`
Kempf at [0108] (“A GTP-extended Openflow switch contains at least one flow table that
handles rules matching the GTP header fields as in FIG. 17. The Openflow controller
programs the GTP header field rules in addition to the other fields to perform GTP routing
and adds appropriate actions if the rule is matched. For example, the following rule matches
a GTP-C control packet directed to a control plane entity (MME, S-GW-C, P-GW-C) in the
cloud computing system, which is not in the control plane VLAN: the VLAN tag is not set
to the control plane VLAN, the destination IP address field is set to the IP address of the
targeted control plane entity, the IP protocol type is UDP (17), the UDP destination port is
the GTP-C destination port (2123), the GTP header flags and message type is wildcarded
with 0xF0 and the matched version and protocol type fields are 2 and 1, indicating that the
packet is a GTPv2 control plane packet and not GTP'.”)

1[b]    receiving, by the network node from the controller, the instruction and the criterion;

Kempf discloses receiving, by the network node from the controller, the instruction and the
criterion.

See supra at 1[a].

1[c]    receiving, by the network node from the first entity over the packet network, a packet
addressed to the second entity;

Kempf discloses receiving, by the network node from the first entity over the packet
network, a packet addressed to the second entity.
`
`For example, Kempf discloses communication between electronic devices in which data
`packets are sent from one electronic device to another destination device.
`
Kempf at [0003] (“The general packet radios system (GPRS) is a system that is used for
transmitting Internet Protocol packets between user devices such as cellular phones and the
Internet. The GPRS system includes the GPRS core network, which is an integrated part of
the global system for mobile communication (GSM). These systems are widely utilized by
cellular phone network providers to enable cellular phone services over large areas.”)
`
`Kempf at [0004] (“The GPRS tunneling protocol (GTP) is an important communication
protocol utilized within the GPRS core network. GTP enables end user devices (e.g.,
`cellular phones) in a GSM network to move from place to place while continuing to connect
`to the Internet. The end user devices are connected to other devices through a gateway
`11
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 11 of 1100
`
`
`
`No.
`
`ʼ111 Patent Claim 1
`
`Kempf
`GPRS support node (GGSN). The GGSN tracks the end user device's data from the end user
device's serving GPRS support node (SGSN) that is handling the session originating from
`the end user device.”)
`
Kempf at [0032] (“The techniques shown in the figures can be implemented using code and
data stored and executed on one or more electronic devices (e.g., an end station, a network
element, etc.). Such electronic devices store and communicate (internally and/or with other
electronic devices over a network) code and data using non-transitory machine-readable or
computer-readable media, such as non-transitory machine-readable or computer-readable
storage media (e.g., magnetic disks; optical disks; random access memory; read only
memory; flash memory devices; and phase-change memory). In addition, such electronic
devices typically include a set of one or more processors coupled to one or more other
components, such as one or more storage devices, user input/output devices (e.g., a
keyboard, a touch screen, and/or a display), and network connections. The coupling of the
set of processors and other components is typically through one or more busses and bridges
(also termed as bus controllers). The storage devices represent one or more non-transitory
machine-readable or computer-readable storage media and non-transitory machine-readable
or computer-readable communication media. Thus, the storage device of a given electronic
device typically stores code and/or data for execution on the set of one or more processors
of that electronic device. Of course, one or more parts of an embodiment of the invention
may be implemented using different combinations of software, firmware, and/or
hardware.”)
`
Kempf at [0033] (“As used herein, a network element (e.g., a router, switch, bridge, etc.) is a
piece of networking equipment, including hardware and software, that communicatively
interconnects other equipment on the network (e.g., other network elements, end stations,
etc.). Some network elements are "multiple services network elements" that provide support
for multiple networking functions (e.g., routing, bridging, switching, Layer 2 aggregation,
session border control, multicasting, and/or subscriber management), and/or provide support
for multiple application services (e.g., data, voice, and video). Subscriber end stations (e.g.,
servers, workstations, laptops, palm tops, mobile phones, smart phones, multimedia phones,
Voice Over Internet Protocol (VOIP) phones, portable media players, GPS units, gaming
systems, set-top boxes (STBs), etc.) access content/services provided over the Internet and/or
content/services provided on virtual private networks (VPNs) overlaid on the Internet. The
content and/or services are typically provided by one or more end stations (e.g., server end
stations) belonging to a service or content provider or end stations participating in a peer to
peer service, and may include public web pages (free content, store fronts, search services,
etc.), private web pages (e.g., username/password accessed web pages providing email
services, etc.), corporate networks over VPNs, IPTV, etc. Typically, subscriber end stations
are coupled (e.g., through customer premise equipment coupled to an access network (wired
or wirelessly)) to edge network elements, which are coupled (e.g., through one or more core
network elements to other edge network elements) to other end stations (e.g., server end
stations).”)
`
Kempf at [0040] (“The standard EPC architecture assumes a standard routed IP network for
transport on top of which the mobile network entities and protocols are implemented. The
enhanced EPC architecture described herein is instead at the level of IP routing and media
access control (MAC) switching. Instead of using L2 routing and L3 internal gateway
protocols to distribute IP routing and managing Ethernet and IP routing as a collection of
distributed control entities, L2 and L3 routing management is centralized in a cloud facility
and the routing is controlled from the cloud facility using the OpenFlow protocol. As used
herein, the "OpenFlow protocol" refers to the OpenFlow network protocol and switching
specification defined in the OpenFlow Switch Specification at www.openflowswitch.org, a
web site hosted by Stanford University. As used herein, an "OpenFlow switch" refers to a
network element implementing the OpenFlow protocol.”)
`
Kempf at [0079] (“FIG. 16 is a diagram of one embodiment of a process for EPC peering
and differential routing for specialized service treatment. The OpenFlow signaling,
indicated by the solid lines and arrows 1601, sets up flow rules and actions on the switches
and gateways within the EPC for differential routing. These flow rules direct GTP flows to
particular locations. In this example, the operator in this case peers its EPC with two other
fixed operators. Routing through each peering point is handled by the respective P-GW-D1
and P-GW-D2 1603A, B. The dashed lines and arrows 1605 show traffic from a UE 1607
that needs to be routed to another peering operator. The flow rules and actions to distinguish
which peering point the traffic should traverse are installed in the OpenFlow switches 1609
and gateways 1603A, B by the OpenFlow controller 1611. The OpenFlow controller 1611
calculates these flow rules and actions based on the routing tables it maintains for outside
`
`13
`
`Orckit Exhibit 2018
`Cisco Systems v. Orckit Corp.
`IPR2023-00554, Page 13 of 1100
`
`
`
`No.
`
`ʼ111 Patent Claim 1
`
`1[d]
`
`checking, by the
`network node, if the
`packet satisfies the
`criterion;
`
`Kempf
`traffic, and the source and destination of the packets, as well as by any specialized
`for-warding treatment required for DSCP marked packets.”)
`
`Kempf discloses checking, by the network node, if the packet satisfies the criterion.
`
For example, Kempf discloses determining, by the network element, whether the packet's
header fields match a rule in the flow table and, if so, performing the associated action.
`
`Kempf at [0044] (“FIG. 1 is a diagram of one embodiment of an example network with an
`OpenFlow switch, conforming to the OpenFlow 1.0 specification. The OpenFlow 1.0
`protocol enables a controller 101 to connect to an OpenFlow 1.0 enabled switch 109 using a
`secure channel 103 and control a single forwarding table 107 in the switch 109. The
`controller 101 is an external software component executed by a remote computing device
that enables a user to configure the OpenFlow 1.0 switch 109. The secure channel 103 can
`be provided by any type of network including a local area network (LAN) or a wide area
`network (WAN), such as the Internet.”)
`
`Kempf at [0045] (“FIG. 2 is a diagram illustrating one embodiment of the contents of a flow
`table entry. The forwarding table 107 is populated with entries consisting of a rule 201
`defining matches for fields in packet headers; an action 203 associated to the flow match;
`and a collection of statistics 205 on the flow. When an incoming packet is received a lookup
`for a matching rule is made in the flow table 107. If the incoming packet matches a
`particular rule, the associated action defined in that flow table entry is performed on the
`packet.”)
`
`Kempf at [0046] (“A rule 201 contains key fields from several headers in the protocol stack,
`for example sourc