`Delivered-To: tim.harris@gmail.com
`Received: by 10.11.98.45 with SMTP id v45cs17998cwb;
` Mon, 31 May 2004 12:52:04 -0700 (PDT)
`Received: by 10.11.120.80 with SMTP id s80mr225116cwc;
` Mon, 31 May 2004 12:52:04 -0700 (PDT)
`Return-Path: <tim.harris@cl.cam.ac.uk>
`Received: from 128.232.0.15 (HELO mta1.cl.cam.ac.uk)
` by mx.gmail.com with SMTP id p77si638038cwc;
` Mon, 31 May 2004 12:52:03 -0700 (PDT)
`Received: from tempest.cl.cam.ac.uk ([128.232.32.126]
`helo=cl.cam.ac.uk ident=[+XyOQTlPYqkf7q53lczcc99LMedmd2KE]) by
`mta1.cl.cam.ac.uk with esmtp (Exim 3.092 #1) id 1BUspG-0006YK-00
`for tim.harris@gmail.com; Mon, 31 May 2004 20:52:02 +0100
`Received: from rhone.cl.cam.ac.uk ([128.232.8.183]
`helo=cl.cam.ac.uk ident=[PX+kB1qxl3UVZJrWdkahyEZIYyWTtlIF]) by
`wisbech.cl.cam.ac.uk with esmtp (Exim 3.092 #1) id 186xfO-0006yt-
`00; Wed, 30 Oct 2002 18:34:10 +0000
`X-Mailer: exmh version 2.5+CL 07/13/2001 with nmh-1.0.4
`To: llp@cs.princeton.edu
`cc: Ian.Pratt@cl.cam.ac.uk, Steven.Hand@cl.cam.ac.uk,
` Tim.Harris@cl.cam.ac.uk
`Subject:
`Mime-Version: 1.0
`Content-Type: multipart/mixed ; boundary="==_Exmh_-16020224590"
`Date: Wed, 30 Oct 2002 18:34:09 +0000
`From: Andrew Warfield <Andrew.Warfield@cl.cam.ac.uk>
`Message-Id: <E186xfO-0006yt-00@wisbech.cl.cam.ac.uk>
`Resent-To: tim.harris@gmail.com
`Resent-Date: Mon, 31 May 2004 20:52:01 +0100
`Resent-From: Tim Harris <Tim.Harris@cl.cam.ac.uk>
`Resent-Message-Id: <E1BUspG-0006YK-00@mta1.cl.cam.ac.uk>
`
`--==_Exmh_-16020224590
`Content-Type: text/plain; charset=us-ascii
`
`Hi Larry,
`
As none of the XenoServers folk were able to be in Princeton this week, we thought it might be a good idea to pass along some notes on the directions we are taking regarding isolation... specifically with regard to network resources. I've attached a somewhat hastily prepared document that describes what we have been up to.

I hope the workshop went well; rumors on this end are that it was very good.
`
`Best regards,
`andy.
`
`--==_Exmh_-16020224590
`Content-Type: application/pdf ; name="xeno-net-isolation.pdf"
`Content-Description: xeno-net-isolation.pdf
`Content-Transfer-Encoding: base64
`Content-Disposition: attachment; filename="xeno-net-isolation.pdf"
`
`
`--==_Exmh_-16020224590--
`
`Isolation of Shared Network Resources in XenoServers
`
Andrew Warfield, Steve Hand, Tim Harris, Ian Pratt
`Computer Laboratory, University of Cambridge
`
1 Introduction

This document presents some issues involved in virtualizing network resources so that they may be shared across a set of isolated virtual machines (VMs). After discussing the issues in design, general details of Xen, the XenoServers hypervisor[1], are presented as a specific implementation example. We hope that this presentation will encourage a discussion regarding the best approach to these issues with other researchers involved with similar projects, in particular, designers of isolation architectures for PlanetLab.
`
The contributions that may be most relevant to efforts to establish a general interface description for network resource isolation are as follows:

1. The presentation of the hypervisor's network system as being a virtualization of a local area network.

2. The explicit use of a packet classifier within the hypervisor that may be configured to appropriately manage traffic across virtual hosts.

3. Efforts to move closer to a description of functionalities that may exist below the virtual network device, and the interface to those services being an extended API available to guest VMs.
`
2 Overall System Architecture

Figure 1 presents a generic hypervisor/virtual machine architecture. The hypervisor layer serves to virtualize resources and multiplex access from a set of overlying virtual machines. Within the single host, there are now two levels of interface to a given resource: at the bottom level is the raw physical interface between the hypervisor and the device, and above this is the virtual interface that is presented to the virtual machines.
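To make the two interface levels concrete, the following minimal C sketch (our own illustration; none of these names come from Xen) ties each per-VM virtual interface to the physical device that the hypervisor multiplexes beneath it:

    #include <stddef.h>
    #include <stdint.h>

    /* Bottom level: the raw physical interface, owned by the hypervisor. */
    struct phys_if {
        uint8_t mac[6];                            /* hardware address     */
        int (*tx)(const void *frame, size_t len);  /* driver transmit hook */
    };

    /* Top level: the virtual interface presented to one virtual machine. */
    struct virt_if {
        unsigned        vm_id;     /* owning VM                           */
        uint8_t         mac[6];    /* per-VM MAC address                  */
        struct phys_if *backing;   /* physical device being multiplexed   */
    };

    /* A transmit from a VM traverses both levels of interface. */
    static int vif_tx(struct virt_if *vif, const void *frame, size_t len)
    {
        return vif->backing->tx(frame, len);
    }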
`
[Figure 1: Network Interfaces in a XenoServer. Three virtual machines sit above the hypervisor; virtualized network interfaces are presented to the VMs and multiplexed onto the physical network interface below.]
`
Throughout the discussion of this design it is important to consider the major trade-offs involved. Primary among these is the balance between the utility provided to virtual machines and the performance overhead imposed on them. Parallel to this performance overhead, and perhaps more important, is the complexity imposed on the hypervisor by any additional functionality. As a major design goal of the hypervisor is reliability through simplicity, it seems prudent to give each additional feature careful consideration.

[1] A note on terminology: hypervisors are also described as Virtual Machine Managers (VMMs) in other literature.
`
In considering the virtual network interfaces that are provided to a set of operating system instances, there are many properties that may be desirable. As the hypervisor is multiplexing network resources, the network subsystem may be best understood as being a virtual network switching element. A simple hypervisor implementation might act as a link-layer hub, forwarding all inbound traffic to all virtual machines and multiplexing outbound traffic to the network. Alternatively, the hypervisor may act as a switch or router, servicing each VM's traffic differently and possibly providing additional services.
`
[Figure 2: Forwarding Models for 'Isolated' Interfaces. Two diagrams: a bus- or hub-based network model, and a switch-based network model; each shows virtual machines 1-3 (plus additional VMs) attached through the hypervisor to the external network.]

These two models are illustrated in Figure 2. In the first example, the hypervisor presents all traffic to all virtual machines. Although this may be much closer to a pure virtualization of the physical resources, it presents several problems. First, there is a security concern in that each node can see each other node's traffic. This may have management implications as to whether or not the virtual Ethernet device should allow a promiscuous mode. Second, this model presents a more complicated system structure from a performance standpoint. Incoming packets must either be copied to each VM's receive queue, incurring an overhead, or alternatively the hypervisor must provide the ability to deliver a common piece of memory as either read-only or copy-on-write to a set of VMs. Although this model's abstraction is fairly simple (a plain broadcast Ethernet between VMs), it may prove to be the case that providing what is essentially a pure virtualization of the underlying resource imposes an unreasonable cost, both in terms of complexity and performance, on the system.
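To illustrate the second alternative, here is a minimal C sketch (our own, with invented names) of hub-style delivery that maps one reference-counted buffer read-only into every VM's receive ring rather than copying it per VM:

    #include <stdlib.h>

    struct rx_buf {
        int           refs;        /* one reference per VM it is mapped into */
        size_t        len;
        unsigned char data[1514];  /* one Ethernet frame                     */
    };

    /* Stand-in for mapping the buffer read-only into a VM's receive ring. */
    static void enqueue_readonly(unsigned vm, struct rx_buf *b)
    {
        (void)vm; (void)b;   /* stub for the sketch */
    }

    /* Hub model: every VM sees the frame, but only one copy exists. */
    static void hub_deliver(struct rx_buf *b, unsigned nvms)
    {
        b->refs = (int)nvms;
        for (unsigned vm = 0; vm < nvms; vm++)
            enqueue_readonly(vm, b);
    }

    /* Called as each VM finishes with the frame; frees on last reference. */
    static void rx_buf_put(struct rx_buf *b)
    {
        if (--b->refs == 0)
            free(b);
    }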
`
In the second model, the hypervisor acts as a network switch. A packet classifier is incorporated to "route" individual packets to the appropriate virtual machine. In this model, the hypervisor acts as an IP router, and may provide additional IP-specific services. In addition, promiscuous mode can be implemented by allowing each virtual interface to see only those packets bound for the associated VM; this behaviour is identical to what would be expected if all of the VMs were separate physical machines on a common IP router.
`
As the vast majority of traffic is likely to be TCP/IP-based, there is an understandable benefit to providing additional IP services, and the switch model appears to be a reasonable design direction. However, a limitation of this approach is that it does not account for non-IP (and non-ARP) traffic. Developers wishing to explore alternate protocols may prefer the hub model, as it does not modify Ethernet frames before they are forwarded. A hybrid solution to this issue is to act as a router for all regular (IP/ARP) traffic, as described in the second model, and as a hub for all other traffic. Developers wishing to bypass the IP routing facilities provided within the hypervisor would be left to use existing IP overlay protocols such as IP over IP.
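A sketch of such a hybrid forwarder (our own illustration; the two forwarding paths are stubs standing in for the mechanisms discussed above) dispatches on the Ethernet type field, routing IP and ARP through the classifier and falling back to hub-style broadcast for everything else:

    #include <stddef.h>
    #include <stdint.h>

    #define ETHERTYPE_IP  0x0800
    #define ETHERTYPE_ARP 0x0806

    /* Stubs standing in for the two forwarding paths. */
    static void route_via_classifier(const uint8_t *f, size_t n) { (void)f; (void)n; }
    static void broadcast_to_all_vifs(const uint8_t *f, size_t n) { (void)f; (void)n; }

    /* Hybrid model: route IP and ARP, fall back to hub behaviour for
     * the rest so that alternate protocols pass through unmodified. */
    static void forward_inbound(const uint8_t *frame, size_t len)
    {
        if (len < 14)     /* too short to carry an Ethernet header */
            return;
        uint16_t ethertype = (uint16_t)((frame[12] << 8) | frame[13]);

        if (ethertype == ETHERTYPE_IP || ethertype == ETHERTYPE_ARP)
            route_via_classifier(frame, len);    /* switch/router path */
        else
            broadcast_to_all_vifs(frame, len);   /* hub path           */
    }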
`
`3 Hypervisor Packet Handling
`
`
The essential network concerns within the hypervisor can be characterised according to three broad activities: scheduling, multiplexing/demultiplexing, and protection. In the case of PlanetLab, as clients have a vested interest in being able to understand how their hosts are interacting with the network, it seems wise that there be a common design philosophy applied by all isolation system implementors.
`
`3.1 Scheduling
`
In order to meet the service requirements of specific VMs, the hypervisor must ensure that network traffic is scheduled to meet specified limits. Varying implementations may choose to employ drastically different approaches to scheduling inbound and outbound traffic.
`
An idealistic goal here might be that all implemented isolation models have identical behaviours in scheduling packets. This is obviously unrealistic, and the specification of a uniform model for packet scheduling may inhibit research in this area. As such, an initial goal here should be that the network system guarantee some degree of resiliency against misuse. As much as possible, the hypervisor should prevent individual hosts from monopolizing resources in such a manner as to impair the functionality of other clients. Additionally, the system may attempt to protect VMs from being flooded by inbound traffic.
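One common way to provide such resiliency, shown here purely as an illustration (the paper does not prescribe any particular scheduler), is a per-VM token bucket applied to each send queue:

    #include <stdbool.h>
    #include <stdint.h>

    /* Per-VM token bucket: rate bytes/sec sustained, burst bytes at most. */
    struct tbucket {
        uint64_t tokens;    /* bytes currently available     */
        uint64_t rate;      /* refill rate, bytes per second */
        uint64_t burst;     /* bucket depth, bytes           */
        uint64_t last_ns;   /* time of last refill           */
    };

    static bool tb_admit(struct tbucket *tb, uint64_t now_ns, uint64_t pkt_len)
    {
        /* Refill proportionally to elapsed time, capped at the burst size. */
        tb->tokens += (now_ns - tb->last_ns) * tb->rate / 1000000000ull;
        if (tb->tokens > tb->burst)
            tb->tokens = tb->burst;
        tb->last_ns = now_ns;

        if (tb->tokens < pkt_len)
            return false;          /* over limit: hold or drop the packet */
        tb->tokens -= pkt_len;
        return true;
    }

A VM that attempts to monopolize the link simply exhausts its own bucket; other VMs' allocations are unaffected.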
`
`3.2 Multiplexing/Demultiplexing
`
Received packets should be delivered only to their target host. Transmitted packets may need to be translated to share a single external IP address.

One approach to this problem is to use a table-based packet classifier/forwarder within the hypervisor to route packets appropriately. The netfilter and IPTables modules within Linux serve as a good example, and form the basis of our implementation, which is described in the next section.
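A minimal table-based classifier in this spirit (a sketch with invented names, far simpler than netfilter) might match on destination address and port range and return the rule naming the target virtual machine:

    #include <stddef.h>
    #include <stdint.h>

    enum rc_action { DELIVER, DISCARD };

    struct rule {
        uint32_t dst_addr;       /* pre-masked destination address      */
        uint32_t dst_mask;       /* match dst IP under this mask        */
        uint16_t dport_lo;       /* inclusive dst-port range            */
        uint16_t dport_hi;       /* 0 in dport_hi means "any port"      */
        enum rc_action act;
        unsigned target_vm;      /* receive queue when act == DELIVER   */
    };

    /* First-match lookup; the table is kept in priority order. */
    static const struct rule *classify(const struct rule *tbl, unsigned n,
                                       uint32_t dst, uint16_t dport)
    {
        for (unsigned i = 0; i < n; i++) {
            if ((dst & tbl[i].dst_mask) != tbl[i].dst_addr)
                continue;
            if (tbl[i].dport_hi &&
                (dport < tbl[i].dport_lo || dport > tbl[i].dport_hi))
                continue;
            return &tbl[i];
        }
        return NULL;   /* no rule: default policy applies (e.g. drop) */
    }

A linear scan suffices for small tables; a production classifier would use hashing or a trie, but the (pattern, action) structure is the same.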
`
An open issue that stems from the use of a rule-based classifier is exactly what operations may be performed as packets are routed. Specifically, we are concerned with how specific matching rules should be, what transformations are allowed, and so on.
`
`3.3 Protection
`
Individual VMs need to be protected from one another and from malicious external traffic. Among other things, this demands that VMs only be allowed to generate IP packets that are valid, and that they not be able to spoof the identity of other hosts. Given the packet classifier approach described above, this behaviour can be achieved fairly easily by either dropping invalid packets, or overwriting the source address and port fields of all outbound packets.
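Given that classifier, source enforcement reduces to a check on every outbound packet. A sketch (illustrative only; the per-VM address table is our own invention) implementing the drop-or-rewrite policy:

    #include <stdbool.h>
    #include <stdint.h>

    #define MAX_VMS 64
    static uint32_t vm_src_addr[MAX_VMS];   /* address assigned to each VM */

    /* Returns false if the packet must be dropped. When rewrite is set,
     * a spoofed source is overwritten instead (the IP and transport
     * checksums would then need updating, omitted in this sketch). */
    static bool enforce_source(unsigned vm, uint32_t *src_addr, bool rewrite)
    {
        if (*src_addr == vm_src_addr[vm])
            return true;
        if (!rewrite)
            return false;                /* policy 1: drop invalid packets */
        *src_addr = vm_src_addr[vm];     /* policy 2: overwrite the source */
        return true;
    }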
`
`3.4 Additional Considerations
`
In addition to the packet classification functionalities described above, services such as the following may be desirable:

- Packet Filtering: The hypervisor may act as a firewall, filtering traffic bound for each virtual machine.

- Address Translation: By performing network address translation (NAT) and port forwarding, many virtual machines may share a common external IP address (see the sketch after this list).

- Traffic Logging: Details regarding connections may be logged to allow forensic auditing in the case of a specific virtual machine acting maliciously.

- VM-based Packet Sniffing: Clients may wish to have some interface approximating promiscuous mode. We suggest above that this is possible, allowing a VM to see all traffic bound for it as if all the VMs were on a router. A packet classifier could be configured to deliver a larger (or even a complete) version of the traffic visible to the external interface to individual VMs. The resolution to this issue is partially a performance concern, as it would likely necessitate multiple copies of inbound message buffers, and partially a political/administrative one, for obvious reasons.
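The following sketch illustrates the address-translation item above (names are our own; a real NAT must also rewrite checksums and track per-connection state). Inbound port forwarding is a lookup from an external port to a (VM, internal address) pair:

    #include <stdbool.h>
    #include <stdint.h>

    struct fwd_entry {
        uint16_t ext_port;   /* port on the shared external address */
        unsigned vm;         /* target virtual machine              */
        uint32_t int_addr;   /* VM's internal address               */
        uint16_t int_port;
    };

    /* Rewrite an inbound packet's destination according to the map.
     * Returns false when no mapping exists (packet is dropped). */
    static bool nat_inbound(const struct fwd_entry *map, unsigned n,
                            uint32_t *dst_addr, uint16_t *dst_port,
                            unsigned *target_vm)
    {
        for (unsigned i = 0; i < n; i++) {
            if (map[i].ext_port != *dst_port)
                continue;
            *dst_addr  = map[i].int_addr;
            *dst_port  = map[i].int_port;
            *target_vm = map[i].vm;
            return true;    /* checksum update omitted in this sketch */
        }
        return false;
    }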
`
Given the proposed system structure, it should be completely reasonable to provide all of these services. As mentioned previously, the issue that must be considered in each case is the impact that their implementation will have on the efficient processing of VM traffic.
`
`4 Network Virtualization in Xen
`
This section describes the design approach that has been taken for the first public release of the XenoServers hypervisor, Xen. Individual virtual machines may have one or more virtual interfaces, each of which appears as a point-to-point Ethernet link to an IP router. A diagram of the system appears in Figure 3.
`
The network system within Xen consists of a virtual firewall router, which is a rule-based packet classification/forwarding engine (based on the Linux netfilter/IPTables code) responsible for simple, fast packet handling. Additionally, Xen's network system incorporates a network address translation (NAT) module that provides functions such as address translation and port forwarding[2].
`
`
Packet scheduling in Xen is at the granularity of virtual interfaces. A soft real-time scheduler moves transmit packets from virtual interface send queues through Xen's routing tables. Received packets are delivered on arrival, and appropriate RX scheduling is deferred to the CPU scheduler, as VMs are responsible for emptying their own inbound message buffers. VMs which do not empty their receive queues at the inbound packet rate will have extraneous packets dropped.
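The transmit side might be pictured as follows. This is our own sketch of the queue structure only: Xen's actual scheduler is soft real-time, which this simple one-packet-per-interface round robin does not capture.

    #include <stddef.h>

    struct pkt { struct pkt *next; /* headers and payload omitted */ };

    struct vif { struct pkt *sendq; /* per-virtual-interface send queue */ };

    /* Stub standing in for classifying a packet through the routing
     * tables and placing it on the physical wire. */
    static void route_and_transmit(unsigned vifno, struct pkt *p)
    {
        (void)vifno; (void)p;
    }

    /* One pass of a transmit scheduler: service each virtual interface's
     * send queue, at most one packet per interface per round. */
    static void tx_round(struct vif *vifs, unsigned n)
    {
        for (unsigned i = 0; i < n; i++) {
            struct pkt *p = vifs[i].sendq;
            if (!p)
                continue;
            vifs[i].sendq = p->next;
            route_and_transmit(i, p);
        }
    }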
`
Rules may be installed into the classification engine through an interface provided within a privileged VM (known as domain zero). These rules are tuples of the form (pattern, action). Note that rules may be prioritized, and a particular packet may match multiple rules upon classification. This means that, for instance, an arriving packet bound for a VM may be routed to that VM and trigger the generation of a logging event to domain zero.
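Because a packet may match several prioritized rules, evaluation cannot stop at the first hit. A sketch of these multi-match semantics (all names invented; treating DROP as terminal is our assumption, not something the text specifies):

    struct packet;   /* opaque in this sketch */

    enum vfr_action { FORWARD, DROP, LOG };

    struct vfr_rule {
        int (*match)(const struct packet *p);   /* the pattern     */
        enum vfr_action action;                 /* the action      */
        unsigned dst_if;                        /* used by FORWARD */
    };

    /* Stubs for the two effects a rule can have here. */
    static void deliver(unsigned dst_if, const struct packet *p)
    {
        (void)dst_if; (void)p;
    }
    static void log_to_dom0(const struct packet *p) { (void)p; }

    /* Rules are held in priority order, and evaluation continues past
     * a match, so one packet can be both forwarded and logged. */
    static void vfr_classify(const struct vfr_rule *rules, unsigned n,
                             const struct packet *p)
    {
        for (unsigned i = 0; i < n; i++) {
            if (!rules[i].match(p))
                continue;
            switch (rules[i].action) {
            case FORWARD: deliver(rules[i].dst_if, p); break;
            case LOG:     log_to_dom0(p);              break;
            case DROP:    return;   /* terminal by our convention */
            }
        }
    }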
`
As an example, the following rules are installed prior to instantiating the Windows XP VM in the diagram. The rules forward all traffic bound for the static address, but bar its access to privileged ports. The final rules map its outbound traffic, ensuring that it is not attempting to spoof the identity of another host[3].
`
    (dstAddr='128.232.103.201' dstPort='1-1024', DROP)
    (dstAddr='128.232.103.201', FORWARD dstIf=pp2)
    (srcAddr='128.232.103.201' srcPort='1-1024', DROP)
    (srcAddr='128.232.103.201' srcIf=pp2, FORWARD dstIf=eth0)
    (srcAddr='128.232.103.201', DROP)
`
Additionally, the following rule is used to log TCP SYN messages. Message headers are sent to the reporting and monitoring interface of domain zero.

    (proto=TCP flags=SYN, LOG fmt=LOG_PKT_HEADER)
`
[2] The NAT module does not presently attempt to provide heavier functionalities such as per-flow connection tracking and application-specific (e.g. FTP) translations.

[3] Rules for local delivery to other interfaces have been omitted for simplicity.
`
`
[Figure 3: Network virtualization in Xen. The Xen hypervisor contains a Virtual Firewall Router with NAT, connected via eth0 to the outside world (addresses 128.232.103.201-203). Guest domains attach over point-to-point links pp1-pp5: domain1 (Linux 2.4, 128.232.103.202), domain2 (Windows XP, 128.232.103.201), domain5 (Linux 2.5, 192.168.0.2), and domain4 (NetBSD, 192.168.0.3). Domain 0, the privileged domain (Linux 2.4, 192.168.0.4), hosts the XenoServer VFR control plane (managing IP addresses, accounting, audit logging, tcpdump) together with the reporting & monitoring and control & configuration interfaces.]

5 Conclusion

This paper has presented a discussion of the issues involved in the sharing of network resources for a set of isolated virtual machines. By considering the network system implemented in the hypervisor as a virtualization of a local area network, we feel that its role becomes much more understandable. This approach presents a model in which VMs appear as isolated as they would be were they separate physical machines on a shared switching element.

We hope that this discussion and the overview of our own approach serve to promote discussion on these issues with other researchers.
`