Building a MAC-Based Security Architecture for the Xen Open-Source Hypervisor

Reiner Sailer  Trent Jaeger  Enriquillo Valdez  Ramón Cáceres
Ronald Perez  Stefan Berger  John Linwood Griffin  Leendert van Doorn
{sailer,jaegert,rvaldez,caceres,ronpz,stefanb,jlg,leendert}@us.ibm.com
IBM T. J. Watson Research Center, Hawthorne, NY 10532 USA
Abstract

We present the sHype hypervisor security architecture and examine in detail its mandatory access control facilities. While existing hypervisor security approaches aiming at high assurance have proven useful for high-security environments that prioritize security over performance and code reuse, our approach aims at commercial security, where near-zero performance overhead, non-intrusive implementation, and usability are of paramount importance. sHype enforces strong isolation at the granularity of a virtual machine, thus providing a robust foundation on which higher software layers can enact finer-grained controls. We provide the rationale behind the sHype design and describe and evaluate our implementation for the Xen open-source hypervisor.
1 Introduction

As workstation- and server-class computer systems have increased in processing power and decreased in cost, it has become feasible to aggregate the functionality of multiple standalone systems onto a single hardware platform. For example, a business that has been processing customer orders using three computer systems (a web server front-end, a database server back-end, and an application server in the middle) can increase hardware utilization and reduce its hardware costs, configuration complexity, management complexity, physical space, and energy consumption by running all three workloads on a single system.

Virtualization technology is quickly gaining popularity as a way to achieve these benefits. With this technology, a software layer called a virtual machine monitor (VMM), or hypervisor, creates multiple virtual machines out of one physical machine and multiplexes multiple virtual resources onto a single physical resource. Virtualization has been further facilitated by recent developments, notably the broad availability of fully virtualizable CPUs [2, 15]. These advances make possible the efficient aggregation of multiple virtual machines on a single physical machine, with each virtual machine (VM) running its own operating system (OS).
Although co-locating multiple operating systems and their workloads on the same hardware platform offers great benefits, it also raises the specter of undesirable interactions between those entities. Mutually distrusting parties require that the data and execution environment of one party's applications be securely isolated from those of a second party's applications. As a result, virtualization environments by default do not give VMs direct access to physical resources. Instead, physical resources (e.g., memory, CPU) are virtualized by the hypervisor layer and can be accessed by a VM only through their virtualized counterparts (e.g., virtual memory, virtual CPU). The hypervisor is strongly protected against software running in VMs, and it enforces isolation of VMs and resources.

However, total isolation is not desirable, because today's increasingly interconnected organizations require communication between application workloads. Consequently, there is a need for secure resource sharing that enforces access control between related groups of virtual machines.

The main focus of this paper is the controlled sharing of resources. In current hypervisor systems, such sharing is not governed by any formal policy. This lack of formality makes it difficult to reason about the effectiveness of isolation between VMs. Furthermore, current approaches do not scale well to large collections of systems, because they rely on human oversight of complex configurations to ensure that security policies are being enforced. They also do not support workload balancing well through VM migration between machines, because their policy representations are machine-dependent.

This paper explores the design and implementation of sHype, a security architecture for virtualization environments that controls the sharing of resources among VMs according to formal security policies. sHype's goals include (i) near-zero overhead on the performance-critical path,
(ii) non-intrusiveness with regard to existing VMM code, (iii) scalability of system management to many machines via simple policies, and (iv) support for VM migration via machine-independent policies.

These goals are derived from the requirements of commercial environments. Hypervisor security approaches aimed at high assurance have proven useful in environments that give security the highest priority. These approaches control both explicit and implicit communication channels between VMs. We believe that controlling explicit data flows, and minimizing but not entirely eliminating covert channels through careful resource management, is sufficient in commercial environments.

We implemented the sHype architecture in the Xen hypervisor [3], where it controls all inter-VM communication according to formal security policies. The architecture is designed to achieve medium assurance (Common Criteria EAL4 [8]) for hypervisor implementations. Our modifications to the Xen hypervisor are small, adding about 2,000 lines of code. Our hypervisor security enhancements incur less than 1% overhead on the performance-critical path, while the Xen paravirtualization overhead itself is between 0% and 9% [3]. While this paper describes an sHype implementation tailored to the Xen hypervisor, the sHype architecture is not specific to any one hypervisor. It was originally implemented in the rHype research hypervisor [14] and is also being implemented in the PHYP [13] commercial hypervisor.
Section 2 introduces the Xen hypervisor environment in which we have implemented our generic security architecture. Mutually suspicious workload types serve as an example to illustrate the requirements for, and the use of, our hypervisor security architecture. We describe the design of the sHype hypervisor security architecture in Section 3, and its Xen implementation in Section 4. Section 5 evaluates our architecture and implementation, and Section 6 discusses related work.
2 Background

2.1 The Xen Hypervisor

We use the Xen [3] open-source hypervisor as an example of a virtual machine monitor throughout this paper. Figure 1 illustrates a basic Xen configuration. The hypervisor consists of a small software layer on top of the physical hardware. It implements virtual resources (e.g., vMemory, vCPU, event channels, and shared memory) and it controls access to I/O devices.

Virtual machines, also known as domains in Xen, are built on top of the Xen hypervisor. A special VM, called Dom0 (domain zero), is created first. It serves to manage other VMs (create, destroy, migrate, save, restore) and controls the assignment of I/O devices to VMs.

VMs started by Dom0 are called DomUs (user domains). They can run any para-virtualized [3] operating system, e.g., Linux. Guest OSs running on Xen are minimally changed, for example by replacing privileged operations with calls to the hypervisor. Such operations cannot be issued directly by the guest OS because they could compromise the hypervisor. In general, calls to the hypervisor have three characteristics: (1) they offer access to virtual resources; (2) they speed up critical-path operations such as page table management; and (3) they emulate privileged operations that are restricted to the hypervisor but might be necessary in guest operating systems as well.
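As a concrete illustration, the following sketch shows how a paravirtualized guest might route a privileged page-table update through a hypercall so the hypervisor can validate it first; the interface shown (pt_update_t, hypercall_pt_update) is a hypothetical simplification for this paper, not the actual Xen API.

```c
/*
 * Hypothetical, simplified sketch of a paravirtualized guest replacing
 * a privileged operation with a hypercall. A direct page-table write
 * could forge a mapping into another VM's memory, so the guest asks
 * the hypervisor to perform (and validate) the update instead.
 * pt_update_t and hypercall_pt_update are invented for illustration.
 */
typedef struct {
    unsigned long virt_addr;    /* guest virtual address to map */
    unsigned long machine_pfn;  /* machine page frame to map it to */
    unsigned int  flags;        /* e.g., present and writable bits */
} pt_update_t;

/* Guest-side stub that traps into the hypervisor. */
extern int hypercall_pt_update(const pt_update_t *req);

int guest_map_page(unsigned long va, unsigned long mfn)
{
    pt_update_t req = { .virt_addr = va, .machine_pfn = mfn, .flags = 0x3 };

    /* The hypervisor checks that mfn belongs to this VM before
     * installing the mapping; only then does the update succeed. */
    return hypercall_pt_update(&req);
}
```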
Figure 1. Xen hypervisor architecture (figure: Dom0, providing VM management and I/O management, and several DomUs running guest OSs sit on top of the Xen hypervisor, which provides vMem, vCPU, event channels, and shared memory over the system hardware: CPU, memory, and devices).
Xen offers just two shared virtual resources, on top of which all inter-VM communication and cooperation is implemented:
• Event channels: An event-channel hypervisor call enables a VM to set up a point-to-point synchronization channel to another VM.
• Shared memory: A grant-table hypervisor call enables a VM to allow another VM access to virtual memory pages it owns. Event channels are used to synchronize access to such shared memory.
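To illustrate how these two primitives combine, the following guest-side sketch sets up a channel and grants a page to a peer VM. The function names (evtchn_connect, gnttab_grant, evtchn_notify) are simplified inventions for illustration; the real Xen interfaces differ in detail.

```c
/*
 * Hypothetical, simplified sketch of the two Xen sharing primitives
 * described above, as seen from a guest.
 */
#include <stdint.h>

typedef uint16_t domid_t;

/* Set up a point-to-point event channel to a peer VM. */
extern int evtchn_connect(domid_t peer);        /* returns port, or <0 */

/* Allow 'peer' to map one of our memory pages. */
extern int gnttab_grant(domid_t peer, unsigned long pfn, int writable);

/* Signal the peer over an established channel. */
extern void evtchn_notify(int port);

int share_buffer_with(domid_t peer, unsigned long buf_pfn)
{
    /* Both hypercalls below are mediated by sHype enforcement hooks:
     * they complete only if the MAC policy allows the two VMs to
     * share (see Section 4.3). */
    int port = evtchn_connect(peer);
    if (port < 0)
        return port;                 /* setup denied or failed */

    int grant = gnttab_grant(peer, buf_pfn, /*writable=*/1);
    if (grant < 0)
        return grant;

    evtchn_notify(port);             /* tell the peer the page is ready */
    return 0;
}
```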
Shared virtual resources, such as virtual network adapters and virtual block devices, are implemented as device drivers inside the guest OS. Non-shared virtual resources include virtual memory and virtual CPU.

Physical resources differ from virtualized resources in a few key ways: (1) Input/Output Memory Management Units (IO-MMUs) are needed to restrict Direct Memory Access (DMA) to and from a VM's memory space. (2) Performance is best if devices are co-located in the same VM with the code using them; consequently, the optimal case is a physical resource per VM, which may not be practically feasible. (3) Driver code is too complex for inclusion in the hypervisor, so a device to be shared by multiple VMs needs to be managed by a device domain, which then makes this device available through inter-VM sharing
to other VMs. In Xen, a SCSI disk or Ethernet device, for example, can be owned by a device domain and accessed by other VMs through virtual disk or Ethernet drivers, which communicate with the device domain using event channels and shared memory provided by the hypervisor.
2.2 Coalitions of VMs

We believe that in the near future, VM systems will evolve from sets of isolated VMs into sets of VM coalitions. Because hardware improvements now enable reliable isolation, we expect that some controls currently implemented in operating systems will be delegated to hypervisors. We aim for hypervisors to provide isolation between coalitions and limited sharing within coalitions, as defined by a Mandatory Access Control (MAC) policy.

Consider a customer order system. The web services and database infrastructure that processes orders must have high integrity in order to protect the integrity of the business. However, browsing and collecting possible items to be purchased need not be of such high integrity. At the same time, an OEM's software advertising a product that the company distributes may run as another workload that should be isolated from the order workloads (web service, database, browsing).
In the customer order example, we merge the VMs performing customer orders into the Order coalition and protect them from the other VMs on the system. The Order VMs may communicate and share some memory, network, and disk resources; as a coalition, they are confined by the hypervisor. Within the Order coalition, the hypervisor controls sharing using a MAC policy that permits inter-VM communication, sharing of network and disk resources, and sharing of memory. All this sharing must be verified to protect the security of the order system. At the same time, the MAC policy enables the hypervisor and device domains to protect the order database from being shared with VMs outside the Order coalition.
2.3 Problem Statement

The problem we address in this paper is the design of a VMM reference monitor that enforces comprehensive, mandatory access control policies on inter-VM operations. A reference monitor is designed to ensure mediation of all security-sensitive operations, which enables a policy to authorize all such operations [16]. A MAC policy is defined by system administrators to ensure that system (i.e., VMM) security goals are achieved regardless of system user (i.e., VM) actions. This contrasts with a discretionary access control (DAC) policy, which enables users (and their programs) to grant rights to the objects they own.

We apply the reference monitor to control all references by VMs to shared virtual resources. This allows workloads to communicate or share resources within a coalition, while isolating workloads of different coalitions. Figure 2 shows an example of VM coalitions. Domain 0 has started five user domains (VMs), which are distinguished inside the hypervisor by their domain IDs (VM-id in Fig. 2). Domains 2 and 3 are running Order workloads. Domain 6 is running an Advertising workload, and domain 8 is running an unrelated generic Computing workload. Finally, domain 1 runs the virtual block device driver that offers two isolated virtual disks, vDisk Order and vDisk Ads, to the Order and Advertising coalitions. In this example, we want to enable efficient communication and sharing among VMs of the Order coalition but confine communication to VMs inside this coalition. For example, no VM running an Order workload is allowed to communicate or share information with any VM running Computing or Advertising workloads, and vice versa.
Figure 2. VM coalitions and payloads in Xen (figure: Dom0 (VM management) plus five user domains on the Xen hypervisor; VM-ids 2 and 3 run Order workloads, VM-id 6 runs Advertising, VM-id 8 runs Computing, and VM-id 1 runs a vDisk server that exports separate Orders and Ads virtual disks, backed by a real SCSI disk, to virtual-disk connectors in the workload VMs).
While the hypervisor controls the ability of the VMs to connect to the device domain, the device domain is trusted to keep the data of different virtual disks securely isolated, both inside its VM and on the real disk. This is a reasonable requirement, since device domains are not application-specific and can run minimized run-time environments. Device domains thus form part of the Trusted Computing Base (TCB).
3 sHype Design

Figure 3 illustrates the overall sHype security architecture and its integration into the Xen VMM system. sHype is designed to support a set of security functions: secure services, resource monitoring, access control between VMs, isolation of virtual resources, and TPM-based attestation.

Figure 3. sHype architecture (figure: the security functions listed above (secure services such as policy management and audit, resource control, access control between VMs, isolation of virtual resources, TPM-based attestation) are realized by policy manager and security services VMs, an access control module (ACM) with hypervisor mediation hooks and callbacks inside sHype/Xen, and the VM manager in Dom0).

sHype supports interaction with secure services in custom-designed, minimized, and carefully engineered VMs. An example is the policy management VM, which we use to establish and manage the security policies for the Xen hypervisor. Resource accounting provides control of resource usage; this enables enforcement of service-level agreements and addresses denial-of-service attacks on hypervisor or VM resources.
The mandatory access control enforces a formal security policy on information flow between VMs. sHype leverages existing isolation between virtual resources and extends it with MAC features. TPM-based attestation [28] provides the ability to generate and report runtime integrity measurements of the hypervisor and VMs. This enables remote systems to infer the integrity properties of the running system.

The rest of this paper focuses on the sHype mandatory access control architecture, which consists of: (1) the policy manager, maintaining the security policy; (2) the access control module (ACM), delivering authorization decisions according to the policy; and (3) mediation hooks, controlling access of VMs to shared virtual resources based on decisions returned by the ACM.

3.1 Design Decisions

Three major decisions shape the design of sHype:
(1) By building on existing isolation properties of virtual resources, sHype inherits the medium assurance of existing hypervisor isolation while requiring minimal code changes in the virtualization layer (hypervisor).
(2) By using bind-time authorization and controlling access to spontaneously shared resources only on first-time access and upon policy changes, sHype incurs very low performance overhead on the critical path.
(3) By enforcing formal security policies, sHype enables reasoning about the effectiveness of specific policies, provides the basis for an effective defense against denial-of-service attacks (through resource policy enforcement), and enables Service Level Agreement-style security guarantees (through TPM-based attestation of system properties).

3.2 Access Control Architecture

The key component of the access control architecture is the reference monitor, which in sHype isolates virtual machines by default and allows sharing of resources among
virtual machines only when allowed by a mandatory access control (MAC) policy. To support various business requirements, sHype supports several kinds of MAC policies: Biba [5], Bell-LaPadula [4], Caernarvon [30], Type Enforcement [6], and Chinese Wall [7] policies.

The classical definition of a reference monitor [16] states that it possesses three properties: (1) it mediates all security-critical operations; (2) it can protect itself from modification; and (3) it is as simple as possible to enable validation of its correct implementation. We examine the first requirement in more detail below. The second and third requirements are covered by generic hypervisor properties: the hypervisor is protected against the VMs and consists of a thin software layer.

Mediating security-critical operations. A security-critical operation is one that requires MAC policy authorization. If such an operation is not authorized against the MAC policy, the system security guarantees can be circumvented. For example, if the mapping of memory among VMs is not authorized, then a VM in one coalition can leak its data to other VMs.

We identify security-critical operations in terms of the resources whose use must be controlled in order to implement MAC policies. We also identify the locations of the mediation points for these resources. The combination of resources to be controlled and their mediation points forms the reference monitor interface. We discuss only virtual resources, because real resources can only be used exclusively by one VM or shared in the form of virtual resources. The following resources must be controlled in a typical Xen VMM environment:
• Sharing of virtual resources between VMs controlled by the Xen hypervisor (e.g., event channels, shared memory, and domain operations).
• Sharing of local virtual resources between local VMs controlled by MAC domains (e.g., local vLANs and virtual disks).
• Sharing of distributed virtual resources between VMs on multiple hypervisor systems controlled by MAC-bridging domains (e.g., vLANs spanning multiple hypervisor systems).

The hypervisor reference monitor enforces access control and isolation on virtual resources in the Xen hypervisor. While sHype enforces mandatory access control on MAC domains regarding their participation in multiple coalitions, it relies on MAC domains to isolate the different virtual resources from each other and to allow access to a virtual resource only to domains that belong to the same coalition as that resource. A good example of a MAC domain is the device domain in Fig. 2, which participates in both the Order and the Advertising coalitions. MAC domains become part of the Trusted Computing Base (TCB) and should therefore be of minimal size (e.g., a secure microkernel design).
Since MAC domains are generic, the cost of making them secure will be amortized as they are used in many application environments. We sketch the implementation of MAC domains in Section 4.4.

If coalitions are distributed over multiple systems, we need MAC-bridging domains to control their interaction. The virtual resource that enables cooperation among VMs on multiple systems is typically a vLAN. MAC-bridging domains build bridges between their hypervisor systems over untrusted terrain to connect vLANs on multiple systems. To do so, they first establish trust in the required security properties of the peer MAC-bridging domains and their underlying virtualization infrastructure (e.g., using TPM-based attestation). Afterwards, they build secure tunnels between each other and can from then on be considered to form a single (distributed) MAC domain spanning multiple systems. The requirements on the resulting distributed MAC domain are akin to the requirements described above for local MAC domains. MAC-bridging domains become part of the TCB, similarly to MAC domains.
4 Implementation

In this section, we first define simple policies tailored to the Xen hypervisor environment, based on the workload types and resources that must be controlled. Then we describe the management of the policies and the labeling of VMs and resources. Finally, we introduce the access control enforcement in the hypervisor, which guards access of VMs to resources based on the policies.
4.1 Security Policies

We implemented two formal security policies for Xen: (i) a Chinese Wall policy and (ii) a simple Type Enforcement (TE) policy. Both policies work on their own sets of types (CW-types or TE-types), which are assigned to VMs as a function of the workloads those VMs can run. The CW- and TE-types define the granularity at which VMs and resources can be distinguished. The assignment of types to VMs and resources is an administrative task (i.e., part of policy management).
Chinese Wall policy: The first policy enables administrators to ensure that certain VMs (and their supported workload types) cannot run on the same hypervisor system at the same time. This is useful for mitigating covert channels or meeting other requirements regarding certain workload types (e.g., workload types of competitors) that must not run on the same physical system at the same time.

The Chinese Wall policy defines a set of Chinese Wall types (CW-types), which are assigned to a VM according to the workloads it can run. It also defines conflict sets over these CW-types and ensures that VMs whose CW-types belong to the same conflict set never run at the same time on the same system.
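The rule can be sketched as follows, under assumed data structures (conflict sets taken from the binary policy and a count of running VMs per CW-type); this is a minimal illustration of the check, not the actual sHype implementation.

```c
/*
 * Sketch of the Chinese Wall start-time check: a VM may start only if
 * no CW-type it carries shares a conflict set with a CW-type that is
 * already running. All names here are assumptions for illustration.
 */
#include <stdbool.h>

#define MAX_CW_TYPES 32

typedef struct {
    bool member[MAX_CW_TYPES];          /* CW-types in this conflict set */
} conflict_set_t;

extern conflict_set_t conflict_sets[];  /* from the binary policy */
extern int            num_conflict_sets;
extern int            running_count[MAX_CW_TYPES]; /* running VMs per CW-type */

/* May a VM labeled with 'cw_types' (terminated by -1) start now? */
bool cw_may_start(const int *cw_types)
{
    for (const int *t = cw_types; *t >= 0; t++) {
        for (int s = 0; s < num_conflict_sets; s++) {
            if (!conflict_sets[s].member[*t])
                continue;
            /* Is another CW-type from the same conflict set running? */
            for (int u = 0; u < MAX_CW_TYPES; u++)
                if (u != *t && conflict_sets[s].member[u] &&
                    running_count[u] > 0)
                    return false;       /* conflict: deny start */
        }
    }
    return true;                        /* no conflicts: allow */
}
```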
Type Enforcement policy: The second policy specifies which running VMs can share resources and which cannot. It supports the coalitions introduced in Section 2.2 by mapping coalition membership onto TE-types.

The TE policy defines the set of TE-types (coalitions) and assigns TE-types to VMs (coalition membership). The TE policy rules enforce that VMs share virtual resources only if they have a TE-type in common, i.e., if they are members of at least one common coalition.
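Because the TE rule reduces to set intersection, encoding each label as a bitmap makes the decision a single AND, as the following sketch illustrates; the encoding is our assumption for illustration, not necessarily the sHype layout.

```c
/*
 * Sketch of the Type Enforcement rule: two VMs may share a resource
 * iff their TE-type sets intersect.
 */
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t te_label_t;   /* bit i set => member of coalition i */

static inline bool te_may_share(te_label_t a, te_label_t b)
{
    return (a & b) != 0;       /* at least one common coalition? */
}

/* Example: Order = bit 0, Advertising = bit 1. A device domain that
 * serves both coalitions carries both bits, so it may share with both
 * Order VMs and Advertising VMs, while those two groups (with
 * disjoint labels) may not share with each other. */
```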
4.2 Policy Management

The policy management function is responsible for offering the means to create and maintain policy instantiations for the Chinese Wall and Type Enforcement policies. To minimize code complexity inside the hypervisor, the policy management translates an XML-based policy representation into a binary policy representation that is both system-independent and efficient for the hypervisor layer to use.

The binary policy created by the policy management includes the assignment of VMs to CW-types and TE-types, as well as the conflict sets to be enforced on the CW-types. No other information is needed by the hypervisor to enforce the policies. The access class of a VM, as sHype sees it, is exactly a set of CW-types and TE-types. Access classes of virtual resources such as virtual disks comprise only TE-types, typically a single TE-type.
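To make the binary representation concrete, a policy header might look like the following hypothetical C sketch; the field names and layout are invented for illustration and do not reproduce the actual sHype format.

```c
/*
 * Hypothetical sketch of a binary policy consumed by the hypervisor,
 * reflecting the contents listed above: VM label assignments,
 * resource labels, and CW conflict sets.
 */
#include <stdint.h>

struct acm_binary_policy {
    uint32_t version;           /* policy format version */
    uint32_t num_vms;           /* labeled VMs */
    uint32_t num_cw_types;
    uint32_t num_te_types;
    uint32_t num_conflict_sets;
    /* Followed in the same buffer by:
     *  - per-VM label records: VM-id, its CW-type set, its TE-type set
     *  - per-resource label records: resource id, its TE-type set
     *    (typically a single TE-type)
     *  - conflict sets over CW-types
     * Everything is index-based rather than name-based, so the policy
     * stays machine-independent and is cheap to evaluate. */
};
```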
Policy management can run either in a dedicated domain on the managed system (the current Xen approach) or on a separate special-purpose system, such as the Hardware Management Console (HMC) used by PHYP and other commercial virtualization solutions. The policy management is needed only to change or validate a policy; it is not necessary for running the system and enforcing the instantiated policies.
4.3 Policy Enforcement

Mandatory access control is implemented as a reference monitor. The mediation of references by VMs to shared virtual resources is implemented by inserting security enforcement hooks into the code paths inside the hypervisor where VMs share virtual resources. Hooks call into the access control module (ACM) for decisions and enforce them locally at the hook. Isolation of individual virtual resources is inherited from Xen, since it is a general design issue for hypervisors rather than a security-specific requirement.
4.3.1 Reference Monitor

sHype strictly separates access control enforcement from the access control policy, as in the Flask [33] architecture.
Figure 4. sHype security reference monitor (figure: a VM (subject) issues a hypercall; a security hook in the core hypervisor sends an authorization query to the access control module, receives an authorization decision, and enforces it on the object; a policy manager VM translates the XML security policy into the binary security policy used by the ACM).
We describe the control architecture in the context of the hypervisor, but it will also be used in the MAC domains. Figure 4 shows the sHype access control architecture as part of the core hypervisor and depicts the relationships between its three major design components. Security enforcement hooks are carefully inserted into the core hypervisor and cover references by VMs to virtual resources. Enforcement hooks retrieve access control decisions from the access control module (ACM).

The ACM authorizes access of VMs to resources based on the policy rules and the security labels attached to VMs (CW-types, TE-types) and resources (TE-types). The formal security policy defines these access rules as well as the structure and interpretation of security labels for VMs and resources. Finally, a hypervisor interface enables trusted policy-management VMs to manage the ACM security policy.
4.3.2 Access Control Hooks

A security enforcement hook is a specialized access enforcement function that guards access by VMs to a virtual resource. It enforces information flow constraints between VMs according to the security policy. Each security hook adheres to the following general pattern: (1) gather access control information (determine VM labels, virtual resource labels, and the access operation type); (2) determine the access decision by calling the ACM; and (3) enforce the access control decision. Hooks are functionally transparent if the access is allowed, and they return an error code otherwise.
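This three-step pattern maps directly to code. The following minimal sketch uses event-channel setup as the guarded operation; the type and function names (acm_decision_t, acm_authorize, hook_evtchn_setup) are illustrative assumptions, not the actual Xen/sHype symbols.

```c
/*
 * Sketch of a security enforcement hook following the pattern above:
 * (1) gather access control information, (2) ask the ACM for a
 * decision, (3) enforce that decision locally.
 */
typedef enum { ACM_DECISION_PERMIT, ACM_DECISION_DENY } acm_decision_t;

struct vm;                              /* per-VM state, incl. label */

extern acm_decision_t acm_authorize(const struct vm *subject,
                                    const struct vm *object,
                                    int operation);

#define OP_EVTCHN_SETUP 1

int hook_evtchn_setup(struct vm *caller, struct vm *peer)
{
    /* (1) gather: labels live in the vm structs, operation is known. */
    /* (2) decide: call into the ACM. */
    if (acm_authorize(caller, peer, OP_EVTCHN_SETUP) != ACM_DECISION_PERMIT)
        return -1;                      /* (3) enforce: abort the setup */

    /* Transparent on success: the hypercall proceeds unchanged. */
    return 0;
}
```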
Using security hooks, sHype minimizes interference with the core hypervisor while enforcing the security policy on access to virtual resources. We have placed security enforcement hooks at the following places inside the hypervisor in order to enforce the Chinese Wall and Type Enforcement policies:
• Domain management operations: This hook calls into the ACM, reporting the security references of the domain originating the operation and of the domain that is being created, destroyed, saved, restored, migrated, etc. Calls from these hooks are used by the ACM (1) to assign security labels to created domains and to free the labels of destroyed domains; (2) to check Chinese Wall conflict sets before creating, resuming, or migrating-in domains; and (3) to adjust the set of running CW-types when destroying, suspending, or migrating-out domains.
• Event channel operations: Event-channel hooks mediate the creation and destruction of event channels between domains. The ACM uses calls from these hooks to decide whether the two domains setting up an event channel are members of a common coalition. If the ACM returns a permit decision, the event channel setup continues beyond the hook. The subsequent sending and receiving of events via the connected channel do not need to be mediated, because mediation would yield the same result (unless the policy changes; see below). If the hook receives a deny decision, the event channel setup is aborted and the hypervisor call returns with an error.
• Shared memory hook: Grant-table hypervisor calls allow one VM to grant another VM access to some of its memory pages. This mechanism (synchronized via event channels) enables efficient communication between VMs running on the same hypervisor. Since shared memory may in some cases be established dynamically during communication (e.g., when sending and receiving network packets or reading from and writing to virtual disks), the security hook guarding this operation may be on the performance-critical path.
Decision caching. Since neither the event-channel nor the shared-memory hook calls induce any state change in the ACM, we cache access control decisions to minimize the overhead of the security hooks calling into the ACM and of the ACM authorizing access.

We cache access control decisions locally in the data structures involved in a grant-table or event-channel operation the first time an access control decision is required between two VMs. The decision cache is not used for domain operation hooks, because the ACM must be aware of these calls to update its security state. We are experimenting with multiple cache layouts to find the best trade-off between memory requirements and lookup speed.

Decision caching achieves near-zero overhead on the critical path at the cost of additional management and complexity. When a VM is destroyed or migrated, all cache entries regarding this VM must be cleared. The overhead of clearing these caches is very low.
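A minimal sketch of such a cache, assuming the decision is stored in the event-channel structure itself (one of several layouts one might try; the names are illustrative, not the actual sHype code):

```c
/*
 * Sketch of the per-connection decision cache described above: the
 * first ACM verdict is stored in the data structure of the event
 * channel (a grant entry would work the same way), so later uses of
 * the channel skip the ACM entirely.
 */
#include <stdbool.h>

struct vm;
extern bool acm_permits(const struct vm *a, const struct vm *b);

struct evtchn {
    /* ... channel state ... */
    bool decision_cached;   /* has the ACM been consulted yet? */
    bool permitted;         /* cached ACM verdict */
};

bool evtchn_access_allowed(struct evtchn *ch,
                           const struct vm *a, const struct vm *b)
{
    if (!ch->decision_cached) {          /* first use: ask the ACM once */
        ch->permitted = acm_permits(a, b);
        ch->decision_cached = true;
    }
    return ch->permitted;                /* afterwards: near-zero cost */
}

/* When a VM is destroyed or migrated, or the policy changes,
 * decision_cached is reset so the next use re-consults the ACM. */
```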
Policy Changes. When the policy changes, we must explicitly revoke a shared resource from a VM that is no longer authorized to use it. Since we use extensive caching, we must propagate access authorization changes into the
caches of VMs. Additionally, we define a re-evaluation function for both event-channel and grant-table hooks, because these hooks check permissions only when an event channel or a shared memory area is