`Computer Science
`
`
`IBM Research Report
`
`Multi-Level Security Requirements for Hypervisors
`
`
`Paul A. Karger
`IBM Research Division
`Thomas J. Watson Research Center
`P. O. Box 704
`Yorktown Heights, NY 10598, USA
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`
`Limited Distribution Notice: This report has been submitted for publication outside of IBM and will probably be copyrighted if
`accepted for publication. It has been issued as a Research Report for early dissemination of its contents. In view of the transfer of
`copyright to the outside publisher, its distribution outside of IBM prior to publication should be limited to peer communications and
`specific requests. After outside publication, requests should be filled only by reprints or legally obtained copies of the article (e.g.,
payment of royalties). Some reports are available at http://www.research.ibm.com/resources/paper_search.html. Copies may be requested from IBM T.J. Watson Research Center, 16-220, P.O. Box 218, Yorktown Heights, NY 10598 or send email to reports@us.ibm.com.
`
`
`Research Division
`Almaden – Austin – Beijing – Delhi – Haifa – T.J. Watson – Tokyo – Zurich
`
`Microsoft Ex. 1015, p. 1
`Microsoft v. Daedalus Blue
`IPR2021-00832
`
`
`
`This paper has been accepted by the 21st Annual Computer Security Applications
`Conference, to be held 5-9 December 2005 in Tucson, AZ. It will appear in the
`conference proceedings, published by the IEEE, and downloadable from
`http://www.acsac.org
`
`
`
`
`Multi-Level Security Requirements for Hypervisors
`
`
`Paul A. Karger
`IBM Thomas J. Watson Research Center
`P.O. Box 704, Yorktown Heights, NY 10598, USA
`karger@watson.ibm.com
`
`
`Abstract
`
`
`
`
`
`Using hypervisors or virtual machine monitors for
`security has become very popular in recent years, and
`a number of proposals have been made for supporting
`multi-level security on secure hypervisors, including
`PR/SM, NetTop, sHype, and others. This paper looks
`at the requirements that users of MLS systems will
`have and discusses their implications on the design of
`multi-level secure hypervisors. It contrasts the new
`directions for secure hypervisors with the earlier
`efforts of KVM/370 and Digital’s A1-secure VMM
`kernel.
`
` 1
`
`
`
` Purpose of this paper
`
`There have been a number of recent efforts to
`develop multi-level security (MLS) for hypervisors or
`virtual machine monitors (VMMs), such as NetTop
`[39], sHype [43], and a proposed combination of Xen
[16] and sHype [32]. There has been a lot of confusion about what the requirements are to adequately support multi-level security (MLS) in a
`hypervisor. The hypervisor is being used to separate
`multiple instances of untrusted operating systems,
`running at different security levels. The purpose of
`this paper is to clarify what end-users of MLS expect
`to be able to do1, and what technical issues impact
`those requirements at Common Criteria levels EAL4
`and above.2 This paper presents no major new ideas or
`innovations. The goal is to assist developers of
`hypervisors to decide which of these ideas and features
`are important to make multi-level security useful to the
`
`
`1 The end-user requirements are derived from the author’s personal
`experience designing, deploying, and supporting a variety of MLS
`systems within the DoD and in designing high security hypervisors.
`2 Trusted Information Systems, Inc. developed a proposed
`interpretation of the Orange Book for Virtual Machine Monitors [10]
`that attempted to clarify some of these issues, but it did not address
`the networking issues on which this paper particularly focuses.
`
`end-users. A hypervisor with fewer features is less
`expensive to build and is easier to evaluate under the
`Common Criteria. However, if the hypervisor is too
`restrictive, then the customers will be unable to
`implement the MLS applications that they want to run.
`This paper identifies a set of features that are needed to
`make the hypervisor useful, yet are still simple enough
`to assure its security.
`
` 2
`
` End-User Expectations
`
`
`2.1 What does Multi-Level Secure Mean?
`
`
`MLS systems can mean many things to many
`people.
` What this paper will describe are the
`requirements and implications of a multi-level secure
`mode of operation as was defined many years ago in
`DoD Directive 5200.28 [11] and in the implementing
`manual [13].3 A system that runs in multi-level secure
`mode has information at a variety of classification
`levels4, but not all users are cleared for all information.
`By contrast, most classified systems in the DoD today
`run in system-high mode and have information at a
`variety of classification levels, but all users are cleared
`for the most sensitive information in the system. The
`system may be a single machine or an entire network.
For example, the DoD’s SIPR network stores information marked from Unclassified through Secret, but all users are required to have at least a Secret clearance. There is also a controlled mode of operation in which all users are cleared to some level, but not necessarily the highest level of information. The first successfully deployed controlled mode system was the Multics system at the Air Force Data Services Center in the Pentagon, which processed Top Secret information but allowed users who were only cleared for Secret. Under the old Orange Book evaluation system [5] and as recommended in the Yellow Books [4, 12], system-high systems were typically evaluated B1 or below. Controlled mode systems were typically evaluated at B2, and true multi-level mode required B3 or higher. Translating to the Common Criteria [7-9], B1 and below are roughly EAL4 and below, B2 is roughly EAL5, and B3 and higher are roughly EAL6 and higher.

3 A more modern version of these definitions can be found in [6].
4 This paper will speak of security levels in most cases to make the language simpler. However, using only hierarchic security levels is an over-simplification of the model. The DoD security model is actually a lattice-structure with both levels and categories. A point in the lattice is usually called an access class, and any pair of access classes may be comparable (<, =, or >) or they may be disjoint and totally incomparable. See [17] for details.
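The lattice model described in footnote 4 can be made concrete with a small sketch. This is a hypothetical illustration, not code from any of the cited systems; the level and category names are invented:

```python
# Sketch of the access-class lattice from footnote 4: an access class is a
# hierarchic level plus a set of non-hierarchic categories. Class A dominates
# class B only if A's level is at least B's AND A's categories contain B's.
from dataclasses import dataclass

LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP SECRET": 3}

@dataclass(frozen=True)
class AccessClass:
    level: str
    categories: frozenset = frozenset()

    def dominates(self, other):
        """True if self >= other in the lattice."""
        return (LEVELS[self.level] >= LEVELS[other.level]
                and self.categories >= other.categories)

ts_nato = AccessClass("TOP SECRET", frozenset({"NATO"}))
s_plain = AccessClass("SECRET")
s_crypto = AccessClass("SECRET", frozenset({"CRYPTO"}))

assert ts_nato.dominates(s_plain)        # comparable: TS/{NATO} > S/{}
assert not ts_nato.dominates(s_crypto)   # disjoint categories...
assert not s_crypto.dominates(ts_nato)   # ...so incomparable both ways
```

Note how the two SECRET-level classes with disjoint categories are incomparable in both directions, exactly the case the footnote describes.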
`
`
This paper has focused on the use of MLS for defense applications, but it is by no means limited
`to defense applications. MLS can be extremely useful
`in commercial applications. An example use of MLS
`in a frequent-flyer smart card application is shown in
`[28, 29]. IBM has developed a new extended
`mandatory access control model, designed to provide
`multi-organizational MLS in a meaningful way to the
`entire Internet. This is described in [24] and in section
`3 of [46]. The development of multi-organizational
`MLS for commercial use also has payoffs for the
`military. Traditional military MLS models have been
single-organization models. Everyone in the Department of Defense follows the same security
`rules. However, this traditional single-organization
`model has problems when multi-national coalition
`forces must work together. Each country’s military
`has its own security policies, and those policies do not
`easily map into a single policy. By contrast, IBM’s
`multi-organizational MLS, designed to handle many
`different businesses on a single world-wide Internet, is
`much better suited to modeling the many different
`security policies of multi-national coalition forces.
`
`2.2 What do Users Want to do with MLS
`Systems?
`
`
`
`The most basic requirement is that the MLS system
`keeps highly classified information from leaking to
`people who are not properly cleared. This requirement
`is met by a system that implements the Bell and
LaPadula security model [17]. However, this
`requirement can also be met by simply keeping data of
`different classifications on different computer systems
`and restricting access to those systems by clearance
`levels. Most systems in the DoD do exactly that and
`run in a system-high mode.
`
`The biggest problem with system-high mode is that
`sharing information across security levels is very hard.
`
`Users at high levels of security want to be able to read
`low-level information, even though they do not want to
`contaminate that low-level information with high-level
`secrets. Keeping multiple copies of the low level
`information on different machines running at different
`system-high levels is not acceptable. First, you need to
`have significantly larger amounts of storage in such a
`case, and keeping the data synchronized can be very
`difficult. If you update the low-level data on a low-
`level machine, that update must be replicated onto all
`the other copies. Such replication is particularly
`difficult, because machines running at different
`system-high levels must NOT be networked together.
`The DoD frequently has to resort to sneakernet to
`apply these types of updates.
`
`Users also want to downgrade information from
`higher security levels to lower security levels. The
`simplest form of this is the statutory downgrading
`required after the passage of specific numbers of years.
`Since statutory downgrading only happens after
`multiple decades have passed, there is little need to
`make it happen in real time, although there is a need
`for efficiently downgrading large numbers of files
`from archival storage.
`
`However, there is another form of downgrading that
`does need to be done quickly and in real time. A user
`at a high security level may determine that a particular
`piece of information needs to be made available to
`someone at a lower security clearance. For example,
`an intelligence analyst may determine from a spy’s
`report that the enemy is going to attack at dawn. The
defenders need to know about the upcoming attack, but those defenders should not know who the spy is. The analyst must sanitize the information,
`removing any indicator of who the spy is, but leaving
`the information that the enemy will attack at dawn.5
The analyst needs to be able to isolate the information to be downgraded, ensure that the particular information cannot be modified until the downgrade operation has completed, and then release that information to the recipient on a timely basis.
`
`
`3
`
`Implications of the Bell and LaPadula
`Security Model
`
`
`
`The Bell and LaPadula security model [17] imposes
`a number of constraints on possible implementations of
`MLS systems. In particular, Bell and LaPadula require
`
`5 Sanitization without leaving indicators is often very tricky, but for
`this paper, we assume that the analyst can easily determine which
`information is safe to downgrade.
`
`Microsoft Ex. 1015, p. 4
`Microsoft v. Daedalus Blue
`IPR2021-00832
`
`
`
`needs to be a trusted intermediary that accept and error
`check all messages sent by the sender, even if the
`receiver is refusing all input. This means that the
`intermediary may need huge amounts of buffer
`memory to hold hours or days worth of traffic. In
`addition, many protocols that run on top of TCP need
`two-way communications. For example, the FTP
`protocol [40] cannot run over a one-way network. The
`above-mentioned products use their own proprietary
`protocol to transfer files from low to high.
`
`4 Hypervisor Implications
`
`There are two classes of hypervisors that must be
`considered when examining the technical implications
`of MLS for hypervisors. The two classes are pure
`isolation hypervisors and sharing hypervisors.
`
`4.1 Pure Isolation Hypervisors
`
` pure isolation hypervisor simply divides a
`machine into partitions, and permits no sharing of
`resources between the partitions (other than CPU time
`and primary memory). Implementing a pure isolation
`hypervisor is very easy, because the only security
`policy to be enforced is isolation. IBM’s EAL5-
`evaluated PR/SM system
`[2]
`for
`the z/Series
`mainframes is a good example of a pure isolation
`hypervisor. There is essentially no sharing between
`partitions in PR/SM. PR/SM does have features for
`certain very limited forms of sharing (such as channel
`to channel connections, etc.), but under the EAL5
`evaluation certificate, such sharing
`is absolutely
`forbidden. If a customer site turned on such sharing,
`they would no longer be running an evaluated
`configuration.
`
`The partitions of a pure isolation hypervisor are
`essentially
`just
`like a collection of system-high
`separate computers. Each partition has its own disks
`and network connections, and if one partition is
`unclassified and the other is secret, then there cannot
`even be a network connection between them.
`
` A
`
` A
`
` valid question is, “Who would want a pure
`isolation hypervisor? You can get the same results by
`running several separate machines.” In the case of a
`z/Series mainframe,
`there
`is a good
`reason.
`Mainframes are so expensive that the ability to
`partition one system into several isolated systems will
`save the customer lots of money, even if no sharing is
`permitted.
`
`
`that each process in a single system (or each system-
`high machine in a network) be identified at a particular
`security level. That process is allowed to read lower-
`classified information, but it is not allowed to write
`files that are marked at a lower classification level.
`This is to prevent Trojan horses from releasing
`arbitrary information. Note that this is a basic
`requirement of the model at evaluation levels EAL4
`and above. It is not to be confused with covert channel
`issues [33, 36] that only come into play at B2 or EAL5
`and above.
`
`The result of this no-write-down requirement is that
`network connections between system-high systems are
`only generally useful if the systems are at precisely the
`same system-high level. Most network protocols
`require two-way communications (if only for packet
`acknowledgements), and acknowledgements cannot be
`permitted from high to low. This requirement is made
`clear in the Trusted Network Interpretation (TNI) [14]
`of the Orange Book [5].6 It is possible to build truly
`one-way networks. Such networks were first proposed
`in chapter 7 of [25] and in [26]. Rushby and Randell
`[42] proposed a complete implementation of such a
`system, based on the Newcastle Connection, developed
`at University of Newcastle. There have been several
`in Australia7
`to
`commercial products evaluated
`implement one-way networks of one kind or another.
`These products from BAE Systems, Compucat, and
`Tenix Defence Systems all provide very limited
`communications capabilities.8
`
`
`Why are these one-way networks so limited? Most
`network protocols use two-way communications to
`implement both flow control and error control. If you
`cannot have two-way communications, then there
`
`6 The TNI [14] explicitly calls for strictly one-way networking at
`level B2 in section 3.2.1.3.4. However, in the B1 sections of the
`TNI, section 3.1.1.3.1 requires accurate labels on information
`transferred between network trusted computing base (NTCB)
`partitions, and section 3.1.1.4 requires that subjects and objects used
`for communication with other components are under control of the
`NTCS partition. The phrase “under control” is critical here, because
`the distinction between overt communications channels that must be
`secure at B1 and covert communications channels that need not be
`secure until B2 is whether or not they are “under control” of the
`TCB. Since the subjects and objects for communication are under
`control of the NTCB, the issues of one-way communications and
`packet acknowledgements are NOT covert channel issues. This is an
`inconsistency in the TNI and not an unexpected one. The TNI has
`been criticized in a number of ways for inconsistencies like this in
`[44].
`7http://www.dsd.gov.au/infosec/evaluation_services/epl/dap.html
`8 The BAE Systems product evaluation report [3] indicates that it
`may have covert channel issues that are discussed in classified
`supplementary reports. The covert channel situation seems better on
`the other two products.
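The two Bell and LaPadula rules discussed in this section can be sketched as a toy reference monitor. This is a hedged illustration using plain integer levels for brevity, not any real system's mediation code:

```python
# Toy Bell-LaPadula checks over plain hierarchic levels
# (0 = Unclassified ... 3 = Top Secret). Names are illustrative only.
UNCLASSIFIED, CONFIDENTIAL, SECRET, TOP_SECRET = range(4)

def can_read(subject_level, object_level):
    # Simple security property: no read up.
    return subject_level >= object_level

def can_write(subject_level, object_level):
    # *-property: no write down, so a Trojan horse running at SECRET
    # cannot leak data into an UNCLASSIFIED file.
    return object_level >= subject_level

assert can_read(SECRET, UNCLASSIFIED)       # read down: allowed
assert not can_read(SECRET, TOP_SECRET)     # read up: denied
assert not can_write(SECRET, UNCLASSIFIED)  # write down: denied
assert can_write(UNCLASSIFIED, SECRET)      # blind write up: allowed
```

The asymmetry between the two rules is exactly why packet acknowledgements from high to low are forbidden: an acknowledgement is a write down.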
`
`
`
`
However, for smaller systems, such as small departmental servers or desktop or workstation clients, the benefits of a pure isolation hypervisor are much harder to justify. Additional separate systems are sufficiently cheap with modern technology that the inherent performance costs and complexity of hypervisors become significant issues.

One could argue for a pure isolation desktop client as an alternative to the four or five clients that some DoD end-users have had to find room for in their offices. One such pure isolation desktop client is NetTop [39]9, developed by the NSA specifically to reduce the number of distinct client machines that an analyst must have on their desktop. NetTop allows separate partitions to connect to separate external networks, but allows no sharing of any kind between partitions. However, the end user of NetTop or other pure isolation hypervisors will almost immediately want to transfer information from one client to another, and they will get very frustrated when they can’t.

Thus, pure isolation hypervisors are really only useful for customers of the largest and most expensive servers. Using multiple separate machines makes more sense for smaller configurations.

9 The paper on NetTop [39] describes it as a high-assurance system. However, this description by the NSA is quite inaccurate. High assurance conventionally means evaluation at levels EAL6 or above. EAL4 and EAL5 can be considered medium assurance, although some would classify EAL4 as low assurance. Anything lower than EAL4 is certainly low assurance. NetTop is based on SE/Linux [37, 47], which has never been evaluated. Regular Linux kernels have been evaluated to EAL3 and are under evaluation at EAL4. Linux itself is so complex that it is likely to never reach high assurance without major reimplementation. SE/Linux, while it adds security features to Linux, also significantly increases the complexity of Linux [30], which makes it even less likely to ever achieve a high assurance evaluation. A quote from [44] well summarizes the need for genuine high assurance: “That which must be Trusted had best be Trustworthy.”

4.2 Sharing Hypervisors

Sharing hypervisors permit significant resource sharing between partitions. z/VM10 is the best example of a sharing hypervisor. Virtual machines under VM can share either virtual or physical disks, network connections, etc. In the early days, two virtual machines under VM communicated by connecting the output of the virtual card punch of one into the input of the virtual card reader of another. Secured versions of sharing hypervisors can support a variety of secure applications, including fast, easy low-to-high sharing, sophisticated downgrading, etc. The idea of a secure sharing hypervisor originated with Madnick and Donovan [38]. The best examples of such secure sharing hypervisors are KVM/370 [45] and Digital’s A1-secure VMM [31].

10 z/VM is the latest version of IBM’s primary virtual machine monitor product. IBM invented the concept of hypervisors or virtual machine monitors in the mid 1960s at the Cambridge Scientific Center. The first prototype system was CP/40 [15, 35] on a specially modified System 360/40. The first version available outside of IBM was CP-67/CMS [21] for the System 360/67. The first fully supported product was VM/370 [21]. A full history of VM has been written by Varian [48].
`
`
`The most critical feature of a secure sharing
`hypervisor is a secure shared file store11. The secure
`shared file store allows a high level partition to have
`read-only access to low-level data, while a low-level
`partition gets read-write access to the same data. This
`avoids the clumsy one-way networking approaches
`described in section 3. Only a single copy of the data
`is required and updates are visible immediately to all
`partitions.12
`
The secure shared file store is so important because it makes a variety of MLS applications possible. The most obvious is the secure read-down capability described in the previous paragraph. However, downgrading applications also need a secure shared file store. This is because the downgrading application needs to isolate the file to be downgraded, allow trusted programs and/or human beings to review what is to be downgraded, and only after all electronic or human approvals have been completed are the markings on the file changed. During that whole process, which could take minutes or even hours, the candidate file must not be modified in any way by anything other than totally trusted software. Once approved, the remarking must be an atomic operation – either it totally completes or it doesn’t happen at all. A secure shared file store makes this much easier to implement and to assure correctness.
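The isolate-review-remark sequence just described might be sketched as the following state machine. The class, method names, and reviewer roles are hypothetical, invented for illustration; a real trusted downgrader would live inside the TCB:

```python
# Hypothetical sketch of the downgrade workflow: the candidate file is
# frozen while under review, and the label change happens only as a
# single atomic step after every required approval is present.
class DowngradeCandidate:
    def __init__(self, data, label):
        self.data = data
        self.label = label
        self.frozen = False
        self.approvals = set()

    def isolate(self):
        self.frozen = True          # no further modification allowed

    def write(self, data):
        if self.frozen:
            raise PermissionError("candidate is under review")
        self.data = data

    def approve(self, reviewer):
        assert self.frozen, "must isolate before review"
        self.approvals.add(reviewer)

    def remark(self, new_label,
               required=frozenset({"analyst", "release_officer"})):
        # Atomic remarking: either all approvals are present and the
        # label changes, or nothing happens at all.
        if not required <= self.approvals:
            raise PermissionError("approvals incomplete")
        self.label = new_label
        self.frozen = False

f = DowngradeCandidate("the enemy will attack at dawn", "TOP SECRET")
f.isolate()
f.approve("analyst")
f.approve("release_officer")
f.remark("SECRET")
assert f.label == "SECRET"
```

The freeze on writes models the requirement that, during review, the candidate must not be modified by anything other than totally trusted software.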
`
5 Evolving from a Pure Isolation Hypervisor to a Sharing Hypervisor
`
`
`
An obvious question is what has to be done to evolve from a pure isolation hypervisor to a sharing hypervisor. The major constraints on this evolution are that the add-ons must be secure, but they also must perform very well. It is very easy for a hypervisor implementation to add huge amounts of overhead to a system, and adding such overhead needlessly could easily result in customers abandoning either security or hypervisors or both.

11 A secure shared file store is simply a secure file system available at the hypervisor level, rather than at the guest operating system level. A guest operating system might store its entire file system within a single file of the secure shared file store. The mini-disks of z/VM are a good example of a secure shared file store.
12 Properly synchronizing those updates, so that all partitions see consistent data, requires a mechanism such as version numbers [22] or event counts [41] to solve the secure readers-writers problem.
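The version-number mechanism cited in footnote 12 for the secure readers-writers problem can be sketched as follows. This is a hedged, single-threaded illustration of the idea only; [22] and [41] describe the real mechanisms, and the name VersionedCell is invented:

```python
# Sketch of version-number synchronization: the writer bumps a counter
# before and after each update (odd = write in progress), and a reader
# retries until it sees the same even version on both sides of its read.
# Crucially, the reader never signals the writer, so no high-to-low
# channel is needed -- which is what makes this attractive for MLS.
class VersionedCell:
    def __init__(self, value):
        self.version = 0            # even: stable, odd: being written
        self.value = value

    def write(self, value):
        self.version += 1           # now odd: readers will retry
        self.value = value
        self.version += 1           # even again: update visible

    def read(self):
        while True:
            v1 = self.version
            value = self.value
            v2 = self.version
            if v1 == v2 and v1 % 2 == 0:
                return value        # consistent snapshot

cell = VersionedCell("rev 1")
cell.write("rev 2")
assert cell.read() == "rev 2"
```

In a real hypervisor the reads and writes would come from different partitions, with the version counter kept in the shared file store's metadata.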
`
`5.1 One-Way Network Options
`
`
`As discussed in section 3, it is possible to
`implement semi-usable one-way networks, if you have
`a highly trusted intermediary. The intermediary must
`have a very large buffer store that is totally protected.
`The easiest way to construct such a buffer is in a
`secure shared file store. The reason that a huge buffer
`is needed is because there cannot be any flow control
from a high partition to a low partition. Even if the high partition cannot accept more packets, the intermediary must continue to accept traffic and store it until the high partition can again accept input. This is discussed in some detail in chapter 7 of [25]. You could give the intermediary access to an entire real disk drive as its buffer store, but based on the principle of least privilege, you would like a different intermediary for each pair of security levels. Managing those buffers in a secure file store will be much easier than with many separate real physical drives.
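The intermediary's behavior can be sketched as a store-and-forward relay that keeps accepting low-side traffic even while the high side refuses input. This is a hypothetical, in-memory illustration; as discussed above, a real intermediary would buffer into the secure file store, and the class name is invented:

```python
from collections import deque

# One-way relay: the low side can always enqueue, the high side drains
# whenever it is ready, and no acknowledgement ever flows high-to-low.
class OneWayRelay:
    def __init__(self):
        self.buffer = deque()       # stands in for the secure file store

    def accept_from_low(self, message):
        self.buffer.append(message) # never refused, never acknowledged

    def deliver_to_high(self, ready):
        delivered = []
        while ready and self.buffer:
            delivered.append(self.buffer.popleft())
        return delivered

relay = OneWayRelay()
relay.accept_from_low("pkt1")
relay.accept_from_low("pkt2")
assert relay.deliver_to_high(ready=False) == []  # high side busy: held
assert relay.deliver_to_high(ready=True) == ["pkt1", "pkt2"]
```

Because the low side gets no feedback, the buffer is unbounded in principle, which is exactly why the text calls for huge amounts of protected buffer storage.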
`
`
`Another way to implement a one-way network is
`with one of the Australian-evaluated one-way network
`products that were mentioned in section 3. The
`product from Tenix Defence Systems deserves careful
`examination, because it even supports a one-way cut
`and paste capability (although only for Windows
`operating systems). However, these products all use
`hardware add-ons that are external to the hypervisor
`and would therefore require a separate physical
`network card for each partition.
`
`5.2
`
`
`Secure Shared File Store Options
`
`A secure shared file store need not be a fully
`general-purpose file system. Most files in the file store
will be virtual disks (or mini-disks, as they are called in z/VM).
`A virtual disk contains an entire guest OS file system
`within it, so the virtual disk is likely to be large and is
`not likely to change its size very often. The secure
`shared file store can easily insist that such files always
`be contiguous and always be of fixed length. The
other use for the secure file store is to support
`downgrading operations. Files to be downgraded are
`likely to be much smaller than entire virtual disks, but
`
`once created, they never change at all, so again, fixed
`size contiguous files are acceptable. Since there may
`be many of these files and their size is likely to be
`small, using entire disk drives is not a viable option.
They also have relatively short lifetimes while the data is being reviewed, after which the data is likely to be moved into the virtual disks of the intended destination partition.
`
`The secure shared file store could be implemented
`in two ways – as a subsystem of the hypervisor or as a
`separate highly trusted partition. Either approach can
`be made to be secure, but the overriding consideration
`must be performance. I/O operations are often the
`Achilles heel for performance in a hypervisor. Many
`workloads will be extremely I/O intensive and some
will have downgrades occurring frequently. However this is implemented, minimizing the cost of context
`switching into and out of the secure shared file store
`will be crucial.
`
`If the secure file store is a subsystem of the
`hypervisor, the calls can be made as cross-ring calls
`that can often be implemented with little performance
`overhead and without having to flush caches and
`translation buffers. If the secure file store is in a
`separate partition, then the interface is essentially a
`message-passing interface. Lauer and Needham [34]
`have shown the duality of subroutine calls and
`message-passing calls, but in section 6.8 of [23],
`Karger argues that the calling interface will always (or
at least almost always) have better performance.
`This can be seen in the hypervisor context as follows.
`If the secure shared file system is a subsystem of the
`hypervisor itself, then a call to perform a read or a
`write consists of a cross-ring call into the kernel,
`followed by a cross-ring return. The calls and returns
`should require no flushing of translation buffers or
`caches. Between the call and the return, the file
`system must perform I/O and those operations may
`have context switches, depending on
`the driver
`architecture, of course.
`
`By contrast, if the secure shared file system is
`implemented in a separate partition, then a call to
`perform a read or a write consists of a cross-ring call
`into the kernel to initiate a message pass, followed by a
`full context switch to the file system partition and a
`cross-ring return out to the file system code itself. The
`file system code now performs I/O, just as the file
`system subsystem would with essentially the same
`performance overheads for communicating to and from
`drivers. However, when the I/O has completed, the
`file system code must now do a cross-ring call into the
`
`
`
`
kernel to initiate a message pass back to the caller, the kernel must do a context switch to the calling partition, followed by a cross-ring return to the actual calling code in the guest OS. The exact amount of overhead for the extra context switches to and from a different partition will depend on the precise implementation, of course, but even if the underlying CPU supports multiple address spaces in the TB simultaneously, the overhead will be significant, just from register saving, clearing, and restoring.13 In a CPU without multiple address space support (such as the VAX or current generation x86 processors), the context switching overhead could easily be doubled or more. In an I/O intensive workload, this type of overhead could be prohibitive. Real decisions must be made on real performance benchmarks, of course. It is always dangerous to assume how a hypervisor will perform just from theoretical analyses. Section 7.3 of [20] shows how initial performance assists on the VAX VMM proved to be not very useful and how only measurement of those initial designs led to the proper optimizations.

Even if the secure shared file store is implemented as part of the hypervisor, this does not mean that the code needs to be dispersed all over the hypervisor, leading to increased complexity. As required at the higher levels of assurance of the Common Criteria, the hypervisor should be implemented as a layered architecture, with the file store in separate layers from the other parts of the hypervisor. An example of such a layered design for a secure file store can be seen in [31].

However the secure shared file store is implemented, the most important point of this paper is that the complexity and cost of a one-way network solution is likely to be comparable to that of a secure shared file store, yet the file store approach makes implementing sophisticated multi-level applications much simpler.

13 See [27] for details on the costs of register saving, clearing, and restoring in these cases, together with possible optimization techniques.

6 I/O Memory Management Units

A variety of processors are starting to deploy I/O memory management units (MMUs) that can significantly help performance by allowing a hypervisor partition to directly control unshared I/O devices. In most computers, I/O devices reference main memory using absolute addresses. This means that I/O drivers must be completely trusted, because they can address any location of main memory. An I/O MMU would provide the same concepts of virtual address translation and protection to I/O drivers that MMUs provide to the CPU. With an I/O MMU, a device driver would be constrained to only using those memory locations allocated to it by the operating system, just like an application program is similarly constrained. With such hardware support, most I/O drivers could become ordinary unprivileged programs. (The exception would be I/O drivers for shared multi-user devices, such as disks or networks. Such I/O drivers must provide secure multiplexing of those shared devices.)

I/O MMUs are not a new idea. The use of an I/O MMU was first proposed in 1975 for the Multics Secure Front-End Processor (SFEP) [18], and the first practical implementation was for the Honeywell SCOMP processor [19], which received the first-ever A1 security evaluation in 1985. An example of a modern implementation of I/O MMUs can be found in section 2.19 of [1].

7 Conclusion

This paper has shown what some of the critical issues are in adding multi-level security (MLS) to a hypervisor. If the users are content with a pure isolation hypervisor that makes sharing between partitions extremely difficult, then an approach such as NetTop is viable. A pure isolation hypervisor, such as PR/SM, can be extremely cost effective at sharing very expensive server hardware, such as IBM’s z/Series mainframes. However, the paper has shown that many important applications cannot be easily implemented on a pure isolation hypervisor, and that implementing such applications creates the need for a secure file store, even at Common Criteria level EAL4. The paper contains theoretical analyses suggesting that implementing such a file store as a hypervisor subsystem will likely give much better performance, but tempers those recommendations with evidence that purely theoretical performance analysis of hypervisors can be misleading. Regardless of the implementation approach, the performance of such a secure file store will be extremely critical to the success of any hypervisor design.
`
`
`
`
`8. Information technology - Security techniques --
`Evaluation criteria for IT security -- Part 2: Security
`functional requirements, ISO/IEC 15408-2, 1999,
International Organization for Standardization.