VMware: Understanding Full Virtualization, Paravirtualization, and Hardware Assist
Contents

Introduction
Overview of x86 Virtualization
CPU Virtualization
    The Challenges of x86 Hardware Virtualization
    Technique 1 – Full Virtualization using Binary Translation
    Technique 2 – OS Assisted Virtualization or Paravirtualization
    Technique 3 – Hardware Assisted Virtualization
Memory Virtualization
Device and I/O Virtualization
Summarizing the Current State of x86 Virtualization Techniques
    Full Virtualization with Binary Translation is the Most Established Technology Today
    Hardware Assist is the Future of Virtualization, but the Real Gains Have Yet to Arrive
    Xen’s CPU Paravirtualization Delivers Performance Benefits with Maintenance Costs
    VMware’s Transparent Paravirtualization Balances Performance Benefits with Maintenance Costs
    VMware is Fostering an Open Standards Approach to Virtualization
    VMware Leverages a Multi-Mode VMM Architecture for Performance and Flexibility
Conclusion
Next Steps

Introduction
In 1998, VMware figured out how to virtualize the x86 platform, once thought to be impossible, and created the market for x86 virtualization. The solution was a combination of binary translation and direct execution on the processor that allowed multiple guest OSes to run in full isolation on the same computer with readily affordable virtualization overhead.

The savings that tens of thousands of companies have generated from the deployment of this technology are further driving the rapid adoption of virtualized computing from the desktop to the data center. As new vendors enter the space and attempt to differentiate their products, many are creating confusion with their marketing claims and terminology. For example, while hardware assist is a valuable technique that will mature and expand the envelope of workloads that can be virtualized, paravirtualization is not an entirely new technology that offers an “order of magnitude” greater performance.

While this is a complex and rapidly evolving space, the technologies employed can be readily explained to help companies understand their options and choose a path forward. This white paper attempts to clarify the various techniques used to virtualize x86 hardware, the strengths and weaknesses of each, and VMware’s community approach to developing and employing the most effective of the emerging virtualization techniques. Figure 1 provides a summary timeline of x86 virtualization technologies, from VMware’s binary translation to the recent application of kernel paravirtualization and hardware-assisted virtualization.
Figure 1 – Summary timeline of x86 virtualization technologies
Overview of x86 Virtualization
The term virtualization broadly describes the separation of a service request from the underlying physical delivery of that service. With x86 computer virtualization, a virtualization layer is added between the hardware and operating system as seen in Figure 2. This virtualization layer allows multiple operating system instances to run concurrently within virtual machines on a single computer, dynamically partitioning and sharing the available physical resources such as CPU, storage, memory and I/O devices.

Figure 2 – x86 virtualization layer

As desktop and server processing capacity has consistently increased year after year, virtualization has proved to be a powerful technology to simplify software development and testing, to enable server consolidation, and to enhance data center agility and business continuity.

As it turns out, fully abstracting the operating system and applications from the hardware and encapsulating them into portable virtual machines has enabled virtual infrastructure features simply not possible with hardware alone. For example, servers can now run in extremely fault tolerant configurations on virtual infrastructure 24x7x365 with no downtime needed for backups or hardware maintenance. VMware has customers with production servers that have been running without downtime for over three years.

For industry standard x86 systems, virtualization approaches use either a hosted or a hypervisor architecture. A hosted architecture installs and runs the virtualization layer as an application on top of an operating system and supports the broadest range of hardware configurations. In contrast, a hypervisor (bare-metal) architecture installs the virtualization layer directly on a clean x86-based system. Since it has direct access to the hardware resources rather than going through an operating system, a hypervisor is more efficient than a hosted architecture and delivers greater scalability, robustness and performance. VMware Player, ACE, Workstation and Server employ a hosted architecture for flexibility, while ESX Server employs a hypervisor architecture on certified hardware for data center class performance.
To better understand the techniques employed for x86 virtualization, a brief background on the component parts is useful. The virtualization layer is the software responsible for hosting and managing all virtual machines on virtual machine monitors (VMMs). As depicted in Figure 3, the virtualization layer is a hypervisor running directly on the hardware.

Figure 3 – The hypervisor manages virtual machine monitors that host virtual machines
The functionality of the hypervisor varies greatly based on architecture and implementation. Each VMM running on the hypervisor implements the virtual machine hardware abstraction and is responsible for running a guest OS. Each VMM has to partition and share the CPU, memory and I/O devices to successfully virtualize the system.
CPU Virtualization

The Challenges of x86 Hardware Virtualization
x86 operating systems are designed to run directly on the bare-metal hardware, so they naturally assume they fully “own” the computer hardware. As shown in Figure 4, the x86 architecture offers four levels of privilege known as Ring 0, 1, 2 and 3 to operating systems and applications to manage access to the computer hardware. While user level applications typically run in Ring 3, the operating system needs to have direct access to the memory and hardware and must execute its privileged instructions in Ring 0. Virtualizing the x86 architecture requires placing a virtualization layer under the operating system (which expects to be in the most privileged Ring 0) to create and manage the virtual machines that deliver shared resources.

Figure 4 – x86 privilege level architecture without virtualization

Further complicating the situation, some sensitive instructions can’t effectively be virtualized as they have different semantics when they are not executed in Ring 0. The difficulty in trapping and translating these sensitive and privileged instruction requests at runtime was the challenge that originally made x86 architecture virtualization look impossible.
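A concrete example of such a sensitive but unprivileged instruction is popf, which loads the flags register: in Ring 0 it can enable or disable interrupts, but in a less privileged ring the CPU silently ignores the interrupt-flag update instead of trapping, so a classic trap-and-emulate VMM never gets a chance to intervene. The short user-space C program below (our own illustration, not from this paper; assumes x86-64 Linux and GCC inline assembly) makes this visible:

    #include <stdio.h>
    #include <stdint.h>

    int main(void) {
        uint64_t flags;

        /* Read RFLAGS and try to clear IF (bit 9), the interrupt-enable flag. */
        __asm__ volatile("pushfq; popq %0" : "=r"(flags));
        flags &= ~(1ULL << 9);

        /* popf is not privileged, so this does NOT fault in Ring 3... */
        __asm__ volatile("pushq %0; popfq" : : "r"(flags));

        /* ...but the IF update is silently dropped: this still prints 1. */
        __asm__ volatile("pushfq; popq %0" : "=r"(flags));
        printf("IF after attempting to clear it: %llu\n",
               (unsigned long long)((flags >> 9) & 1));
        return 0;
    }

Because the instruction neither works nor faults, the VMM cannot simply rely on the hardware to report it; it has to find such instructions by other means.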
VMware resolved the challenge in 1998, developing binary translation techniques that allow the VMM to run in Ring 0 for isolation and performance, while moving the operating system to a user level ring with greater privilege than applications in Ring 3 but less privilege than the virtual machine monitor in Ring 0. While VMware’s full virtualization approach using binary translation is the de facto standard today based on VMware’s 20,000 customer installed base and large partner ecosystem, the industry as a whole has not yet agreed on open standards to define and manage virtualization. Each company developing virtualization solutions is free to interpret the technical challenges and develop solutions with varying strengths and weaknesses.

As clarified below, three alternative techniques now exist for handling sensitive and privileged instructions to virtualize the CPU on the x86 architecture:

• Full virtualization using binary translation
• OS assisted virtualization or paravirtualization
• Hardware assisted virtualization (first generation)
Technique 1 – Full Virtualization using Binary Translation
VMware can virtualize any x86 operating system using a combination of binary translation and direct execution techniques. This approach, depicted in Figure 5, translates kernel code to replace nonvirtualizable instructions with new sequences of instructions that have the intended effect on the virtual hardware. Meanwhile, user level code is directly executed on the processor for high performance virtualization. The virtual machine monitor provides each virtual machine with all the services of the physical system, including a virtual BIOS, virtual devices and virtualized memory management.

Figure 5 – The binary translation approach to x86 virtualization

This combination of binary translation and direct execution provides full virtualization, as the guest OS is fully abstracted (completely decoupled) from the underlying hardware by the virtualization layer. The guest OS is not aware it is being virtualized and requires no modification. Full virtualization is the only option that requires no hardware assist or operating system assist to virtualize sensitive and privileged instructions. The hypervisor translates all operating system instructions on the fly and caches the results for future use, while user level instructions run unmodified at native speed.

Full virtualization offers the best isolation and security for virtual machines, and simplifies migration and portability as the same guest OS instance can run virtualized or on native hardware. VMware’s virtualization products and Microsoft Virtual Server are examples of full virtualization.
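To give a feel for the mechanism, the toy C program below sketches the translate-then-execute idea: a basic block of “guest” opcodes is rewritten once so that the one sensitive opcode becomes a call into the VMM, and the translated block then runs directly. The opcodes are invented stand-ins; a real translator rewrites x86 machine code, caches each translated block, and handles control flow.

    #include <stdio.h>

    /* Invented toy opcodes standing in for real x86 instructions. */
    enum { OP_ADD, OP_CLI /* sensitive: must not run directly */, OP_VMMCALL, OP_END };

    /* Translate one basic block, swapping the sensitive opcode for a call
       into the VMM. Done once per block, then cached in a real VMM. */
    static void translate(const int *in, int *out) {
        int i;
        for (i = 0; in[i] != OP_END; i++)
            out[i] = (in[i] == OP_CLI) ? OP_VMMCALL : in[i];
        out[i] = OP_END;
    }

    static void run(const int *code) {
        for (int i = 0; code[i] != OP_END; i++) {
            if (code[i] == OP_ADD)
                puts("direct execution: add");
            else if (code[i] == OP_VMMCALL)
                puts("VMM: emulate cli against the virtual CPU state");
        }
    }

    int main(void) {
        int guest[] = { OP_ADD, OP_CLI, OP_ADD, OP_END };
        int translated[8];
        translate(guest, translated);
        run(translated);
        return 0;
    }

The translation cost is paid once per basic block; afterwards the translated code runs at close to native speed, which is why the approach performs well on repeated kernel paths.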
Technique 2 – OS Assisted Virtualization or Paravirtualization
“Para-” is an English affix of Greek origin that means “beside,” “with,” or “alongside.” Given the meaning “alongside virtualization,” paravirtualization refers to communication between the guest OS and the hypervisor to improve performance and efficiency. Paravirtualization, as shown in Figure 6, involves modifying the OS kernel to replace non-virtualizable instructions with hypercalls that communicate directly with the virtualization layer hypervisor. The hypervisor also provides hypercall interfaces for other critical kernel operations such as memory management, interrupt handling and time keeping.

Figure 6 – The paravirtualization approach to x86 virtualization

Paravirtualization is different from full virtualization, where the unmodified OS does not know it is virtualized and sensitive OS calls are trapped using binary translation. The value proposition of paravirtualization is lower virtualization overhead, but the performance advantage of paravirtualization over full virtualization can vary greatly depending on the workload. As paravirtualization cannot support unmodified operating systems (e.g. Windows 2000/XP), its compatibility and portability are poor. Paravirtualization can also introduce significant support and maintainability issues in production environments as it requires deep OS kernel modifications. The open source Xen project is an example of paravirtualization that virtualizes the processor and memory using a modified Linux kernel and virtualizes the I/O using custom guest OS device drivers.

While it is very difficult to build the more sophisticated binary translation support necessary for full virtualization, modifying the guest OS to enable paravirtualization is relatively easy. VMware has used certain aspects of paravirtualization techniques across the VMware product line for years in the form of VMware Tools and optimized virtual device drivers. The VMware Tools service provides a backdoor to the VMM hypervisor, used for services such as time synchronization, logging and guest shutdown. Vmxnet is a paravirtualized I/O device driver that shares data structures with the hypervisor. It can take advantage of host device capabilities to offer improved throughput and reduced CPU utilization. It is important to note for clarity that the VMware Tools service and the vmxnet device driver are not CPU paravirtualization solutions. They are minimal, non-intrusive changes installed into the guest OS that do not require OS kernel modification.

Looking forward, VMware is helping develop paravirtualized versions of Linux to support proofs of concept and product development. Further information is provided later in this paper.
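The following sketch illustrates the kind of source-level change paravirtualization requires; the hypercall number and the hypercall() mechanism are invented for illustration (Xen, for instance, enters the hypervisor through a trap). The kernel’s interrupt-disable primitive no longer executes the privileged instruction but asks the hypervisor to update the virtual CPU state:

    #include <stdio.h>

    /* Invented hypercall number; real interfaces define many more. */
    enum { HCALL_SET_VIRTUAL_IF };

    /* Stand-in for the real trap into the hypervisor. */
    static long hypercall(int nr, long arg) {
        if (nr == HCALL_SET_VIRTUAL_IF)
            printf("hypervisor: virtual interrupt flag set to %ld\n", arg);
        return 0;
    }

    /* Native kernel:          executes the privileged cli in Ring 0.
       Paravirtualized kernel: the same primitive is rewritten in source,
       since cli cannot be virtualized directly. */
    static void local_irq_disable(void) {
        hypercall(HCALL_SET_VIRTUAL_IF, 0);
    }

    int main(void) {
        local_irq_disable();
        return 0;
    }

Because the change is made in the kernel source, it reaches only operating systems whose kernels can be modified, which is precisely the compatibility limitation described above.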
Technique 3 – Hardware Assisted Virtualization
Hardware vendors are rapidly embracing virtualization and developing new features to simplify virtualization techniques. First generation enhancements include Intel Virtualization Technology (VT-x) and AMD’s AMD-V, which both target privileged instructions with a new CPU execution mode feature that allows the VMM to run in a new root mode below Ring 0. As depicted in Figure 7, privileged and sensitive calls are set to automatically trap to the hypervisor, removing the need for either binary translation or paravirtualization. The guest state is stored in Virtual Machine Control Structures (VT-x) or Virtual Machine Control Blocks (AMD-V).

Figure 7 – The hardware assist approach to x86 virtualization

Processors with Intel VT and AMD-V became available in 2006, so only newer systems contain these hardware assist features.

Due to high hypervisor to guest transition overhead and a rigid programming model, VMware’s binary translation approach currently outperforms first generation hardware assist implementations in most circumstances. The rigid programming model in the first generation implementation leaves little room for software flexibility in managing either the frequency or the cost of hypervisor to guest transitions [1]. Because of this, VMware only takes advantage of these first generation hardware features in limited cases such as for 64-bit guest support on Intel processors.

[1] For more information, see “A Comparison of Software and Hardware Techniques for x86 Virtualization” by Adams and Agesen, presented at ASPLOS 2006.
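For a concrete, hedged view of the trap-to-root-mode model, the minimal Linux program below drives Intel VT-x or AMD-V through the kernel’s KVM interface (a later, separate project, not covered by this paper, but built on exactly these CPU features): it loads a three-byte real-mode guest, and each sensitive event, an I/O port write and a halt, forces a hardware VM exit back to the host. Error handling is omitted for brevity.

    #include <fcntl.h>
    #include <linux/kvm.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/mman.h>

    int main(void) {
        int kvm = open("/dev/kvm", O_RDWR);
        int vm  = ioctl(kvm, KVM_CREATE_VM, 0);

        /* Guest code: out 0x10, al ; hlt -- each causes a hardware VM exit. */
        uint8_t code[] = { 0xE6, 0x10, 0xF4 };
        void *mem = mmap(NULL, 0x1000, PROT_READ | PROT_WRITE,
                         MAP_SHARED | MAP_ANONYMOUS, -1, 0);
        memcpy(mem, code, sizeof code);
        struct kvm_userspace_memory_region region = {
            .guest_phys_addr = 0x1000,
            .memory_size     = 0x1000,
            .userspace_addr  = (uint64_t)(uintptr_t)mem,
        };
        ioctl(vm, KVM_SET_USER_MEMORY_REGION, &region);

        int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
        struct kvm_run *run = mmap(NULL, ioctl(kvm, KVM_GET_VCPU_MMAP_SIZE, NULL),
                                   PROT_READ | PROT_WRITE, MAP_SHARED, vcpu, 0);

        struct kvm_sregs sregs;
        ioctl(vcpu, KVM_GET_SREGS, &sregs);
        sregs.cs.base = 0;
        sregs.cs.selector = 0;                  /* flat real mode */
        ioctl(vcpu, KVM_SET_SREGS, &sregs);
        struct kvm_regs regs = { .rip = 0x1000, .rax = 42, .rflags = 2 };
        ioctl(vcpu, KVM_SET_REGS, &regs);

        for (;;) {
            ioctl(vcpu, KVM_RUN, NULL);         /* enter guest (non-root) mode */
            if (run->exit_reason == KVM_EXIT_IO) {
                printf("VM exit: guest wrote %u to port 0x%x\n",
                       *((uint8_t *)run + run->io.data_offset), run->io.port);
            } else if (run->exit_reason == KVM_EXIT_HLT) {
                puts("VM exit: guest halted");
                break;
            }
        }
        return 0;
    }

Every iteration of that loop is one hypervisor-to-guest round trip; the cost of these transitions is exactly the overhead the first generation hardware struggles with.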
Memory Virtualization
Beyond CPU virtualization, the next critical component is memory virtualization. This involves sharing the physical system memory and dynamically allocating it to virtual machines. Virtual machine memory virtualization is very similar to the virtual memory support provided by modern operating systems. Applications see a contiguous address space that is not necessarily tied to the underlying physical memory in the system. The operating system keeps mappings of virtual page numbers to physical page numbers stored in page tables. All modern x86 CPUs include a memory management unit (MMU) and a translation lookaside buffer (TLB) to optimize virtual memory performance.
To run multiple virtual machines on a single system, another level of memory virtualization is required. In other words, one has to virtualize the MMU to support the guest OS. The guest OS continues to control the mapping of virtual addresses to the guest memory physical addresses, but the guest OS cannot have direct access to the actual machine memory. The VMM is responsible for mapping guest physical memory to the actual machine memory, and it uses shadow page tables to accelerate the mappings. As depicted by the red line in Figure 8, the VMM uses TLB hardware to map the virtual memory directly to the machine memory to avoid the two levels of translation on every access. When the guest OS changes the virtual memory to physical memory mapping, the VMM updates the shadow page tables to enable a direct lookup. MMU virtualization creates some overhead for all virtualization approaches, but this is the area where second generation hardware assisted virtualization will offer efficiency gains.

Figure 8 – Memory virtualization
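As a minimal sketch of the bookkeeping involved (single-level, 16-entry “page tables” invented purely for illustration), the C fragment below composes the guest’s virtual-to-guest-physical mapping with the VMM’s guest-physical-to-machine mapping into the shadow table that the real MMU would walk:

    #include <stdio.h>

    #define PAGES 16

    static int guest_pt[PAGES];   /* guest virtual page  -> guest physical page */
    static int vmm_map[PAGES];    /* guest physical page -> machine page */
    static int shadow_pt[PAGES];  /* guest virtual page  -> machine page */

    /* Invoked when the VMM catches a guest page-table update, e.g. by
       write-protecting the guest's page-table pages. */
    static void update_shadow(int vpage) {
        shadow_pt[vpage] = vmm_map[guest_pt[vpage]];
    }

    int main(void) {
        for (int p = 0; p < PAGES; p++) {
            guest_pt[p] = (p + 1) % PAGES;   /* some mapping chosen by the guest */
            vmm_map[p]  = PAGES - 1 - p;     /* where the VMM placed each page */
            update_shadow(p);
        }
        /* The hardware walks only shadow_pt, so each guest access resolves in
           one step instead of two (the red line in Figure 8). */
        printf("guest virtual page 3 -> machine page %d\n", shadow_pt[3]);
        return 0;
    }

Keeping shadow_pt in sync on every guest page-table write is the overhead that nested/extended page tables are designed to remove.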
Device and I/O Virtualization
The final component required beyond CPU and memory virtualization is device and I/O virtualization. This involves managing the routing of I/O requests between virtual devices and the shared physical hardware.

Software based I/O virtualization and management, in contrast to a direct pass-through to the hardware, enables a rich set of features and simplified management. With networking, for example, virtual NICs and switches create virtual networks between virtual machines without the network traffic consuming bandwidth on the physical network; NIC teaming allows multiple physical NICs to appear as one and fail over transparently for virtual machines; and virtual machines can be seamlessly relocated to different systems using VMotion while keeping their existing MAC addresses. The key to effective I/O virtualization is to preserve these virtualization benefits while keeping the added CPU utilization to a minimum.

The hypervisor virtualizes the physical hardware and presents each virtual machine with a standardized set of virtual devices as seen in Figure 9. These virtual devices effectively emulate well-known hardware and translate the virtual machine requests to the system hardware. This standardization on consistent device drivers also helps with virtual machine standardization and portability across platforms, as all virtual machines are configured to run on the same virtual hardware regardless of the actual physical hardware in the system.

Figure 9 – Device and I/O virtualization
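A hedged sketch of this emulation path, with an invented port range and virtual NIC model, is shown below: the hypervisor traps the guest’s port I/O and routes it to the software device model, which can then apply features like virtual switching before any physical hardware is touched.

    #include <stdint.h>
    #include <stdio.h>

    /* Software model of a virtual NIC; its state is plain memory in the VMM. */
    struct vnic { uint32_t frames_sent; };

    static void vnic_port_write(struct vnic *n, uint16_t port, uint32_t val) {
        if (port == 0x10) {                     /* pretend transmit doorbell */
            n->frames_sent++;
            printf("vNIC: frame %u (descriptor 0x%x) queued on the virtual switch\n",
                   n->frames_sent, val);
        }
    }

    /* Called by the VMM whenever a guest OUT instruction is trapped. */
    static void handle_io_exit(struct vnic *n, uint16_t port, uint32_t val) {
        if (port >= 0x10 && port < 0x20)        /* range claimed by the vNIC */
            vnic_port_write(n, port, val);
    }

    int main(void) {
        struct vnic n = { 0 };
        handle_io_exit(&n, 0x10, 0xABCD);       /* as if the guest ran: out 0x10, eax */
        return 0;
    }

Because every guest sees the same modeled device, the same guest driver works on any host, which is the portability benefit described above.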
Summarizing the Current State of x86 Virtualization Techniques
VMware is currently utilizing all of these x86 virtualization techniques either in production or in the development lab to deliver the best balance between performance and functionality. With the overview of virtualization techniques covered, the summary comparison in Figure 10 is useful for understanding the high-level strengths and weaknesses of each technique.
Full Virtualization with Binary Translation
    Technique:                         Binary translation and direct execution
    Guest modification/compatibility:  Unmodified guest OS; excellent compatibility
    Performance:                       Good
    Used by:                           VMware, Microsoft, Parallels
    Guest OS hypervisor independent?   Yes

Hardware Assisted Virtualization
    Technique:                         Exit to root mode on privileged instructions
    Guest modification/compatibility:  Unmodified guest OS; excellent compatibility
    Performance:                       Fair; current performance lags binary translation virtualization on various workloads but will improve over time
    Used by:                           VMware, Microsoft, Parallels, Xen
    Guest OS hypervisor independent?   Yes

OS Assisted Virtualization / Paravirtualization
    Technique:                         Hypercalls
    Guest modification/compatibility:  Guest OS modified to issue hypercalls, so it cannot run on native hardware or other hypervisors; poor compatibility; not available on Windows OSes
    Performance:                       Better in certain cases
    Used by:                           VMware, Xen
    Guest OS hypervisor independent?   XenLinux runs only on the Xen hypervisor; VMI-Linux is hypervisor agnostic

Figure 10 – Summary comparison of x86 processor virtualization techniques
Full Virtualization with Binary Translation is the Most Established Technology Today
Full virtualization with binary translation is currently the most established and reliable virtualization technology available. VMware’s implementation also delivers the highest virtualization performance across commonly deployed Windows and Linux operating systems with the most robust feature set and greatest ease of management. With the exception of 64-bit guests on Intel CPUs, which cannot use binary translation, VMware can support both full virtualization and hardware assisted virtualization on production servers, with the choice depending on the relative performance.
Full virtualization with binary translation will continue to be a useful technique for years to come, as newer and faster hardware continues to advance binary translation performance for unmodified guest OSes. However, hardware assisted virtualization is where virtualization is going, with processor paravirtualization serving as a performance enhancing stopgap along the way.
Hardware Assist is the Future of Virtualization, but the Real Gains Have Yet to Arrive
Intel and AMD’s first generation hardware assist features, released in 2006, are the first step in removing the need for hypervisors to employ binary translation and OS-assisted processor paravirtualization. As Xen has demonstrated, these early features have made it much easier to build a simple hypervisor without having to address binary translation or paravirtualize the operating system. Xen is using these hardware assist features to virtualize Windows, but with significantly reduced efficiency versus either VMware’s binary translation or XenLinux’s paravirtualization. The challenge is that these first generation implementations employ a rigid programming model with high hypervisor to guest transition overhead, resulting in lower virtualization performance than VMware’s binary translation approach. VMware is working with partners Intel and AMD to enhance future hardware designs to better take advantage of software’s inherent flexibility in addressing virtualization challenges.

Second generation hardware assist technologies are in development that will have a greater impact on virtualization performance while reducing memory overhead. Both AMD and Intel have announced future development roadmaps, including hardware support for memory virtualization (AMD Nested Page Tables [NPT] and Intel Extended Page Tables [EPT]) as well as hardware support for device and I/O virtualization (Intel VT-d, AMD IOMMU).

Compute-intensive workloads already run well with binary translation of privileged instructions and direct execution of non-privileged instructions, but NPT/EPT will provide noticeable performance improvements for memory-remapping intensive workloads by removing the need for shadow page tables that consume system memory. The increased performance and reduced overhead expected in future CPUs will provide motivation to use hardware assist features much more broadly, but don’t expect revolutionary improvements: each year’s processor performance increases will likely have a greater impact on virtualization capacity and performance than future hardware assist optimizations.

Over time, expect to see hardware assisted virtualization offset paravirtualization’s near-term performance benefits related to processor and memory virtualization. As hardware assist is further developed for the CPU, memory and device I/O, the performance benefits of paravirtualization over hardware assisted direct execution will grow smaller. As hardware assist features develop and mature, hypervisors will commoditize as they increasingly leverage a common set of hardware assist features, but they will continue to compete on performance, manageability and features.
Xen’s CPU Paravirtualization Delivers Performance Benefits with Maintenance Costs
Xen likes to position paravirtualization as the second generation of virtualization, labeling VMware’s full virtualization technology as the first generation. The reality is that paravirtualization is an old and useful virtualization technique that does deliver performance benefits for some workloads, but typically with added maintenance costs. No one is disputing the value of installing
virtualization management applications and device drivers into guest operating systems to increase performance, but data centers must weigh the advantage of reduced virtualization overhead against the maintenance and support costs of running a modified guest OS kernel to enable processor paravirtualization. The performance advantage depends on the nature of the workload. Most user space workloads gain very little, and near native performance is not achieved for all workloads.

Xen’s first major challenge is that processor paravirtualization does not apply to unmodified guest OSes, so it is not an option for guests that can’t be modified (e.g. Windows) and guests where modifications are undesired (e.g. when supported versions of Linux are required). Most data centers are not willing to run production business applications on a third party paravirtualized Linux kernel running on an open source hypervisor. Additionally, the invasive kernel modifications tightly couple the guest OS to the hypervisor with data structure dependencies, preventing the modified guest OS from running on other hypervisors or native hardware.
Even though major Linux distributions are starting to bundle paravirtualization into the OS kernel, deploying servers using these paravirtualized operating systems can increase maintenance costs and reduce portability. Because many companies have found XenLinux’s paravirtualization approach unsuitable for enterprise use, several new virtualization vendors with open source Xen-derived offerings are abandoning Linux paravirtualization entirely. Virtual Iron, for example, has declared paravirtualization a “dead-end approach” [2] and is focusing entirely on full virtualization of unmodified guest operating systems using hardware assist.

This leads to Xen’s second competitive challenge. Xen 3.x only recently introduced support for hardware assisted full virtualization of unmodified guest OSes (e.g. Windows). As VMware’s binary translation is far more sophisticated and higher performing than Xen’s employment of first generation hardware assist, Xen vendors can’t compete with VMware’s overall performance, reliability, and ease of management. Those that haven’t given up on processor paravirtualization entirely often try to confuse the issue by implying that their Linux paravirtualization performance benefits carry over to unmodified guest OSes that use no processor paravirtualization and require hardware assist for virtualization.

To be clear, VMware does find processor paravirtualization to increase performance significantly on some workloads today, but the longer term performance delta once second generation hardware assist features are available is unclear. The performance difference may be reduced, eliminated, or expanded, as enhancements to the paravirtualization interface may create new opportunities. It’s an open question.
As VMware sees it, the major problem with processor paravirtualization is the need for guest OS modification, which makes the OS dependent on a specific hypervisor to run. The Xen interface, for example, implements deep paravirtualization with strong hypervisor dependency: the OS kernel is closely tied to structures in the hypervisor implementation. This creates an incompatibility, as the XenLinux kernel can’t run on native hardware or other hypervisors, doubling the number of kernel distributions that have to be maintained. Additionally, it is limited to newer, open source operating systems, as the intrusive changes to the guest OS kernel require OS vendor support. Finally, the strong hypervisor dependency impedes the independent evolution of the kernel.

[2] http://www.virtualiron.com/fusetalk/blog/blogpost.cfm?catid=3&threadid=10
VMware’s Transparent Paravirtualization Balances Performance Benefits with Maintenance Costs
While Xen has implemented paravirtualization that is deeply rooted and intrusive to the Linux kernel, VMware has been working on a standard interface to gain the performance advantages of OS assisted paravirtualization while mitigating the maintenance costs. In 2005, VMware proposed a transparent paravirtualization interface, the Virtual Machine Interface (VMI), as a standardized communication mechanism between the guest operating system and the hypervisor.

Figure 11 – Transparent paravirtualization with the Virtual Machine Interface (VMI)

As depicted in Figure 11, the Virtual Machine Interface is a layer between the hypervisor and the paravirtualized guest OS. Transparent paravirtualization is delivered as the same guest OS kernel can run either natively on the hardware or virtualized on any compatible hypervisor. It works by design, as all VMI calls have two implementations: inline native instructions for bare hardware, and indirect calls to a layer between the guest OS and hypervisor.
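The mechanism can be sketched with a small C analogue of such an interface (names invented; Linux’s later paravirt-ops layer took a broadly similar shape): each sensitive operation is reached through an indirection that is bound once at boot, either to the inline native implementation or to a hypervisor shim, so the same kernel binary runs in both worlds.

    #include <stdio.h>

    /* One entry per sensitive operation; real interfaces cover interrupts,
       MMU updates, timers, and so on. */
    struct vmi_ops { void (*irq_disable)(void); };

    static void native_irq_disable(void) { puts("native: cli"); }
    static void vmi_irq_disable(void)    { puts("shim: hypercall to clear virtual IF"); }

    static struct vmi_ops ops = { native_irq_disable };

    /* Bound once at boot, depending on whether a hypervisor is detected. */
    static void vmi_init(int running_on_hypervisor) {
        if (running_on_hypervisor)
            ops.irq_disable = vmi_irq_disable;
    }

    int main(void) {
        vmi_init(1);
        ops.irq_disable();   /* identical call site on bare metal and virtualized */
        return 0;
    }

Because the kernel depends only on the interface, not on any one hypervisor’s data structures, it avoids the doubled kernel maintenance burden described for XenLinux above.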
