`
`IN THE UNITED STATES DISTRICT COURT
`FOR THE WESTERN DISTRICT OF TEXAS
`WACO DIVISION
`
`DAEDALUS BLUE, LLC,
`
`Plaintiff,
`
Case No. 6:20-cv-1152
`
`v.
`
`JURY TRIAL DEMANDED
`
`MICROSOFT CORPORATION,
`
`Defendant.
`
`DAEDALUS BLUE, LLC’S COMPLAINT FOR PATENT INFRINGEMENT
`
`TO THE HONORABLE JUDGE OF SAID COURT:
`
Plaintiff Daedalus Blue, LLC, for its Complaint against Defendant Microsoft Corporation (“Microsoft”), hereby alleges as follows:
`
`INTRODUCTION
`
`The novel inventions disclosed in the Asserted Patents in this matter were
`
`invented by International Business Machines Corporation (“IBM”). IBM pioneered the field of
`
`shared resources and cloud computing. Every year, IBM spends billions of dollars on research
`
`and development to invent, market, and sell new technology, and IBM obtains patents on many
`
of the novel inventions that come out of that work, including the Asserted Patents. The five patents

asserted in this case are the result of the work of 14 different IBM researchers, spanning a

period of nearly a decade.
`
`Over the years, the inventions claimed in the Asserted Patents have been licensed
`
`to many companies, including Amazon Web Services and Oracle Corporation.
`
`1
`
`Microsoft Ex. 1010, p. 1
`Microsoft v. Daedalus Blue
`IPR2021-00832
`
`
`
`Case 6:20-cv-01152-ADA Document 1 Filed 12/16/20 Page 2 of 68
`
`THE PARTIES
`
`
`
`Daedalus Blue, LLC (“Daedalus”) is the current owner and assignee of the
`
`Asserted Patents.
`
`
`
`Plaintiff Daedalus is a Delaware limited liability company with its principal place
`
`of business located at 51 Pondfield Road, Suite 3, Bronxville, NY 10708.
`
`
`
Defendant Microsoft Corporation is a Washington corporation with a principal
`
`place of business at One Microsoft Way, Redmond, Washington. Microsoft Corporation also
`
`maintains corporate sales offices in this District, located at 10900 Stonelake Boulevard, Suite
`
225, Austin, Texas, and at Concord Park II, 401 East Sonterra Boulevard, Suite 300, San
`
`Antonio, Texas.
`
`
`
`Microsoft conducts business in Texas and in the Western District of Texas, as set
`
`forth below.
`
JURISDICTION AND VENUE
`
`
`
`This is an action arising under the patent laws of the United States, 35 U.S.C. §
`
`101, et seq. Accordingly, this Court has subject matter jurisdiction pursuant to 28 U.S.C. §§
`
`1331 and 1338(a).
`
`
`
`Defendant Microsoft is subject to this Court’s personal jurisdiction in accordance
`
`with due process and/or the Texas Long Arm Statute because, in part, Microsoft “[r]ecruits
`
`Texas residents, directly or through an intermediary located in this state, for employment inside
`
`or outside this state.” See Tex. Civ. Prac. & Rem. Code § 17.042.
`
`
`
This Court also has personal jurisdiction over Defendant Microsoft because it

committed and continues to commit acts of direct and/or indirect infringement in this judicial
`
`district in violation of at least 35 U.S.C. §§ 271(a) and (b). In particular, on information and
`
`
`belief, Microsoft has made, used, offered to sell and sold licenses for, or access to, the accused
`
products in this judicial district, and has induced others to use the accused products in this
`
`judicial district.
`
`
`
`Defendant Microsoft is subject to the Court’s personal jurisdiction, in part,
`
`because it regularly conducts and solicits business, or otherwise engages in other persistent
`
`courses of conduct in this district, and/or derives substantial revenue from the sale and
`
`distribution of infringing goods and services provided to individuals and businesses in this
`
`district.
`
`
`
`This Court has personal jurisdiction over Defendant Microsoft because, inter alia,
`
`Defendant (1) has substantial, continuous, and systematic contacts with this State and this
`
`judicial district; (2) owns, manages, and operates facilities in this State and this judicial district;
`
`(3) enjoys substantial income from its operations and sales in this State and this judicial district;
`
(4) employs Texas residents in this State and this judicial district; and (5) solicits business and

markets products, systems, and/or services in this State and judicial district including, without
`
`limitation, those related to the infringing accused products.
`
`
`
Venue is proper in this District pursuant to at least 28 U.S.C. § 1391(b)-(c) and

§ 1400(b), at least because Defendant Microsoft, either directly or through its agents, has

committed acts within this judicial district giving rise to this action, and continues to conduct

business in this district, and/or has committed acts of patent infringement within this District

giving rise to this action.
`
`
`FACTUAL ALLEGATIONS
`
`Daedalus Patents
`
`
`
`The IBM inventions contained in the Asserted Patents in this case relate to
`
`groundbreaking improvements to cloud infrastructure, cloud management, network security,
`
database management, data processing, and data management, and have particular application in

cloud-based computing environments, as will be further described below.
`
`U.S. Patent No. 7,177,886
`
`
`
`On February 13, 2007, the U.S. Patent and Trademark Office duly and lawfully
`
`issued United States Patent No. 7,177,886 (“the ’886 Patent”), entitled “Apparatus and Method
`
`for Coordinating Logical Data Replication with Highly Available Data Replication.” A true and
`
`correct copy of the ’886 Patent is attached hereto as Exhibit 1.
`
`
`
`Daedalus is the owner and assignee of all right, title, and interest in and to the
`
`’886 Patent, including the right to assert all causes of action arising under said patent and the
`
`right to any remedies for infringement of it.
`
`
`
The ’886 Patent describes, among other things, a novel apparatus configuration

that improves data storage techniques by providing faster, more reliable backup of data files

to remote servers, which guards against data loss and system failure. These inventive
`
`technological improvements solved then-existing problems in the field of data replication for
`
`databases. For example, as described in the ’886 Patent, relational database systems distribute
`
`data across a plurality of computers, servers, or other platforms. Distributed database systems
`
`typically include a central database and various remote servers that are synchronized with the
`
`central database. (Ex. 1 at 1:34-36). The central database server provides a repository for all
`
`database contents, and its contents are preferably highly robust against server failures. (Id. at
`
`
`1:47-49). Remote databases which store some or all information contained in the central
`
`database are typically maintained by synchronous or asynchronous data replication. In
`
`synchronous replication, a transaction updates data on each target remote database before
`
`completing the transaction.
`
`
`
`However, as described in the ’886 Patent, traditional synchronous replication
`
`methods introduce substantial delays into data processing, because the replication occurs as part
`
`of the user transaction. This increases the cost of the transaction, making it too expensive.
`
`Moreover, a problem at a single database can result in an overall system failure. Hence,
`
`synchronous replication is usually not preferred, except in transactions which require a very high
`
`degree of robustness against database failure. (Id. at 2:9-24).
`
`
`
`As also described in the ’886 Patent, known methods of asynchronous replication
`
`were preferred for most data distribution applications. In asynchronous replication, transaction
`
`logs of the various database servers are monitored for new transactions. When a new transaction
`
`is identified, a replicator rebuilds the transaction from the log record and distributes it to other
`
database instances, each of which applies and commits the transaction at that instance. Such
`
`replicators have a high degree of functionality, and readily support multiple targets, bi-
`
`directional transmission of replicated data, replication to dissimilar machine types, and the like.
`
`However, asynchronous replicators have a substantial latency between database updates,
`
`sometimes up to a few hours for full update propagation across the distributed database system,
`
`which can lead to database inconsistencies in the event of a failure of the central database server.
`
`Hence, asynchronous replicators are generally not considered to be fail-safe solutions for high
`
data availability. (Ex. 1 at 2:25-41).
`
`
`
`
`The ’886 Patent overcomes these drawbacks and improves the functioning of a
`
`computer network, including computer database replication by providing fail-safe data
`
`replication in a distributed database system. This invention provides for reliable fail-safe
`
`recovery and retains the high degree of functionality of asynchronous replication. (Ex. 1 at 2:42-
`
`46). The ’886 Patent describes that, in accordance with one aspect of the invention, a database
`
`apparatus includes a critical database server having a primary server supporting a primary
`
`database instance and a secondary server supporting a secondary database instance that mirrors
`
`the primary database instance. Fig. 1 of the patent shows an exemplary arrangement where,
`
`“[t]he central database server 12 includes a primary server 20 and a secondary server 22 that
`
`mirrors the primary server 20.” (Id. at 4:48-50).
`
`
`
`
`The secondary server generates an acknowledgment signal (34) indicating that a selected critical
`
`database transaction is mirrored at the secondary database instance. A plurality of other servers
`
`(14, 16, 18) each support a database. A data replicator communicates with the critical database
`
`server and one or more of the other servers to replicate the selected critical database transaction
`
`on at least one of said plurality of other servers responsive to the acknowledgment signal. (Id. at
`
`2:56-67). This configuration of primary and secondary database resources, along with remotely
`
`provisioned database backups, was a novel and unconventional system setup that facilitated the
`
`improved reliability and failure protection enabled by the claims.
`
`
`
` The novel features of the invention are recited in the claims. For example, Claim
`
`1 of the ’886 Patent recites:
`
`A database apparatus comprising:
`a critical database server including a primary server supporting a primary
`database instance and a secondary server supporting a secondary database
`instance that mirrors the primary database instance, the secondary server
`generating an acknowledgment signal indicating that a selected critical
`database transaction at the primary database instance is mirrored at the
`secondary database instance, the critical databases server including a
`mirroring component communicating with the primary and secondary
`servers to transfer database log file entries of the primary database instance
`to the secondary server, the secondary server applying and logging the
`transferred database log file entries to the secondary database instance and
`producing said acknowledgement signal subsequent to the applying and
`logging of the selected critical database transaction, wherein the mirroring
`component includes a control structure that indexes critical database
`transactions that are applied and logged at the secondary database instance,
`the acknowledgement signal corresponding to indexing in the control
`structure of at least one of the selected critical database transaction and a
`critical database transaction that commits after the selected critical database
`transaction;
`a plurality of other servers each supporting corresponding database
`instances; and
`a data replicator communicating with the critical database server and the
`plurality of other servers to replicate the selected critical database
`transaction on at least one of said plurality of other servers responsive to the
`acknowledgment signal.
`
`
`
(Ex. 1 at 10:57-11:22). Claim 1 of the ’886 Patent describes claim elements, individually or as

an ordered combination, that were non-routine and unconventional at the time of the invention in
`
`2003 and an improvement over prior art, as it provided a way (not previously available) to avoid
`
`data inconsistencies among remote servers in the event of a failure of the central database
`
`primary server; provide asynchronous replication functionality that is robust with respect to
`
`primary database failure; and provide for fail-safe recovery via a high availability replication
`
`system, while retaining the broad functionality of data distribution by asynchronous replication.
`
(Id. at 3:55-67). For example, in a distributed database system, it was unconventional for a
`
`secondary server to produce an acknowledgement for applying received logs to the secondary
`
`database and for a data replicator to wait to replicate critical database transactions in response to
`
`such acknowledgement.
`
`U.S. Patent No. 7,437,730
`
`
`
`On October 14, 2008, the U.S. Patent and Trademark Office duly and lawfully
`
`issued United States Patent No. 7,437,730 (“the ’730 Patent”), entitled “System and Method for
`
`Providing a Scalable On Demand Hosting System.” A true and correct copy of the ’730 Patent is
`
`attached hereto as Exhibit 2.
`
`
`
`Daedalus is the owner and assignee of all right, title, and interest in and to the
`
`’730 Patent, including the right to assert all causes of action arising under said patent and the
`
`right to any remedies for infringement of it.
`
`
`
`The ’730 Patent describes, among other things, novel systems, methods, and
`
`devices that optimize dynamic control over the fractions of workloads handled by virtual
`
`machines across multiple servers in a cloud environment. By recognizing when one of the
`
`servers is overloaded and automatically shifting work to another, not yet overloaded server, these
`
`
`inventive technological improvements solved then-existing problems in the field of virtual
`
`machine based hosting architectures, including improved server utilization in a virtual machine
`
`based hosting architecture. For example, prior to the invention of the ’730 Patent, one type of
`
`data center hosting was called dedicated hosting, in which servers would be statically assigned to
`
`customers/applications based on the peak load that each customer may receive. However, since
`
`the peak load is significantly higher than the average load, this would result in lower than
`
`average server utilization.
`
`
`
`To improve on server utilization, dedicated hosting solutions prior to the ’730
`
`Patent were modified to dynamically assign servers to each customer. Hosting solutions
`
`provided traffic measuring entities to determine the offered load for each customer or
`
`application, and based on that offered load, the traffic measuring entity determined the number of
`
`servers needed for each customer or application. Though an improvement over static
`
`assignment, the efficiency of this type of solution was still severely limited. For example, owing
`
`to the time needed to reassign servers, existing solutions used excessive time and resources.
`
`Then-existing systems failed to provide fine grain control and scalability in a virtual machine
`
`based hosting architecture. For example, existing systems did not dynamically adjust workload
`
`distribution among a set of virtual machines. Without such means of dynamic adjustment,
`
`existing systems failed to maintain an optimum utilization level across the set of servers. (See
`
`Ex. 2, 1:13-1:39).
`
`
`
`The ’730 Patent overcomes these drawbacks and improves server utilization in a
`
`virtual machine hosting architecture, for example, by describing novel and inventive systems in
`
`which finer grain control is achieved in optimizing workload distribution among multiple
`
`servers. The ’730 Patent uses resource management logic to optimally distribute server
`
`
`resources among virtual machines and servers, wherein the virtual machines at each of the set of
`
`servers can each serve a different workload depending on available resources. In one aspect, the
`
`resource management logic distributes server resources, such as percentage of CPU, percentage
`
`of network bandwidth, etc., according to the current and predicted resource needs of each of the
`
`multiple workloads handled by the group of virtual machines. Moreover, the system can
`
`dynamically adjust the fractions of each of the multiple workloads to provide for optimization of
`
`multiple workloads across multiple servers. For example, the system recognizes when one of the
`
`set of servers is overloaded and automatically shifts work to another of the set of servers which is
`
`not overloaded. In this way, the ’730 Patent can achieve finer grain control in optimizing
`
`workloads across servers.
`
`
`
An exemplary hosting architecture diagrammed in Fig. 1 of the ’730 Patent is

shown below, in which each of the servers (12, 14, 16) hosts multiple VMs, and there exists an
`
`exemplary global resource allocator (26) for allocating resources among the VMs along with
`
`resource control agents (44, 46, 48). Customer applications (18, 20, 22, 24) run on multiple VMs
`
across multiple servers. As depicted, a load balancer (50, 52, 54, 56) is attached to each

customer; however, the ’730 Patent also describes how a single load balancer could be used for
`
`multiple customers.
`
`
`
`
`(Ex. 2 at FIG. 1)
`
`
`
`The ’730 Patent overcomes the efficiency limitations of the prior art to optimize
`
`the distribution of workloads across multiple servers. In one embodiment, for example, the ’730
`
`Patent describes distributing workloads among virtual machines according to a server
`
`optimization device with several resource allocator components at each of the multiple server
`
`machines. These resource allocators are responsible for creating virtual machines and assigning
`
`virtual machines to workloads in response to instructions received from the global resource
`
allocator partitioning component. (Ex. 2 at 1:37-2:19). In one aspect, the ’730 Patent can include
`
`a global resource allocator to monitor distribution between the set of virtual machines and a load
`
`balancer to measure the current offered load, wherein the global resource allocator utilizes the
`
`measurements from one or more load balancers to determine how to distribute the resources
`
`among the virtual machines. (Id. at 2:9-15). Moreover, the global resource allocator partitioning
`
component of the ’730 Patent assigns resources at each of the server machines to the assigned
`
`virtual machines according to the identified resource requirements. (Id. at 2:25-32). The ’730
`
`
`Patent further enables finer grain control and workload optimization by reassigning the virtual
`
`machines according to changes in the identified resource requirements. (Id. at 2:20-36). The
`
`’730 Patent further optimizes workload across multiple servers by utilizing the global resource
`
`allocator partitioning component to issue redistribution instructions to all of the resource
`
`allocator components at each of the server machines. (Id. at 2:43-52). This is one example by
`
`which the ’730 Patent provides for optimizing workload across the servers to prevent the over-
`
`utilization or under-utilization of any of the server machines. (Id. at 2:48-56).
`
`
`
`The novel features of the invention are recited in the claims. For example, Claim
`
`1 of the ’730 Patent recites:
`
`
`
`
`
`A system to provide finer grain control in optimizing multiple workloads across
`multiple servers, comprising:
`
`a plurality of servers to be utilized by multiple workloads;
`
`a plurality of virtual machines at each of the plurality of servers, wherein the
`plurality of virtual machines at each of the plurality of servers each serve a
`different one of the multiple workloads; and
`
`resource management logic to distribute server resources to each of the plurality
`of virtual machines according to current and predicted resource needs of each of
`the multiple workloads utilizing the server resources,
`
`whereby, each of the multiple workloads are distributed across the plurality of
`servers, wherein fractions of each of the multiple workloads are handled by the
`plurality of virtual machines,
`
`whereby, the fractions of each of the multiple workloads handled by each of the
`virtual machines can by dynamically adjusted to provide for optimization of the
`server resources utilized by the multiple workloads across the multiple servers.
`
`
`(Ex. 2 at 8:2-21). Claim 1 of the ’730 Patent describes claim elements, individually or as an
`
`ordered combination, that were non-routine and unconventional at the time of the invention in
`
`2003 and an improvement over prior art, as it provided a way (not previously available) to
`
`achieve finer grain control in optimizing multiple workloads across multiple servers in a virtual
`
`
`machine based hosting architecture system; improve upon the inefficiencies among dedicated
`
`hosting solutions which dynamically assigned servers to each customer; and avoid the
`
`unnecessary use of time and resources required of prior solutions to reassign servers. (Ex. 2 at
`
`Abstract; 1:23-33, 37-65; 2:1-8).
`
`U.S. Patent No. 8,381,209
`
`
`
`On February 19, 2013, the U.S. Patent and Trademark Office duly and lawfully
`
`issued United States Patent No. 8,381,209 (“the ’209 Patent”), entitled “Moveable Access
`
`Control List (ACL) Mechanisms for Hypervisors and Virtual Machines and Virtual Port
`
`Firewalls.” A true and correct copy of the ’209 Patent is attached hereto as Exhibit 3.
`
`
`
`Daedalus is the owner and assignee of all right, title, and interest in and to the
`
`’209 Patent, including the right to assert all causes of action arising under said patent and the
`
`right to any remedies for infringement of it.
`
`
`
`The ’209 Patent describes, among other things, a novel system and method that
`
`improves the control of network security of a virtual machine (VM) during the migration of the
`
`VM to a new underlying hardware device by enforcing network security and routing at a
`
`hypervisor layer when migrating the virtual machine. A hypervisor (sometimes called a
`
`virtualization manager) is a program that allows multiple VMs to share hardware resources.
`
`Each operating system running on a VM appears to have the processor, memory, and other
`
`resources all to itself. However, the hypervisor actually controls the real processor and its
`
`resources, allocating what is needed to each operating system in turn. In order to perform
`
`maintenance on or provide a fail-over for a processor device or machine, it is desirable to move
`
`or migrate a virtual machine (VM) from one processor machine or device to another processor
`
`machine or device. (Ex. 3 at 2:27-31).
`
`
`
`
`The ’209 Patent describes inventive technological improvements that solved then-
`
existing problems in the field relating to VM migration. As described in the ’209 Patent, in the
`
`conventional methods and systems, it is difficult to move one virtual machine from one physical
`
`machine to another. Generally, in conventional systems, to move a virtual machine from one
`
`machine to another (e.g., from hardware 1 to hardware 2), the conventional methods and systems
`
`would merely shut down and copy from hardware 1 to hardware 2. The conventional systems
`
`and methods have difficulties with security and routing. (Id. at 5:30-37). For example, some
`
`conventional systems, before the date of the ’209 invention, did not have access control lists
`
`(ACLs) and provided very little security. (Id. at 2:24-59). In other conventional systems, an
`
`ACL would be installed on a real network switch (hardware) in order to restrict the access to the
`
`device. To migrate a virtual machine from one device to another device, a complex update
`
`scheme was required to update the ACLs in the real switches and the filters in the firewalls. (Id.
`
`at 3:6-9). Additionally, routing generally was provided by a mechanism known as an open
`
`shortest path first (OSPF) route.
`
`
`
`To solve the problems with the conventional systems and methods, the ’209
`
`Patent invention copies security and routing, etc. for the virtual machine to the hypervisor layer
`
`so that the user will see no difference in operation between running the virtual machine on
`
`hardware 1 or hardware 2. That is, according to the present invention, the first and second
`
devices (e.g., hardware 1 and hardware 2) would each act the same (and preferably, would each
`
`have the same internet protocol (IP) address). An important problem arises when networks are
`
very large, such as those of Google and Yahoo, in which there could be a thousand servers, and no flat
`
`topography, switches and routers to protect the servers. That is, in such systems, the virtual
`
`
`system is run on top of the hypervisor such that each virtual system is only as good as the
`
`security at each machine. (Id. at 5:38-53).
`
`
`
`The ’209 Patent overcomes these drawbacks and improves the functioning of a
`
`computer network by enforcing network security and routing at a hypervisor during migration.
`
`To migrate the virtual machine from a first hardware device to a second hardware device, the
`
`’209 invention routes network traffic for the virtual machine to the second hardware device at the
`
`hypervisor layer. The ’209 invention also may use firewalls to permit network traffic for the
`
`virtual machine to go to the second hardware device at the hypervisor layer. The hypervisor
`
`level provides traffic filtering and routing updating. Thus, the real switches do not need to be
`
`updated at the first and second hardware devices. (Ex. 3 at 5:38-62). The invention
`
`decentralizes the updating scheme by using the hypervisor layer for security and routing, thus
`
preferably only two software components would need to be updated, whereas the
`
`conventional systems and methods would require all systems to be updated (e.g., routers,
`
`firewalls, etc.). The ’209 invention also is more predictable than the conventional systems and
`
`methods. Thus, the ’209 invention has an important advantage over the conventional systems of
`
`pushing all security and intelligence to the hypervisor level, instead of the OS level. That way,
`
`under the protection of the hypervisor, the ’209 invention can provide traffic filtering and routing
`
`
`updating. (Id. at 6:3-15). An exemplary method is depicted in Figure 4 of the patent as follows:
`
`
`
`
`
`
`
`The novel features of the invention are recited in the claims. For example, Claim
`
`1 of the ’209 Patent recites:
`
`A computer implemented method of controlling network security
`of a virtual machine, the method comprising enforcing network
`security and routing at a hypervisor layer via dynamic updating of
routing controls initiated by a migration of said virtual machine
`from a first device to a second device.
`
`(Ex. 3 at 15:39-43). Claim 1 of the ’209 Patent describes claim elements, individually or as an
`
ordered combination, that were non-routine and unconventional at the time of the invention in
`
`2007 and an improvement over prior art, as it provided a way (not previously available) to
`
`control network security during VM migration. For example, during VM migration, it was
`
`unconventional to enforce network security and routing at a hypervisor layer via dynamic
`
`
`updating of routing controls initiated by the migration. The invention thus can provide a
`
`hypervisor security architecture designed and developed to provide a secure foundation for
`
server platforms, providing numerous beneficial functions, such as strong isolation, mediated
`
`sharing, and communication between virtual machines. These properties can all be strictly
`
`controlled by a flexible access control enforcement engine which can also enforce mandatory
`
`policies.
`
`
`
`U.S. Patent No. 8,572,612
`
`
`
`On October 29, 2013, the U.S. Patent and Trademark Office duly and lawfully
`
`issued United States Patent No. 8,572,612 (“the ’612 Patent”), entitled “Autonomic Scaling of
`
`Virtual Machines in a Cloud Computing Environment.” A true and correct copy of the ’612
`
`Patent is attached hereto as Exhibit 4.
`
`
`
`Daedalus is the owner and assignee of all right, title, and interest in and to the
`
`’612 Patent, including the right to assert all causes of action arising under said patent and the
`
`right to any remedies for infringement of it.
`
`
`
`The ’612 Patent describes, among other things, novel systems and methods that
`
`improve the data processing and the scaling of resources in a cloud computing environment by
`
efficiently utilizing virtual machines (VMs) that autonomically deploy and terminate based on
`
`workload. These inventive technological improvements solved then-existing problems in the
`
`field of cloud computing. As described in the ʼ612 Patent, cloud computing is a cost-effective
`
`means of delivering information technology services through a virtual platform rather than
`
`hosting and operating the resources locally. Virtual machines (VMs) may reside on a single
`
`powerful blade server, or a cloud system may utilize thousands of blade servers. (See Ex. 4 at
`
`1:27-36). A VM is composed of modules of automated computing machinery. (Id. at 1:56-58).
`
`
`The hypervisor (a separate module of automated computing machinery that interacts with the
`
host hardware) creates a particular instance of a VM. (Id. at 6:7-9; 6:25-33). One of the
`
`drawbacks of cloud computing systems before the ’612 Patent invention, was that the end user
`
`would lose control over the underlying hardware infrastructure, including control over scaling
`
`the number of virtual machines running an application. In such an environment, scaling of an
`
`application would be carried out manually by a system administrator, but only when end users
`
`would report performance degradation. This technique is slow and complex, and it inherently
`
`risks a user's experiencing a poor quality of service. (Id. at 1:37-50).
`
`
`
`The ’612 Patent overcomes these drawbacks and improves the functioning of a
`
`computer network, for example, by disclosing an improved way of scaling virtual machine
`
`instances using autonomic scaling to deploy additional VM instances,