Case 6:20-cv-00428-ADA Document 1 Filed 05/26/20 Page 1 of 64
IN THE UNITED STATES DISTRICT COURT
FOR THE WESTERN DISTRICT OF TEXAS
WACO DIVISION

DAEDALUS BLUE, LLC,

                    Plaintiff,

v.

ORACLE CORPORATION AND ORACLE
AMERICA, INC.,

                    Defendants.

Case No. ________________

JURY TRIAL DEMANDED

DAEDALUS BLUE, LLC’S COMPLAINT FOR PATENT INFRINGEMENT
TO THE HONORABLE JUDGE OF SAID COURT:

Plaintiff Daedalus Blue, LLC, for its Complaint against Defendants Oracle Corporation and Oracle America, Inc. (collectively, “Oracle”), hereby alleges as follows:

INTRODUCTION

1. The novel inventions disclosed in the Asserted Patents in this matter were invented by International Business Machines Corporation (“IBM”). IBM pioneered the field of shared resources and cloud computing. Every year, IBM spends billions of dollars on research and development to invent, market, and sell new technology, and IBM obtains patents on many of the novel inventions that come out of that work, including the Asserted Patents. The five patents asserted in this case are the result of the work of 15 different IBM researchers, spanning a period of nearly a decade.
2. Over the years, IBM has licensed its inventions—including those claimed in the Asserted Patents—to many companies, including Amazon Web Services.
THE PARTIES

3. Daedalus Blue, LLC (“Daedalus”) is the current owner and assignee of the Asserted Patents.

4. Plaintiff Daedalus is a Delaware limited liability company with its principal place of business located at 51 Pondfield Road, Suite 3, Bronxville, NY 10708.

5. Defendant Oracle Corporation is a Delaware corporation with a principal place of business at 500 Oracle Parkway, Redwood City, CA 94065. Oracle Corporation also maintains regional offices in this District, located at 2300 Oracle Way, Austin, Texas; at 5300 Riata Park Court, Building B, Austin, Texas; and at 613 NW Loop 410, San Antonio, Texas.

6. Defendant Oracle America, Inc. (“Oracle America”) is a Delaware corporation with a principal place of business at 500 Oracle Parkway, Redwood City, CA 94065. Oracle America also maintains a regional office in this District, located at 613 NW Loop 410, San Antonio, Texas.

7. Oracle Corporation and Oracle America conduct business in Texas and in the Western District of Texas, as set forth below.
JURISDICTION AND VENUE

8. This is an action arising under the patent laws of the United States, 35 U.S.C. § 101, et seq. Accordingly, this Court has subject matter jurisdiction pursuant to 28 U.S.C. §§ 1331 and 1338(a).

9. Defendants Oracle are subject to this Court’s personal jurisdiction in accordance with due process and/or the Texas Long Arm Statute because, in part, Oracle “[r]ecruits Texas residents, directly or through an intermediary located in this state, for employment inside or outside this state.” See Tex. Civ. Prac. & Rem. Code § 17.042.
10. This Court also has personal jurisdiction over Defendants Oracle because they committed and continue to commit acts of direct and/or indirect infringement in this judicial district in violation of at least 35 U.S.C. §§ 271(a) and (b). In particular, on information and belief, Defendants have made, used, offered to sell, and sold licenses for, or access to, the accused products in this judicial district, and have induced others to use the accused products in this judicial district.

11. Defendants Oracle are subject to the Court’s personal jurisdiction, in part, because they regularly conduct and solicit business, or otherwise engage in other persistent courses of conduct, in this district, and/or derive substantial revenue from the sale and distribution of infringing goods and services provided to individuals and businesses in this district.

12. This Court has personal jurisdiction over Defendants Oracle because, inter alia, Defendants (1) have substantial, continuous, and systematic contacts with this State and this judicial district; (2) own, manage, and operate facilities in this State and this judicial district; (3) enjoy substantial income from their operations and sales in this State and this judicial district; (4) employ Texas residents in this State and this judicial district; and (5) solicit business and market products, systems, and/or services in this State and judicial district including, without limitation, those related to the infringing accused products.

13. Venue is proper in this District pursuant to at least 28 U.S.C. § 1391(b)-(c) and § 1400(b), at least because Defendants Oracle, either directly or through their agents, have committed acts within this judicial district giving rise to this action, and continue to conduct business in this district, and/or have committed acts of patent infringement within this District giving rise to this action.
FACTUAL ALLEGATIONS

Daedalus Patents

14. The Asserted Patents in this case relate to groundbreaking improvements to computer network functionality and computer security. The techniques described in the Asserted Patents relate to computer networks and have particular application in cloud-based computing environments, as will be further described below.

15. On July 19, 2005, the U.S. Patent and Trademark Office duly and lawfully issued United States Patent No. 6,920,494 (“the ’494 Patent”), entitled “Storage Area Network Methods and Apparatus with Virtual SAN Recognition.” A true and correct copy of the ’494 Patent is attached hereto as Exhibit 1.

16. Daedalus is the owner and assignee of all right, title, and interest in and to the ’494 Patent, including the right to assert all causes of action arising under said patent and the right to any remedies for infringement of it.

17. The ’494 Patent describes, among other things, novel systems and methods that improve the monitoring and discovery of network components and their topology, thereby allowing users to more efficiently monitor the network and its components. These inventive technological improvements solved then-existing problems in the field of storage area networks (SANs) and methods of operating SANs. For example, as described in the ’494 Patent, with the rise of the personal computer and workstations in the 1980s, demand by business users led to the development of interconnection mechanisms that permitted otherwise independent computers to access data on another computer’s storage devices. A prevalent business network that emerged, and remains prevalent, is the local area network, typically comprising “client” computers (e.g., individual PCs or workstations) connected by a network to a “server” computer. In a storage area network, many storage devices are often placed on a network or switching fabric that can be accessed by several servers (such as file servers and web servers) which, in turn, service respective groups of clients. Sometimes even individual PCs or workstations are enabled for direct access to the storage devices. (See Ex. 1, at 1:24-54). The complexity engendered by having storage-area networks of shared-access storage components used by multiple servers spread across separate local-area networks created system management problems that were addressed by the invention of the ’494 Patent. (See, e.g., id. at 1:55-2:26).
18. Prior to the invention of the ’494 Patent, a drawback in storage area networks arose in managing the proliferation of hosts and storage devices. For example, a storage area network (SAN) has one or more host digital data processors which are coupled to one or more storage devices by an interconnect, for example, a fibre channel-based fabric. Hosts are typically web or file servers for client computers but may be any digital data device that accesses and/or stores information on the storage devices. In managing SAN connections, solutions existing before the ’494 invention focused on setting switches or switch-like interfaces on the network or interconnect fabric between the hosts and storage devices, electrically “blocking” certain hosts and certain storage devices. A problem with these solutions is that they permitted only zoning or switch-like control. Another problem is that, by their very nature, these solutions tended to be provider specific. (See Ex. 1, at 1:55-63).
19. The ’494 Patent overcomes these drawbacks and improves the functioning of a computer network by improving storage area networks (SANs) and methods for operating SANs. The invention of the ’494 Patent provides for provisioning and discovery of “virtual” connections and regions within a SAN that are not dependent on the limited zoning capabilities and connectivity of the storage fabric switch hardware. (See, e.g., Ex. 1, at 6:58-7:22, 44:25-45:25, Figs. 23, 24). An exemplary depiction of a virtual SAN that can be detected by host adapters and disambiguated by a SAN manager is shown in Fig. 23 of the patent:

[Figure 23 of the ’494 Patent]

This “virtual SAN” discovery and management provided a novel and unconventional solution over existing network management solutions at the time. For example, in one aspect of the invention, scanners are utilized for each virtualized region (e.g., defined independently of physical connectivity by the storage resources seen and accessible by a host or group of hosts) of a SAN to collect information regarding the components and their interconnectivity; such scanners are coupled to a manager that uses that information to determine the topology of the SAN. A scanner may run on an agent within a host. The exemplary arrangement is shown in Fig. 1 of the patent:

[Figure 1 of the ’494 Patent]
20. The patent describes, for example, that a scanner may be an executable that interacts with the hosts by performing system calls and I/O control calls to gather information. (Ex. 1, at 39:4-7). The manager may then disambiguate information from the regions and discern the topology of the portion of the SAN spanned by the regions. As illustrated in Fig. 25 of the patent, the SAN manager may store an internal model of the SAN topology, and such store may contain objects representing the components of the SAN (e.g., hosts, storage devices, interconnect), their attributes, and the interrelationships therebetween. The objects may be arranged hierarchically or otherwise. (Id., at 46:61-47:29). They may describe a “collection of ports that may constitute a virtual SAN.” (Id., at 45:25-46:60). For example:

[Figure 25 of the ’494 Patent]
21. Further, Figure 26 of the patent shows an exemplary hierarchical display that may be presented using the models depicted in Figure 25, and which may also identify information about the status of components or history data:

[Figure 26 of the ’494 Patent]
22. The novel features of the invention are recited in the claims. For example, Claim 1 of the ’494 Patent recites:

    A storage area network (SAN) comprising
    one or more regions forming at least a portion of the SAN, each region having one or more components, the components including one or more digital data processors and one or more storage devices;
    one or more scanners that collect, for each region, information regarding the components and their interconnectivity;
    a manager, coupled to the one or more scanners, that responds to the collected information to determine a topology of a portion of the SAN spanned by the regions.

(Ex. 1, at 84:16-27). Claim 1 of the ’494 Patent describes claim elements, individually or as an ordered combination, that were non-routine and unconventional at the time of the invention in 2001 and were an improvement over the prior art. For example, it provided a way (not previously available) to collect information about network components and their interconnectivity in order to monitor and discover network components and their topology. For example, Claim 1 discloses the unconventional step that one or more scanners are maintained to collect information on components for different regions of the SAN, and that the manager is coupled to the scanners for the different regions and responds to the collected information to determine a portion of the SAN topology that spans the regions. As described above and in the disclosures of the ’494 Patent, the claimed regions may be inventively and unconventionally virtualized from the storage network fabric hardware, and constitute “virtual SANs.” This virtualization functionally improved the discovery and management of storage network components in the complex environment of new cloud-based and networked computing systems.
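The scanner/manager arrangement recited in Claim 1 can be illustrated with a short sketch. Everything below is the author's illustration, not code from the ’494 Patent: the class names, the report format, and the merge logic are hypothetical assumptions chosen only to mirror the claim language (one scanner per region; a manager that responds to the collected information to determine a spanning topology).

```python
from collections import defaultdict

class RegionScanner:
    """Collects component and interconnect data for one region (e.g., a virtual SAN)."""
    def __init__(self, region, components, links):
        self.region = region
        self.components = set(components)
        self.links = set(links)

    def scan(self):
        # In the patent's terms, a scanner gathers this via system and
        # I/O control calls on its host; here it is supplied directly.
        return {"region": self.region,
                "components": self.components,
                "links": self.links}

class SanManager:
    """Responds to the collected information to determine the spanned topology."""
    def __init__(self, scanners):
        self.scanners = scanners

    def topology(self):
        adjacency = defaultdict(set)   # component -> directly connected components
        regions_of = defaultdict(set)  # component -> regions that can see it
        for scanner in self.scanners:
            report = scanner.scan()
            for a, b in report["links"]:
                adjacency[a].add(b)
                adjacency[b].add(a)
            for component in report["components"]:
                regions_of[component].add(report["region"])
        return dict(adjacency), dict(regions_of)

# Usage: two virtual-SAN regions that share one storage device.
s1 = RegionScanner("vsan-A", ["host1", "disk1"], [("host1", "disk1")])
s2 = RegionScanner("vsan-B", ["host2", "disk1"], [("host2", "disk1")])
adjacency, regions_of = SanManager([s1, s2]).topology()
```

A component reported by more than one scanner (here, the shared storage device) is what lets the manager stitch the per-region reports into a single topology spanning the regions.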
23. On February 13, 2007, the U.S. Patent and Trademark Office duly and lawfully issued United States Patent No. 7,177,886 (“the ’886 Patent”), entitled “Apparatus and Method for Coordinating Logical Data Replication with Highly Available Data Replication.” A true and correct copy of the ’886 Patent is attached hereto as Exhibit 2.
24. Daedalus is the owner and assignee of all right, title, and interest in and to the ’886 Patent, including the right to assert all causes of action arising under said patent and the right to any remedies for infringement of it.
25. The ’886 Patent describes, among other things, a novel apparatus configuration that improves data storage techniques by providing for faster, more reliable backup of data files to remote servers, which ensures against data loss and system failure. These inventive technological improvements solved then-existing problems in the field of data replication for databases. For example, as described in the ’886 Patent, relational database systems distribute data across a plurality of computers, servers, or other platforms. Distributed database systems typically include a central database and various remote servers that are synchronized with the central database. (Ex. 2, at 1:34-36). The central database server provides a repository for all database contents, and its contents are preferably highly robust against server failures. (Id., at 1:47-49). Remote databases which store some or all information contained in the central database are typically maintained by synchronous or asynchronous data replication. In synchronous replication, a transaction updates data on each target remote database before completing the transaction.
26. However, as described in the ’886 Patent, traditional synchronous replication methods introduce substantial delays into data processing, because the replication occurs as part of the user transaction. This increases the cost of the transaction, making it too expensive. Moreover, a problem at a single database can result in an overall system failure. Hence, synchronous replication is usually not preferred, except in transactions which require a very high degree of robustness against database failure. (Id., at 2:9-24).
27. As also described in the ’886 Patent, known methods of asynchronous replication were preferred for most data distribution applications. In asynchronous replication, transaction logs of the various database servers are monitored for new transactions. When a new transaction is identified, a replicator rebuilds the transaction from the log record and distributes it to other database instances, each of which applies and commits the transaction at that instance. Such replicators have a high degree of functionality, and readily support multiple targets, bi-directional transmission of replicated data, replication to dissimilar machine types, and the like. However, asynchronous replicators have a substantial latency between database updates, sometimes up to a few hours for full update propagation across the distributed database system, which can lead to database inconsistencies in the event of a failure of the central database server. Hence, asynchronous replicators are generally not considered to be fail-safe solutions for high data availability. (Ex. 2, at 2:25-41).
28. The ’886 Patent overcomes these drawbacks and improves the functioning of a computer network, including computer database replication, by providing fail-safe data replication in a distributed database system. This invention provides for reliable fail-safe recovery and retains the high degree of functionality of asynchronous replication. (Ex. 2, at 2:42-46). The ’886 Patent describes that, in accordance with one aspect of the invention, a database apparatus includes a critical database server having a primary server supporting a primary database instance and a secondary server supporting a secondary database instance that mirrors the primary database instance. Fig. 1 of the patent shows an exemplary arrangement where “[t]he central database server 12 includes a primary server 20 and a secondary server 22 that mirrors the primary server 20.” (Id. at 4:48-50).

[Figure 1 of the ’886 Patent]

The secondary server generates an acknowledgment signal (34) indicating that a selected critical database transaction is mirrored at the secondary database instance. A plurality of other servers (14, 16, 18) each support a database. A data replicator communicates with the critical database server and one or more of the other servers to replicate the selected critical database transaction on at least one of said plurality of other servers responsive to the acknowledgment signal. (Id. at 2:56-67). This configuration of primary and secondary database resources, along with remotely provisioned database backups, was a novel and unconventional system setup that facilitated the improved reliability and failure protection enabled by the claims.
29. The novel features of the invention are recited in the claims. For example, Claim 1 of the ’886 Patent recites:

    A database apparatus comprising:
    a critical database server including a primary server supporting a primary database instance and a secondary server supporting a secondary database instance that mirrors the primary database instance, the secondary server generating an acknowledgment signal indicating that a selected critical database transaction at the primary database instance is mirrored at the secondary database instance, the critical database server including a mirroring component communicating with the primary and secondary servers to transfer database log file entries of the primary database instance to the secondary server, the secondary server applying and logging the transferred database log file entries to the secondary database instance and producing said acknowledgement signal subsequent to the applying and logging of the selected critical database transaction, wherein the mirroring component includes a control structure that indexes critical database transactions that are applied and logged at the secondary database instance, the acknowledgement signal corresponding to indexing in the control structure of at least one of the selected critical database transaction and a critical database transaction that commits after the selected critical database transaction;
    a plurality of other servers each supporting corresponding database instances; and
    a data replicator communicating with the critical database server and the plurality of other servers to replicate the selected critical database transaction on at least one of said plurality of other servers responsive to the acknowledgment signal.

(Ex. 2, at 10:57-11:22). Claim 1 of the ’886 Patent describes claim elements, individually or as an ordered combination, that were non-routine and unconventional at the time of the invention in 2003 and an improvement over the prior art, as it provided a way (not previously available) to avoid data inconsistencies among remote servers in the event of a failure of the central database primary server; provide asynchronous replication functionality that is robust with respect to primary database failure; and provide for fail-safe recovery via a high availability replication system, while retaining the broad functionality of data distribution by asynchronous replication. (Id., at 3:55-67). For example, in a distributed database system, it was unconventional for a secondary server to produce an acknowledgement for applying received logs to the secondary database and for a data replicator to wait to replicate critical database transactions in response to such acknowledgement.
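The acknowledgment-gated ordering described above (mirror at the secondary first, replicate to the other servers second) can be illustrated with a minimal sketch. The class names, the in-memory "log," and the set used as the control-structure index are the author's hypothetical simplifications, not an implementation from the ’886 Patent.

```python
class SecondaryServer:
    """Mirrors the primary by applying and logging transferred log entries."""
    def __init__(self):
        self.log = []
        self.mirrored = set()  # control-structure index of acknowledged txns

    def apply_log_entry(self, txn_id, entry):
        self.log.append((txn_id, entry))  # apply and log the entry
        self.mirrored.add(txn_id)         # the "acknowledgment signal"

class Replicator:
    """Replicates a critical transaction only after it is mirrored."""
    def __init__(self, secondary, other_servers):
        self.secondary = secondary
        self.other_servers = other_servers

    def replicate(self, txn_id, entry):
        if txn_id not in self.secondary.mirrored:
            return False  # acknowledgment not yet received; defer
        for server in self.other_servers:
            server.append((txn_id, entry))
        return True

# Usage: replication is deferred until the secondary acknowledges.
secondary = SecondaryServer()
other_servers = [[], []]
replicator = Replicator(secondary, other_servers)
deferred = replicator.replicate(1, "txn-1")    # not mirrored yet: deferred
secondary.apply_log_entry(1, "txn-1")          # secondary applies and logs
replicated = replicator.replicate(1, "txn-1")  # now safe to distribute
```

The point of the gate is the failure mode it prevents: a transaction is never visible on the other servers unless it already survives on the mirrored secondary.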
30. On October 29, 2013, the U.S. Patent and Trademark Office duly and lawfully issued United States Patent No. 8,572,612 (“the ’612 Patent”), entitled “Autonomic Scaling of Virtual Machines in a Cloud Computing Environment.” A true and correct copy of the ’612 Patent is attached hereto as Exhibit 3.

31. Daedalus is the owner and assignee of all right, title, and interest in and to the ’612 Patent, including the right to assert all causes of action arising under said patent and the right to any remedies for infringement of it.
32. The ’612 Patent describes, among other things, novel systems and methods that improve data processing and the scaling of resources in a cloud computing environment by efficiently utilizing virtual machines (VMs) that autonomically deploy and terminate based on workload. These inventive technological improvements solved then-existing problems in the field of cloud computing. As described in the ’612 Patent, cloud computing is a cost-effective means of delivering information technology services through a virtual platform rather than hosting and operating the resources locally. Virtual machines (VMs) may reside on a single powerful blade server, or a cloud system may utilize thousands of blade servers. (See Ex. 3, at 1:27-36). A VM is composed of modules of automated computing machinery. (Id., at 1:56-58). The hypervisor (a separate module of automated computing machinery that interacts with the host hardware) creates a particular instance of a VM. (Id., at 6:7-9; 6:25-33). One of the drawbacks of cloud computing systems before the ’612 Patent invention was that the end user would lose control over the underlying hardware infrastructure, including control over scaling the number of virtual machines running an application. In such an environment, scaling of an application would be carried out manually by a system administrator, but only when end users would report performance degradation. This technique is slow and complex, and it inherently risks a user’s experiencing a poor quality of service. (Id., at 1:37-50).
33. The ’612 Patent overcomes these drawbacks and improves the functioning of a computer network, for example, by disclosing an improved way of scaling virtual machine instances using autonomic scaling to deploy additional VM instances, terminate VM instances, and provide user control with little or no governance by hand.
34. In one aspect, the ’612 Patent invention describes autonomic scaling of virtual machines in a cloud computing environment. A self-service portal enables users themselves to set up VMs as they wish, according to the user’s specifications. The cloud operating system then deploys an instance of the now-specified VM in accordance with the received user specifications. The self-service portal passes the user specification to the deployment engine. The VM catalog contains VM templates, standard-form descriptions used by hypervisors to define and install VMs. The deployment engine fills in the selected template with the user specifications and passes the completed template to the data center administration server in the local data center. The data center administration server then calls a hypervisor on a cloud computer to install the instance of the VM specified by the selected, completed VM template. (See Ex. 3, at 5:17-36).
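The portal-to-hypervisor deployment flow just described can be sketched in code. All names below (the catalog entry, classes, and template fields) are hypothetical stand-ins for the claim's self-service portal, deployment engine, data center administration server, and hypervisor; this is an illustration of the described flow, not the patent's implementation.

```python
# Hypothetical one-entry VM catalog of template descriptions.
VM_CATALOG = {"small-linux": {"image": "linux", "cpus": None, "ram_gb": None}}

class Hypervisor:
    def __init__(self):
        self.instances = []

    def install(self, completed_template):
        # Install an instance of the VM defined by the completed template.
        self.instances.append(dict(completed_template))
        return len(self.instances) - 1  # instance id

class DataCenterAdminServer:
    def __init__(self, hypervisor):
        self.hypervisor = hypervisor

    def deploy(self, completed_template):
        # The administration server calls a hypervisor on a cloud computer.
        return self.hypervisor.install(completed_template)

class DeploymentEngine:
    def __init__(self, admin_server):
        self.admin_server = admin_server

    def deploy_from_spec(self, template_name, user_spec):
        # Fill in the selected catalog template with the user specifications.
        template = dict(VM_CATALOG[template_name])
        template.update(user_spec)
        return self.admin_server.deploy(template)

class SelfServicePortal:
    def __init__(self, engine):
        self.engine = engine

    def request_vm(self, template_name, **user_spec):
        # The portal passes the user specification to the deployment engine.
        return self.engine.deploy_from_spec(template_name, user_spec)

# Usage: a user specifies a VM through the portal.
hypervisor = Hypervisor()
portal = SelfServicePortal(
    DeploymentEngine(DataCenterAdminServer(hypervisor)))
instance_id = portal.request_vm("small-linux", cpus=2, ram_gb=4)
```

Each hand-off in the sketch corresponds to one step recited in the claim: portal to engine, engine to administration server, administration server to hypervisor.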
35. The ’612 Patent further describes that the cloud computing environment includes a plurality of virtual machines (‘VMs’), and a cloud operating system and a data center administration server operably coupled to the VMs. The cloud operating system deploys an instance of a VM and flags the instance of the VM for autonomic scaling, including termination. The cloud operating system monitors one or more operating characteristics of the instance of the VM, deploys an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, and terminates operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value. (See Ex. 3, at 1:53-2:6). With autonomic scaling, the environment gracefully handles varying workloads, either increasing or decreasing, and can adapt to varying workloads transparently, smoothly, and with a minimum of difficulty for the users of the data processing service provided by such a cloud computing environment. (See id., at 2:28-45).
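The two-threshold behavior described above reduces to a small control rule. The sketch below assumes CPU utilization as the monitored operating characteristic and uses made-up threshold values; both are the author's illustrative choices, not values from the ’612 Patent.

```python
UPPER_THRESHOLD = 0.80  # first predetermined threshold (assumed value)
LOWER_THRESHOLD = 0.30  # second predetermined threshold (assumed value)

def autoscale(instance_count, cpu_utilization):
    """Return the new instance count after one monitoring interval."""
    if cpu_utilization > UPPER_THRESHOLD:
        return instance_count + 1  # deploy an additional VM instance
    if cpu_utilization < LOWER_THRESHOLD and instance_count > 1:
        return instance_count - 1  # terminate an additional instance
    return instance_count          # within the band: no scaling action

# Usage: the count rises under load and falls back as load subsides.
count = 1
for load in (0.9, 0.9, 0.5, 0.2, 0.2):
    count = autoscale(count, load)
```

The band between the two thresholds is what gives the graceful adaptation the patent describes: small fluctuations in the monitored characteristic trigger no scaling action at all.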
36. Figure 3 of the ’612 Patent shows a flowchart illustrating example methods of autonomic scaling:

[Figure 3 of the ’612 Patent]
37. The novel features of the invention are recited in the claims. For example, Claim 1 of the ’612 Patent recites:

    A method of autonomic scaling of virtual machines in a cloud computing environment, the cloud computing environment comprising a plurality of virtual machines (‘VMs’), the VMs comprising modules of automated computing machinery installed upon cloud computers disposed within a data center, the cloud computing environment further comprising a cloud operating system and a data center administration server operably coupled to the VMs, the method comprising:
    deploying, by the cloud operating system, an instance of a VM, including flagging the instance of a VM for autonomic scaling including termination and executing a data processing workload on the instance of a VM;
    monitoring, by the cloud operating system, one or more operating characteristics of the instance of the VM;
    deploying, by the cloud operating system, an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, including executing a portion of the data processing workload on the additional instance of the VM; and
    terminating operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value;
    wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self service portal and a deployment engine, and deploying an instance of a VM further comprises:
    passing by the self service portal user specifications for the instance of a VM to the deployment engine;
    implementing and passing to the data center administration server, by the deployment engine, a VM template with the user specifications; and
    calling, by the data center administration server, a hypervisor on a cloud computer to install the VM template as an instance of a VM on the cloud computer.

(Ex. 3, at 15:42-16:8). Claim 1 of the ’612 Patent describes claim elements, individually or as an ordered combination, that were non-routine and unconventional at the time of the invention in 2010 and an improvement over the prior art, as it provided a way (not previously available) to add or terminate virtual machines based on individualized thresholds, thereby efficiently utilizing resources and transparently adapting to workload. For example, as noted by the U.S. Patent and Trademark Office upon issuance, the known prior art failed to teach at least the combination of “deploying, by the cloud operating system, an additional instance of the VM if a value of an operating characteristic exceeds a first predetermined threshold value, including executing a portion of the data processing workload on the additional instance of the VM; and terminating operation of the additional instance of the VM if a value of an operating characteristic declines below a second predetermined threshold value, wherein the cloud operating system comprises a module of automated computing machinery, further comprising a self-service portal and a deployment engine, and deploying an instance of a VM further comprises: passing by the self-service portal user specifications for the instance of a VM to the deployment engine; implementing and passing to the data center administration server, by the deployment engine, a VM template with the user specifications; and calling, by the data center administration server, a hypervisor on a cloud computer to install the VM template as an instance of a VM on the cloud computer.” Accordingly, the use of user-defined template structures, incorporating user specifications, for both the allocation and deallocation of virtual machine resources was described and acknowledged to be a novel and unconventional solution at the time.
38. On March 11, 2014, the U.S. Patent and Trademark Office duly and lawfully issued United States Patent No. 8,671,132 (“the ’132 Patent”), entitled “System, Method, and Apparatus for Policy-Based Data Management.” A true and correct copy of the ’132 Patent is attached hereto as Exhibit 4.

39. Daedalus is the owner and assignee of all right, title, and interest in and to the ’132 Patent, including the right to assert all causes of action arising under said patent and the right to any remedies for infringement of it.
40. The ’132 Patent describes, among other things, novel systems and methods that improve data management by prioritizing file storage operations to allow remote clients (or end users) using different computing platforms to have more efficient and less expensive access. These inventive technological improvements solved then-existing problems in the field of data storage systems. For example, prior to the invention of the ’132 Patent, distributed storage systems’ ability to automatically allocate resources to prioritize operations was severely limited. For example, existing systems suffered from saturation when many users simultaneously store, retrieve, or move data on the distributed storage system. Another problem was the lack of a method for prioritizing operations, resulting in unnecessary delays in the performance of the more important operations. Additionally, existing distributed storage systems were not capable of storing data using prioritized operations within multiple platforms. Existing systems also did not permit a user to automatically select between multiple storage options when generating files or account for the different requirements placed on these files. Yet another problem was the great variation in the equipment available to store data, wherein some files are stored in a manner that provides insufficient performance, while others take up comparatively expensive storage capacity

This document is available on Docket Alarm but you must sign up to view it.


Or .

Accessing this document will incur an additional charge of $.

After purchase, you can access this document again without charge.

Accept $ Charge
throbber

Still Working On It

This document is taking longer than usual to download. This can happen if we need to contact the court directly to obtain the document and their servers are running slowly.

Give it another minute or two to complete, and then try the refresh button.

throbber

A few More Minutes ... Still Working

It can take up to 5 minutes for us to download a document if the court servers are running slowly.

Thank you for your continued patience.

This document could not be displayed.

We could not find this document within its docket. Please go back to the docket page and check the link. If that does not work, go back to the docket and refresh it to pull the newest information.

Your account does not support viewing this document.

You need a Paid Account to view this document. Click here to change your account type.

Your account does not support viewing this document.

Set your membership status to view this document.

With a Docket Alarm membership, you'll get a whole lot more, including:

  • Up-to-date information for this case.
  • Email alerts whenever there is an update.
  • Full text search for other cases.
  • Get email alerts whenever a new case matches your search.

Become a Member

One Moment Please

The filing “” is large (MB) and is being downloaded.

Please refresh this page in a few minutes to see if the filing has been downloaded. The filing will also be emailed to you when the download completes.

Your document is on its way!

If you do not receive the document in five minutes, contact support at support@docketalarm.com.

Sealed Document

We are unable to display this document, it may be under a court ordered seal.

If you have proper credentials to access the file, you may proceed directly to the court's system using your government issued username and password.


Access Government Site

We are redirecting you
to a mobile optimized page.





Document Unreadable or Corrupt

Refresh this Document
Go to the Docket

We are unable to display this document.

Refresh this Document
Go to the Docket