EXHIBIT 2002
`
IN THE UNITED STATES PATENT AND TRADEMARK OFFICE

RELOADED GAMES, INC.
(Petitioner)

v.

PARALLEL NETWORKS LLC
(Patent Owner)

Case No. IPR2014-00136

Patent No. 7,188,145

Inventors:
Keith A. Lowery
Bryan S. Chin
David A. Consolver
Gregg A. DeMasters

Filed: January 12, 2001

For: Method and System for
Dynamic Distributed Data Caching

Mail Stop: PATENT BOARD
Patent Trial and Appeal Board
US Patent and Trademark Office
PO Box 1450
Alexandria, VA 22313-1450

DECLARATION OF DR. MITCHELL A. THORNTON

I, Mitchell A. Thornton, declare the following:

1. I submit this Declaration in connection with my role as a consulting expert in the matter of the inter partes review, Reloaded Games, Inc. v. Parallel Networks, LLC, IPR2014-00136 (the “IPR”), on behalf of Parallel Networks, LLC. I have personal knowledge of the facts set forth below.

INTRODUCTION AND BACKGROUND

2. I have been retained as an expert by McGuireWoods LLP (“MGW”) in the above-captioned matter to provide expert analysis, opinions, and testimony regarding U.S. Patent No. 7,188,145 (“the ’145 patent”).

3. I am being compensated for my work and travel time associated with this case at my ordinary rate of $350.00 per hour, plus reimbursement of reasonable direct expenses. I have no other interest in this action or the parties thereto, and my compensation does not depend on the outcome of this matter.

4. I was asked by MGW to review and analyze the ’145 patent, the ’145 prosecution history, the IPR pleadings (including the petition and response), and the prior art identified in the following sections of this declaration.

5. In forming the opinions set forth in this Declaration, I relied on the ’145 patent and its file history, U.S. Patent No. 6,341,311 issued to Smith (“Smith”), and U.S. Patent No. 6,256,747 issued to Shigekazu Inohara, et al. (“Inohara”).
`
`
`
`
`
`
6. In this Declaration, I have set forth my opinions concerning the teachings and disclosure of Inohara.

7. In forming my opinions, I also relied upon my knowledge as a person of ordinary skill in the art, obtained through more than 29 years of work in computer science and engineering.

QUALIFICATIONS

8. My curriculum vitae is attached as Appendix 1 to this Declaration.

9. I hold a Ph.D. degree from Southern Methodist University in Computer Engineering; a Master’s degree from Southern Methodist University in Computer Science; a Master’s degree in Electrical Engineering from the University of Texas at Arlington; and a Bachelor’s degree in Electrical Engineering from Oklahoma State University.

10. I am a Full Professor of Computer Science and Engineering, and also of Electrical Engineering, at Southern Methodist University. I additionally hold the positions of Technical Director of the Darwin Deason Institute for Cyber Security and Principal of the Hardware and Network Security Engineering Program within the Institute. I have been engaged in a variety of computer architecture and digital systems research projects funded by both governmental and industrial sponsors. Furthermore, I have served as an independent professional engineering consultant offering design and analysis services related to aspects of computer systems
`
`
`
`
`
`
architecture, digital systems design, wireless/wired networked systems and interfaces, and integrated circuit design. My independent consulting work began in 1993.
`
11. Before entering academia, I was employed full-time by E-Systems, Inc. (now L-3 Communications, Inc.) in Greenville, Texas, from 1986 through 1991. I resigned as a full-time employee, with the title Senior Electronic Systems Engineer, in 1991 to pursue full-time graduate study. At E-Systems, I was involved in the design, implementation, test, and evaluation of a number of both ground-based and airborne computer systems and customized peripherals.

12. During my Ph.D. studies, I was employed both part-time and full-time at various points in time with the Cyrix Corporation as a Design Engineer. In that capacity I was a member of the M1 microprocessor design group, where my duties included the design and test of various portions of a new microprocessor, including the bus controller, branch prediction circuitry, cache and memory interfaces, and others. The M1 microprocessor is compatible with the Intel Pentium microprocessor but is a completely new and independent design and architecture as compared to the Intel device.

13. I am a licensed professional engineer in Texas, Mississippi, and Arkansas. I am also the proprietor of a registered engineering firm in Texas and have worked as a professional engineer either independently or as an employee of
`
`
`
`
`
`
another firm for over 23 years.

14. I am a senior member of both the IEEE (Institute of Electrical and Electronics Engineers) and the ACM (Association for Computing Machinery). I regularly attend IEEE- and ACM-sponsored events and conferences, present papers at conferences, publish research results in archival journals, and serve on conference and symposium program committees.

15. In terms of service to my profession, I have been selected to serve as chair of the IEEE Computer Society Technical Committee on Multiple-Valued Logic from 2010 to 2011, chair of the IEEE-USA Committee on Licensure and Registration from 2009 to 2011, chair of the NCEES (National Council of Examiners for Engineering and Surveying) electrical and computer engineering licensure examination preparation committee from 2009 to 2011, and in numerous other positions as listed in my curriculum vitae in Exhibit 2.
`
DESCRIPTION OF TECHNOLOGY

16. It is helpful as background to briefly discuss the purpose of caching as it existed at the time of the filing of the ’145 patent. A cache is a dedicated allocation of memory whose purpose is to store data, computer instructions, or, in general, objects. Caches improve the performance of computer systems and networks due to the principle of locality. The two types of locality commonly exploited in caches are referred to as temporal and spatial locality. The principle
`
`
`
`
`
`
of temporal locality states that data or instructions that have been recently accessed by the computer system are likely to be accessed again. Spatial locality is the principle that data or instructions recently accessed by a computer indicate that other data or instructions “nearby” the accessed data are also likely to be accessed. The use of the term “nearby” in the previous sentence refers to the fact that data and instructions are located in a memory storage device that uses addresses or other related referencing schemes for access. Depending upon the storage technology, the address may be formulated using physical, virtual, track/sector, or other information. In this case, “nearby” refers to addresses or file locations that are close in value to those of the previously accessed data. “Nearby” can also refer to other data that is referenced or pointed to by the accessed data. For example, an accessed webpage may contain Uniform Resource Locators (URLs), or links, to other webpages. The linked webpages could be considered to be “nearby” and hence exhibit spatial locality.

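By way of illustration only, and not as an example drawn from the ’145 patent or the cited references, temporal locality is the principle exploited by the familiar least-recently-used (LRU) replacement policy: recently accessed objects are retained on the assumption that they will be accessed again. The class and variable names in this sketch are hypothetical:

```python
from collections import OrderedDict

class LRUCache:
    """Minimal LRU cache sketch: keeps the most recently accessed objects
    and evicts the least recently used one when capacity is exceeded."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.store = OrderedDict()  # ordered oldest -> most recently used

    def get(self, key):
        if key not in self.store:
            return None              # cache miss
        self.store.move_to_end(key)  # mark as most recently used
        return self.store[key]

    def put(self, key, value):
        if key in self.store:
            self.store.move_to_end(key)
        self.store[key] = value
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1)
cache.put("b", 2)
cache.get("a")     # "a" becomes the most recently used entry
cache.put("c", 3)  # evicts "b", the least recently used entry
```

Under temporal locality, the objects most likely to be requested again ("a" and "c" here) remain in the cache.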
17. Networks incur delay in the transmission of message packets from one computer system to another. Furthermore, large networks of computer systems are often configured in a hierarchical manner, with one subset of the computer systems networked with local intra-network connections (a LAN) that may define a group of computer systems connected to other subsets of computers, or groups, with an inter-network (a WAN) such as the Internet. Each group may include one or more
`
`
`
`
`
`
network servers whose function is to provide a gateway between the intra-network and the inter-network connecting other intra-networks. Network servers are specialized computer systems that generally contain dedicated hardware and software enabling them to receive and transmit packets among the typically high-bandwidth networks that connect intra-networks. Furthermore, network servers often contain or utilize a large number of intra-network interfaces that allow the computers within the intra-network to have direct connections to the network server. The specialized hardware and software utilized by network servers is often specified by the Internet Service Provider (ISP) or other inter-network service provider and is generally not found in other computer systems within the intra-network.

18. Intra-networks, or LANs, may contain one or more specialized caching servers whose purpose is to maintain caches of objects requested by computer systems in the intra-network. Caching servers are intended to increase performance, since accesses occurring within a LAN are generally more efficient than those over the WAN. Caching servers must thus contain specialized functionality, such as a cache memory and the policies and protocols that enable caching. Cache coherency is an issue: when multiple caches are used to form a single virtual cache, the system must ensure that current versions of objects are present in the caches of the servers within the LAN.
`
`
`
`
`
`
19. Although the inter-network (WAN) is usually a high-bandwidth connection, it also often exhibits significant delay as compared to traffic within the intra-network (LAN) due to long geographic propagation distances, the fact that many intermediate network servers are included in a packet’s path, and the fact that the interface or gateway to the WAN, and thus its bandwidth, is shared by the computer systems within the LAN. When a client process originating from a client within a first intra-network requests service from a server within a second intra-network, a delay penalty in accordance with the inter-network transaction occurs. Because this delay penalty is undesirable, caching can be employed to improve performance.

20. The use of caching to combat the delay penalty described in the previous paragraph can be implemented such that a request for content by a server within a first intra-network from a server located within a second intra-network causes the content to be stored in a cache more accessible to computer systems within the first intra-network. Such content is then available for subsequent inquiries predicted to occur because of the principles of temporal and spatial locality described above. The content can be stored at one or more servers in the first intra-network itself or in any of the inter-networks that connect the two intra-networks, each storage location that is closer to the requesting server reducing the delay penalty incurred by retrieving content all the way from the second intra-network.
`
`
`
`
`
`
21. Cooperative caching refers to the use of a plurality of caches as a single virtual cache. A cooperative cache can be formed using a group of caching modules at individual computer systems. The computer systems may be part of one or more networks but cooperate in a single architecture to expand the amount of cached content available to users of the cooperative cache. In addition to reducing the delay penalty of object retrieval, a goal of such an architecture is to increase the likelihood of finding an object within the cache, or the “hit rate.” An increased hit rate means that the cache is more often able to respond to a request with content that is present in the cache. Since object retrieval from a cache incurs less penalty than object retrieval from an origin or other proxy server, performance increases in proportion to the hit rate. Another goal of a cooperative cache is to increase the “reach” of the cache to make it available to additional requestors of content.

`
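The proportional relationship between hit rate and performance noted above can be illustrated with a simple expected-value calculation. The latency figures below are hypothetical and chosen solely for illustration; they do not come from the ’145 patent or the cited references:

```python
def average_retrieval_time(hit_rate, t_cache, t_origin):
    """Expected object-retrieval time for a given cache hit rate:
    hits are served at cache latency, misses at origin-server latency."""
    return hit_rate * t_cache + (1.0 - hit_rate) * t_origin

# Hypothetical latencies: 5 ms from a nearby (LAN) cache,
# 100 ms from a remote origin server.
t_half = average_retrieval_time(0.5, 5.0, 100.0)  # 52.5 ms on average
t_high = average_retrieval_time(0.9, 5.0, 100.0)  # 14.5 ms on average
```

Raising the hit rate from 50% to 90% in this hypothetical cuts the average retrieval time from 52.5 ms to 14.5 ms, which is the sense in which performance increases with the hit rate.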
22. Cooperative caching architectures vary in structure and implementation. A hierarchical cooperative cache utilizes a pre-determined structure that causes cache accesses by computer systems participating in the caching structure to occur in a prescribed sequence to locate cached content. Distributed cooperative caches can generally be considered to define a large single virtual cache accessible to a collection of servers, typically using directories of the location of content cached by the caching structure. Some cooperative caching
`
`
`
`
`
`
architectures, like the architecture of Inohara discussed below, can be considered to be hybrid combinations of the hierarchical and distributed caching concepts.

23. The technology at issue involves the specific structure of such a cooperative caching scheme among a collection of distributed, networked computer systems.
`
DISCUSSION OF THE CITED ART

24. I have been advised of the decision of the Patent Trial and Appeal Board (the “Board”) to institute inter partes review of the ’145 Patent, and more particularly, that the Board has decided to institute review of claims 2-4, 6, 7, 10, 16-18, 20, 21, 24, and 29-36 of the ’145 Patent based on the alleged grounds that those claims would have been obvious over Smith and Inohara under 35 U.S.C. § 103. I have carefully considered both Smith and Inohara and believe that the Petitioner misunderstands the cache architecture of Inohara, particularly with regard to how Inohara is described as teaching aspects of the claims of the ’145 Patent. More particularly, after reviewing Inohara, I have concluded that Inohara does not teach the concepts of determining whether to allow a client to enter a cache community, determining admittance into a cache community, or otherwise denying entrance into a cache community, and that such teachings would in fact be contrary to the entire premise of Inohara.
`
`
`
`
`
`
25. I have included a brief discussion of Smith and a more complete discussion of Inohara below to assist the Board in understanding the cache architecture of Inohara and to explain my reasons for concluding that Inohara should not be read as teaching elements of the claims of the ’145 Patent that are missing in Smith.
`
SMITH

26. Smith discloses a distributed network caching system consisting of a “Proxy Server Array” that provides access to stored data objects available over the Internet. Col. 2, lines 47-51. Smith discusses the caching system with regard to a hashing algorithm that identifies proxy servers to provide access to the stored data objects. Abstract of Smith. Smith also discusses how the hashing algorithm behaves when a proxy server is added to the array. Abstract of Smith.

27. Since the actual invention described by Smith appears to be the above-mentioned hashing algorithm, Smith provides only a cursory description of proxy servers being assembled into an array, and generally contemplates that proxy servers seeking to join the array are automatically admitted, without any determination as to whether to allow the proxy server to join the array or any potential for preventing the proxy server from joining the array. Smith has no discussion of a decision to allow a proxy server to enter the array, much less of any determination that could form the basis of any such decision. Unlike Inohara,
`
`
`
`
`
`
which affirmatively discusses a system where there is no need to limit membership in a cache architecture, Smith remains silent on the topic.
`
`
`
INOHARA

28. After studying the Inohara reference, I have concluded that Inohara’s primary design premise is that membership in a cache architecture should not be limited, and that Inohara cannot be combined with Smith to teach a determination of whether a server is to be allowed (or disallowed) entrance into a cache community. To evaluate the teachings of Inohara, a complete understanding of Inohara is needed. Inohara addresses the need for a cache system that is suitable for use with a large network such as the world-wide web (the “Internet”). Inohara discusses problems that need to be solved for a cache system to function for a network as large as the Internet. Col. 1, lines 7-15. One such problem discussed is the heavy administrative burden of configuring the cache system to establish cooperation between servers of cached content.

The Cache Tree of Inohara

29. To reduce this administrative burden, Inohara describes a process to dynamically structure a multi-cast hierarchy that includes many groups of servers arranged in a tree structure. Col. 4, lines 1-9. Each group of servers has a leader, which in turn forms a higher-level logical group with other leaders of
`
`
`
`
`
`
groups located at the same logical level. Col. 7, lines 51-56. Thus, each of the groups located at the same level forms a branch of the higher-level group. The higher-level group has its own leader, which in turn forms a still higher-level logical group with leaders of other higher-level groups, each higher-level group thereby forming a branch of the still higher-level group. Col. 7, lines 51-56. This basic tree structure of Inohara is illustrated below in Diagram A, in which leader A of group A, B, and C forms an upper-level group with leader D of group D, E, and F. Such upper-level group in turn has its own leader X (which could in turn be part of a still higher-level group with its own leader). In such a manner, a multi-cast hierarchy can be infinitely extended to create extremely large caching networks, suitable for use with the Internet as discussed by Inohara. Col. 4, lines 14-22.
`
`
`
`
`
`
`
`
Adding Servers to Inohara

30. The process by which Inohara adds additional cache servers is important to the creation and expansion of a large caching system, particularly because the self-organizing nature of such a system is central to addressing the above-referenced need to reduce the administrative burden incurred when operating caching systems in large networks.

31. In Inohara, a request is received from one or more new servers that wish to join a particular group in the cache hierarchy. In response to the request, Inohara uses a process for reconstructing the group to account for the increase in the number of members necessitated by the addition of the one or more new servers. The process involves comparing the total number of members that would result from accepting the new servers into the group to a maximum number (identified as MAX in Inohara). Col. 10, line 51 – Col. 11, line 5.

32. While the reason for such a MAX number is not explained in great detail by Inohara, Inohara does note that the cache hierarchy’s ability to keep individual groups below a fixed size is desirable so that “communication for management does not explode even if the number of servers becomes large.” Col. 14, lines 27-33. In other words, the reason for imposing a maximum number of servers in any particular branch of the cache hierarchy is to
`
`
`
`
`
`
limit the communications required from a single server to manage caching among the servers on any such branch to an acceptable level.

33. For example, if a leader of a particular group had to communicate with an unacceptably large number of servers within such a group each time it received an inquiry or propagated a message to the group, the overhead for such communications could overwhelm the leader or the network as a whole. By splitting the servers into several linked sub-groups that communicate with each other through group leaders in the tree structure described above, the communications overhead is reduced to an acceptable level. Thus, the benefit of having the additional volume of content capable of being cached by a large number of servers can be realized with far fewer communications required of any one group leader. Col. 14, lines 21-40.
`
34. Inohara describes three different scenarios with respect to adding new servers to a server group. In the first scenario, if adding all of the new servers to the server group would not cause the group to go above the maximum number, the new servers are added to the existing server group without further reorganization. Col. 10, lines 60-66.

35. In the second scenario, if adding all of the new servers would cause the group to go above the maximum number, then a second determination is made of whether adding even one of the new servers would cause the group to go above the
`
`
`
`
`
`
maximum number. If the answer to the second determination is no, then the first new server is added to the existing group to form a first sub-group, and the remaining new servers form a second sub-group branching off of the first sub-group, with the first new server as the leader of the second sub-group, such second sub-group thereby becoming a branch of the now higher-level first sub-group. Col. 11, lines 6-17.
`
36. In the third scenario, if adding even one of the new servers would cause the group to go above the maximum number, a first higher-level sub-group is formed from the leader of the existing group of servers, the first of the new servers, and a second member of the existing group of servers. The leader of the existing group is reassigned to be the leader of the first higher-level sub-group. The second member of the existing group becomes the leader of its own second, lower-level sub-group that branches off from the higher-level sub-group. Similarly, the first of the new servers becomes the leader of a third sub-group that includes all of the other new members that requested to join the original group. Thus, as a result of the new servers requesting to join the original group of servers, the original group is split into three sub-groups that cooperate to share cached content (one higher-level sub-group and two lower-level sub-groups forming branches of the higher-level group). Col. 11, lines 18-37.

`
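The three scenarios described in paragraphs 34-36 can be sketched as follows. This sketch is solely my illustrative reconstruction of the group-splitting logic, not code appearing in Inohara; the names (Group, add_servers) and the value MAX=6 are my own assumptions, chosen to match the hypothetical diagrams discussed in paragraphs 37-39:

```python
MAX = 6  # maximum members per group (Inohara's MAX; value assumed for illustration)

class Group:
    def __init__(self, leader, members):
        self.leader = leader      # leader server of this group
        self.members = members    # servers in this group, including the leader
        self.subgroups = []       # lower-level groups branching off this one

def add_servers(group, new_servers):
    """Illustrative sketch of the three reorganization scenarios.
    In every scenario the new servers are admitted; only the grouping changes."""
    total = len(group.members) + len(new_servers)

    # Scenario 1: everything fits -- add the new servers directly.
    if total <= MAX:
        group.members.extend(new_servers)
        return group

    first, rest = new_servers[0], new_servers[1:]

    # Scenario 2: at least one new server fits -- the first new server joins the
    # existing group and also leads a second sub-group of the remaining servers.
    if len(group.members) + 1 <= MAX:
        group.members.append(first)
        group.subgroups.append(Group(leader=first, members=[first] + rest))
        return group

    # Scenario 3: even one more server would exceed MAX -- form a higher-level
    # sub-group of three leaders, plus two lower-level sub-groups branching off it.
    old_leader = group.leader
    second = next(m for m in group.members if m != old_leader)
    upper = Group(leader=old_leader, members=[old_leader, second, first])
    upper.subgroups.append(
        Group(leader=second,
              members=[m for m in group.members if m != old_leader]))
    upper.subgroups.append(Group(leader=first, members=[first] + rest))
    return upper
```

Note that no branch of this logic refuses a requesting server; every path returns a structure containing all of the servers that asked to join, with only the grouping rearranged.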
37. To aid in the understanding of the above, I have included diagrams of the organization of servers into hypothetical groups based on the three different scenarios described above from the disclosure of Inohara. Diagram B illustrates the organization of servers where the addition of new servers (servers 301 and 301’) requesting to be added to an existing server group (servers 10, 232, 232’, and 232”) in the caching hierarchy of Inohara does not cause the server group to exceed the maximum number of servers (where it is assumed that MAX=6). As shown, all of the servers participate in a single group. No sub-grouping is required.

38. Diagram C illustrates the organization of servers where the addition of new servers (servers 301, 301’, and 301”) requesting to be added to the existing server group (servers 10, 232, 232’, and 232”) does cause the server group to exceed the maximum number of servers (where it is assumed that MAX=6), but
`
`
`
`
`
`
where the addition of at least one of the new servers would not cause the server group to exceed the maximum number of servers. As shown, server 301 forms a first sub-group with the original servers of the group and also forms a second, lower-level sub-group with servers 301’ and 301”.

39. Diagram D illustrates the situation where the addition of even one of a number of new servers (servers 301, 301’, and 301”) requesting to be added to the existing server group (servers 10, 232, 232’, 232”, 232’”, and 232””) causes the server group to exceed the maximum number of servers (where it is assumed that MAX=6). As shown, server 301 forms an upper-level sub-group with server 10 and server 232. Server 301 also forms a lower-level sub-group with servers 301’ and 301”. Server 232 forms another lower-level sub-group with servers 232’, 232”, 232’”, and 232””.
`
`
`
`
`
`
40. In all three scenarios, the process described by Inohara results in the new servers being added to the same caching tree hierarchy and cooperating to share cached content. As illustrated, in all three scenarios, newly joining servers are placed into the existing group, or into sub-groups with previously existing members of the existing server group, within a single caching system. For example, server 301 is always in a sub-group (shown with dashed lines in the illustrations above) with both server 10 and server 232.
`
`
`
The Teachings of Inohara

41. According to its teachings, Inohara has solved significant problems associated with a distributed cache suitable for use with an ever-expanding network such as the Internet, in a manner that removes the need to limit membership in a cache system, including: (i) reducing the administrative burden of configuring and reconfiguring a large distributed cache, using the self-organizing arrangement of groups of servers into a hierarchical tree structure as new servers are added to the cache system; and (ii) reducing the extent of management communications involved in managing a large number of servers and cache directories in a large distributed cache, by separating the servers into overlapping and cooperative groups of servers. Col. 3, line 65 – col. 4, line 22.
`
`
`
Inohara Teaches Away From Denying Membership in a Cache System

42. Inohara cannot be said to teach the concept of denying a server membership in a cache system. The entire thesis of Inohara is that the architecture of the cache hierarchy is infinitely scalable, as noted by Inohara, even when used with the Internet, “with an enormous number of machines connected.” Col. 3, lines 29-30. In all cases, a server seeking to join the cache hierarchy of Inohara is admitted into the tree hierarchy that serves as the cache system of Inohara and is operable to share content with all other servers in the hierarchy. In all of the scenarios described in Inohara where servers seek to be added to the hierarchy, all servers requesting membership in the cache system are admitted, assigned a position within the caching hierarchy, and cooperate to share cached content within the cache hierarchy.
`
`
`
`
`
`
43. Even if, for the sake of argument, an individual server group of Inohara could itself be said to represent a cache system (a position that is inconsistent with the teachings of Inohara), Inohara does not even teach denying a server membership in such an individual server group. Instead, Inohara goes to great lengths to accept a new server or servers into the individual server group, so much so that even if including the servers would cause the group to become too large, Inohara does not deny the servers entrance. Instead, Inohara reorganizes the server group into sub-groups and includes the new server or servers in one of the sub-groups, the sub-groups being composed of both the new server or servers and the servers that were previously members of the individual server group. Col. 11, lines 18-37. As noted above, in all cases a new server is included in a sub-group with a previous member of the individual server group that such new server requested to join. Even when the cache hierarchy has to go to the extreme of restructuring the individual server group that received a join request into the described sub-groups, the new server or servers are always accepted into the individual server group.

44. The overriding principle of Inohara is inclusion. Such inclusion is necessary in order to accept a large number of servers without constraint and thereby scale the cache hierarchy to the size required to service large networks. All of the goals and issues described in Inohara are means to the ultimate end of
`
`
`
`
`
`
achieving a cache hierarchy that is infinitely scalable. Inohara clearly teaches away from any concept of permission, determination, or any limiting or denial of membership. Adding the possibility of denying servers entrance into the cache hierarchy of Inohara, based on size or otherwise, would render Inohara inoperable for its intended purpose of serving as a distributed caching system for the entire Internet or other large-scale networks.
`
COMBINATION OF SMITH AND INOHARA

45. As should be readily apparent from the foregoing, combining Inohara with Smith for the purpose of Inohara teaching the concept of determining whether or not to allow a client to join the proxy-server-based cache community of Smith would both render Inohara inoperable for its intended purpose and be contrary to the express teachings of Inohara. As discussed above, the express purpose of Inohara is to provide a cache architecture that is capable of including a large number of participants so as to be scalable to accommodate extremely large groups of servers (such as the thousands upon thousands of servers forming the Internet). The primary purpose of the cache architecture described by Inohara is to overcome the administrative and management hurdles that impose limitations on the size of cache architectures. Combining any element of Inohara with Smith in a manner that results in a determination whereby a server is not allowed to join a cache community (particularly a determination based on the size of the community
`
`
`
`
`
`
as suggested by the Petitioner) would destroy the ability of Inohara to achieve its primary purpose, rendering Inohara inoperable for that purpose. In addition to these concerns of operability, there is no question that Inohara teaches away from a combination of Smith and Inohara that would include a determination of whether to allow a client to join a cache community (again, particularly a determination based on the size of the cache community) in order to limit the size of the cache community in Smith.
`
`
`
`
`
`
`
`
`
`
SUMMARY

46. Inohara discloses a cache architecture that overcomes the size limitations of other cache systems through the use of a self-organizing structure of overlapping groups of servers arranged in a hierarchical tree. The architecture solves issues associated with previous cache systems in setting up large cache architectures and managing their operations. Inohara contains no teaching of determining membership in a cache community, whether because of size limitations or otherwise, and instead teaches an entirely inclusive process for building out a cache system using a very large number of servers. Inohara does not teach limiting membership in a cache community.