`By:
`
`Joseph E. Palys
`Paul Hastings LLP
`875 15th Street NW
`Washington, DC 20005
`Telephone: (202) 551-1996
`Facsimile: (202) 551-0496
`E-mail: josephpalys@paulhastings.com
`
`Naveen Modi
`Paul Hastings LLP
`875 15th Street NW
`Washington, DC 20005
`Telephone: (202) 551-1990
`Facsimile: (202) 551-0490
`E-mail: naveenmodi@paulhastings.com
`
`
`
`
`
`
`
`
`UNITED STATES PATENT AND TRADEMARK OFFICE
`
`
`
`
`
`BEFORE THE PATENT TRIAL AND APPEAL BOARD
`
`
`
`
`
`APPLE INC.
`Petitioner
`v.
`VIRNETX INC.
`Patent Owner
`
`
`
`Case IPR2014-00238
`Patent 8,504,697
`
`
`
`
`
`Declaration of Fabian Monrose, Ph.D.
`
`
`
`
`
`
`1
`
`Page 1 of 59
`
`VIRNETX EXHIBIT 2025
`Apple v. VirnetX
`Trial IPR2014-00238
`
`
`
`Case No. IPR2014-00238
`
`Table of Contents
`I. Introduction .......................................................................................................... 4
`II. Resources Consulted ........................................................................................... 4
`III. Background and Qualifications ......................................................................... 5
`IV. Level of Ordinary Skill ...................................................................................10
`V. Claim Terms .....................................................................................................11
`A. “Secure Communication Link” (Claims 1-3, 11-13, 16-17, and
`24-27) ..................................................................................................11
`B. “Virtual Private Network (VPN) Communication Link” (Claims 3
`and 17) .................................................................................................14
`C. “Intercept[ing] . . . a request to look up an internet protocol (IP)
`address” (Claims 1 and 16) .................................................................15
`D. “Determining, in response to the request, whether the second
`network device is available for a secure communications
`service” (Claims 1 and 16) ..................................................................18
`VI. Wesinger ..........................................................................................................20
`A. Wesinger’s Disclosure ............................................................................20
`B. Claims 1 and 16 ......................................................................................25
`1. “Determining, In Response to the Request [to Look Up an
`IP Address], Whether the Second Network Device Is
`Available for a Secure Communications Service” ...................25
`a) “In Response to the Request to Look up an IP
`Address” .........................................................................26
`b) “Whether the Second Network Device Is Available
`for the Secure Communications Service” .......................34
`2. “Intercepting . . . a Request to Look up an Internet Protocol
`(IP) Address of the Second Network Device” ..........................35
`
`
`C. Dependent Claims...................................................................................38
`1. Dependent Claims 8, 9, 22, and 23—Mobile Device ...................38
`2. Dependent Claims 10 and 29—Receiving the Request to
`Determine Whether the Second Network Device is
`Available for the Secure Communications Service ..................39
3. Dependent Claims 14 and 28—Function of a Domain
`Name Lookup ............................................................................40
`VII. Wesinger in View of RFC 2543 .....................................................................40
`VIII. Conclusion .....................................................................................................43
`
`
`
`
`
`
`I, FABIAN MONROSE, declare as follows:
`
`I.
`
`Introduction
1. I have been retained by VirnetX Inc. (“VirnetX”) for this inter partes
`
`review proceeding. I understand that this proceeding involves U.S. Patent No.
`
`8,504,697 (“the ’697 patent”). I understand the ’697 patent is assigned to VirnetX
`
`and that it is part of a family of patents that stems from U.S. provisional
`
`application nos. 60/106,261 (“the ’261 application”), filed on October 30, 1998,
`
`and 60/137,704 (“the ’704 application”), filed on June 7, 1999. I understand that
`
`the ’697 patent has a continuation relationship through several applications to U.S.
`
`application no. 09/558,210 filed April 26, 2000 (“the ’210 application,”
`
`abandoned). And I understand the ’210 application is a continuation-in-part of
`
`U.S. application no. 09/504,783 filed February 15, 2000 (now U.S. Patent
`
`6,502,135, “the ’135 patent”), and that the ’135 patent is a continuation-in-part of
`
`U.S. application no. 09/429,643 (now U.S. Patent No. 7,010,604) filed October 29,
`
`1999, which claims priority to the ’261 and ’704 applications.
`
`II. Resources Consulted
2. I have reviewed the ’697 patent, including claims 1-30. I have also
`
`reviewed the Petition for Inter Partes Review (Paper No. 1, the “Petition”) filed
`
`with the U.S. Patent and Trademark Office (“Office”) by Apple Inc. on December
`
`6, 2013. I have also reviewed the Patent Trial and Appeal Board’s (“Board”)
`
`
`decision to institute inter partes review (Paper No. 15, the “Decision”) of May 14,
`
`2014. I understand that in this proceeding the Board instituted review of the ’697
`
`patent on two grounds: (1) anticipation of claims 1-3, 8-11, 14-17, 22-25, and 28-
`
`30 by Wesinger; and (2) obviousness of claims 4-7 and 18-21 over Wesinger in
`
`view of RFC 2543. I have reviewed the exhibits and other documentation
`
`supporting the Petition that are relevant to the Decision and the instituted grounds.
`
`III. Background and Qualifications
3. I have a great deal of experience and familiarity with computer and
`
`network security, and have been working in this field since 1993 when I entered
`
`the Ph.D. program at New York University.
`
4. I am currently a Professor of Computer Science at the University of
`
`North Carolina at Chapel Hill. I also hold an appointment as the Director of
`
`Computer and Information Security at the Renaissance Computing Institute
`
`(RENCI). RENCI develops and deploys advanced technologies to facilitate
`
`research discoveries and practical innovations. To that end, RENCI partners with
`
`researchers, policy makers, and technology leaders to solve the challenging
`
`problems that affect North Carolina and our nation as a whole. In my capacity as
`
Director of Computer and Information Security, I lead the design and implementation of new platforms for enabling access to, and analysis of, large and
`
`sensitive biomedical data sets while ensuring security, privacy, and compliance
`
`
`with regulatory requirements. At RENCI, we are designing new architectures for
`
`securing access to data (e.g., using virtual private networks and data leakage
`
`prevention technologies) hosted among many different institutions. Additionally, I
`
`serve on RENCI’s Security, Privacy, Ethics, and Regulatory Oversight Committee
`
(SPOC), which oversees the security and regulatory compliance of technologies
`
`designed under the newly-formed Data Science Research Program and the Secure
`
`Medical Research Workspace.
`
5. I received my B.Sc. in Computer Science from Barry University in
`
May 1993. I received my M.Sc. and Ph.D. in Computer Science from the Courant
`
`Institute of Mathematical Sciences at New York University in 1996 and 1999,
`
`respectively. Upon graduating from the Ph.D. program, I joined the Systems
`
`Security Group at Bell Labs, Lucent Technologies. There, my work focused on the
`
analysis of Internet security technologies (e.g., IPsec and client-side authentication) and applying these technologies to Lucent’s portfolio of
`
`commercial products. In 2002, I joined the Johns Hopkins University as Assistant
`
`Professor in the Computer Science department. I also served as a founding
`
`member of the Johns Hopkins University Information Security Institute (JHUISI).
`
`At JHUISI, I served a key role in building a center of excellence in Cyber Security,
`
`leading efforts in research, education, and outreach.
`
`
6. In July of 2008, I joined the Computer Science department at the
`
`University of North Carolina (UNC) Chapel Hill as Associate Professor, and was
`
`promoted to Full Professor four years later. In my current position at UNC Chapel
`
`Hill, I work with a large group of students and research scientists on topics related
`
`to cyber security. My former students now work as engineers at several large
`
`companies, as researchers in labs, or as university professors themselves. Today,
`
`my research focuses on applied areas of computer and communications security,
`
`with a focus on traffic analysis of encrypted communications (e.g., Voice over IP);
`
`Domain Name System (DNS) monitoring for performance and network abuse;
`
`network security architectures for traffic engineering; biometrics and client-to-
`
`client authentication techniques; computer forensics and data provenance; runtime
`
`attacks and defenses for hardening operating system security; and large-scale
`
`empirical analyses of computer security incidents. I also regularly teach courses in
`
`computer and information security.
`
7. I have published over 75 papers in prominent computer and
`
`communications security publications. My research has received numerous
`
`awards, including the Best Student Paper Award (IEEE Symposium on Security &
`
`Privacy, July, 2013), the Outstanding Research in Privacy Enhancing Technologies
`
`Award (July, 2012), the AT&T Best Applied Security Paper Award (NYU-Poly
`
`CSAW, Nov., 2011), and the Best Paper Award (IEEE Symposium on Security &
`
`
`Privacy, May, 2011), among others. My research has also received corporate
`
`sponsorship, including two Google Faculty Research Awards (2009, 2011) for my
`
`work on network security and computer forensics, as well as an award from
`
`Verisign Inc. (2012) for my work on DNS.
`
8. I am the sole inventor or a co-inventor on five issued US patents and
`
`two pending patent applications, nearly all of which relate to network and systems
`
`security. Over the past 12 years, I have been the lead investigator or a
`
`co-investigator on grants totaling nearly nine million US dollars from the National
`
`Science Foundation (NSF), the Department of Homeland Security (DHS), the
`
`Department of Defense (DoD), and industry. In 2014, I was invited to serve on the
`
`Information Science and Technology (ISAT) study group for the Defense
`
Advanced Research Projects Agency (DARPA). During my three-year
`
`appointment, I will assist DARPA by providing continuing and independent
`
`assessment of the state of advanced information science and technology as it
`
`relates to the U.S. Department of Defense.
`
9. I have chaired several international conferences and workshops,
`
including, for example, the USENIX Security Symposium, which is the premier
`
`systems-security conference for academics and practitioners alike. Additionally, I
`
have served as Program Chair for the USENIX Workshop on Hot Topics in
`
`Security, the Program Chair for the USENIX Workshop on Large-scale Exploits &
`
`
`Emergent Threats, the local arrangements Chair for the Financial Cryptography
`
`and Data Security Conference, and the General Chair of the Symposium on
`
`Research in Attacks and Defenses. As a leader in the field, I have also served on
`
`numerous technical program committees including the Research in Attacks,
`
`Intrusions, and Defenses Symposium (2012, 2013), USENIX Security Symposium
`
`(2013, 2005-2009), Financial Cryptography and Data Security (2011, 2012),
`
`Digital Forensics Research Conference (2011, 2012), ACM Conference on
`
`Computer and Communications Security (2009-2011, 2013), IEEE Symposium on
`
`Security and Privacy (2007, 2008), ISOC Network & Distributed System Security
`
(2006-2009), International Conference on Distributed Computing Systems (2005,
`
`2009, 2010), and USENIX Workshop on Large-scale Exploits and Emergent
`
`Threats (2010-2012).
`
`10. From 2006 to 2009, I served as an Associate Editor for IEEE
`
`Transactions on Information and Systems Security (the leading technical journal
`
`on cyber security), and currently serve on the Steering Committee for the USENIX
`
`Security Symposium.
`
`11. My curriculum vitae, which is appended, details my background and
`
`technical qualifications. Although I am being compensated at my standard rate of
`
`$450/hour for my work in this matter, the compensation in no way affects the
`
`statements in this declaration.
`
`
`IV. Level of Ordinary Skill
12. I am familiar with the level of ordinary skill in the art with respect to
`
`the inventions of the ’697 patent as of what I understand is the patent’s early-2000
`
`priority date. Specifically, based on my review of the technology, the educational
`
`level of active workers in the field, and drawing on my own experience, I
`
believe a person of ordinary skill in the art at that time would have had a master’s
`
`degree in computer science or computer engineering, as well as two years of
`
`experience in computer networking with some accompanying exposure to network
`
`security. My view is consistent with VirnetX’s view that a person of ordinary skill
`
`in the art requires a master’s degree in computer science or computer engineering
`
`and approximately two years of experience in computer networking and computer
`
`security. I have been asked to respond to certain opinions offered by Apple’s
`
`expert, Michael Fratto, consider how one of ordinary skill would have understood
`
`certain claim terms, and consider how one of ordinary skill in the art would have
`
`understood the references mentioned above in relation to the claims of the ’697
`
`patent.1 My findings are set forth below.
`
`
`1 As noted, I was asked to opine only with respect to certain issues that are
`
`
`discussed in this declaration. By doing so, however, I do not necessarily agree
`
`with other positions taken by Apple or Mr. Fratto that I do not address here.
`
`
`V. Claim Terms
A. “Secure Communication Link” (Claims 1-3, 11-13, 16-17, and 24-27)

13. I understand that the Decision preliminarily construed “secure
`
`communication link” to mean “a transmission path that restricts access to data,
`
`addresses, or other information on the path, generally using obfuscation methods to
`
`hide information on the path, including, but not limited to, one or more of
`
`authentication, encryption, or address hopping.” (Decision at 8.) I disagree with
`
`this interpretation.
`
`14. The ordinary meaning of “secure communication link” and the
`
`broadest reasonable interpretation in light of the teachings of the ’697 patent,
`
`which I understand is the standard used by the Board in this proceeding, require
`
`encryption. (See Paper No. 12 at 20-27.) The patent specification teaches that
`
`“data security is usually tackled using some form of data encryption,” and it
`
`repeatedly discusses using encryption in various embodiments. (Ex. 1001 at 1:57-
`
`58; see also id. at 10:26-27, 11:42-49, 34:38-39.)
`
`15. Additionally, the Decision’s construction is technically incorrect. Of
`
`the “obfuscation” methods in the construction—authentication, encryption, and
`
`address hopping—only encryption restricts access to “data, addresses, or other
`
`information on the path.” The other techniques alone do not provide the claimed
`
`security.
`
`
16. In my opinion, authentication merely ensures the recipient that a
`
`message originated from the expected sender, which is consistent with the
`
`definition of authentication in a dictionary the ’697 patent incorporates by
`
`reference. (See Ex. 2004 at 3, Glossary for the Linux FreeS/WAN Project.)
`
`Authentication does not prevent an eavesdropper from accessing data transmitted
`
`over an unsecure communication link. The specification is also consistent with my
`
`understanding, as it describes at least one scenario where an authenticated
`
`transmission occurs “in the clear”—i.e., over an unsecured communication link:
`
`SDNS [secure domain name service] 3313 can be accessed
`through secure portal 3310 “in the clear”, that is, without using
`an administrative VPN communication link. In this situation,
`secure portal 3310 preferably authenticates the query using any
`well-known technique, such as a cryptographic technique,
`before allowing the query to proceed to SDNS [3313].
`(Ex. 1001 at 52:7-12.)
`17. Address hopping alone also does not provide security, as there is
`
`nothing inherent in moving from address to address that precludes an eavesdropper
`
`from reading the details of a communication. This is why the ’697 patent discloses
`
`embodiments that use encryption in conjunction with address hopping to protect,
`
`for example, the next address in a routing scheme from being viewed by
`
`eavesdroppers. (See, e.g., Ex. 1001 at 3:36-50, stating in part that “[e]ach TARP
`
`packet’s true destination is concealed behind a layer of encryption generated using
`
`
`a link key.”) The Decision states that address hopping alone is sufficient because
`
`the ’697 patent states that “[a]ddress hopping provides security and privacy.”
`
`(Decision at 7, citing Ex. 1001 at 25:54-56, 40:66-68.) But the address hopping
`
`embodiments in the ’697 patent also use encryption, and it is the encryption that
`
`provides security while moving from address to address. (See, e.g., id. at 3:16-
`
`4:40.)
`
18. I understand that in IPR2014-00237 for the ’697 patent, the Institution
`
`Decision cites dictionary definitions, but these definitions are for peripheral terms
`
`and do not support the proposed definition of “secure communication link.”
`
`(IPR2014-00237, Institution Decision at 8, Paper 15; IPR2014-00237 Exs. 3002,
`
`3003.) For example, the Decision refers to an IEEE dictionary definition of
`
`“security service.” As defined, however, “security service” broadly pertains to
`
`securing “system resources” in addition to data. (IPR2014-00237 Ex. 3003.)
`
`System resources can be secured through physical means that include access
`
`controls or authentication, such as where the computing system is in a locked room
`
`and the person attempting to gain access to it must present a guard with credentials
`
`to physically access the machine. Securing data that travels over a public network
`
`of switches and routers controlled by third parties, where there is no opportunity
`
for physically guarding access, cannot rely solely on techniques like authentication.
`
`
19. Additionally, the Decision’s dictionary definitions support my
`
`understanding that a “secure communication link” requires encryption. For
`
`example, a few lines above the general definition of “security” that the Decision
`
`relies on, the dictionary states that “secure visual communications” is defined as
`
`“[t]he transmission of an encrypted digital signal consisting of animated visual and
`
`audio information; the distance may vary from a few hundred feet to thousands of
`
`miles.” (IPR2014-00237 Ex. 3002 at 3, emphasis added.) For signals traveling the
`
`distances contemplated by the ’697 patent, and particularly where those distances
`
`include third-party controlled network hardware, encryption is the only viable
`
`mechanism for security over those physically unsecured portions of the networks.
`
`20. Thus, while authentication and address hopping may be used in
`
`conjunction with encryption to achieve data security, neither is sufficient by itself
`
`to make a link a secure communication link.
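The point that hopping alone leaves content exposed can be illustrated with a toy sketch (the addresses, hop schedule, and XOR stand-in cipher are all hypothetical and are not taken from the ’697 patent’s TARP scheme; XOR merely stands in for real encryption such as IPsec):

```python
import itertools

# Hypothetical hop schedule: the (source, destination) pair changes per packet.
HOP_SEQUENCE = [("10.0.0.1", "10.0.9.9"), ("10.0.0.2", "10.0.9.8"),
                ("10.0.0.3", "10.0.9.7")]

def send_hopping(payload: bytes):
    """Emit one packet per hop; the addresses vary, but the payload does not."""
    return [(src, dst, payload) for src, dst in HOP_SEQUENCE]

def xor_encrypt(payload: bytes, key: bytes) -> bytes:
    """Toy stand-in for real link encryption; XOR is NOT actually secure."""
    return bytes(b ^ k for b, k in zip(payload, itertools.cycle(key)))

secret = b"attack at dawn"
plain_packets = send_hopping(secret)                        # hopping only
cipher_packets = send_hopping(xor_encrypt(secret, b"key"))  # hopping + cipher

# An eavesdropper watching the hopped packets still reads the payload...
assert all(payload == secret for _, _, payload in plain_packets)
# ...unless the payload is also encrypted.
assert all(payload != secret for _, _, payload in cipher_packets)
```

As the assertions show, varying addresses changes only where packets appear to come from and go to; the encryption step is what hides what they carry.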
`
B. “Virtual Private Network (VPN) Communication Link” (Claims 3 and 17)
`21. Neither party nor the Decision construed “virtual private network
`
`communication link” (“VPN communication link”), appearing in dependent claims
`
`3 and 17. To the extent the Board finds it helpful, in my opinion, a VPN
`
`communication link refers to a communication path between computers in a VPN.
`
`The ’697 patent supports my view. For example, the specification describes the
`
`ultimate VPN communication link as a path between the claimed first network
`
`
`device and the second network device in a VPN. (See, e.g., Ex. 1001 at 51:60-66.)
`
`The specification also describes an optional “administrative” VPN communication
`
`link between the first network device and a secure name service (and/or between
`
`the secure name service and a gatekeeper computer that provisions the ultimate
`
VPN) that facilitates establishing the ultimate VPN. (See, e.g., id. at 41:14-20,
`
`51:17.) The administrative VPN communication link is similarly described as a
`
path between these devices in an administrative VPN. (See, e.g., id. at 40:44-48,
`
`41:14-20, 43-56, 47:11-14.)
`
C. “Intercept[ing] . . . a request to look up an internet protocol (IP) address” (Claims 1 and 16)

22. I understand that the Decision preliminarily construed the
`
`pertaining to a first entity at another entity.” (Decision at 12.) In my opinion, the
`
`meaning of this phrase would be apparent to one of ordinary skill in the art without
`
`further interpretation. The Decision’s construction is inconsistent with that
`
`understanding, in part, because it rewrites the language to focus on an aspect that
`
`the claimed embodiments may have in common with conventional systems
`
`identified in the ’697 patent, rather than on aspects that make the claimed
`
`embodiments different from those conventional systems.
`
`23. As explained in the ’697 patent, in conventional DNS, a DNS request
`
`requesting an address of a target web site 2503 (a first entity) is received at a DNS
`
`
`server 2502 (a second entity), which returns the address. (Ex. 1001, FIG. 25,
`
`reproduced below; id. at 9:38-39, 39:39-48.)
`
[FIG. 25 of the ’697 patent]
`
`
`
`The same is true of the ’697 patent’s embodiment in which a DNS request to look
`
`up a network address of a secure target site 2604 or unsecure target site 2611 (a
`
`first entity) is received at a DNS server 2602 (a second entity). (Id., FIG. 26,
`
`reproduced below; see also id. at 40:31-40.)
`
[FIG. 26 of the ’697 patent]
`
`
`
`24. However, the ’697 patent goes on to explain that the claimed
`
`embodiments differ from conventional DNS, in part, because they apply an
`
`additional layer of functionality to a request to look up a network address beyond
`
`merely resolving it and returning the network address. For example, the DNS
`
`proxy 2610 may intercept the request and “determine[ ] whether access to a secure
`
`site has been requested,” “determine[ ] whether the user has sufficient security
`
`privileges to access the site,” and/or “transmit[ ] a message to gatekeeper 2603
`
requesting that a virtual private network be created between user computer 2601
`
`and secure target site 2604.” (Id. at 40:31-40.) Additionally, the DNS resolves an
`
`address and returns it to the first network device. (Id. at 40:44-48.)
`
`25. The Decision’s construction addresses a common aspect of a
`
`conventional DNS and the disclosed embodiments, namely that a request to look
`
`up an address of one entity may be received at another entity. However, the
`
`
`construction overlooks the aspects distinguishing the “intercepting” phrase from
`
`conventional DNS.
`
`26. Thus, in my opinion, the “intercepting” phrase refers to the notion of
`
`performing an additional evaluation on a request to look up an IP address related to
`
`establishing a secure communication link, beyond conventionally resolving it and
`
`returning the address. The independent claims also support this view, for example,
`
`by reciting that a determination is made whether the second network device is
`
`available for a secure communications service “in response to the request [to look
`
`up an IP address].” (Ex. 1001, claims 1 and 16.) Additionally, dependent claims
`
`10 and 29 expressly specify the evaluation, reciting that “intercepting” involves
`
`“receiving the request to determine whether the second network device is available
`
`for the secure communications service.”
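The distinction drawn above between conventional resolution and the claimed interception can be sketched as a minimal model (the table names, hostnames, addresses, and return values are hypothetical illustrations, not details from the ’697 patent):

```python
# Hypothetical lookup tables standing in for conventional DNS records and a
# registry of devices available for a secure communications service.
CONVENTIONAL_DNS = {"public.example.com": "203.0.113.7"}
SECURE_REGISTRY = {"target.secure.example": "198.51.100.5"}

def dns_proxy_lookup(hostname: str):
    """Intercept a request to look up an IP address and evaluate it,
    rather than merely resolving and returning the address."""
    if hostname in SECURE_REGISTRY:
        # The additional evaluation: the looked-up device is available for
        # the secure communications service, so secure-link provisioning
        # would be triggered here instead of a bare resolution.
        return ("provision-secure-link", SECURE_REGISTRY[hostname])
    if hostname in CONVENTIONAL_DNS:
        # Conventional behavior: resolve and return the address unchanged.
        return ("resolved", CONVENTIONAL_DNS[hostname])
    return ("error", None)
```

In this sketch, the first branch is the "additional layer of functionality" applied in response to the lookup request; the second branch is conventional DNS behavior.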
`
D. “Determining, in response to the request, whether the second network device is available for a secure communications service” (Claims 1 and 16)

27. Independent claims 1 and 16 recite “determining, in response to the request, whether the second network device is available for a secure communications service.” I understand that the Institution Decision in IPR2014-00237 preliminarily interprets this phrase to mean “determining, one or more of 1)
`
`whether the device is listed with a public internet address, and if so, allocating a
`
`private address for the second network device, or 2) some indication of the relative
`
`
`permission level or security privileges of the requester.” (IPR2014-00237
`
`Institution Decision at 14-15, Paper 15.) I disagree with this interpretation.
`
`28. Determining a public internet address or allocating a private address
`
`has no relationship to whether a device is available for a secure communications
`
`service. For example, a device listed with a public internet address may or may not
`
`be available for a secure communications service. Similarly, a private address may
`
`be allocated for a device even if no determination is made that the device is
`
`available for a secure communications service. These features have nothing to do
`
`with the claimed determination.
`
`29. The ’697 patent does not limit the determination of whether a network
`
`device is available for a secure communications service to require a determination
`
`of a public address and an allocation of a private address. Passages of the ’697
`
`patent cited by the Decision disclose an example in which a determination is made
`
`that a target computer is available for a secure communications service without
`
`“determining . . . whether the device is listed with a public internet address, and if
`
`so, allocating a private address for the second network device.” (See Ex. 1001 at
`
`41:6-64; see also IPR2014-00237, Decision at 15, Paper 15.)
`
`30. The Decision also states that the “determining” phrase can mean
`
`“determining . . . some indication of the relative permission level or security
`
`privileges of the requester.” I disagree that the claimed “determination” requires
`
`
`“some indication of the relative permission level or security privileges of the
`
`requester.” The specification provides examples of determinations focusing on the
`
`second network device to which access is requested, as well as examples of
`
`separate determinations focusing on the first network device desiring the access.
`
`(Compare, for example, id. at 40:32-33, “determin[ing] whether access to a secure
`
`site has been requested” with id. at 40:36-37, “determines whether the user has
`
`sufficient security privileges to access the site.”) The claimed determination,
`
`however, expressly focuses on the second network device (Ex. 1001, claims 1 and
`
`16, “whether the second network device is available for a secure communications
`
`service,” emphasis added), so I disagree that the “determining” phrase must be
`
`limited to the Decision’s determining “permission level or security privileges of
`
`the requester.”
`
`VI. Wesinger
`A. Wesinger’s Disclosure
`31. Wesinger describes a system in which a client C in one section of a
`
`network, such as a corporate network, connects to a host D in another section of
`
the network through a series of firewalls 105, 107, 155, and 157. (See Ex. 1008 at
`
`7:16-66, 8:63-65; FIG. 1, reproduced below.)
`
`
`
[FIG. 1 of Wesinger]
`
`32. Wesinger describes two separate processes, each responsive to a
`
`unique request and having unique characteristics. The first is a “transparent” DNS
`
`resolution process, which is set off by “DNS queries” or “name request[s].” (Ex.
`
`1008 at 9:16-19, 13:9-14.) The second is non-transparent firewall allow/disallow
`
`processing, which is set off by a separate “ensuing connection request.” (Id.)
`
33. In Wesinger’s “transparent” DNS resolution process, each firewall is
`
`associated with a DNS server 115, 117, 167, 165, which “may be a dedicated
`
`virtual host on the same physical machine as the firewall” or “may be a separate
`
`machine.” (Id. at 8:55-62.) The client C transmits a DNS query to its local
`
`firewall’s DNS server 115, and Wesinger states that:
`
`DNS operates in the usual manner to propagate [the]
`name request to successive levels of the network until D
`
`
`is found. The DNS server for D returns the network
`address of D to a virtual host on the firewall 155. The
`virtual host returns its network address to the virtual host
`on the firewall 157 from which it received the lookup
`request, and so on, until a virtual host on the firewall 105
`returns its network address (instead of the network
`address of D) to the client C. This activity is all
`transparent to the user.
`
`(Id. at 9:16-25, emphasis added; see also id. at 8:33-49.) “This asking and
`
`answering is all transparent to the client. As far as the client is concerned, it has
`
`communicated only with the local [DNS] server. It does not know or care that the
`
`local server may have contacted several other servers in the process of answering
`
`the original question.” (Id. at 8:49-54.) Thus, Wesinger’s DNS resolution process
`
`is “transparent” in the sense that the client is unaware of the intervening firewalls
`
`or that the client C receives the address of a virtual host on the client C’s firewall
`
`105 instead of the true address of the host D. (See, e.g., id. at 4:21-31, 6:5-58,
`
`8:20-24, 8:65-9:2, 9:42-44.) Wesinger calls this “programmable” transparency
`
`because it requires programming the DNS tables with “DNS mappings between
`
`remote hosts to be accessed through one of the network interfaces and respective
`
`virtual hosts on that interface.” (Id. at 8:22-24.)
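The chained substitution in the passage quoted above can be modeled in a few lines (the addresses are hypothetical, and the chain is a simplification of Wesinger’s FIG. 1 topology):

```python
# Virtual-host addresses on the firewalls between client C and host D,
# listed from the client's local firewall outward (hypothetical values).
FIREWALL_CHAIN = ["192.0.2.105", "192.0.2.157", "192.0.2.155"]
TRUE_ADDRESS_OF_D = "198.51.100.44"

def transparent_lookup(chain, true_address):
    """Each firewall's virtual host returns its own address in place of the
    one it received, so the client never sees the true address of D."""
    address = true_address              # returned by D's own DNS server
    for virtual_host in reversed(chain):
        address = virtual_host          # substitution at each hop
    return address

# The client receives the address of a virtual host on its local firewall,
# not the true address of D.
assert transparent_lookup(FIREWALL_CHAIN, TRUE_ADDRESS_OF_D) == "192.0.2.105"
```

The model captures only the address-substitution behavior that makes the resolution "transparent" to the client; the DNS propagation mechanics are elided.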
`
`34. Separate from the “transparent” processing of DNS queries, Wesinger
`
`also describes firewall allow/disallow processing and further connection
`
`
`processing of ensuing connection requests. While the DNS query is received at a
`
`“dedicated virtual host” or “separate machine” serving as the DNS server (id. at
`
`8:58-61, 13:9-13), an ensuing connection request is received at a separate
`
`“daemon” on the firewall (id. at 16:20-21, “[t]he daemon then waits to receive a
`
`connection request”; see also id. at 15:5-9), as outlined in the excerpts of FIGS. 3
`
`and 8 reproduced below.
`
`
`
[Excerpts of FIGS. 3 and 8 of Wesinger]
`
`“When a connection request is received, the daemon spawns a process to handle
`
`the connection request,” which “determin[es], in accordance with the appropriate
`
`Allow and Deny databases, whether the connection is to be allowed.” (Id. at
`
16:21-28; see also id. at 15:9-12.) And “[o]nce the connection has been allowed,
`
`the virtual host process invokes code 818 that performs protocol-based connection
`
`processing and, optionally, code 823 that performs channel processing,” such as
`
`“encryption, decryption, compression, decompression, etc.” (Id. at 17:1-5.)
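The allow/disallow step described above can be sketched as follows (the rule format, names, and default-deny posture are assumptions for illustration; Wesinger’s actual Allow and Deny databases are more elaborate):

```python
# Hypothetical Allow and Deny databases keyed by (source, destination).
ALLOW_DB = {("client-c", "host-d")}
DENY_DB = {("client-x", "host-d")}

def handle_connection_request(src: str, dst: str) -> str:
    """Model of the spawned per-connection process: consult the databases,
    and only an allowed connection proceeds to protocol-based connection
    processing and optional channel processing (encryption, compression)."""
    if (src, dst) in DENY_DB:
        return "denied"
    if (src, dst) in ALLOW_DB:
        return "allowed"    # connection/channel processing would follow here
    return "denied"         # default-deny assumed for this sketch
```

This models only the gate on the ensuing connection request, which is separate from, and subsequent to, the transparent DNS resolution sketched earlier.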
`
`35. Wesinger is silent on how the connection request might arise
`
`following the DNS query, and does not describe the DNS-related “transparency” as
`
`extending to the connection request or to the resultant firewall allow/disallow,
`
`connection, and channel processing. To the contrary, Wesinger expressly discloses
`
`embodiments where the transparency does not extend to the connection request and
`
`related processing. For example, Wesinger describes a manual “out-of-band user
`
`authentication” technique that precludes any transparency in the connection request
`
`or resultant firewall processing discussed above. (Id. at 10:66-11: