Crowds: Anonymity for Web Transactions
MICHAEL K. REITER
Bell Laboratories, Lucent Technologies
and
AVIEL D. RUBIN
AT&T Labs-Research

In this paper we introduce a system called Crowds for protecting users' anonymity on the world-wide-web. Crowds, named for the notion of "blending into a crowd," operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties.

Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General-Security and protection; C.2.2 [Computer-Communication Networks]: Network Protocols-Applications; K.4.1 [Computers and Society]: Public Policy Issues-Privacy; K.4.4 [Computers and Society]: Electronic Commerce-Security

General Terms: Security

Additional Key Words and Phrases: Anonymous communication, world-wide-web

1. INTRODUCTION

Every man should know that his conversations, his correspondence, and his personal life are private. -Lyndon B. Johnson, president of the United States, 1963-69

The lack of privacy for transactions on the world-wide-web, or the Internet in general, is a well-documented fact [Brier 1997; Miller 1997]. While encrypting communication to and from web servers (e.g., using SSL [Garfinkel and Spafford 1997, Ch. 12]) can hide the content of the transaction from an eavesdropper (e.g., an Internet service provider, or a local
system administrator), the eavesdropper can still learn the IP addresses of the client and server computers, the length of the data being exchanged, and the time and frequency of exchanges. Encryption also does little to protect the privacy of the client from the server. A web server can record the Internet addresses at which its clients reside, the servers that referred the clients to it, and the times and frequencies of accesses by its clients. With additional effort, this information can be combined with other data to invade the privacy of clients even further. For example, by automatically fingering the client computer shortly after an access and comparing the idle time for each user of the client computer with the server access time, the server administrator can often deduce the exact user with high likelihood. Some consequences of such privacy abuses are described in Miller [1997].

In this paper we introduce a new approach for increasing the privacy of web transactions and a system, called Crowds, that implements it. Our approach is based on the idea of "blending into a crowd," i.e., hiding one's actions within the actions of many others. To execute web transactions in our model, a user first joins a "crowd" of other users. The user's request to a web server is first passed to a random member of the crowd. That member can either submit the request directly to the end server or forward it to another randomly chosen member, and in the latter case the next member chooses to submit or forward independently. When the request is eventually submitted, it is submitted by a random member, thus preventing the end server from identifying its true initiator. Even crowd members cannot identify the initiator of the request, since the initiator is indistinguishable from a member that simply forwards a request from another.

In studying the anonymity properties provided by this simple mechanism, we introduce the notion of degrees of anonymity. We argue that the degree of anonymity provided against an attacker can be viewed as a continuum, ranging from no anonymity to complete anonymity and having several interesting points in between. We informally define these intermediate points and, for our Crowds mechanism described above, we refine these definitions and prove anonymity properties for our system. We expect these definitions and proofs to yield insights into proving anonymity properties for other approaches as well.

An intriguing property of Crowds is that a member of a crowd may submit requests initiated by other users. This has both negative and positive consequences. On the negative side, the user may be incorrectly suspected of originating that request. On the positive side, this property suggests that the mere availability of Crowds offers the user some degree of deniability for her observed browsing behavior, if it is possible that she was using Crowds. Moreover, if Crowds becomes widely adopted, then the presumption that the computer from which a request is received is the computer that originated the request will become decreasingly valid (and thus decreasingly utilized).

The anonymity provided by Crowds is subject to some caveats. For example, Crowds obviously cannot protect a user's anonymity if the content
of her web transactions reveals her identity to the web server (e.g., if the user submits her name and credit card number in a web form). More subtly, Crowds can be undermined by executable web content that, if downloaded into the user's browser, can open network connections directly from the browser to web servers, thus bypassing Crowds altogether and exposing the user to the end server. In today's browsers, such executable content takes the form of Java applets and ActiveX controls. Therefore, when using Crowds, it is recommended that Java and ActiveX be disabled in the browser, which can typically be done via a simple preferences menu in the browser.

The rest of this paper is structured as follows: In Section 2, we state the anonymity goals of our system more precisely and introduce the notion of degrees of anonymity. This gives us sufficient groundwork to compare our approach to other approaches to anonymity in Section 3. We describe the basic Crowds mechanism in Section 4 and analyze its security in Section 5. We describe the performance and scalability of our system in Sections 6 and 7, respectively. We discuss crowd membership in Section 8, the system's user interface in Section 9, and the obstacles that firewalls present to wide-scale adoption of Crowds in Section 10. We conclude in Section 11.

2. GOALS

2.1 Anonymity

As discussed in Pfitzmann and Waidner [1987], there are three types of anonymous communication properties that can be provided: sender anonymity, receiver anonymity, and unlinkability of sender and receiver. Sender anonymity means that the identity of the party who sent a message is hidden, while its receiver (and the message itself) might not be. Receiver anonymity similarly means that the identity of the receiver is hidden. Unlinkability of sender and receiver means that though the sender and receiver can each be identified as participating in some communication, they cannot be identified as communicating with each other.

A second aspect of anonymous communication is the attackers against which these properties are achieved. The attacker might be an eavesdropper that can observe some or all messages sent and received; collaborations consisting of some senders, receivers, and other parties; or variations of these [Pfitzmann and Waidner 1987].

To these two aspects of anonymous communication, we add a third: the degree of anonymity. As shown in Figure 1, the degree of anonymity can be viewed as an informal continuum. For simplicity, we describe this continuum with respect to sender anonymity, but it can naturally be extended to receiver anonymity and unlinkability as well. On one end of the spectrum is absolute privacy: absolute sender privacy against an attacker means that the attacker can in no way distinguish the situations in which a potential sender actually sent communication and those in which it did not.
[Figure 1 depicts the continuum of degrees, from strongest to weakest: absolute privacy, beyond suspicion, probable innocence, possible innocence, exposed, provably exposed.]

Fig. 1. Degrees of anonymity: Degrees range from absolute privacy, where the attacker cannot perceive the presence of communication, to provably exposed, where the attacker can prove the sender, receiver, or their relationship to others.

That is, sending a message results in no observable effects for the attacker. On the other end of the spectrum is provably exposed: the identity of a sender is provably exposed if the attacker can identify the sender of a message, and can also prove the identity of the sender to others.

For the purposes of this paper, the following three intermediate points of this spectrum are of interest, listed from strongest to weakest.

-Beyond suspicion: A sender's anonymity is beyond suspicion if, though the attacker can see evidence of a sent message, the sender appears no more likely to be the originator of that message than any other potential sender in the system.

-Probable innocence: A sender is probably innocent if, from the attacker's point of view, the sender appears no more likely to be the originator than to not be the originator. This is weaker than beyond suspicion in that the attacker may have reason to suspect that the sender is more likely to be responsible than any other potential sender, but it still appears at least as likely that the sender is not responsible.

-Possible innocence: A sender is possibly innocent if, from the attacker's point of view, there is a nontrivial probability that the real sender is someone else.

It is possible to describe these intermediate points for receiver anonymity and sender/receiver unlinkability as well. When necessary, we define these intermediate points more precisely.
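
To make the informal definitions above a bit more concrete, here is one possible reading of them (our own illustration, not the precise definitions developed later in the paper) in terms of the probability, from the attacker's point of view, that a particular user sent the message. The function name and the choice of thresholds are ours; in particular, what counts as a "nontrivial" probability is left fuzzy in the definition.

import sys

def degree_of_sender_anonymity(p_suspect: float, p_others: list[float]) -> str:
    """Classify a sender's anonymity from the attacker's (posterior) probabilities.
    p_suspect is the probability the attacker assigns to this user having sent
    the message; p_others are the probabilities assigned to the other potential
    senders. This is an illustrative reading of the informal definitions only."""
    if p_suspect <= min(p_others):
        return "beyond suspicion"      # no more likely than any other potential sender
    if p_suspect <= 0.5:
        return "probable innocence"    # at least as likely to be innocent as to be the sender
    if p_suspect < 1.0:
        return "possible innocence"    # some chance the real sender is someone else
    return "exposed"

# Example: the attacker considers three other potential senders.
print(degree_of_sender_anonymity(0.25, [0.25, 0.25, 0.25]))   # beyond suspicion
print(degree_of_sender_anonymity(0.40, [0.30, 0.20, 0.10]))   # probable innocence
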
Which degree of anonymity suffices for a user obviously depends on the user and her circumstances. Probable innocence sender anonymity should prevent many types of attackers from acting on their suspicions (therefore avoiding many abuses, e.g., those cited in Miller [1997]) due to the high probability that those suspicions are incorrect. However, if the user wishes to avoid any suspicion whatsoever, including even suspicions not sufficiently certain for the attacker to act upon, then she should insist on beyond suspicion sender anonymity.

The default degree of anonymity on the web for most information and attackers is exposed, as described in Section 1. All recent versions of Netscape Navigator and Internet Explorer are configured to automatically identify the client computer to web servers, by passing information including the IP address and the host platform in request headers.
Table I. Anonymity Properties in Crowds

  Attacker: local eavesdropper
    Sender anonymity:   exposed
    Receiver anonymity: P(beyond suspicion) -> 1 as n -> infinity

  Attacker: c collaborating members, where n >= (p_f / (p_f - 1/2))(c + 1)
    Sender anonymity:   probable innocence; P(absolute privacy) -> 1 as n -> infinity
    Receiver anonymity: P(absolute privacy) -> 1 as n -> infinity

  Attacker: end server
    Sender anonymity:   beyond suspicion
    Receiver anonymity: N/A

2.2 What Crowds Achieves

As described in Section 1, our system consists of a dynamic collection of users, called a crowd. These users initiate web requests to various web servers (and receive replies from them), and thus the users are the "senders" and the servers are the "receivers". We consider the anonymity properties provided to an individual user against three distinct types of attackers:

-A local eavesdropper is an attacker who can observe all (and only) communication to and from the user's computer.

-Collaborating crowd members are other crowd members that can pool their information and even deviate from the prescribed protocol.

-The end server is the web server to which the web transaction is directed.

The above descriptions are intended to capture the full capabilities of each attacker. For example, collaborating members and the end server cannot eavesdrop on communication between other members. Similarly, a local eavesdropper cannot eavesdrop on messages other than those sent or received by the user's computer. A local eavesdropper is intended to model, e.g., an eavesdropper on the local area network of the user, such as an administrator monitoring web usage at a local firewall. However, if the same LAN also serves the end server, then the eavesdropper is effectively global, and we provide no protections against it.
The security offered against each of these types of attackers is summarized in Table I and justified in the remainder of the paper. As indicated by the omission of an "unlinkability of sender and receiver" column from this table, our system serves primarily to hide the sender or receiver from the attacker. In this table, n denotes the number of members in the crowd (for the moment we treat this as static) and p_f > 1/2 denotes the probability of forwarding, i.e., when a crowd member receives a request, the probability that it forwards the request to another member, rather than submitting it to the end server. (p_f is explained more fully in Section 4.) Two of the claims in the table, namely probable innocence sender anonymity against collaborating members and beyond suspicion sender anonymity against the end server, are guarantees. The probability of beyond suspicion receiver
anonymity against a local eavesdropper, on the other hand, only increases to one asymptotically as the crowd size increases to infinity. Put another way, if the local eavesdropper is sufficiently lucky, then it observes events that expose the receiver of a web request, and otherwise the receiver is beyond suspicion. However, the probability that it views these events decreases as a function of the size of the crowd. Similarly, a sender's assurance of absolute privacy against collaborating members also holds asymptotically with probability one as crowd size grows to infinity (for a constant number of collaborators). Thus, if the collaborators are unlucky, users achieve absolute privacy. We provide a more careful treatment of these notions in Section 5.
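
To make the crowd-size condition from Table I concrete, the following sketch (ours; the function name is illustrative, not part of the system) computes the smallest crowd size n satisfying n >= (p_f / (p_f - 1/2))(c + 1) for a given forwarding probability p_f and number of collaborators c.

import math

def min_crowd_size(p_f: float, c: int) -> int:
    """Smallest n satisfying n >= (p_f / (p_f - 1/2)) * (c + 1), the condition
    from Table I for probable innocence against c collaborating crowd members.
    Requires the forwarding probability p_f to exceed 1/2."""
    if not 0.5 < p_f <= 1.0:
        raise ValueError("forwarding probability p_f must exceed 1/2")
    return math.ceil((p_f / (p_f - 0.5)) * (c + 1))

# Example: with p_f = 3/4, tolerating c = 5 collaborators needs a crowd of at least 18.
print(min_crowd_size(0.75, 5))   # -> 18
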
Of course, against an attacker that is comprised of two or more of the attackers described above, our system yields degrees of sender and receiver anonymity that are the minimum among those provided against the attackers present. For example, if a local eavesdropper and the end server to which the user's request is destined collaborate in an attack, then our techniques achieve neither sender anonymity nor receiver anonymity. Another caveat is that all of the claims of sender and receiver anonymity in this section, and their justifications in the remainder of this paper, require that neither message contents themselves nor a priori knowledge of sender behavior give clues to the sender's or receiver's identity.

2.3 What Crowds Does Not Achieve

Crowds makes no effort to defend against denial-of-service attacks by rogue crowd members. A crowd member could, e.g., accept messages from other crowd members and refuse to pass them along. In our system, such denial-of-service can result from malicious behavior, but typically does not result if (the process representing) a crowd member fails benignly or leaves the crowd. As a result, these attacks are detectable. More difficult to detect are active attacks where crowd members substitute wrong information in response to web requests that they receive from other crowd members. Such attacks are inherent in any system that uses intermediaries to forward unprotected information, but fortunately they cannot be utilized to compromise anonymity directly.

3. RELATED WORK

There are two basic approaches previously proposed for achieving anonymous web transactions. The first approach is to interpose an additional party (a proxy) between the sender and receiver to hide the sender's identity from the receiver. Examples of such proxies include the Anonymizer (http://www.anonymizer.com/) and the Lucent Personalized Web Assistant [Gabber, Gibbons, Matias, and Mayer 1997] (http://lpwa.com). Crowds provides protection against a wider range of attackers than proxies do. In particular, proxy-based systems are entirely vulnerable to a passive attacker in control of the proxy, since the attacker can monitor and record the senders and receivers of all communication. Our system presents no
single point at which a passive attack can cripple all users' anonymity. In addition, a proxy is typically a single point of failure; i.e., if the proxy fails, then anonymous browsing cannot continue. In Crowds, no single failure discontinues all ongoing web transactions.

A second approach to achieving anonymous web transactions is to use a mix [Chaum 1981]. A mix is actually an enhanced proxy that, in addition to hiding the sender from the receiver, also takes measures to provide sender and receiver unlinkability against a global eavesdropper. It does so by collecting messages of equal length from senders, cryptographically altering them (typically by decrypting them with its private key), and forwarding the messages to their recipients in a different order. These techniques make it difficult for an eavesdropper to determine which output messages correspond to which input messages. A natural extension is to interpose a sequence of mixes between the sender and receiver [Chaum 1981]. A sequence of mixes can tolerate colluding mixes, as any single correctly behaving mix server in the sequence prevents an eavesdropper from linking the sender and receiver. Mixes have been implemented to support many types of communication, for example, electronic mail [Gulcu and Tsudik 1996], ISDN service [Pfitzmann, Pfitzmann, and Waidner 1991], and general synchronous communication (including web browsing) [Syverson, Goldschlag, and Reed 1997].
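
As a rough illustration of the mix idea just described (a simplified sketch of ours, not any particular mix implementation; the class and method names, batch size, message length, and placeholder transform are all assumptions), a single mix can be modeled as a server that batches fixed-length ciphertexts, transforms each one, and releases the batch in a random order.

import os
import random

class Mix:
    """Toy model of a single mix: collect equal-length messages, cryptographically
    transform each one, and forward the batch in a random order so an eavesdropper
    cannot match outputs to inputs by content, length, or arrival order."""

    MESSAGE_LEN = 256                      # mixes require fixed-length messages

    def __init__(self, batch_size: int):
        self.batch_size = batch_size
        self.pending = []                  # list of (recipient, ciphertext) pairs

    def accept(self, recipient: str, ciphertext: bytes) -> None:
        assert len(ciphertext) == self.MESSAGE_LEN
        self.pending.append((recipient, ciphertext))
        if len(self.pending) == self.batch_size:
            self.flush()

    def transform(self, ciphertext: bytes) -> bytes:
        # Stand-in for peeling one encryption layer with the mix's private key.
        return bytes(b ^ 0x5A for b in ciphertext)

    def flush(self) -> None:
        batch, self.pending = self.pending, []
        random.shuffle(batch)              # reorder to break the input/output linkage
        for recipient, ciphertext in batch:
            outgoing = self.transform(ciphertext)
            print("forwarding", len(outgoing), "bytes to", recipient)

# Example: three senders, each message padded to the fixed length.
mix = Mix(batch_size=3)
for recipient in ("serverA", "serverB", "serverC"):
    mix.accept(recipient, os.urandom(Mix.MESSAGE_LEN))
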
The properties offered by Crowds are different from those offered by mixes. As described above, Crowds provides (probable innocence) sender anonymity against collaborating crowd members. In contrast, in the closest analog to this attack in typical mix systems, i.e., a group of collaborating mix servers, mixes do not provide sender anonymity but do ensure sender and receiver unlinkability [Pfitzmann and Waidner 1987]. Another difference is that mixes provide sender and receiver unlinkability against a global eavesdropper. Crowds does not provide anonymity against global eavesdroppers. However, our intention is for a crowd to span multiple administrative domains, where the existence of a global eavesdropper is unlikely. Another difference is that mixes typically rely on public key encryption, the algebraic properties of which have been exploited to break some implementations [Pfitzmann and Pfitzmann 1990].

Crowds' unique properties admit very efficient implementations in comparison to mixes. With mixes, the length of a message routed through a mix network grows proportionally to the number of mixes through which it is routed, and the mix network must pad messages to fixed lengths and generate decoy messages to foil traffic analysis. Moreover, in a typical mix implementation, routing a message through a sequence of n mixes incurs a cost of n public key encryptions and n private key decryptions on the critical path of the message, which are comparatively expensive operations. Thus, since the unlinkability provided by mixes is tolerant of up to n - 1 mixes colluding, increasing n improves anonymity but hurts performance. Privacy in Crowds can similarly be enhanced by increasing the average number of times a request is forwarded among members before being
submitted to the end server, but this should impact performance less because there are no public/private key operations, no inflation of message transmission lengths (beyond a small, constant-size header), and no decoy messages needed.

Another performance advantage of Crowds is that since each user actively participates in the function of the crowd, the throughput of a crowd grows as a function of the number of users. In fact, we show in Section 7 that a crowd can scale almost limitlessly (in theory), in the sense that the load on each user's computer is expected to remain roughly constant as new users join the crowd. With a fixed network of mixes, the load of each server increases proportionally to the number of users, with a resulting linear decrease in throughput.

4. CROWD OVERVIEW

As discussed previously, a crowd can be thought of as a collection of users. A user is represented in a crowd by a process on her computer called a jondo (pronounced "John Doe" and meant to convey the image of a faceless participant). The user (or a local administrator) starts the jondo on the user's computer. When the jondo is started, it contacts a server called the blender to request admittance to the crowd. If admitted, the blender reports to this jondo the current membership of the crowd and information that enables this jondo to participate in the crowd. We defer further discussion of the blender and crowd membership maintenance to Section 8.

The user selects this jondo as her web proxy by specifying its host name and port number in her web browser as the proxy for all services. Thus, any request coming from the browser is sent directly to the jondo.1 Upon receiving the first user request from the browser, the jondo initiates the establishment of a random path of jondos that carries its users' transactions to and from their intended web servers. More precisely, the jondo picks a jondo from the crowd (possibly itself) at random, and forwards the request to it. When this jondo receives the request, it flips a biased coin to determine whether or not to forward the request to another jondo; the coin indicates to forward with probability p_f. If the result is to forward, then the jondo selects a random jondo and forwards the request to it, and otherwise the jondo submits the request to the end server for which the request was
destined. So, each request travels from the user's browser, through some number of jondos, and finally to the end server. A possible set of such paths is shown in Figure 2. In this figure, the paths are 1 -> 5 -> server; 2 -> 6 -> 2 -> server; 3 -> 1 -> 6 -> server; 4 -> 4 -> server; 5 -> 4 -> 6 -> server; and 6 -> 3 -> server.

1 The services that must be proxied include Gopher, FTP, HTTP, and SSL. Otherwise, e.g., FTP requests triggered by downloading a web page would not go through the crowd, and would thus reveal the user's IP address to the end server. Java and ActiveX should be disabled in the browser as well, because a Java applet or ActiveX control embedded in a retrieved web page could connect back to its server directly and reveal the user's IP address to that server.
Fig. 2. Paths in a crowd (the initiator and web server of each path are labeled the same).

Subsequent requests initiated at the same jondo follow the same path (except perhaps going to a different end server), and server replies traverse the same path as the requests, only in reverse.
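
To illustrate the forwarding rule just described, the following sketch (our illustration, with hypothetical names; it is not the actual jondo code) simulates path formation: the initiator always forwards to a random jondo first, and each subsequent jondo forwards again with probability p_f or submits to the end server otherwise.

import random

def form_path(initiator: int, n: int, p_f: float, rng: random.Random) -> list[int]:
    """Return the sequence of jondos on a path, beginning with the initiator.
    Jondos are numbered 0..n-1. The first hop is always to a random jondo
    (possibly the initiator itself); afterwards each jondo forwards with
    probability p_f and submits to the end server otherwise."""
    path = [initiator, rng.randrange(n)]   # initiator, then a mandatory random first hop
    while rng.random() < p_f:              # forward again to another random jondo
        path.append(rng.randrange(n))
    return path                            # the last jondo on the path submits the request

rng = random.Random(1)
n, p_f = 6, 0.75
for initiator in range(n):
    hops = form_path(initiator, n, p_f, rng)[1:]
    print(initiator, "->", " -> ".join(map(str, hops)), "-> server")

# In this simulation a request visits on average 1/(1 - p_f) jondos after leaving
# the initiator (4 for p_f = 3/4), since the number of extra forwards is geometric.
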
A pseudocode description of a jondo is presented in Figure 3. This figure describes a thread of execution that is executed per received request. This description uses client-server terminology, where one jondo is a client of its successor on the path. For each path, indicated by a path id, the value next[path_id] is the next jondo on the path. To assign next jondos for paths, each jondo maintains a set Jondos of jondos that it believes to be active (itself included). When it chooses to direct the path to another jondo, it selects the next jondo uniformly at random from this set (lines 6, 16, and 26); i.e., "<-R S" denotes selection from the set S uniformly at random. Subsequent sections shed greater light on the operation of a jondo and the pseudocode description of Figure 3.
It is important for the jondo at each position in a path to hold a different path identifier for the path. That is, if a jondo receives a request marked with path_id from its predecessor in a path, then it replaces path_id with a different path identifier stored in translate[path_id] before forwarding the request to its successor (if a jondo). This enables a jondo that occupies multiple positions on a path to act independently in each position: if the path_id remained the same along the path, then the jondo would behave identically each time it received a message on the path, resulting in an infinite loop. Path identifiers should be unique and unpredictable; in our present implementation, new_path_id() (lines 5 and 15) returns a random 128-bit value.
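
A minimal sketch of this bookkeeping (ours, not the actual jondo code; the helper names are illustrative) keeps a translate table that maps each incoming path identifier to a fresh, unpredictable 128-bit outgoing identifier, so that a jondo occupying two positions on the same path reacts independently at each position.

import secrets

translate: dict[int, int] = {}        # incoming path id -> this jondo's outgoing path id

def new_path_id() -> int:
    """Fresh, unpredictable path identifier; the paper's implementation uses random 128-bit values."""
    return secrets.randbits(128)

def outgoing_id(incoming_id: int) -> int:
    """Translate the predecessor's path id into this jondo's own id for the path,
    creating a new one the first time this incoming id is seen."""
    if incoming_id not in translate:
        translate[incoming_id] = new_path_id()
    return translate[incoming_id]
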
Omitted from the description in Figure 3 is the fact that all communication between any two jondos is encrypted using a key known only to the two of them. Encryption keys are established as jondos join the crowd, as is discussed in Section 8.
(1)  client, request <- receive_request()
(2)  if (client = browser)
(3)      sanitize(request)                              /* strip cookies and identifying headers */
(4)      if (my_path_id = ⊥)                            /* if my_path_id is not initialized ... */
(5)          my_path_id <- new_path_id()
(6)          next[my_path_id] <-R Jondos
(7)      forward_request(my_path_id)
(8)  else                                               /* client is a jondo */
(9)      path_id <- remove_path_id(request)             /* remove "incoming" path id */
(10)     if (translate[path_id] = ⊥)                    /* "incoming" path id is new */
(11)         coin <- coin_flip(p_f)                     /* tails with probability p_f */
(12)         if (coin = heads)
(13)             translate[path_id] <- 'submit'
(14)         else
(15)             translate[path_id] <- new_path_id()    /* set "outgoing" path id */
(16)             next[translate[path_id]] <-R Jondos    /* select next jondo at random */
(17)     if (translate[path_id] = 'submit')
(18)         submit_request()
(19)     else
(20)         forward_request(translate[path_id])

(21) subroutine forward_request(out_path_id)
(22)     send out_path_id || request to next[out_path_id]
(23)     reply <- await_reply(∞)                        /* wait for reply or recognizable jondo failure */
(24)     if (reply = 'jondo failed')                    /* jondo failed */
(25)         Jondos <- Jondos \ {next[out_path_id]}     /* remove the jondo */
(26)         next[out_path_id] <-R Jondos               /* assign a new random jondo for this path */
(27)         forward_request(out_path_id)               /* try again */
(28)     else                                           /* received reply from jondo */
(29)         send reply to client

(30) subroutine submit_request()
(31)     send request to destination(request)           /* send to destination web server */
(32)     reply <- await_reply(timeout)                  /* wait for reply, timeout, or server failure */
(33)     send reply to client                           /* send reply or error message to client */

Fig. 3. Pseudocode description of a jondo.

5. SECURITY ANALYSIS

In this section we consider the question of what information an attacker can learn about the senders and receivers of web transactions, given the mechanisms we described in Section 4. The types of attackers we consider were described in Section 2. Our analysis begins with the two attackers for which analysis is more straightforward, namely a local eavesdropper and the end server. This is followed by an analysis of crowd security versus collaborating jondos.

5.1 Local Eavesdropper

Recall that a local eavesdropper is an attacker that can observe all (and only) communication emanating from an individual user's computer. When this user initiates a request, the fact that she did so is exposed to the local eavesdropper, since we make no effort to hide correlations between inputs to and outputs from the initiating computer. That is, the local eavesdropper observes that a request output by the user's computer did not result from a corresponding input. Thus, we offer no sender anonymity against a local eavesdropper.
The mechanisms we described do, however, typically prevent a local eavesdropper from learning the intended receiver of a request, because every message forwarded on a path, except for the final request to the end server, is encrypted. Thus, while the eavesdropper is able to view any message emanating from the user's computer, it only views a message submitted to the end server (or equivalently a plaintext message containing the end server's address) if the user's jondo ultimately submits the user's request itself. Since the probability that the user's jondo ultimately submits the request is 1/n, where n is the size of the crowd when the path was created, the probability that the eavesdropper learns the identity of the receiver decreases as a function of crowd size. Moreover, when the user's jondo does not ultimately submit the request, the local eavesdropper sees only the encrypted address of the end server, which we suggest yields receiver anonymity that is (informally) beyond suspicion. Thus, for receiver anonymity, P(beyond suspicion) -> 1 as n -> infinity.
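
A quick numerical check of the 1/n claim (our sketch, reusing the forwarding rule from Section 4; the function name is illustrative): estimate how often the initiator's own jondo ends up submitting the request, which is exactly the event in which a local eavesdropper sees the end server's address in the clear.

import random

def initiator_submits(n: int, p_f: float, rng: random.Random) -> bool:
    """True if the initiator's own jondo (numbered 0) is the one that submits."""
    current = rng.randrange(n)          # the initiator forwards to a random jondo first
    while rng.random() < p_f:
        current = rng.randrange(n)      # forward again to another random jondo
    return current == 0                 # the final jondo submits to the end server

rng = random.Random(7)
n, p_f, trials = 20, 0.75, 200_000
hits = sum(initiator_submits(n, p_f, rng) for _ in range(trials))
print(hits / trials)   # close to 1/n = 0.05; the exposure probability shrinks as the crowd grows
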

5.2 End Servers

We now consider the security of our system against an attack by the end server only. Because the web server is the receiver, obviously receiver anonymity is not possible against this attacker. However, the anonymity for the path initiator is quite strong. In particular, since the path initiator first forwards to another jondo when creating its path (see Section 4), the end server is equally likely to receive the initiator's requests from any crowd member. That is, from the end server's perspective, all crowd members appear equally likely to have originated the request.
