Association for Computing Machinery
Advancing Computing as a Science & Profession

DECLARATION BY SCOTT DELMAN

I, Scott Delman, hereby declare as follows.
1. I am more than twenty-one (21) years of age and I am competent to make this declaration.

2. I am employed by the Association for Computing Machinery as the Director of Publications.

3. Among my responsibilities as Director of Publications, I act as a custodian of records for ACM and I am familiar with its record-keeping practices.

4. As part of its ordinary course of business, ACM publishes and makes available technical articles and other writings. These publications are also made available for public download through the ACM Digital Library.

5. I have been asked to certify the publication of “Crowds: anonymity for Web transactions”, available at https://dl.acm.org/doi/abs/10.1145/290163.290168 (attached as Exhibit A).

6. “Crowds: anonymity for Web transactions” was published in Volume 1, Issue 1 of ACM Transactions on Information and System Security in November 1998.

7. “Crowds: anonymity for Web transactions” was published online on November 24, 1998 and was available to the public shortly thereafter.

I declare under penalty of perjury under the laws of the United States of America that the foregoing is true and correct.

Executed on this the 24th day of June 2020.
Scott Delman
1601 Broadway, 10th Floor
New York, NY 10019-7434
Tel: +1-212-869-7440
Fax: +1-212-944-1318
Crowds: Anonymity for Web Transactions

MICHAEL K. REITER
Bell Laboratories, Lucent Technologies
and
AVIEL D. RUBIN
AT&T Labs—Research
In this paper we introduce a system called Crowds for protecting users’ anonymity on the world-wide-web. Crowds, named for the notion of “blending into a crowd,” operates by grouping users into a large and geographically diverse group (crowd) that collectively issues requests on behalf of its members. Web servers are unable to learn the true source of a request because it is equally likely to have originated from any member of the crowd, and even collaborating crowd members cannot distinguish the originator of a request from a member who is merely forwarding the request on behalf of another. We describe the design, implementation, security, performance, and scalability of our system. Our security analysis introduces degrees of anonymity as an important tool for describing and proving anonymity properties.

Categories and Subject Descriptors: C.2.0 [Computer-Communication Networks]: General—Security and protection; C.2.2 [Computer-Communication Networks]: Network Protocols—Applications; K.4.1 [Computers and Society]: Public Policy Issues—Privacy; K.4.4 [Computers and Society]: Electronic Commerce—Security

General Terms: Security

Additional Key Words and Phrases: Anonymous communication, world-wide-web
1. INTRODUCTION

Every man should know that his conversations, his correspondence, and his personal life are private.—Lyndon B. Johnson, president of the United States, 1963-69
The lack of privacy for transactions on the world-wide-web, or the Internet in general, is a well-documented fact [Brier 1997; Miller 1997]. While encrypting communication to and from web servers (e.g., using SSL [Garfinkel and Spafford 1997, Ch. 12]) can hide the content of the transaction from an eavesdropper (e.g., an Internet service provider, or a local
system administrator), the eavesdropper can still learn the IP addresses of the client and server computers, the length of the data being exchanged, and the time and frequency of exchanges. Encryption also does little to protect the privacy of the client from the server. A web server can record the Internet addresses at which its clients reside, the servers that referred the clients to it, and the times and frequencies of accesses by its clients. With additional effort, this information can be combined with other data to invade the privacy of clients even further. For example, by automatically fingering the client computer shortly after an access and comparing the idle time for each user of the client computer with the server access time, the server administrator can often deduce the exact user with high likelihood. Some consequences of such privacy abuses are described in Miller [1997].
In this paper we introduce a new approach for increasing the privacy of web transactions and a system, called Crowds, that implements it. Our approach is based on the idea of “blending into a crowd,” i.e., hiding one’s actions within the actions of many others. To execute web transactions in our model, a user first joins a “crowd” of other users. The user’s request to a web server is first passed to a random member of the crowd. That member can either submit the request directly to the end server or forward it to another randomly chosen member, and in the latter case the next member chooses to submit or forward independently. When the request is eventually submitted, it is submitted by a random member, thus preventing the end server from identifying its true initiator. Even crowd members cannot identify the initiator of the request, since the initiator is indistinguishable from a member that simply forwards a request from another.
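To make the mechanism concrete, the following is a minimal Monte Carlo sketch of the forwarding rule just described; it is our illustration rather than code from the paper, and the member indices, the value p_f = 0.75, and the function names are assumptions. Because every hop is chosen uniformly at random, the member that finally submits is close to uniformly distributed over the whole crowd, regardless of who initiated the request.

```python
import random
from collections import Counter

def submitter(n, p_f, rng):
    """Return the index of the crowd member (0..n-1) that ends up submitting
    a request.  The originator hands the request to a uniformly random member
    (possibly itself); each holder then forwards it to another uniformly random
    member with probability p_f, or submits it with probability 1 - p_f.
    Note that the originator's identity never enters the calculation, which is
    the point: the submitter reveals nothing about who initiated the request."""
    holder = rng.randrange(n)          # first hop
    while rng.random() < p_f:          # biased coin: forward vs. submit
        holder = rng.randrange(n)      # next hop
    return holder

rng = random.Random(0)
counts = Counter(submitter(n=10, p_f=0.75, rng=rng) for _ in range(100_000))
print(sorted(counts.items()))          # each member submits roughly 10% of the time
```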
In studying the anonymity properties provided by this simple mechanism, we introduce the notion of degrees of anonymity. We argue that the degree of anonymity provided against an attacker can be viewed as a continuum, ranging from no anonymity to complete anonymity and having several interesting points in between. We informally define these intermediate points and, for our Crowds mechanism described above, we refine these definitions and prove anonymity properties for our system. We expect these definitions and proofs to yield insights into proving anonymity properties for other approaches as well.
An intriguing property of Crowds is that a member of a crowd may submit requests initiated by other users. This has both negative and positive consequences. On the negative side, the user may be incorrectly suspected of originating that request. On the positive side, this property suggests that the mere availability of Crowds offers the user some degree of deniability for her observed browsing behavior, if it is possible that she was using Crowds. Moreover, if Crowds becomes widely adopted, then the presumption that the computer from which a request is received is the computer that originated the request will become decreasingly valid (and thus decreasingly utilized).
The anonymity provided by Crowds is subject to some caveats. For example, Crowds obviously cannot protect a user’s anonymity if the content
of her web transactions reveals her identity to the web server (e.g., if the user submits her name and credit card number in a web form). More subtly, Crowds can be undermined by executable web content that, if downloaded into the user’s browser, can open network connections directly from the browser to web servers, thus bypassing Crowds altogether and exposing the user to the end server. In today’s browsers, such executable content takes the form of Java applets and ActiveX controls. Therefore, when using Crowds, it is recommended that Java and ActiveX be disabled in the browser, which can typically be done via a simple preferences menu in the browser.
The rest of this paper is structured as follows: In Section 2, we state the anonymity goals of our system more precisely and introduce the notion of degrees of anonymity. This gives us sufficient groundwork to compare our approach to other approaches to anonymity in Section 3. We describe the basic Crowds mechanism in Section 4 and analyze its security in Section 5. We describe the performance and scalability of our system in Sections 6 and 7, respectively. We discuss crowd membership in Section 8, the system’s user interface in Section 9, and the obstacles that firewalls present to wide scale adoption of Crowds in Section 10. We conclude in Section 11.
2. GOALS

2.1 Anonymity
As discussed in Pfitzmann and Waidner [1987], there are three types of anonymous communication properties that can be provided: sender anonymity, receiver anonymity, and unlinkability of sender and receiver. Sender anonymity means that the identity of the party who sent a message is hidden, while its receiver (and the message itself) might not be. Receiver anonymity similarly means that the identity of the receiver is hidden. Unlinkability of sender and receiver means that though the sender and receiver can each be identified as participating in some communication, they cannot be identified as communicating with each other.

A second aspect of anonymous communication is the attackers against which these properties are achieved. The attacker might be an eavesdropper that can observe some or all messages sent and received; collaborations consisting of some senders, receivers, and other parties; or variations of these [Pfitzmann and Waidner 1987].

To these two aspects of anonymous communication, we add a third: the degree of anonymity. As shown in Figure 1, the degree of anonymity can be viewed as an informal continuum. For simplicity, we describe this continuum with respect to sender anonymity, but it can naturally be extended to receiver anonymity and unlinkability as well. On one end of the spectrum is absolute privacy: absolute sender privacy against an attacker means that the attacker can in no way distinguish the situations in which a potential sender actually sent communication and those in which it did not. That is,
[Figure 1 shows a horizontal spectrum with the following points, from left to right: absolute privacy, beyond suspicion, probable innocence, possible innocence, exposed, provably exposed.]

Fig. 1. Degrees of anonymity: Degrees range from absolute privacy, where the attacker cannot perceive the presence of communication, to provably exposed, where the attacker can prove the sender, receiver, or their relationship to others.
sending a message results in no observable effects for the attacker. On the other end of the spectrum is provably exposed: the identity of a sender is provably exposed if the attacker can identify the sender of a message, and can also prove the identity of the sender to others.

For the purposes of this paper, the following three intermediate points of this spectrum are of interest, listed from strongest to weakest.

—Beyond suspicion: A sender’s anonymity is beyond suspicion if, though the attacker can see evidence of a sent message, the sender appears no more likely to be the originator of that message than any other potential sender in the system.

—Probable innocence: A sender is probably innocent if, from the attacker’s point of view, the sender appears no more likely to be the originator than to not be the originator. This is weaker than beyond suspicion in that the attacker may have reason to suspect that the sender is more likely to be responsible than any other potential sender, but it still appears at least as likely that the sender is not responsible.

—Possible innocence: A sender is possibly innocent if, from the attacker’s point of view, there is a nontrivial probability that the real sender is someone else.

It is possible to describe these intermediate points for receiver anonymity and sender/receiver unlinkability as well. When necessary, we define these intermediate points more precisely.
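As a rough numerical reading of these informal definitions (our interpretation, not thresholds stated by the paper, which refines the definitions only where needed), one can classify the degree of sender anonymity from the probability p that the attacker assigns to a particular member being the originator, among n potential senders. The function name and cutoffs below are illustrative assumptions.

```python
def degree_of_anonymity(p, n):
    """Hypothetical classifier: p is the attacker's probability that a given
    member originated the message; n is the number of potential senders.
    Beyond suspicion: no more likely than anyone else (p <= 1/n).
    Probable innocence: at least as likely not responsible (p <= 1/2).
    Possible innocence: a nontrivial chance it is someone else (p < 1)."""
    if p <= 1.0 / n:
        return "beyond suspicion"
    if p <= 0.5:
        return "probable innocence"
    if p < 1.0:
        return "possible innocence"
    return "exposed"

print(degree_of_anonymity(0.4, n=10))   # probable innocence
```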
Which degree of anonymity suffices for a user obviously depends on the user and her circumstances. Probable innocence sender anonymity should prevent many types of attackers from acting on their suspicions (therefore avoiding many abuses, e.g., cited in Miller [1997]) due to the high probability that those suspicions are incorrect. However, if the user wishes to avoid any suspicion whatsoever—including even suspicions not sufficiently certain for the attacker to act upon—then she should insist on beyond suspicion sender anonymity.

The default degree of anonymity on the web for most information and attackers is exposed, as described in Section 1. All recent versions of Netscape Navigator and Internet Explorer are configured to automatically identify the client computer to web servers, by passing information including the IP address and the host platform in request headers.
Table I. Anonymity Properties in Crowds

  Attacker                                 Sender anonymity                      Receiver anonymity
  local eavesdropper                       exposed                               P(beyond suspicion) → 1 as n → ∞
  c collaborating members,                 probable innocence;                   P(absolute privacy) → 1 as n → ∞
    n ≥ (p_f/(p_f − 1/2))(c + 1)           P(absolute privacy) → 1 as n → ∞
  end server                               beyond suspicion                      N/A
2.2 What Crowds Achieves

As described in Section 1, our system consists of a dynamic collection of users, called a crowd. These users initiate web requests to various web servers (and receive replies from them), and thus the users are the “senders” and the servers are the “receivers”. We consider the anonymity properties provided to an individual user against three distinct types of attackers:

—A local eavesdropper is an attacker who can observe all (and only) communication to and from the user’s computer.

—Collaborating crowd members are other crowd members that can pool their information and even deviate from the prescribed protocol.

—The end server is the web server to which the web transaction is directed.

The above descriptions are intended to capture the full capabilities of each attacker. For example, collaborating members and the end server cannot eavesdrop on communication between other members. Similarly, a local eavesdropper cannot eavesdrop on messages other than those sent or received by the user’s computer. A local eavesdropper is intended to model, e.g., an eavesdropper on the local area network of the user, such as an administrator monitoring web usage at a local firewall. However, if the same LAN also serves the end server, then the eavesdropper is effectively global, and we provide no protections against it.
The security offered against each of these types of attackers is summarized in Table I and justified in the remainder of the paper. As indicated by the omission of an “unlinkability of sender and receiver” column from this table, our system serves primarily to hide the sender or receiver from the attacker. In this table, n denotes the number of members in the crowd (for the moment we treat this as static) and p_f > 1/2 denotes the probability of forwarding, i.e., when a crowd member receives a request, the probability that it forwards the request to another member, rather than submitting it to the end server. (p_f is explained more fully in Section 4.) The boldface claims in the table—i.e., probable innocence sender anonymity against collaborating members and beyond suspicion sender anonymity against the end server—are guarantees. The probability of beyond suspicion receiver
anonymity against a local eavesdropper, on the other hand, only increases to one asymptotically as the crowd size increases to infinity. Put another way, if the local eavesdropper is sufficiently lucky, then it observes events that expose the receiver of a web request, and otherwise the receiver is beyond suspicion. However, the probability that it views these events decreases as a function of the size of the crowd. Similarly, a sender’s assurance of absolute privacy against collaborating members also holds asymptotically with probability one as crowd size grows to infinity (for a constant number of collaborators). Thus, if the collaborators are unlucky, users achieve absolute privacy. We provide a more careful treatment of these notions in Section 5.
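The crowd-size condition in Table I can be turned into a small calculation. The sketch below (our illustration; the function name is an assumption) computes the smallest crowd size n satisfying n ≥ (p_f/(p_f − 1/2))(c + 1), the bound under which the table claims probable innocence against c collaborating members.

```python
import math

def min_crowd_size(p_f, c):
    """Smallest integer n with n >= (p_f / (p_f - 1/2)) * (c + 1),
    the Table I condition for probable innocence against c collaborating
    crowd members (the forwarding probability must satisfy p_f > 1/2)."""
    if not (0.5 < p_f < 1.0):
        raise ValueError("p_f must satisfy 1/2 < p_f < 1")
    return math.ceil((p_f / (p_f - 0.5)) * (c + 1))

# e.g., with p_f = 3/4, tolerating 5 collaborators needs a crowd of at least:
print(min_crowd_size(0.75, 5))   # 18
```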
Of course, against an attacker that is comprised of two or more of the attackers described above, our system yields degrees of sender and receiver anonymity that are the minimum among those provided against the attackers present. For example, if a local eavesdropper and the end server to which the user’s request is destined collaborate in an attack, then our techniques achieve neither sender anonymity nor receiver anonymity. Another caveat is that all of the claims of sender and receiver anonymity in this section, and their justifications in the remainder of this paper, require that neither message contents themselves nor a priori knowledge of sender behavior give clues to the sender’s or receiver’s identity.
2.3 What Crowds Does Not Achieve

Crowds makes no effort to defend against denial-of-service attacks by rogue crowd members. A crowd member could, e.g., accept messages from other crowd members and refuse to pass them along. In our system, such denial-of-service can result from malicious behavior, but typically does not result if (the process representing) a crowd member fails benignly or leaves the crowd. As a result, these attacks are detectable. More difficult to detect are active attacks where crowd members substitute wrong information in response to web requests that they receive from other crowd members. Such attacks are inherent in any system that uses intermediaries to forward unprotected information, but fortunately they cannot be utilized to compromise anonymity directly.
3. RELATED WORK

There are two basic approaches previously proposed for achieving anonymous web transactions. The first approach is to interpose an additional party (a proxy) between the sender and receiver to hide the sender’s identity from the receiver. Examples of such proxies include the Anonymizer (http://www.anonymizer.com/) and the Lucent Personalized Web Assistant [Gabber, Gibbons, Matias, and Mayer 1997] (http://lpwa.com). Crowds provides protection against a wider range of attackers than proxies do. In particular, proxy-based systems are entirely vulnerable to a passive attacker in control of the proxy, since the attacker can monitor and record the senders and receivers of all communication. Our system presents no
`
`ACM Transactions on Infarmatinn and Suctam Soenrite Val
`1 Na 1 Mawamher 1998.
`CODE200 ET AL. EXHIBIT 1012
`Page 7 of 33
`
`Code200 Exhibit 1007
`Page 7 of 33
`
`Code200 Exhibit 1007
`Page 7 of 33
`
`

`

`72
`
`e
`
`M. K. Reiter and A. D. Rubin
`
single point at which a passive attack can cripple all users’ anonymity. In addition, a proxy is typically a single point of failure; i.e., if the proxy fails, then anonymous browsing cannot continue. In Crowds, no single failure discontinues all ongoing web transactions.
A second approach to achieving anonymous web transactions is to use a mix [Chaum 1981]. A mix is actually an enhanced proxy that, in addition to hiding the sender from the receiver, also takes measures to provide sender and receiver unlinkability against a global eavesdropper. It does so by collecting messages of equal length from senders, cryptographically altering them (typically by decrypting them with its private key), and forwarding the messages to their recipients in a different order. These techniques make it difficult for an eavesdropper to determine which output messages correspond to which input messages. A natural extension is to interpose a sequence of mixes between the sender and receiver [Chaum 1981]. A sequence of mixes can tolerate colluding mixes, as any single correctly-behaving mix server in the sequence prevents an eavesdropper from linking the sender and receiver. Mixes have been implemented to support many types of communication, for example, electronic mail [Gulcu and Tsudik 1996], ISDN service [Pfitzmann, Pfitzmann, and Waidner 1991], and general synchronous communication (including web browsing) [Syverson, Goldschlag, and Reed 1997].
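To illustrate the batching-and-reordering idea behind a mix, here is a deliberately simplified toy sketch (ours, not any real mix implementation); the cryptographic layer is reduced to a placeholder "decrypt" callable, and all names are assumptions.

```python
import random

def mix_batch(ciphertexts, decrypt, rng):
    """Toy mix: collect a batch of equal-length ciphertexts, 'decrypt' each
    one (standing in for the mix stripping a layer of encryption with its
    private key), and emit the results in a random order so an eavesdropper
    cannot match outputs to inputs by position."""
    assert len({len(c) for c in ciphertexts}) <= 1, "messages must be padded to equal length"
    plaintexts = [decrypt(c) for c in ciphertexts]
    rng.shuffle(plaintexts)
    return plaintexts

# Placeholder 'decryption' for the sketch: strip a one-byte layer marker.
out = mix_batch([b"Xhello", b"Xworld"], decrypt=lambda c: c[1:], rng=random.Random(1))
print(out)
```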
The properties offered by Crowds are different from those offered by mixes. As described above, Crowds provides (probable innocence) sender anonymity against collaborating crowd members. In contrast, in the closest analog to this attack in typical mix systems—i.e., a group of collaborating mix servers—mixes do not provide sender anonymity but do ensure sender and receiver unlinkability [Pfitzmann and Waidner 1987]. Another difference is that mixes provide sender and receiver unlinkability against a global eavesdropper. Crowds does not provide anonymity against global eavesdroppers. However, our intention is for a crowd to span multiple administrative domains, where the existence of a global eavesdropper is unlikely. Another difference is that mixes typically rely on public key encryption, the algebraic properties of which have been exploited to break some implementations [Pfitzmann and Pfitzmann 1990].
Crowds’ unique properties admit very efficient implementations in comparison to mixes. With mixes, the length of a message routed through a mix network grows proportionally to the number of mixes through which it is routed, and the mix network must pad messages to fixed lengths and generate decoy messages to foil traffic analysis. Moreover, in a typical mix implementation, routing a message through a sequence of n mixes incurs a cost of n public key encryptions and n private key decryptions on the critical path of the message, which are comparatively expensive operations. Thus, since the unlinkability provided by mixes is tolerant of up to n − 1 mixes colluding, increasing n improves anonymity but hurts performance. Privacy in Crowds can similarly be enhanced by increasing the average number of times a request is forwarded among members before being
submitted to the end server, but this should impact performance less because there are no public/private key operations, no inflation of message transmission lengths (beyond a small, constant-size header), and no decoy messages needed.
Another performance advantage of Crowds is that since each user actively participates in the function of the crowd, the throughput of a crowd grows as a function of the number of users. In fact, we show in Section 7 that a crowd can scale almost limitlessly (in theory), in the sense that the load on each user’s computer is expected to remain roughly constant as new users join the crowd. With a fixed network of mixes, the load of each server increases proportionally to the number of users, with a resulting linear decrease in throughput.
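The constant-load intuition can be checked with a small simulation (ours, under the simplifying assumption that each of the n users lays down exactly one path; the numbers and function names are not from the paper).

```python
import random
from statistics import mean

def appearances_per_jondo(n, p_f, rng):
    """Each of the n users builds one path; count how many forwarding
    positions land on each jondo, and return the average per jondo."""
    load = [0] * n
    for _initiator in range(n):
        jondo = rng.randrange(n)           # first hop is always taken
        load[jondo] += 1
        while rng.random() < p_f:          # keep forwarding with probability p_f
            jondo = rng.randrange(n)
            load[jondo] += 1
    return mean(load)

rng = random.Random(3)
for n in (10, 100, 1000):
    print(n, round(appearances_per_jondo(n, 0.75, rng), 2))
# the average load per jondo stays near 1/(1 - p_f) = 4, independent of crowd size
```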
4. CROWD OVERVIEW

As discussed previously, a crowd can be thought of as a collection of users. A user is represented in a crowd by a process on her computer called a jondo (pronounced “John Doe” and meant to convey the image of a faceless participant). The user (or a local administrator) starts the jondo on the user’s computer. When the jondo is started, it contacts a server called the blender to request admittance to the crowd. If admitted, the blender reports to this jondo the current membership of the crowd and information that enables this jondo to participate in the crowd. We defer further discussion of the blender and crowd membership maintenance to Section 8.
The user selects this jondo as her web proxy by specifying its host name and port number in her web browser as the proxy for all services. Thus, any request coming from the browser is sent directly to the jondo.¹ Upon receiving the first user request from the browser, the jondo initiates the establishment of a random path of jondos that carries its users’ transactions to and from their intended web servers. More precisely, the jondo picks a jondo from the crowd (possibly itself) at random, and forwards the request to it. When this jondo receives the request, it flips a biased coin to determine whether or not to forward the request to another jondo; the coin indicates to forward with probability p_f. If the result is to forward, then the jondo selects a random jondo and forwards the request to it, and otherwise the jondo submits the request to the end server for which the request was destined. So, each request travels from the user’s browser, through some number of jondos, and finally to the end server. A possible set of such paths is shown in Figure 2. In this figure, the paths are 1 → 5 → server; 2 → 6 → 2 → server; 3 → 1 → 6 → server; 4 → 4 → server; 5 → 4 → 6 → server; and 6 → 3 → server.
¹The services that must be proxied include Gopher, FTP, HTTP, and SSL. Otherwise, e.g., FTP requests triggered by downloading a web page would not go through the crowd, and would thus reveal the user’s IP address to the end server. Java and ActiveX should be disabled in the browser as well, because a Java applet or ActiveX control embedded in a retrieved web page could connect back to its server directly and reveal the user’s IP address to that server.
[Figure 2 depicts a crowd of six numbered jondos and a set of web servers, with each request path winding through the crowd before reaching a server.]

Fig. 2. Paths in a crowd (the initiator and web server of each path are labeled the same).
Subsequent requests initiated at the same jondo follow the same path (except perhaps going to a different end server), and server replies traverse the same path as the requests, only in reverse.
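As a purely illustrative sketch of the path-building rule just described, the following Python fragment constructs one path through a crowd of n jondos: the initiating jondo always forwards to a uniformly random jondo (possibly itself), and each subsequent jondo forwards again with probability p_f or submits otherwise. The names and parameters are ours, not from the paper's implementation.

```python
import random

def build_path(n, p_f, initiator, rng):
    """Return the list of jondo indices a request visits, starting at the
    initiator's own jondo and ending with the jondo that submits the
    request to the end server."""
    path = [initiator]
    path.append(rng.randrange(n))      # first hop: uniformly random jondo, possibly itself
    while rng.random() < p_f:          # forward again with probability p_f ...
        path.append(rng.randrange(n))  # ... to another uniformly random jondo
    return path                        # the last element submits to the end server

rng = random.Random(2)
print(build_path(n=6, p_f=0.75, initiator=3, rng=rng))   # one sample path, reused for later requests
```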
A pseudocode description of a jondo is presented in Figure 3. This figure describes a thread of execution that is executed per received request. This description uses client-server terminology, where one jondo is a client of its successor on the path. For each path, indicated by a path id, the value next[path_id] is the next jondo on the path. To assign next jondos for paths, each jondo maintains a set Jondos of jondos that it believes to be active (itself included). When it chooses to direct the path to another jondo, it selects the next jondo uniformly at random from this set (lines 6, 16, and 26); i.e., “←_R S” denotes selection from the set S uniformly at random. Subsequent sections shed greater light on the operation of a jondo and the pseudocode description of Figure 3.
It is important for the jondo at each position in a path to hold a different path identifier for the path. That is, if a jondo receives a request marked with path_id from its predecessor in a path, then it replaces path_id with a different path identifier stored in translate[path_id] before forwarding the request to its successor (if a jondo). This enables a jondo that occupies multiple positions on a path to act independently in each position: if the path_id remained the same along the path, then the jondo would behave identically each time it received a message on the path, resulting in an infinite loop. Path identifiers should be unique and unpredictable; in our present implementation, new_path_id() (lines 5 and 15) returns a random 128-bit value.
(1)  client, request ← receive_request()
(2)  if (client = browser)
(3)    sanitize(request)                          /* strip cookies and identifying headers */
(4)    if (my_path_id = ⊥)                        /* if my_path_id is not initialized ... */
(5)      my_path_id ← new_path_id()
(6)      next[my_path_id] ←_R Jondos
(7)    forward_request(my_path_id)
(8)  else                                         /* client is a jondo */
(9)    path_id ← remove_path_id(request)          /* remove “incoming” path id */
(10)   if (translate[path_id] = ⊥)                /* “incoming” path id is new */
(11)     coin ← flip_coin()                       /* tails with probability p_f */
(12)     if (coin = heads)
(13)       translate[path_id] ← ‘submit’
(14)     else
(15)       translate[path_id] ← new_path_id()     /* set “outgoing” path id */
(16)       next[translate[path_id]] ←_R Jondos    /* select next jondo at random */
(17)   if (translate[path_id] = ‘submit’)
(18)     submit_request()
(19)   else
(20)     forward_request(translate[path_id])

(21) subroutine forward_request(out_path_id)
(22)   send out_path_id||request to next[out_path_id]
(23)   reply ← await_reply(∞)                     /* wait for reply or recognizable jondo failure */
(24)   if (reply = ‘jondo failed’)                /* jondo failed */
(25)     Jondos ← Jondos \ {next[out_path_id]}    /* remove the jondo */
(26)     next[out_path_id] ←_R Jondos             /* assign a new random jondo for this path */
(27)     forward_request(out_path_id)             /* try again */
(28)   else                                       /* received reply from jondo */
(29)     send reply to client

(30) subroutine submit_request()
(31)   send request to destination(request)       /* send to destination web server */
(32)   reply ← await_reply(timeout)               /* wait for reply, timeout, or server failure */
(33)   send reply to client                       /* send reply or error message to client */

Fig. 3. Pseudocode description of a jondo.
Omitted from the description in Figure 3 is the fact that all communication between any two jondos is encrypted using a key known only to the two of them. Encryption keys are established as jondos join the crowd, as is discussed in Section 8.
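For readers who prefer a runnable rendering, the following is a rough Python sketch of the per-request logic of Figure 3. It is our own simplification, not the paper's implementation: the network layer is abstracted behind caller-supplied send callables, the failure recovery of lines 24-27 is omitted, the inter-jondo encryption noted above is not shown, and all names and parameters are assumptions.

```python
import random
import secrets

SUBMIT = "submit"

class Jondo:
    """Rough sketch of the per-request logic of Figure 3 (failure handling
    on lines 24-27 and link encryption between jondos are omitted)."""

    def __init__(self, jondos, p_f, send_to_jondo, send_to_server, rng=None):
        self.jondos = list(jondos)              # jondos believed active (itself included)
        self.p_f = p_f                          # forwarding probability, p_f > 1/2
        self.send_to_jondo = send_to_jondo      # callable(next_jondo, path_id, request) -> reply
        self.send_to_server = send_to_server    # callable(request) -> reply
        self.rng = rng or random.SystemRandom()
        self.next = {}                          # path id -> next jondo on that path
        self.translate = {}                     # incoming path id -> outgoing path id or SUBMIT
        self.my_path_id = None

    @staticmethod
    def new_path_id():
        return secrets.randbits(128)            # unique, unpredictable 128-bit path id

    def handle_browser_request(self, request):
        """Request from this jondo's own user (lines 2-7)."""
        request = sanitize(request)             # strip cookies and identifying headers
        if self.my_path_id is None:             # first request: lay down this user's path
            self.my_path_id = self.new_path_id()
            self.next[self.my_path_id] = self.rng.choice(self.jondos)
        return self.forward(self.my_path_id, request)

    def handle_jondo_request(self, path_id, request):
        """Request arriving from a predecessor jondo (lines 8-20)."""
        if path_id not in self.translate:       # incoming path id is new: flip the biased coin
            if self.rng.random() < self.p_f:    # "tails": extend the path
                out_id = self.new_path_id()
                self.translate[path_id] = out_id
                self.next[out_id] = self.rng.choice(self.jondos)
            else:                               # "heads": this jondo will submit
                self.translate[path_id] = SUBMIT
        out = self.translate[path_id]
        if out == SUBMIT:
            return self.send_to_server(request)
        return self.forward(out, request)

    def forward(self, out_path_id, request):
        return self.send_to_jondo(self.next[out_path_id], out_path_id, request)

def sanitize(request):
    """Placeholder: drop obviously identifying headers from a header dict."""
    return {k: v for k, v in request.items() if k.lower() not in ("cookie", "from", "referer")}
```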
5. SECURITY ANALYSIS

In this section we consider the question of what information an attacker can learn about the senders and receivers of web transactions, given the mechanisms we described in Section 4. The types of attackers we consider were described in Section 2. Our analysis begins with the two attackers for which analysis is more straightforward, namely a local eavesdropper and the end server. This is followed by an analysis of crowd security versus collaborating jondos.
5.1 Local Eavesdropper

Recall that a local eavesdropper is an attacker that can observe all (and only) communication emanating from an individual user’s computer. When this user initiates a request, the fact that she did so is exposed to the local eavesdropper, since we make no effort to hide correlations between inputs to and outputs from the initiating computer. That is, the local eavesdropper observes that a request output by the user’s computer did not result from a corresponding input. Thus, we offer no sender anonymity against a local eavesdropper.
The mechanisms we described do, however, typically prevent a local eavesdropper from learning the intended receiver of a request, because every message forwarded on a path, except for the final request to the end server, is encrypted. Thus, while the eavesdropper is able to view any message emanating from the user’s computer, it only views a message submitted to the end server (or equivalently a plaintext message containing the end server’s address) if the user’s jondo ultimately submits the user’s request itself. Since the probability that the user
