Collective Intelligence and its Implementation on the Web: algorithms to develop a collective mental map

Francis HEYLIGHEN*

Center “Leo Apostel”, Free University of Brussels
Address: Krijgskundestraat 33, B-1160 Brussels, Belgium
E-mail: fheyligh@vub.ac.be
home page: http://pespmc1.vub.ac.be/HEYL.html
ABSTRACT.

Collective intelligence is defined as the ability of a group to solve more problems than its individual members. It is argued that the obstacles created by individual cognitive limits and the difficulty of coordination can be overcome by using a collective mental map (CMM). A CMM is defined as an external memory with shared read/write access, that represents problem states, actions and preferences for actions. It can be formalized as a weighted, directed graph. The creation of a network of pheromone trails by ant colonies points us to some basic mechanisms of CMM development: averaging of individual preferences, amplification of weak links by positive feedback, and integration of specialised subnetworks through division of labor. Similar mechanisms can be used to transform the World-Wide Web into a CMM, by supplementing it with weighted links. Two types of algorithms are explored: 1) the co-occurrence of links in web pages or user selections can be used to compute a matrix of link strengths, thus generalizing the technique of “collaborative filtering”; 2) learning web rules extract information from a user’s sequential path through the web in order to change link strengths and create new links. The resulting weighted web can be used to facilitate problem-solving by suggesting related links to the user, or, more powerfully, by supporting a software agent that discovers relevant documents through spreading activation.
1. Introduction

With the growing interest in complex adaptive systems, artificial life, swarms and simulated societies, the concept of “collective intelligence” is coming more and more to the fore. The basic idea is that a group of individuals (e.g. people, insects, robots, or software agents) can be smart in a way that none of its members is. Complex, apparently intelligent behavior may emerge from the synergy created by simple interactions between individuals that follow simple rules.
* Research Associate FWO (Fund for Scientific Research-Flanders)
IPR2020-00686
Apple EX1025 Page 1
To be more accurate, we can define intelligence as the ability to solve problems. A system is more intelligent than another system if, in a given time interval, it can solve more problems, or find better solutions to the same problems. A group can then be said to exhibit collective intelligence if it can find more or better solutions than the totality of solutions that would be found by its members working individually.
1.1. Examples of collective intelligence

All organizations, whether they be firms, institutions or sporting teams, are created on the assumption that their members can do more together than they could do alone. Yet, most organizations have a hierarchical structure, with one individual at the top directing the activities of the other individuals at the levels below. Although no president, chief executive or general can oversee or control all the tasks performed by different individuals in a complex organization, one might still suspect that the intelligence of the organization is somehow merely a reflection or extension of the intelligence of its hierarchical head.
This is no longer the case in small, closely interacting groups such as soccer or football teams, where the “captain” rarely gives orders to the other team members. The movements and tactics that emerge during a soccer match are not controlled by a single individual, but result from complex sequences of interactions. Still, they are simple enough for an individual to comprehend, and since soccer players are intrinsically intelligent individuals, it may appear that the team is not really more intelligent than its members.
Things are very different in the world of social insects (Bonabeau et al. 1997; Bonabeau & Theraulaz 1994). The way that ants map out their environment, that bees decide which flower fields to exploit, or that termites build complex mounds, may create the impression that these are quite intelligent creatures. The opposite is true. Individual insects have extremely limited information processing capacities. Yet, the ant nest, bee hive or termite mound as a collective can cope with very complex situations.
What social insects lack in individual capabilities, they seem to make up for by their sheer numbers. In that respect, an insect collective behaves like the self-organizing systems studied in physics and chemistry (Bonabeau et al. 1997): very large numbers of simple components interacting locally produce global organization and adaptation. In human society, such self-organization can be found in the “invisible hand” of the market mechanism. The market is very efficient in allocating the factors of production so as to create a balance between supply and demand (cf. Heylighen 1997). Centralized planning of the economy to ensure the same balanced distribution would be confronted with a “calculation problem” so complex that it would surpass the capacity of any information processing system. Yet, an efficient market requires its participating agents to follow only the most simple rules. Simulations have shown that even markets with “zero intelligence” traders manage to reach equilibrium quite quickly (Gode & Sunder 1993).
The examples we discussed show relatively low collective intelligence emerging from highly intelligent individual behavior (football teams), or high collective intelligence emerging from “dumb” individual behavior (insect societies and markets). The obvious question is whether high collective intelligence can also emerge from high individual intelligence. Achieving this is anything but obvious, though. The difficulty is perhaps best illustrated by the frustration most people experience with committees and meetings. Bring a number of very competent people together in a room to devise a plan of action, tackle a problem or reach a decision, and the result you get is rarely much better than the result you would have got if the different participants had tackled the problem individually. Although committees are obviously important and useful, in practice it appears difficult for them to realize their full potential. Let us therefore consider some of the main impediments to the emergence of collective intelligence in human groups.
1.2. Obstacles to collective intelligence

First, however competent the participants, their individual intelligence is still limited, and this imposes a fundamental restriction on their ability to cooperate. Although an expert in his own field, Mr. Smith may be incapable of understanding the approach proposed by Ms. Jones, whose expertise is different. Even if we assume that Mr. Smith would be able to grasp all the ramifications and details of Ms. Jones’s proposal, he probably would still misunderstand what she is saying, simply because he interprets the words she uses in a different way than the one she intended. Both verbal and non-verbal communication are notoriously fuzzy, noisy and dependent on the context or frame of reference. Even if everyone perfectly understood everyone else, many important suggestions made during a meeting would never be followed up. In spite of note taking, no group is able to completely memorize all the issues that have been discussed.
Another recurrent problem is that people tend to play power games. Everybody would like to be recognized as the smartest or most important person in the group, and is therefore inclined to dismiss any opinion different from his or her own. Such power games often end up with the establishment of a “pecking order”, where the one at the top can criticize everyone, while the one at the bottom can criticize no one. The result is that the people at the bottom are rarely paid attention to, however smart their suggestions. This constant competition to make one’s voice heard is exacerbated by the fact that linguistic communication is sequential: in a meeting, only one person can speak at a time.
It seems that the problem might be tackled by splitting up the committee into small groups. Instead of a single speaker centrally directing the proceedings, the activities might now go on in parallel, thus allowing many more aspects to be discussed simultaneously. However, a new problem now arises: that of coordination. To tackle a problem collectively, the different subgroups must keep in close contact. This implies a constant exchange of information, so that the different groups know what the others are doing and can use each other’s results. But this again creates a great information load, taxing both the communication channels and the individual cognitive systems that must process all this incoming information. Such load only becomes larger as the number of participants or groups increases.
For problems of information transmission, storage and processing, computer technologies may come to the rescue. This has led to the creation of the field of Computer-Supported Cooperative Work (CSCW) (see e.g. Smith 1994), which aims at the design of Groupware or “Group Decision Support Systems”. CSCW systems can alleviate many of the problems we enumerated. By letting participants communicate anonymously via the system, they can even tackle the problem of pecking order, so that all contributions get an equal opportunity to be considered. However, CSCW systems are typically developed for small groups. They are not designed to support self-organizing collectives that involve thousands or millions of individuals.
But there is a technology which can connect those millions: the global computer network. Although communities on the Internet appear to self-organize more efficiently than communities that do not use computers, the network seems merely to have accelerated existing social processes. As yet, it does not provide any active support for collective intelligence. The present paper will investigate how such a support could be achieved, first by analysing the mechanisms through which collective intelligence emerges in other systems, then by discussing how available technologies can be extended to implement such mechanisms on the network.
2. Collective Problem-Solving

To better understand collective intelligence, we must first analyse intelligence in general, that is, the ability to solve problems. A problem can be defined as a difference between the present situation, as perceived by some agent, and the situation desired by that agent. Problem-solving then means finding a sequence of actions that will transform the present state, via a number of intermediate states, into a goal state. Of course, there does not need to be a single, well-defined goal: the agent’s “goal” might be simply to get into any situation that is more pleasant, interesting or amusing than the present one. The only requirement is that the agent can distinguish between subjectively “better” (preferred) and “worse” situations (Heylighen 1988, 1990).
To generalize this definition of a problem to a collective consisting of several agents, it suffices to aggregate the desires of the different agents into a collective preference, and their perceptions of the present situation into a collective perception. In economic terms, the aggregate desire becomes the market “demand” and the aggregate perception of the present situation becomes the “supply” (Heylighen, 1997). It must be noted, though, that what is preferable for an individual member is not necessarily what is preferable for the collective (Heylighen & Campbell, 1995): in general, a collective has emergent properties that cannot be reduced to mere sums of individual properties. (Therefore, the aggregation mechanism will need to have a non-linear component.) In section 3, we will discuss in more detail how such an aggregation mechanism might work.
One way to solve a problem is by trial-and-error in the real world: just try out some action and see whether it brings about the desired effect. Such an approach is obviously inefficient for all but the most trivial problems. Intelligence is characterised by the fact that this exploration of possible actions takes place mentally, so that actions can be selected or rejected “inside one’s head” before executing them in reality. The more efficient this mental exploration, that is, the less trial-and-error needed to find the solution, the more intelligent the problem-solver.
2.1. Mental maps

The efficiency of mental problem-solving depends on the way the problem is represented inside the cognitive system (Heylighen 1988, 1990). Representations typically consist of the following components: a set of problem states, a set of possible actions, and a preference function or “fitness” criterion for selecting the most adequate actions. The fitness criterion, of course, will vary with the specific goals or preferences of the agent. Even for a given preference, though, there are many ways to decompose a problem into states and actions. Changing the way a problem is represented, by considering different distinctions between the different features of a problem situation, may make an unsolvable problem trivial, or the other way around (Heylighen 1988, 1990).
Actions can be represented as operators or transitions that map one state onto another one. A state that can be reached from another state by a single action can be seen as a neighbor of that state. Thus, the set of actions induces a topological structure on the set of states, transforming it into a problem space. The simplest model of such a space is a network, where the states correspond to the nodes of the network, and the actions to the edges or links that connect the nodes. The selection criterion, finally, can be represented by a preference function that attaches a particular weight to each link. This problem representation can be seen as the agent’s mental map of its problem environment.
A mental map can be formalized as a weighted, directed graph M = {N, L, P}, where N = {n1, n2, ..., nm} is the set of nodes, L ⊆ N × N is the set of links, and P: L → [0, 1] is the preference function. A problem solution then is a connected path
C = (c1, ..., ck) ⊆ N such that c1 is the initial state, ck is a goal state, and for all i ∈ {1, ..., k−1}: (ci, ci+1) ∈ L.
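To make the formal definition concrete, the following Python sketch encodes a small mental map and checks whether a path is a solution. The graph, its node names and weights are invented for the example:

```python
# A mental map M = {N, L, P}: nodes, directed links, and a preference
# function P assigning each link a weight in [0, 1].
N = {"start", "a", "b", "goal"}
P = {  # the keys of P are the links L; the values are the preferences
    ("start", "a"): 0.7,
    ("start", "b"): 0.3,
    ("a", "goal"): 0.9,
    ("b", "goal"): 0.4,
}
L = set(P)

def is_solution(path, initial, goals):
    """A path (c1, ..., ck) solves the problem if c1 is the initial
    state, ck is a goal state, and every step (ci, ci+1) is a link."""
    return (path[0] == initial
            and path[-1] in goals
            and all((path[i], path[i + 1]) in L
                    for i in range(len(path) - 1)))

print(is_solution(("start", "a", "goal"), "start", {"goal"}))  # True
print(is_solution(("start", "goal"), "start", {"goal"}))       # False: no direct link
```
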
To solve a problem, you need a general heuristic or search algorithm, that is, a method for selecting a sequence of actions that is likely to lead as quickly as possible to the goal. If we assume that the agent has only a local awareness of the mental map, that is, that the agent can only evaluate actions and states that are directly connected to the present state, then the most basic heuristic it can use is some form of “hill-climbing” with backtracking. This heuristic works as follows: from the present state, choose the link with the highest weight that has not been tried out yet to reach a new state; if all links have already been tried, backtrack to a state visited earlier which still has an untried link; repeat this procedure until a goal state has been reached or until all available links have been exhausted. The efficiency of this method will obviously depend on how well the nodes, links and preference function reflect the actual possibilities and constraints in the environment.
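The hill-climbing heuristic just described can be sketched as follows. This is a minimal illustration under the paper's assumptions, not an implementation from the paper itself; the toy map and its weights are invented:

```python
def hill_climb(P, start, goals):
    """Greedy local search with backtracking: from the current state,
    follow the untried link with the highest preference weight; when
    no untried link remains, backtrack to an earlier state.
    P maps directed links (a, b) to weights in [0, 1]."""
    path, tried = [start], set()
    while path:
        here = path[-1]
        if here in goals:
            return path
        options = [(w, b) for (a, b), w in P.items()
                   if a == here and (a, b) not in tried]
        if options:
            w, best = max(options)
            tried.add((here, best))
            path.append(best)        # follow the strongest untried link
        else:
            path.pop()               # backtrack: nothing left to try here
    return None                      # all links exhausted, no solution

# Toy map: the heuristic first tries the heavy link to the dead end
# "c", backtracks, then reaches the goal via "a".
P = {("start", "c"): 0.9, ("start", "a"): 0.5, ("a", "goal"): 0.8}
print(hill_climb(P, "start", {"goal"}))  # → ['start', 'a', 'goal']
```

Note that the dead-end detour through "c" is removed from the returned path by the backtracking step, so the result is a connected path in the sense of the definition above.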
The better the map, the more easily problems will be solved. Intelligent agents, then, are characterized by the quality of their mental maps, that is, by the knowledge and understanding they have of their environment, their own capacities for action, and their goals. Increasing problem-solving ability will generally require two complementary processes: 1) enlarging the map with additional states and actions, so that hitherto unimagined options become reachable; 2) improving the preference function, so that the increase in total options is counterbalanced by a greater selectivity in the options that need to be explored to solve a given problem.
2.2. Coordinating individual problem-solutions

Let us apply this conceptual framework to collective problem-solving. Imagine a group of individuals trying to solve a problem together. Each individual can explore his or her own mental map in order to come up with a sequence of actions that constitutes part of the solution. It would then seem sufficient to combine these partial solutions into an overall solution. Assuming that the individuals are similar (e.g. all human beings or all ants), and that they live in the same environment, we may expect their mental maps to be similar as well. However, mental maps are not objective reflections of the real world “out there”: they are individual constructions, based on subjective preferences and experiences (cf. Heylighen 1999). Therefore, the maps will also be different to an important degree.
This diversity is healthy, since it means that different individuals may compensate for each other’s weaknesses. Imagine that each individual had exactly the same mental map. In that case, they would all find the same solutions in the same way, and little could be gained by a collective effort. (In the best case, the problem could be factorized into independent subproblems, which would then be divided among the participating individuals. This would merely speed up the problem-solving process, though; it would not produce any novel solutions.)
Imagine now that each individual has a different mental map. In that case, individuals would need to communicate not only the (partial) solutions they have found, but the relevant parts of their mental maps as well, since a solution only makes sense within a given problem representation. This requires a very powerful medium for information exchange, capable of transmitting a map of a complex problem domain. Moreover, it requires plenty of excess cognitive resources from the individuals who receive the transmissions, since they would need to parse and store dozens of mental maps in addition to their own. Since an individual’s mental map reflects that individual’s total knowledge, gathered during a lifetime of experience, it seems very unlikely that such excess processing and storage capacity would be available. If it were, this would mean that the individual has used only a fraction of his or her capacities for cognition, and this implies an individual who is very inexperienced or simply stupid. Finally, even if individuals could effectively communicate their views, there is no obvious mechanism to resolve the conflicts that would arise if their proposals contradict each other. It seems that we have come back to our problem where we have intelligent individuals but a dumb collective. Let us see whether investigations of existing intelligent collectives can help us to overcome this problem of coordination between individuals.
2.3. Stigmergy

While studying the way termites build their mounds, the French entomologist Pierre Grassé (1959) discovered an important mechanism, which he called “stigmergy”. He observed that at first different termites seem to drop mud more or less randomly. However, the presence of a heap of mud incites other termites to add mud to that heap, rather than start a heap of their own. The larger the heap, the more attractive it is to further termites. Thus, the small heaps will be abandoned, while the larger ones will grow into tall columns. Since the bias to add mud in those places where the concentration of mud is highest continues, the columns moreover have a tendency to grow towards each other, until they touch. This produces an arch, which will itself grow until it touches other arches. The end result is an intricate, cathedral-like structure of interlocking arches.
This is obviously an example of collective intelligence. The individual termites follow extremely simple rules, and have no memory of either their own or other individuals’ actions. Yet, collectively they manage to coordinate their efforts so as to produce a complex, seemingly well-designed structure. The trick is that they coordinate their actions without direct termite-to-termite communication. The only “communication” is indirect: the mud left by one termite provides a signal for other termites to continue work on that mud. Hence the term stigmergy, whose Greek components mean “mark” (stigma) and “work” (ergon).
The fundamental mechanism here is that the environment is used as a shared medium for storing information, so that it can be interpreted by other individuals. Unlike a message (e.g. a spoken communication), which is directed at a particular individual at a particular time, a stigmergic signal can be picked up by any individual at any time. A spoken message that does not reach its addressee, or is not understood, is lost forever. A stigmergic signal, on the other hand, remains, storing information in a stable medium that is accessible by everyone.
The philosopher Pierre Lévy (1997) has proposed a related concept to understand collective intelligence, that of a shared “object”. For example, a typical object is the ball in a soccer game. Soccer players rarely need to communicate directly, e.g. by shouting directions at each other. Their activities are coordinated because they are all focused on the position and movement of the ball. The state of the ball incites them to execute particular actions, e.g. running toward the ball, passing it to another player, or having a shot at the goal. Thus, the ball functions as a stigmergic signal, albeit a much more dynamic one than the mud used by termites. Another typical “object” discussed by Lévy (1997) is money. It is the price, i.e. the amount of money you get for a particular good, which incites producers to supply either more or less of that good. Thus, money is the external signal which allows the different actors in the market to coordinate their actions (cf. Heylighen 1997).
The difference between Lévy’s “object” and Grassé’s stigmergic signal, perhaps, is that the former changes its state constantly, while the latter is relatively stable, accumulating changes over the long term. The stigmergic signal functions like a long-term memory for the group, while the object functions like a working memory, whose changing state represents the present situation. In fact, you do not even need an external object to hold this information. The soccer players are not only influenced by the position and movement of the ball, but also by the position and movement of the other players. This perceived state of the collective functions as a shared signal that coordinates the actions of the collective’s members. The coordinated actions exhibited by the individuals in a swarm (flocks of birds, shoals of fish, herds of sheep, etc.) are similarly based on a “real-time” reaction to the perceived state of the other individuals.
2.4. Collective Mental Maps

In the examples of stigmergy or shared objects we discussed until now, the problem-solving actions seem to be purely physical: amassing mud, kicking a ball towards the goal, producing goods. We might wonder whether stigmergy could also be used to support problem-solving on the mental plane, where sequences of actions are first planned in the abstract before they are executed in reality. Again, insect societies can provide us with a most instructive example. Ants that come back from a food source to their nest leave a trail of chemical signals, pheromones, along their path. Ants that explore the surroundings, looking for food, are more likely to follow a path with a strong pheromone scent. If this path leads them to a food source, they will come back along that path while adding more pheromone to the trail. Thus, trails that lead to sources with plenty of food are constantly reinforced, while trails that lead to exhausted sources will quickly evaporate.
Imagine two parallel trails, A and B, leading to the same source. At first, an individual ant is as likely to choose A as it is to choose B. So, on average there will be as many ants leaving the nest through A as through B. Let us assume that path B is a little shorter than A. In that case, the ants that followed B will come back to the nest with food a little more quickly. Thus, the pheromones on B will be reinforced more quickly than those on A, and the trail will become relatively stronger. This will entice more ants to set out on B rather than A, further reinforcing the gains of B relative to A. Eventually, because of this positive feedback, the longer path A will be abandoned, while the shorter path B will attract all the traffic. Thus, the ants are constantly tracing and updating an intricate network of trails which indicate the most efficient ways to reach different food sources. Individual ants do not need to keep the locations of the different sources in memory, since the collectively developed trail network will always be there to guide them.
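The two-trail scenario can be sketched in a small simulation. All quantities (trail lengths, deposit and evaporation rates, number of steps) are invented for the illustration:

```python
import random

random.seed(1)
length = {"A": 10, "B": 8}           # B is the shorter path
pheromone = {"A": 1.0, "B": 1.0}     # both trails start out equal

for step in range(2000):
    # An ant picks a path with probability proportional to its scent,
    # then deposits pheromone at a rate inversely related to length:
    # shorter round trips mean more frequent reinforcement.
    total = pheromone["A"] + pheromone["B"]
    path = "A" if random.random() < pheromone["A"] / total else "B"
    pheromone[path] += 1.0 / length[path]
    for p in pheromone:              # evaporation acts on both trails
        pheromone[p] *= 0.995

share_B = pheromone["B"] / (pheromone["A"] + pheromone["B"])
print(f"share of scent on shorter path B: {share_B:.2f}")
```

Because every visit to B is reinforced slightly more often than a visit to A, the positive feedback drives most of the scent (and hence most of the traffic) onto the shorter path, reproducing the winner-take-all dynamics described above.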
This example may seem similar to that of the mud-collecting termites. The difference is that the ants leaving pheromone are not making any physical contribution to the solution of their problem (collecting food), unlike the termites, whose actions directly contribute to the mound building. They are merely providing the collective with a map to guide them through the terrain. In fact, the trail network functions like an external mental map, which is used and updated by all ants. We will call such an exteriorized, shared, cognitive system a collective mental map (CMM). Let us investigate this concept in more detail.
A collective mental map functions first of all as a shared memory. Various discoveries by members of the collective are registered and stored in this memory, so that the information will remain available for as long as necessary. The storage capacity of this memory is in general much larger than the capacities of the memories of the individual participants. This is because the shared memory can potentially be inscribed over the whole of the physical surroundings, instead of being limited to a single, spatially localized nervous system. Thus, a collective mental map differs from cultural knowledge, such as the knowledge of a language or a religion, which is shared among the different individuals in a cultural group but is limited by the amount of knowledge a single individual can bear in mind.
In human evolution, the first step towards the development of a CMM was the invention of writing. This allowed the storage of an unlimited amount of information outside of individuals’ brains. Unlike a real CMM, however, the information in books is shared only to a limited extent. Not all books can be accessed by all individuals. This was particularly
true before the invention of printing, when only a few copies of any given book existed in the world. Although libraries now provide much wider access for people wishing to read books, there is still very limited access for writing books. Although everybody could in principle write a book, very few books actually get published in such a way that they become accessible to a large number of people.
In a CMM such as the ants’ trail network, on the other hand, all individuals can contribute equally to the shared memory. They can in particular build on each other’s achievements by elaborating, reinforcing or providing alternatives for part of the stored information. Books, on the other hand, are largely stand-alone pieces of knowledge, with very limited cross-references. It would be very difficult for me to take an existing book and start commenting on, correcting or reinforcing its content. If I wanted to add to the state of the art, I would rather need to write and publish a book from scratch, a very difficult and time-consuming affair.
The need for a universally and dynamically shared memory has been well understood by researchers in Computer-Supported Cooperative Work (e.g. Smith 1994). Discussions over a CSCW system will typically keep a complete trace of everything that has been said, which can be consulted by all participants, and to which all participants can at any moment add personal annotations. This collective register of activities is often called a shared “blackboard”, “white board” or “workspace”. However, a record of all communications does not yet constitute a mental map. The more people participate in a discussion and the longer it lasts, the more the record will grow, and the more difficult it will become to distil any useful guidelines for action out of it. Of course, you can allow the participants to edit the record and erase notes that are no longer relevant, as you would do with scribbles on a blackboard. But this again presupposes that the participants have a complete grasp of all the information that is explicitly or implicitly contained in the record. And that means that the size of the “controlled” content of the blackboard cannot grow beyond the cognitive capacities of an individual. This obviously makes the shared blackboard a poor model for an eventual Internet-based support for collective intelligence.
A mental map is not merely a registry of events or an edited collection of notes: it is a highly selective representation of features relevant to problem-solving. The pheromone network does not record all movements made by all ants: it only registers those collective movements that are likely to help solve the ants’ main problem, finding food. A mental map consists of problem states, possible actions that lead from one state to another, and a preference function for choosing the best action at any moment. These are all implicit in the pheromone network: a particular patch of trail can be seen simultaneously as a location or problem state, as an action linking to other locations, and as a preference, measured by the concentration of pheromone, for that action over other available actions. As it is clear that a CMM cannot be developed by merely registering and editing individual contributions, we will need to study different methods to collectively develop a mental map.
3. Mechanisms of CMM Development

3.1. Averaging preferences

Probably the most basic method for reaching collective decisions and avoiding conflicts is voting. This method assumes that all options are known by all individuals, and that the remaining question is to determine their aggregate preference. In the simplest case, every individual has one vote, which is given to the option that this individual prefers above all others. Adding all the votes together determines the relative preferences of the different alternatives for action. (Usually, after a vote only the highest scoring option is kept, but this is not relevant for our model, where all options remain available.) This is to some
degree similar to the functioning of ant colonies, where the pheromone trail left by a particular ant can be seen as that ant’s “vote” in the discussion of where best to find food.
In a more sophisticated version of the voting mechanism, individuals can distribute their voting power over different alternatives, in proportion to their individual preference functions. For example, alternative A might get a vote of 0.5, B 0.3, C 0.2 and D 0.0. In that case, the collective preference function Pcol becomes simply an average of the n individual preference functions Pi:
Pcol(lj) = (1/n) Σ_{i=1..n} Pi(lj) = (1/n) Σ_{i=1..n} p_j^i     (1)
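Equation (1) amounts to averaging the individuals' weight vectors element by element. A minimal sketch, with invented preference numbers:

```python
def collective_preference(individual_prefs):
    """Average n individual preference functions, each mapping the
    same set of options (links) to weights in [0, 1], as in eq. (1)."""
    n = len(individual_prefs)
    options = individual_prefs[0].keys()
    return {o: sum(p[o] for p in individual_prefs) / n for o in options}

# Three voters distributing their voting power over options A-D:
votes = [
    {"A": 0.5, "B": 0.3, "C": 0.2, "D": 0.0},
    {"A": 0.1, "B": 0.6, "C": 0.1, "D": 0.2},
    {"A": 0.3, "B": 0.3, "C": 0.4, "D": 0.0},
]
print(collective_preference(votes))
```

Note that this aggregation is purely linear; as remarked in section 2, a genuinely collective preference would also need a non-linear component, such as the amplification by positive feedback discussed next.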
Johnson’s (1998; see also Johnson et al. 1998) simulation of collective problem-solving illustrates the power of this intrinsically simple averaging procedure. In the simulation, a number of agents try to find a route through a “maze”, from a fixed initial position to a fixed goal position. The maze consists of nodes randomly connected by links. In a first phase, the agents “learn” the layout of the maze by exploring it in a random order until they reach the goal. They do this by building up a preference function which attaches a weight to every link in the network they tried, but such that the last link used (before exiting the maze) in any given node gets the highest weight. In a second, “application” phase, they use this
