Reviewed by
Juliette Rouchier
Centre for Policy Modelling,
Manchester Metropolitan University.
This edition is a translation of the book originally published in French in 1995 (Les systèmes multi-agents: Vers une intelligence collective, Inter Editions, Paris). Even now, it is still the main reference for the French research community in multi-agent systems (MAS). The book is intended to be both a state-of-the-art survey and an introduction for readers who want to grasp the main ideas of MAS. It deals mainly with the theoretical background, antecedents and applications. I will present a summary of the main ideas and of the points that the author highlights as novel features arising from the multi-agent approach to computer science. Where details are developed at length in the book, I will simply point to them, since they are mainly useful for readers who actually want to apply the techniques.
The book begins with a long introduction that sketches the historical origins of MAS research (mainly stating that Decentralised Artificial Intelligence is a complement to Artificial Intelligence and Artificial Life). Ferber gives a minimal definition of an agent and of a MAS, so that all branches of multi-agent research can accept it:
An agent is a physical or virtual entity that can act, perceive its environment (in a partial way) and communicate with others; it is autonomous and has the skills needed to achieve its goals and tendencies. It is situated in a multi-agent system (MAS), which contains an environment, objects and agents (the agents being the only entities that act), relations between all these entities, a set of operations that the entities can perform, and the changes of this universe over time as a result of these actions.
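Purely to make this minimal definition concrete, it can be rendered as a small data structure. The following Python sketch is my own illustration, not the book's notation, and all names in it (Agent, MultiAgentSystem, local_view and so on) are hypothetical choices.

```python
from dataclasses import dataclass, field

# Illustrative only: a literal rendering of the minimal definition above,
# not Ferber's notation or code.

@dataclass
class Agent:
    name: str
    skills: set = field(default_factory=set)    # what the agent is able to do
    goals: list = field(default_factory=list)   # goals and tendencies it pursues

    def perceive(self, environment):
        """Return a partial view of the environment (perception is limited)."""
        return environment.local_view(self)     # hypothetical environment method

    def act(self, environment):
        """Autonomously choose and apply an operation, changing the universe."""
        raise NotImplementedError

@dataclass
class MultiAgentSystem:
    environment: object
    objects: list      # passive entities situated in the environment
    agents: list       # the only entities that act
    relations: dict    # relations between all the entities
    operations: list   # operations the entities can perform; applying them is
                       # what makes the universe change over time
```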
When the book was first published, there were debates about which kind of MAS should be built. A classical opposition was drawn between reactive and cognitive agents: cognitive agents are those that can form plans for their behaviour, whereas reactive agents are those that just have reflexes. Ferber tries to show how both approaches can converge in the end, while emphasising different aspects: one kind of research focuses on building individual intelligences whose communication is organised, whereas the other imagines very simple entities whose co-ordination emerges over time without the agents being conscious of it. In fact, a large number of different schools of MAS persist, all coming from different theoretical backgrounds. These include the American DAI school (Lesser, Gasser, Sycara), the Rational Agents branch (Rao and Georgeff, Shoham, Castelfranchi), the branches focusing on Speech Acts (Finin) and on Petri nets (Estraillier), the Reactive Agents branch (Brooks, Steels, Drogoul, Ferber, Demazeau) and those focusing on learning (Weiss and Sen). These research strands, although they adopt different points of view, are highly complementary, and each has its own applications.
The main applications of multi-agent systems at the moment can be listed as follows: problem solving, multi-agent simulation, the construction of artificial worlds, collective robotics and kenetic program design.
Kenetics is thus the new science that Ferber would like to see appear, dealing with collective action and interaction. The main questions to be asked are: what actions can the agent perform, what are its relationships to the world (perception and the way this perception affects its inner state) and what are its interactions with other agents? The main feature defining an agent is its autonomy, for which it is ascribed a goal and a satisfaction function that gives rewards when the goal is satisfied. That last point is what establishes a real gap between MAS and some other fields like Artificial Intelligence, systemics, distributed systems and robotics.
The book then explores the most important aspects of MAS one by one. One notices that the author regularly expresses his wish not to split MAS builders into different branches. For that reason, he gives criteria of analysis that he thinks can apply to both of the major approaches, which he calls reactive and cognitive.
The first chapter deals with interactions and provides a discussion of co-operation in interaction. The aim is to find a more objective definition of co-operation, so that an observer can detect it in a system whatever his or her knowledge of the mental states of the agents. The point is to make sure that the definition can be used to analyse any kind of MAS, whether or not the co-operation is intended by the agents. Co-operation exists when the addition of new agents improves the situation in the system or when potential conflicts are solved. These aims can be reached by an explicit definition of co-operation for the agents and the distribution of tasks, by communication processes, by specialisation of the agents (so that they help each other) or by common decisions taken via negotiation. How co-operating systems should be constructed is not specified more precisely, since the aim here is to give the least constraining specifications that nonetheless capture the fundamental character of MAS. Of course, Ferber's approach to co-operation is clearly different from the BDI one, where intentions are always taken into account.
The second chapter deals with organisations. It provides an exhaustive description of the interaction processes that occur in groups, and of the actions that each agent can undertake. The chapter works at several levels of analysis, describing the structure of organisations so as to show the importance of functions and roles. It is important to keep in mind that an organisation can itself be an agent in a bigger organisation. Conversely, most of the time, complex agents are regarded as organisations that carry different functions within themselves. Analysing an organisation through this idea of functions, one can identify the most important ones: representational, organisational, conative, interactional, productive and preservative. Different roles are associated with each of these functions among the elements of the organisation, and these elements are then given tasks so that the organisation can work. Like an agent, any organisation can be analysed along five dimensions: physical, social, relational, environmental and personal. Not all of these dimensions are used in any particular organisation, but they help to describe the acts that are performed for each function. There is thus a description of each function and of how it translates along each dimension.
I shall give two examples out of the thirty function/dimension combinations that the author distinguishes to guide the functional analysis of any organisation. The representational/personal combination of a given organisation covers all the elements that are made explicit about the representation of the self (the representation of its own capacities and goals). The conative/relational combination describes demands or constraints due to other related organisations: a demand can be a request for actions or data and is performed as a result of communication.
Not all dimensions have to be represented in every organisation: this framework of analysis is, again, as wide as possible, so that it gives a grid that can be applied to groups of humans, to sophisticated expert systems linked together or to very simple adaptive agents. In the structural approach, the idea is to choose what an agent is and which sets of relationships are defined, so as to make sure that all tasks are performed. In the universe that defines constraints on the agents, the relationships between entities (acquaintance, communication, subordination ...) are very important. These interactions can be put into the system a priori or can emerge over time.
In this book, all the dimensions, functions, possible relationships and different meta-levels are explained quite straightforwardly, and the example of a robot society is used to clarify the concepts. Ascribing roles to each agent is regarded as the last phase in building an organisation: once one knows which functions to assign, the distribution of tasks is carried out by defining the redundancies and specialisations of each agent.
The third chapter deals with the notion of action. Ferber describes the different ways one can choose to consider what an action is, how the notion of action can be and has been used in building MAS, and the limits of each approach. He introduces action as the transformation of a global state (a very classical approach); as a response to influences (under an influence, the internal state of the agent is modified and a reaction from the agent propagates to the environment); as a process in computer science (mainly used for finite state automata); as a physical displacement in a potential field (an approach that seems very appealing to him); as a local modification (he describes the case of cellular automata, drawing a clear distinction between one of these automata and an agent in a system); and as a command (the cybernetic approach). He eventually arrives at a description of the two main trends in organising the actions of agents in MAS: tropistic and hysteretic. At this point, one must recall what an autonomous agent is. Its perception leads the agent to deliberate and choose one action; this action transforms the environment, and consequently its perception changes, which leads it to deliberate again ... and so on.
The tropistic agent corresponds to a reactive perspective on agency. It has perceptions that lead it to action through reflexes. In a group of such agents, each small action on the environment changes it, and all the agents around can quickly take account of that small change. This makes tropistic agents very interesting for situated problems, since they are very flexible. If one wants the agents to perform certain tasks globally, one has to think about how to organise the environment: there must be elements or pieces of information that provoke reflexes and thus lead to certain actions or locations. One can use exploration methods, the perception of wave-front-like diffusion, some memory and the marking of the environment (for the marking agent and the others), so that the agent comes to know the whole reachable environment and can act on it.
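As a concrete illustration of a tropistic agent, consider a grid world where some signal (a wave-front diffusion from a goal, or marks left by other agents) is attached to each cell, and the agent simply follows the strongest signal it perceives. The following Python sketch is my own illustration, assuming a four-connected grid; it is not code from the book.

```python
import random

def neighbours(cell, width, height):
    """Four-connected neighbours of a grid cell."""
    x, y = cell
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(cx, cy) for cx, cy in candidates if 0 <= cx < width and 0 <= cy < height]

def tropistic_step(position, signal, width, height):
    """One purely reactive step: perceive the local signal and follow the gradient.
    The agent keeps no memory; its behaviour is entirely driven by what it
    perceives here and now."""
    options = neighbours(position, width, height) + [position]
    best = max(signal.get(c, 0.0) for c in options)
    return random.choice([c for c in options if signal.get(c, 0.0) == best])

# Example: a signal that decays with Manhattan distance from a goal cell.
goal, W, H = (7, 3), 10, 10
field = {(x, y): 1.0 / (1 + abs(x - goal[0]) + abs(y - goal[1]))
         for x in range(W) for y in range(H)}
position = (0, 0)
for _ in range(20):
    position = tropistic_step(position, field, W, H)
print(position)   # the agent has climbed the gradient towards the goal cell
```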
The hysteretic agent is one that does not need to perceive its environment continuously in order to act. It treats memory as an important source of information and is able to process that information in a formal way. Reaction is thus the conclusion of the process of action rather than its beginning, by contrast with the tropistic agent. Here the author introduces BRIC, his formalism, which helps to organise the building of a MAS on the basis of the principles he has just elaborated.
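For contrast, a hysteretic agent can be sketched as one that carries an internal memory of past percepts and deliberates over it. Again this is my own minimal Python illustration and has nothing to do with the BRIC formalism itself.

```python
class HystereticAgent:
    """Keeps an internal memory of past percepts and deliberates over it, so it
    does not need to perceive the environment continuously in order to act."""

    def __init__(self, position):
        self.position = position
        self.memory = {}   # cell -> last signal strength observed there

    def perceive(self, signal, visible_cells):
        # Only the currently visible cells update the memory.
        for cell in visible_cells:
            self.memory[cell] = signal.get(cell, 0.0)

    def deliberate(self):
        # Reaction is the conclusion of the process, not its beginning: the
        # agent reasons over remembered information, even about cells it
        # cannot currently see.
        if not self.memory:
            return self.position
        return max(self.memory, key=self.memory.get)

    def act(self):
        self.position = self.deliberate()   # abstract "move" to the chosen cell
        return self.position
```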
The next chapter deals with states of mind in an artificial system. An agent is said to be cognitive if it can give a meaning to what it perceives, and that understanding of the world is represented as a mental or cognitive state. A first way of treating cognition is to divide it into smaller elements, the cognitons. Each of these expresses a position of the mind towards the environment: it can be one of the well-known Beliefs, Desires or Intentions (which state what the agent thinks to be true, what it would like to achieve and how it expects to do it). One can also consider another classical kind of cogniton, Commitment towards a goal or another agent, which indicates a capacity to be stubborn. It is mainly through interaction that the agent builds up its mental states. Perception itself raises a problem of definition, though: the classical view of a "perception of the reality that is out there" has been abandoned, and one now considers instead how the individual actively goes and looks for information about its environment. But that more recent view, in which perception depends on expectations about the world, is quite difficult to translate into an artificial system.
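To fix the vocabulary, the cognitons mentioned above could be gathered into a toy mental-state record. The following Python sketch is only an illustration of the terminology, with field names of my own choosing, not a BDI architecture from the book.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class MentalState:
    beliefs: set = field(default_factory=set)        # what the agent takes to be true
    desires: set = field(default_factory=set)        # states of the world it would like
    intentions: list = field(default_factory=list)   # how it currently expects to act
    commitments: list = field(default_factory=list)  # pledges to goals or to other agents

    def revise(self, percept: Any):
        """Mental states are built up mainly through interaction: fold a new
        percept or message into the beliefs."""
        self.beliefs.add(percept)

    def is_committed_to(self, goal: Any) -> bool:
        # Commitment is the "capacity to be stubborn": a committed goal is not
        # dropped merely because the current desires have changed.
        return goal in self.commitments
```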
Another philosophical question is important for the design of systems: the definition of knowledge and of its representations. Classically, knowledge can be represented through the manipulation of symbols that are combined with each other. There must then be a set of symbols with a semantics, and a formal system of inference that makes that set of symbols evolve. Another part of the constitution of knowledge cannot be forgotten: learning. Elements of knowledge are acquired through interactions with others and with the environment. Several logics can then be used to formalise that knowledge and to revise it over time as new information is processed.
In the context of MAS, it is essential to organise knowledge in two respects: the set of beliefs that is needed, and the way their use is guaranteed by the internal structures that decide on action (the conative system). The necessary beliefs concern the environment, the social reality, the relational abilities of the agent (towards others and towards the environment), and the agent itself. The conative system is supposed to be what makes the agent act consistently with some final goal of its own, and for that an idea of agent survival has to be introduced to make sure it stays consistent. The agent then has to find motivations to accomplish one action or another. Again, one can place the motivation inside the agent (as pleasure), one can impose desires for objects in the environment, one can tell it to accept some stated rules of the group, or one can give it commitments so that it feels like helping others. (This last point should be strongly emphasised, since it is typical of the idea of multi-agency.) Once the motivation for an action has appeared in the agent, it must carry out the act. There is a wide range of ways of selecting actions and of monitoring and controlling whether the one performed gives the right results. Linking intention to action is apparently still a very controversial way of considering the human ability to act. Some formal approaches to the problem do exist, however, and that makes it quite a useful (and provably efficient) approach to MAS. Ferber refers here in some detail to the theory of rational action developed by Cohen and Levesque.
The seventh chapter describes the different means that are regularly used to organise collaboration and a proper distribution of tasks. Most of the individual characteristics of the agents are in themselves parameters that influence how that collaboration can be achieved. A task that must be performed by a group first has to be broken down into subtasks (an activity that only humans can perform for the moment); then roles have to be assigned, which can be done centrally or in a distributed way. The centralised way usually requires a trader that gathers all the requests made by customers and puts them in touch with the appropriate suppliers, whose abilities it has evaluated. The problem with that method is that the number of message exchanges increases very quickly with the number of agents. The delegation of tasks can also be achieved by distributed methods. The agents are part of a network of acquaintances, and each has skills that can be relevant to the others when they have to perform a task. To find out who is able to help, the agents send messages to each other, making requests or announcing that they can help. Either an agent does this directly, in which case it must have a representation of the skills and reliability of the agents it knows, or it asks the agents it knows to help it find a certain skill in the population; in that case an acquaintance may know other agents and ask them for help, or delegate the request again. For these methods, one must be able to evaluate the requests and offers, and to organise the delegation and the acceptance of help by the agent that asked for it. All these modules (request evaluation, delegation, proposition evaluation and decision reception) are described very precisely in the book, with clear diagrams. The network also has to be reorganised regularly, and there are algorithms for that. Another method is allocation by contract net, where an agent takes the role of manager and helps all the agents that need each other to establish contracts. This can work with one manager or several and, depending on the problem, it is possible to find an optimal organisation for it. But this protocol, although flexible, is still a very centralised organisation and does not allow any diversity among agents. There are also protocols that are mixtures of the ones described above. A more complex system, the SAM system, also exists, in which the agents are not only given tasks and skills but are also able to form mental states regarding the others and their environment. Very complex agents can thus be organised and can help each other efficiently, but the more complex the cognitive abilities, the more centralised the organisation, with common languages and a common understanding of mental states.
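The contract-net idea can be illustrated with a deliberately simplified, single-round Python sketch: one manager announces each task, capable agents reply with bids, and the cheapest bid wins. The names and the cost rule below are my own, and the real protocol also covers negotiation, time-outs and contract monitoring.

```python
from dataclasses import dataclass

@dataclass
class Bid:
    bidder: "Contractor"
    cost: float

class Contractor:
    def __init__(self, name, skills, load=0.0):
        self.name, self.skills, self.load = name, skills, load

    def bid_for(self, task):
        """Answer a task announcement with a bid, or decline (return None)."""
        if task["skill"] not in self.skills:
            return None
        return Bid(bidder=self, cost=self.load + task["effort"])

    def execute(self, task):
        self.load += task["effort"]
        return f"{self.name} performs {task['name']}"

def contract_net(tasks, contractors):
    """The manager announces each task, collects bids and awards the cheapest."""
    awards = {}
    for task in tasks:
        bids = [b for c in contractors if (b := c.bid_for(task)) is not None]
        if not bids:
            continue                    # no capable contractor: task stays unallocated
        winner = min(bids, key=lambda bid: bid.cost)
        awards[task["name"]] = winner.bidder.execute(task)
    return awards

tasks = [{"name": "weld", "skill": "welding", "effort": 2.0},
         {"name": "paint", "skill": "painting", "effort": 1.0}]
crew = [Contractor("r1", {"welding"}), Contractor("r2", {"welding", "painting"})]
print(contract_net(tasks, crew))   # {'weld': 'r1 performs weld', 'paint': 'r2 performs paint'}
```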
Distribution of tasks can also be carried out with very simple agents and emerge from reactive behaviours. In this case, the agents have to react to signals indicating that a certain task needs to be performed. Any agent that can perform the task and perceives the signal can go and perform it, whereas an agent that knows it cannot perform the task simply avoids the place where help is needed. Usually the signal decreases with distance from the task to be performed, and one can assign to each agent different states that make it react in different ways to the same signal depending on the moment. There are two main problems with these methods: non-optimality, especially in movement, and the risk that some agents get stuck in areas where they do not perceive any signal that interests them and are thus completely "useless" to the system for long periods. An instance of distributed task allocation is the MANTA system. It was built through a collaboration between ethologists and computer scientists as a simulation tool for studying the behaviour of an ant colony, but the results can also be applied to the organisation of colonies of robots. The idea is to organise the distribution of tasks by having the agents react to signals in their environment; for most of these signals, doing the corresponding task reduces the strength of the signal. A learning process is incorporated into these agents, since they tend to specialise in the tasks they perform. In the end, the colonies survive well when the right parameters are set.
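In the same spirit (but only in the spirit: this is not MANTA's actual mechanism), signal-driven allocation with gradual specialisation can be sketched in Python with a response-threshold rule, where the stimulus decays with distance, performing a task weakens its signal, and each performance lowers the agent's threshold for that task type.

```python
import math
import random

class ReactiveWorker:
    """Responds to task signals whose strength decays with distance; performing
    a task lowers the worker's threshold for that task type, so workers
    gradually specialise. (A stand-in for MANTA's learning rule, not the
    actual one.)"""

    def __init__(self, position, task_types):
        self.position = position
        self.threshold = {t: 0.5 for t in task_types}

    def respond(self, tasks):
        for task in tasks:
            distance = math.dist(self.position, task["position"])
            stimulus = task["intensity"] / (1.0 + distance)
            t = task["type"]
            # Probability of engaging rises with the stimulus, falls with the threshold.
            p = stimulus ** 2 / (stimulus ** 2 + self.threshold[t] ** 2)
            if random.random() < p:
                task["intensity"] *= 0.5                                 # doing the task weakens its signal
                self.threshold[t] = max(0.05, self.threshold[t] - 0.1)   # specialise in this task type
                return task
        return None
```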
The last chapter explores different ways to organise the co-ordination of actions between autonomous agents. In a system where agents have their own objectives and schedules, when tasks depend on one another or when resources have to be shared, it can be important to add a co-ordination function to the system; otherwise there is a risk of redundancy, or even of a "locked" situation occurring. Optimisation in the resolution of these problems is often very difficult when they are treated in a centralised way, and implies choices about which agents should make more effort to adapt to the needs and actions of others. To find efficient methods of co-ordination, one has to study different characteristics of the system: temporal (rapidity, adaptability and predictability), organisational (more or less distributed, with different modes of communication and agents that are more or less free), quality and efficiency (improvement thanks to co-ordination, avoidance of conflicts, number of agents), implementation (quantity of data exchanged, degree of mutual representation needed, difficulty of implementation) and, ultimately, the generalisability of the method of co-ordination (heterogeneity of agents and generality). It is possible to co-ordinate agents by synchronisation (of actions or of access to resources), by planning (where partial plans are easier to co-ordinate), by regulation and by reaction. For example, the partial plans of different agents can be in contradiction, but the contradiction can be resolved simply by delaying certain actions. To organise autonomous agents, we can make the environment usable in a coherent way by generating environmental signals that do not have the same meaning or interest for different agents, we can develop anti-collision techniques by which one agent can detect others, or we can have the agents put marks on their environment that can be used by others. Organising the co-ordination of actions can also require design choices when one comes to solve classical problems and would like not to keep the resolution centralised. This methodology is called eco-problem solving. It implies that we must have a good understanding of what it means to distribute activities but, once we have this understanding, it is possible to imagine a system where the solution appears as a stabilisation of the agents' collective self-organisation. The eco-agents are very simple: they just look for a local "satisfaction" defined by their internal state and perform local actions to achieve it. The author develops some examples of problems that can be solved by this method and shows how to address the difficulties of deciding the initial state of the system and the simple action sets that should be used.
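To give a flavour of eco-problem solving, here is a toy blocks-world sketch in Python: each block agent only seeks its own local satisfaction (resting on its goal support once that support is itself stable), and any block in the way is made to flee to the table, so that the global stacking emerges from local rules. The satisfaction and flight rules here are simplified compared with Ferber's eco-agents, and all names are my own.

```python
def block_on(on, block):
    """Return the block currently resting on `block`, if any."""
    return next((b for b, support in on.items() if support == block), None)

def satisfied(on, goal, b):
    """A block is satisfied when it rests on its goal support and that support
    is itself satisfied (or is the table)."""
    if on[b] != goal[b]:
        return False
    return goal[b] == "table" or satisfied(on, goal, goal[b])

def eco_solve(on, goal, max_rounds=100):
    """`on[b]` is what b currently rests on; `goal[b]` is what it should rest on."""
    for _ in range(max_rounds):
        # Only blocks whose goal support is already stable try to satisfy themselves.
        candidates = [b for b in goal
                      if not satisfied(on, goal, b)
                      and (goal[b] == "table" or satisfied(on, goal, goal[b]))]
        if not candidates:
            return on                    # every agent is satisfied: global solution
        b = candidates[0]
        blocker = block_on(on, b)
        if blocker:                      # something sits on b: it is made to flee
            on[blocker] = "table"
            continue
        occupant = block_on(on, goal[b]) if goal[b] != "table" else None
        if occupant:                     # the goal support is occupied: the occupant flees
            on[occupant] = "table"
            continue
        on[b] = goal[b]                  # b satisfies itself

    return on

state = {"A": "table", "B": "A", "C": "B"}    # initial tower: C on B on A
target = {"A": "B", "B": "C", "C": "table"}   # wanted tower: A on B on C
print(eco_solve(state, target))               # {'A': 'B', 'B': 'C', 'C': 'table'}
```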
In summary, one can say that the book describes a very wide range of ways of building a MAS. In each case, there is a description of the advantages and constraints of the chosen method, as well as a very clear statement of the philosophical and historical background. This can be extremely useful for anyone who wants to use MAS without preconceived ideas about how to implement it. What the book also emphasises is the implications of MAS for contemporary thinking about cognition in general. Ferber's position is clearly much closer to the interactionist approach than to the purely symbolic one. He explores most of the philosophical and psychological arguments that can help us in building conceptual models of intelligence, and then tries to see how far these concepts can be adapted to the actual building of an artificial intelligence in a machine. Considering the difficulty involved in this task, one can understand why his study is so exhaustive. Even if Ferber doesn't explore certain areas (and nobody can do it all), his choice to take all kinds of agents into account, even the highly cognitive ones that he doesn't really believe to be useful, shows how long he sees the road ahead to be. Ferber has often been reproached for making his description as general as possible, because some people saw in this an attempt to lead them to accept his favourite approach. For me, it is instead a book that tries to encourage new ideas and to open doors rather than limiting the field to a successful state of the art. The successes he refers to are always systems that work, but his idea of MAS amounts to more than building a few systems. His conclusion shows that he believes in the possibility of integrating very diverse and even contradictory points of view on intelligence into a single framework.
© Copyright Journal of Artificial Societies and Social Simulation, 2001