Reviewed by
Juliette Rouchier
Groupement de Recherche en Economie Quantitative d'Aix-Marseille
My reading of the book is from a social sciences point of view, meaning here economics and sociology, which I know a bit. This is why some of the chapters receive little comment, whereas I can point out the interest of others. The major part of the book is oriented towards human-machine interfaces and attempts to represent convincing human behaviour in real time. One can note that many applications are clearly oriented towards the execution of military tasks.
For me, only a few chapters are directly relevant to those doing simulations with agents to study social systems at a level larger than the group. However, some elements are worth reading out of curiosity. I do think that Ron Sun's advice to use precise modelling of cognition when doing simulation with interacting agents can be discussed, and I will do so a little at the end of the description of the third part (chapter 14, by Castelfranchi). I do not give the title of each chapter, which can easily be found on the internet.
The second part, comprising chapters 2 (by Taatgen, Lebiere, and Anderson), 3 (by Wray and Jones) and 4 (by Ron Sun), presents several cognitive architectures: ACT-R, Soar and CLARION. These models are then used in the third part to develop examples, which is why it is good to have these explanations as an introduction. In this part, there is no clear demonstration of how these cognitive models could be used for implementing social agents in simulations. This part is perhaps too technical, and it does not allow a non-technical reader, such as myself, to feel comfortable in judging which of the models are more relevant or useful for social science practitioners. In any case, some of the other chapters repeat the main features of the platform they use when presenting their research.
The third part deals with models and simulations of cognitive and social processes: cognitive models are used there in interaction contexts, through simulation.
Chapter 5 (by West, Lebiere, and Bothell) is a very interesting presentation of how ACT-R can be used to mimic human behaviour in imperfect-information games, such as 'Paper, Rock, Scissors'. The argument is that humans are good competitors in evolutionary terms, and hence one should consider their ability to develop strategic reactions in games as indicative of their rationality in general. The proposed model is nice because it is simple and very generic. The difficulty is that validating it against human data involves comparing a huge amount of data, a difficult job per se. For me this chapter is the only one which is clearly linked to economics and where it is obvious that the platform can be applied in research on this type of interaction, using the game-theoretic approach.
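To give a flavour of the mechanism at stake (though not of ACT-R itself, which works through declarative memory and activation), here is a minimal sketch of a lag-based sequence learner playing 'Paper, Rock, Scissors'; all names and parameters are illustrative inventions, not taken from the chapter.

```python
import random
from collections import defaultdict

MOVES = ["rock", "paper", "scissors"]
BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}  # move -> its counter

class LagPredictor:
    """Toy sequence learner: predicts the opponent's next move from the
    frequency of moves that followed their previous move (lag 1). This only
    illustrates the idea of strategic sequence detection; the chapter's
    ACT-R model works through declarative memory retrieval instead."""

    def __init__(self):
        self.counts = defaultdict(lambda: defaultdict(int))
        self.last_opponent_move = None

    def choose(self):
        if self.last_opponent_move is None:
            return random.choice(MOVES)
        followers = self.counts[self.last_opponent_move]
        if not followers:
            return random.choice(MOVES)
        predicted = max(followers, key=followers.get)
        return BEATS[predicted]  # play the counter to the predicted move

    def observe(self, opponent_move):
        if self.last_opponent_move is not None:
            self.counts[self.last_opponent_move][opponent_move] += 1
        self.last_opponent_move = opponent_move

# Example: the predictor quickly exploits a cyclic opponent.
player = LagPredictor()
wins = 0
for opp_move in ["rock", "paper", "scissors"] * 100:
    if player.choose() == BEATS[opp_move]:
        wins += 1
    player.observe(opp_move)
print(f"wins against a cyclic opponent: {wins}/300")
```

Against any opponent with detectable regularities, such a learner rises well above chance, which is the kind of strategic adaptation the chapter models with ACT-R and compares with human play.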
In chapter 6 (by Naveh and Sun), the CLARION model of cognition is used (and hence described again), the aim being to represent implicit knowledge in an organisation. Here, an organisation of cognitive agents faces a collective classification task, for which several organisational designs and kinds of cognition are tested. The first benchmark is the adequacy of the artificially generated results to human behaviour. Simulations are then pursued in order to judge the evolution of the organisation under this kind of cognition. Eventually, cognitive parameters are varied in order to understand why organisations produce certain observable situations, or to detect properties of human cognition that are usually difficult to spot. It is, for example, possible to understand why some organisations (like hierarchies, which need a long training) perform more poorly than more flexible ones.
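The chapter's simulations are of course far richer (CLARION agents, comparison with human data), but the bare experimental logic - the same classification task run under different organisational designs - can be sketched with placeholder agents as follows; the accuracies and structures are invented for illustration.

```python
import random

def noisy_report(truth, accuracy=0.7):
    """One agent's individual classification: correct with probability `accuracy`.
    A stand-in for a CLARION agent's learned response."""
    return truth if random.random() < accuracy else 1 - truth

def team_decision(truth, n_agents=9):
    """Flat team design: simple majority vote over all agents' reports."""
    votes = [noisy_report(truth) for _ in range(n_agents)]
    return int(sum(votes) > n_agents / 2)

def hierarchy_decision(truth, n_agents=9, groups=3):
    """Two-level hierarchy: each middle manager aggregates a subgroup,
    then the top level aggregates the managers' conclusions."""
    per_group = n_agents // groups
    manager_views = []
    for _ in range(groups):
        votes = [noisy_report(truth) for _ in range(per_group)]
        manager_views.append(int(sum(votes) > per_group / 2))
    return int(sum(manager_views) > groups / 2)

# Compare the two designs over many trials of the same binary task.
trials = 10_000
truths = [random.randint(0, 1) for _ in range(trials)]
team_acc = sum(team_decision(t) == t for t in truths) / trials
hier_acc = sum(hierarchy_decision(t) == t for t in truths) / trials
print(f"team accuracy: {team_acc:.3f}, hierarchy accuracy: {hier_acc:.3f}")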
In chapter 7 (by Clancey, Sierhuis, Damer, and Brodsky), what is at stake is the conceptualisation of social behaviour as activities, with norms, authority and the definition of tasks at the centre of what defines such behaviour. Brahms, a model used by anthropologists to describe social activities, is presented, and as an example it is applied to the observation of the 2001 Mars analogue mission with a crew of six. The fundamental elements that were identified are then described and injected into multi-agent simulations to represent human behaviour in a virtual environment. This experiment was useful for building the Brahms model, as well as for revealing the importance of mutual assessment for people's ability to develop their activity in a group. The simulations reveal how each interaction is a composition of social, biological and task-oriented cognitive influences. The whole chapter is very rich and dense in terms of proposals for theoretical analysis and the gathering of empirical data. It is a very good demonstration of different levels being joined together in one model, mainly to study social psychology.
In chapter 8 (by Best and Lebiere), the issue is to develop agents that can evolve in an environment and carry out teamwork. The quality of the representation of the environment is hence important, both for navigating an unknown space and for organising teamwork with other agents. ACT-R is used as the cognitive model in two different simulation settings: a game platform (Unreal Tournament) and teams of mobile robots (ActivMedia Robotics). The task is very specific and apparently originates from an army request: it is a MOUT (Military Operations in Urban Terrain) assignment, which is to clear an L-shaped alley of its occupants. In both cases, the ACT-R system has to be organised so as to appear to be a real-time process. The difference is that the robots have more to do in order to obtain a clear vision, and one has to judge how to trade off quality of perception against computation time. The representation of space, movement, communication and the planning of teamwork are similar in both settings. Once more, ACT-R proves to be flexible and able to react quickly; cooperation can be managed easily and action plans are of good quality with this cognitive model.
Chapter 9 (by Gratch, Mao, and Marsella) gives a large place to a logic dealing with emotions. These emotions are shown to have an important role in social interactions, and when it comes to teamwork and the modification of plans, the ability to attribute blame and credit is central. This ability is difficult to implement in artificial agents, and to do so the authors refer to appraisal theory. The model is EMA, written in Soar, which is described here and for which code is given. The major point is to be able to attribute blame to the one who is really responsible rather than to the one who executes an order. It is shown that this logic (with primitives, axioms, attribution rules, common-sense inference and back-tracking algorithms) can achieve this aim. It enables an evolution of agents' appraisal over time (identification of relevant elements in context and coping with situations).
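As a hedged illustration of the attribution idea (this is not EMA's Soar code, which the chapter itself provides), a toy rule that traces blame back along a chain of command might look like this; the fields and the rule are simplified inventions.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A harmful outcome together with its causal chain."""
    performer: str                # agent who executed the action
    ordered_by: Optional[str]     # agent who commanded it, if any
    coerced: bool                 # performer had no real choice
    outcome_foreseen: bool        # the responsible party foresaw the harm

def attribute_blame(action: Action) -> Optional[str]:
    """Toy attribution rule in the spirit of appraisal theory: blame follows
    responsibility (intention, foreknowledge, freedom), not mere execution.
    EMA's actual rules are far richer, with commonsense inference and
    back-tracking over the causal history."""
    if not action.outcome_foreseen:
        return None               # unforeseen harm attracts no blame in this sketch
    if action.ordered_by is not None and action.coerced:
        return action.ordered_by  # blame traces back to the order-giver
    return action.performer       # otherwise the free executor is blamed

# A coerced subordinate is not blamed; the one who gave the order is.
print(attribute_blame(Action("soldier", "officer", coerced=True, outcome_foreseen=True)))
# -> officer
```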
Chapter 10 (by Trafton, Schultz, Cassimatis, Hiatt, Perzanowski, Brock, Bugajska, and Adams) is based on the observation that the way artificial agents act, and the usual social "tags" through which they are defined, can determine the way people act upon and interact with them. This is an important property to grasp for all those who want to develop interfaces between humans and machines. For example, it has been shown that a computer that emits two different voices is seen as two distinct entities, whereas two computers emitting the same voice are seen as a single entity. When the aim is to establish collaboration between agent and human, where the two complement each other in a task rather than one being just an obedient machine, communication starts to be tricky. For example, the interface is important for humans to be able to understand what the machine does, whereas a translation has to be made from human representations to machine representations when it comes to the spatial environment (machines using matrices and operations that are quite different from our reasoning). The authors describe their work with mobile robots: the software, hardware, spatial imagery and cognitive methods for communication. The example is a hide-and-seek game where the agent must project itself into the other, understanding what the other's perceptions are during the hiding or seeking task, so as to anticipate its own search. The perspective-taking exercise can then be extrapolated to astronauts and robots, and the Polyscheme and ACT-R platforms are presented.
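A toy sketch of what perspective taking means computationally: the hider evaluates candidate positions by simulating the seeker's line of sight, i.e. by reasoning from the other's viewpoint. The grid world and the straight-line visibility test below are illustrative stand-ins for the Polyscheme/ACT-R machinery described in the chapter.

```python
def visible(grid, frm, to):
    """True if the straight line from `frm` to `to` crosses no obstacle ('#').
    A crude, sampled line-of-sight check on a small grid."""
    (x0, y0), (x1, y1) = frm, to
    steps = max(abs(x1 - x0), abs(y1 - y0), 1)
    for i in range(1, steps):
        x = round(x0 + (x1 - x0) * i / steps)
        y = round(y0 + (y1 - y0) * i / steps)
        if grid[y][x] == "#":
            return False
    return True

def best_hiding_spot(grid, seeker):
    """Pick a free cell ('.') that the seeker cannot see from its position:
    the hider adopts the seeker's perspective to choose where to hide."""
    candidates = [
        (x, y)
        for y, row in enumerate(grid)
        for x, cell in enumerate(row)
        if cell == "." and not visible(grid, seeker, (x, y))
    ]
    return candidates[0] if candidates else None

grid = [
    "....",
    ".##.",
    "....",
]
print(best_hiding_spot(grid, seeker=(0, 0)))  # a cell shadowed by the wall
```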
In chapter 11 (by Shell and Mataric), humanoid robots are used to validate humans' self-referential theories about space. Humanoid robots are those that can use the human environment and interact with humans. To organise learning built on interaction with humans, motor control and motor recognition have to be understood and programmed through primitives. Several layers of issues arise, such as the natural structure of human motion, robot control and perception, and skill acquisition through imitation (perception and classification being the building bricks of this learning). In the paradigm used here, reasoning is not enough for learning: behaviour is the unit that has to be combined into structures to build a complex representation of the world. Knowledge is thus acquired thanks to task-achieving actions, the ability to look ahead, and drawing conclusions from past mistakes. In this context, one difficult element is observation and learning in a social context, since the actions of others give results that have to be interpreted as such, and the link between one's own action and the result is harder to decipher. Hence, when developing behaviour-based robots for collective tasks, it is necessary to add abilities to work together through the establishment of social rules that increase synergy and reduce the negative effects of co-presence.
Leaving the field of spatial aspects, chapter 12 (by Schurr, Okamoto, Maheswaran, Scerri, and Tambe) goes through a model of teamwork. The authors present Machinetta, the new generation of proxies for the STEAM implementation. This model is used to facilitate teamwork among humans and robots, with an emphasis on role allocation in complex environments (i.e. some roles can disappear, agents can take several roles with limited resources to achieve them, several agents can have the same role). The proxies are made of different blocks, such as communication, coordination, state, adjustable autonomy and the RAP interface (with other team members). They can be operated in the context of Team Oriented Plans, which are inspired by joint intentions theory. All these principles are described and examples are given (personal assistant agents and disaster response). The novel role allocation method in particular is described in detail, with an algorithm called LA-DCOP (an approximate DCOP algorithm for decentralised contexts), and three examples are given in different simulation environments.
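To illustrate the token-passing intuition behind this kind of role allocation (a deliberately simplified reading, not the published LA-DCOP algorithm), consider the following sketch, in which each unassigned role circulates as a token and an agent keeps it only if its capability clears a threshold; the thresholds, capabilities and one-role-per-agent limit are all assumptions of the sketch.

```python
import random

def allocate(roles, agents, capability, threshold=0.6, max_hops=100):
    """Toy token-based role allocation.
    roles: list of role names; agents: list of agent names;
    capability[(agent, role)]: a float in [0, 1]."""
    assignment = {}
    free_agents = set(agents)
    for role in roles:                                  # one token per role
        for _ in range(max_hops):
            if not free_agents:
                break
            agent = random.choice(sorted(free_agents))  # token visits an agent
            if capability[(agent, role)] >= threshold:
                assignment[role] = agent                # agent keeps the token
                free_agents.discard(agent)
                break
            # otherwise the agent passes the token on to another team member
    return assignment

agents = ["a1", "a2", "a3"]
roles = ["fight_fire", "rescue", "transport"]
capability = {(a, r): random.random() for a in agents for r in roles}
print(allocate(roles, agents, capability))
```

The appeal of this family of methods, as the chapter stresses, is that allocation needs no central allocator and little communication, which matters when roles appear and disappear during a disaster response.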
Chapter 13 (by Parisi and Nolfi) comes much closer to social simulation as understood by the JASSS editorial line. In that context I know much better what is at stake, and I actually find the chapter quite weak in terms of demonstration. The authors' point is to develop a model where spatial aggregation, then communication, then culture grow out of the gathering of situated neural networks (each network representing an agent). Agents embody a neural network and a genotype which evolves through selection pressure to represent biological evolution as well as (for me a huge supposition) cultural traits. The results are pretty much of a "Growing Artificial Societies" type, with agents put on a simple grid and emerging phenomena appearing through interaction. Of course, the fact that learning is carried out by neural networks is the main difference in the treatment made here. Social interactions are qualified as Type 1 when they occur non-consciously, through interactions mediated by the resource (this was called "indirect communication" in the French multi-agent community at some point), and Type 2 when agents are conscious that other similar agents exist and can communicate with them ("direct communication"). In the chapter, some examples are given of emerging phenomena, which I do not find so surprising but rather tautological (e.g. in an evolutionary perspective where the attitude of helping one's own children is a genetic character, this character tends to survive better than the character of not helping them survive), and the repetition of these elements makes the chapter a bit disappointing. Quite large conclusions about the emergence of language (in general, not only in the system) are drawn, and it is very doubtful that this was proven by the simulated results of the model at all. The chapter is also not so clear about the link between a cognitive representation as a neural network and actual human cognitive abilities, since the reproduction of real data is not validated at all within the set of given examples. This chapter is maybe the closest to the JASSS community, and it stands out in the book by presenting a representation of cognition which is the opposite of the others, where logic is the central paradigm. Here cognition is not explicit, nor is plan building or the representation of an environment. These features are pleasant per se. However, it is not so clear that the approach is actually relevant or helpful, because it does not address any central issue such as validation or similarity to any social or cognitive theory.
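For readers unfamiliar with this tradition, the basic loop - agents whose genotypes encode neural-network weights, with fitness-based selection and mutation - can be sketched as follows; the one-dimensional food-seeking world and all parameters are invented for illustration and far simpler than the chapter's models.

```python
import random, math

N_WEIGHTS = 2  # a single linear unit: one weight and one bias

def act(genotype, food_direction):
    """Network output > 0 means 'step right', otherwise 'step left'."""
    w, b = genotype
    return 1 if math.tanh(w * food_direction + b) > 0 else -1

def fitness(genotype, trials=30, steps=20):
    """Fitness = amount of food reached on a 1-D line world."""
    eaten = 0
    for _ in range(trials):
        pos, food = 0, random.randint(-5, 5)
        for _ in range(steps):
            if pos == food:
                eaten += 1
                food = random.randint(-5, 5)
            pos += act(genotype, food - pos)
    return eaten

def evolve(pop_size=50, generations=30, sigma=0.3):
    """Truncation selection plus Gaussian mutation over the genotypes."""
    population = [[random.gauss(0, 1) for _ in range(N_WEIGHTS)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=fitness, reverse=True)
        parents = ranked[: pop_size // 5]       # best fifth reproduces
        population = [[g + random.gauss(0, sigma) for g in random.choice(parents)]
                      for _ in range(pop_size)]
    return max(population, key=fitness)

best = evolve()
print("best genotype:", best, "fitness:", fitness(best))
```

Selection pressure alone suffices to evolve food-seeking here, which is the sense in which such results can feel tautological: the behaviour rewarded by the fitness function is, unsurprisingly, the behaviour that emerges.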
The last chapter of this part, chapter 14 (by Castelfranchi), is a very interesting summary of most of the cognitive abilities that have to be incorporated into an agent so as to make it able to lead a social life. The point of view is that of the social sciences, with good expertise in multi-agent simulation, and the chapter is not overly based on demonstrating the centrality of BDI in representation (which one could "fear", knowing the author). The idea of the paper is in line with the aim of the book: to show that the articulation of individual rationality and social behaviour needs a lot of complex cognitive manipulation, but also that individual rationality cannot be thought of without the help of social thinking. The approach differs from the rest of the book in being much more abstract. The central points are: social life has to be accounted for through the mind of agents (mind is necessary); there is a need for rich computational models; the social is more than individual and cognitive models (mind is not sufficient). What is needed in a multi-agent system is autonomy of agents (for example with BDI, including multi-layer cognition and emotions); intention must be present (with a representation of the others' rationality); social patterns must be present in individuals in some way; there are social sources for beliefs and goals; "we" must be understood by agents (hence groups exist); and conventions are needed. This is in line with most of the preceding chapters, which integrate most of these elements. "Mind is necessary but not sufficient" here means that teamwork has to be enabled, and for that norms must exist, as well as social structures: these can be "objective" (like dependence networks), pre-existing or emergent where agents can perceive this emergence, or not identified by agents but still constraining their actions. The integrative model is based on goals, since the individual is goal-driven (which is coherent with most economic thinking) but society is the main goal provider for agents (which is coherent with sociology). Hierarchies of goals are dealt with within each agent. Some examples are then given of emerging cooperation among unaware social agents (with an explanation of what is needed and how it is organised).
Here I give my own comment on the book, before summarising the last chapters, whose aim is to let other authors offer this kind of critical vision. As I said, chapter 14 is very interesting, since it is a good summary of the most important elements needed to think about agents interacting in a social setting. However, I feel quite frustrated here as a social simulator (one who tries to apply modelling sometimes to economics, sometimes to sociology), because the question of the completeness of sub-models is never addressed. It seems to me that whenever one has to build a model about a specific point, it is necessary to make choices about which representations of the mind and which social features can be represented; otherwise the number of parameters is so high that it is impossible to perform any simulation work. In the whole book, all the examples are based on extremely generic models and extremely specific applications, where the application is very narrow and the choices made to simplify the representation of the mind are always case-based (sometimes clearly justified, sometimes less so). My feeling is that one layer of analysis is really missing: the elicitation of intermediate sub-models (or simplified cognitive models) that would be sufficient, yet consistent with the generic models, and could be applied to certain classes of social issues. I am thinking here of (for example) taking the approach of chapter 14 (by Castelfranchi) and describing social situations in terms of mind, objective social structures, emergent ones, communication skills and intention recognition, applied to generic issues such as an economic issue or a sociological question. This would loop the generic complex framework back into lighter models that could be used in a tractable way by those who want to explore simulations. I felt a bit sorry not to find this kind of proposal here: at the same time case-specific, easy to apply, and coherent with the global vision.
The last part of the book is a symposium, where different authors discuss the issues that can be addressed when mixing good representations of cognition with social interactions in simulations.
Chapter 15 (by Scott Moss) describes why adding more cognitive precision to the description of agents' behaviour, when the model has been validated empirically, can help to attain the aim of doing good science with models. Chapter 16 (by Panzarasa and Jennings) goes back to the main issue of the book, which is how to link two levels of decision - individual and collective - which are necessarily linked but develop almost independent dynamics. The chapter is then a demonstration of how collective cognition can be seen as an emergent property of interacting individual cognitions. First, individual cognition needs to be able to take the "other" into consideration and to reason about what is perceived of this other. Then, there must be some organisation to define the rules of agent interaction. Finally, it is necessary to provide a way to observe the novelty of the emergent phenomenon of collective cognition, which is by definition observed independently of individual cognition and has a downward impact on the individual level of cognition. Chapter 17 (by Burns and Roszkowska) shows how social cognition can be incorporated into a description of individual cognition in multi-agent systems through the concept of judgement. Generalized Game Theory, based on logical predicates, is at the basis of the demonstration in this chapter. The chapter is a bit self-referential and not very clear about its applicability to any kind of social simulation.
Finally, chapter 18 (by Frank Ritter and Emma Norling) takes into account an important aspect of human cognition, its variability among subjects, and considers its impact on teamwork. This variability is present in all dimensions of rationality, such as cognition, perception, action and physiology, and is also reflected in the fact that numerous models try to represent human cognition (CLARION, Soar, etc.). All behaviours in a simulation can then be seen as moderated by a moderator, which can be internal to the agent, external (rules), or based on the task that has to be performed, evolving as time passes. The authors conclude with a framework for modelling that can represent this formal approach to teamwork.
The discussion in the last chapter (by Nigel Gilbert) deals with the central question of the book: do we really need to represent the psychological level of humans when dealing with social issues? The main answer of the chapter is that "it depends on the question and on the model". Showing that this issue is common to many research fields where aggregation is at stake, the author gives examples of the relevance and irrelevance of mixed representations: cases where the psychological level would make a social effect disappear (Schelling's model), and others where it is useful (like chapter 12 of the volume). Another point made by the author (which he regularly puts forward in his work) concerns the relevance of a model itself. A model is rarely proven valid across different fields of application in psychology, and the fact that it matches data does not prove that it is the unique model able to represent cognition - which makes it sometimes dangerous to use in contexts from which it has not been precisely extracted. I am very sympathetic to this last methodological argument.