This is by no means the first conference volume in the field of social simulation, but it is easily the most comprehensive. In addition to a substantive and substantial introduction by the editors, there are 40 papers of varying length and, inevitably in a volume such as this, variable quality. While some of the papers are extended abstracts, others could clearly have gone just as appropriately into a good refereed journal. Moreover, the first half dozen papers, including the introduction, are concerned with the direction in which the communal activity of social simulation should proceed. Taken as a whole, then, the book presents both a certain amount of soul-searching about what we in the social simulation communities should be doing and a representative collection of papers spanning a wide range of the kinds of simulations undertaken by social scientists. I found no papers on the sort of multi-agent systems that computer scientists build and analyse, although a number of authors cite that literature.
In their introduction ("Social Simulation - A New Disciplinary Synthesis", pp. 1-17), Conte, Hegselmann and Terna draw some broad distinctions among the various social sciences. The principal differences are the levels of aggregation; the direction of causation between individuals on the one hand and social institutions on the other; the use (or not) of formalisms to express theories; and a concern with pragmatic success (prediction and policy prescription) or with epistemic truth (analysis of causation). In effect, economists are reductionist and pragmatic with a strong interest in causal explanation whereas sociologists are more holist and epistemically oriented, although with considerable policy influence. They also point out that, while economics has a stable (some would say ossified) theoretical core, sociology is influenced (some would say wracked) by successive alternative approaches to the analysis of socialisation and social interaction. They assert that simulation modelling can provide a basis for a meeting of the minds of economists, sociologists and those in between.
The editors argue that economics should be brought into closer harmony with politics, psychology, sociology, law and the theory of organisations or, if harmony is not possible, that these disciplines should compete within an overarching framework provided by social simulation. They cite a few heterodox economists (Sam Bowles and Brian Arthur, for example) as evidence that economics is moving away from its self-imposed isolation. This seems a dangerous line to take. Mainstream economics is simply different from the rest of the social sciences. The source of this difference is that economists are concerned with some form of equilibrium analysis while the rest of us are interested in social processes. A consequence of that concern with equilibrium states is that developments in economic theory are intended to elaborate or modify previous developments. Frequently, mainstream economists will motivate some twist on existing equilibrium theory by asserting that their idea corresponds to some phenomenon observed in the real world. Because such phenomena are never observed in an equilibrium state - because no such states are observed - the nature or extent of such links to reality cannot be tested directly. Social simulation deriving from the interests of sociologists, anthropologists and some heterodox economists is concerned with the emergence of behavioural patterns and social processes. Frequently, as this volume shows, simulations are used to determine the consistency and test the feasibility of theories generated in quite independent disciplines. These differences will be taken up in detail below.
The other claims made for social simulation seem more supportable. A key development is the integration of social science modelling with cognitive science. Papers investigating the appropriateness of various organisational forms using computational implementations of cognitive theories (Carley and Prietula 1994, Ye and Carley 1995) have been around for a few years now although, unfortunately, none of these approaches is represented in this volume. However, other integrative aspects of social simulation are well represented, including the use of neural networks and genetic algorithms to represent learning by agents. Historical studies and understandings can be both an inspiration for and a test of social simulations. The property of social simulations that is common to nearly all of the papers in the volume (apart from some of those contributed by economists) is that of emergence. Some papers are concerned with emerging behaviour patterns and some with emerging organisational structures and institutions (including market structures). A few papers restate and test social theories in a simulation framework and succeed both in providing them with a new clarity of meaning and in giving the community confidence in their validity (or lack thereof).
While the editors' introduction is thoughtful and, indeed, thought-provoking, it offers no overview of the papers in the volume. In part this is no doubt a reflection of the short lag between the deadline for papers and the publication of the conference volume, which was available at the conference.
It follows that a review of this volume presents an opportunity, if not a duty, to assess the arguments of the introduction in relation to the papers comprising the rest of the volume. This review will begin by considering the role of social simulation indicated by the papers in the volume, then turn to the scope for integrating mainstream economics and other approaches to social simulation and finally address the nature of emergence as presented in this collection.
Justice can hardly be done to 40 different papers in a single review. I have opted instead to develop (on the basis of some of the contributions to this volume) an extension of the issues considered by the editors in their introduction. Papers which deserve more discussion than they will receive here include Jim Doran's contribution on "Foreknowledge in Artificial Societies" (pp. 457-467), which is both interesting and very different from any of the other papers in the volume, some of the papers on organisations and the only empirical paper, "Models and Scenarios for European Freight Transport Based on Neural Networks and Logit Analysis" (pp. 319-325) by Nijkamp, Reggiani and Tsang. There is also a section on statistical techniques using simulation which this reviewer did not feel competent to judge.
The papers concerned with the role and purpose of social simulation identified four general areas in which simulation is an important addition to the tools of social scientific analysis.
Axelrod ("Advancing the Art of Simulation in the Social Sciences", pp. 21-40) argues that simulation supports emergence where deduction would be intractable and that the emergent properties of the model can lead inductively to generalisation, based on replications and multiplicity of similar simulation models. He urges simplicity in the specification of models on the grounds that, otherwise, it will be hard to understand why properties of the simulated behaviours and societies emerge as they do.
If models are implemented in procedural programming languages such as C++ or Pascal (as Axelrod suggests on p. 26), then there is little alternative to keeping them simple if one is to understand the relationships giving rise to their behaviour. An alternative is to use languages that conform either to some theory or to some logical formalism. Soar and ACT-R are obvious examples of programming environments that are implementations of cognitive theories. Alternatively, logic programming languages offer the possibility of applying deductive techniques to determine whether the emergent properties of models are general relative to the underlying logic of the language, and theorem provers exist to support such deductive analysis even for quite complicated models. In general, it would be hard to disagree that simulation enables us to analyse emergent behaviour which could not be deduced by conventional means. Indeed, many of the papers in this volume are concerned precisely with issues of emergence. The additional point being made here is that computational models can be complemented by computational theorem provers to deduce those emergent properties, provided that the implementation in which the properties emerge corresponds to a set of axioms and rules of inference. Procedural languages do not support such proofs of emergence.
Another role for social simulation is to make the qualitative analysis of results from complicated models more tractable. Troitzsch ("Social Science Simulation - Origins, Prospects, Purposes", pp. 41-54) demonstrates this point clearly using the linear differential equation model of an arms race proposed by Richardson (1948). The mathematical form is:
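In its standard formulation, which is presumably close to the form Troitzsch presents, the model couples the armament levels $x$ and $y$ of two rival blocs through a pair of linear differential equations:

$$\frac{dx}{dt} = ay - mx + g, \qquad \frac{dy}{dt} = bx - ny + h$$

where $a$ and $b$ are reaction coefficients measuring each side's response to the other's armaments, $m$ and $n$ are fatigue or restraint coefficients and $g$ and $h$ are standing grievances. The classic result is that the system settles down when restraint dominates reaction ($mn > ab$) and explodes into an arms race otherwise.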
Simulating the model yields families of trajectories which Troitzsch displays as three-dimensional "wire basket" plots.
Clearly, the wire basket shapes give a better intuitive sense of the nature of the instabilities implied by the model than do the differential equations.
A third role for social simulation, also described by Troitzsch, is to inform the analyst's understanding of the structure and processes of the system. Troitzsch gives this role little emphasis, calling it structural validity and making it one of three elements in assessing the validity of a model (the others being replication and prediction). It seems unlikely that any simulation modellers would disagree with the importance of comparing model outputs with real world data. The point being made here is that simulation models provide a new capacity for the analysis of social structures and processes.
A fourth role for simulation is testing the consistency, meaning and implications of theories heretofore specified only verbally. A particularly interesting example for this reviewer, who struggled vainly with the writings of Talcott Parsons some 35 years ago, is the attempt by Jacobsen and Bronson ("Computer Simulated Empirical Tests of Social Theory", pp. 97-102) to implement a simulation model of Parsons' General Theory of Action (Parsons 1937). They "found it could not be modeled at all because of the unclarity and inconsistencies in Parsons' use of concepts" (p. 98). How much agony would have been spared generations of sociology and politics students had social simulation been common all those years ago!
One of Axelrod's points, made almost en passant, is that some emergent properties of models can be proved deductively on the assumption of rational behaviour in the sense (although he is not explicit on this) that agents employ constrained optimisation algorithms to determine their actions. He considers both emergence arising because agents are adaptive and emergence arising from evolutionary selection, and he observes, in effect, that the rational choice assumption is adopted for its tractability rather than for its realism.
This suggestion - that the rational choice assumption is unrealistic but tractable - is hardly new. Nonetheless, Axelrod's observation points up a danger for the developing area of social simulation, were we actually to realise the possibility (identified in the introduction by Conte et al.) that simulation could bring the social sciences closer to rational choice economics. To make this point, the differences between mainstream, rational choice economics and the social sciences will be given greater weight in this essay than they were in the volume under review.
Mainstream (or rational choice) economics and social simulation are neither complements nor competitors. They are just different in both content and applicability. To demonstrate this, I will begin with a comparison of two papers in the volume under review: the paper by Pedone and Parisi ("In What Kinds of Social Groups Can 'Altruistic' Behaviors Evolve?", pp. 195-201) and "Bargaining Between Automata" (pp. 113-132) by Binmore, Piccione and Samuelson. One problem with this comparison is that the papers use different techniques to analyse different issues. In order to keep the points involved here clear, I will introduce a preliminary comparison in which the issues are the same and the difference is in the analytical stance. Specifically, I will compare the Pedone and Parisi paper with the most recent paper on altruism that I could find in a world-class mainstream economics journal. That paper is "Growth Effects of Taxation under Altruism and Low Elasticity of Intertemporal Substitution" by Jordi Caballé in the January 1998 issue of The Economic Journal.
Having elucidated the differences of analytic method and purpose in these two papers, I will then argue that, although the form of the argument is different, the underlying method and purpose of the Binmore et al. paper is the same as that of Caballé. The purpose of this two-step approach is to keep differences of subject matter as separate as possible from differences of analytical style and method. The argument will then be restated in relation to the common elements in specifications of agents with reference to contributions to the volume under review.
The paper by Pedone and Parisi addresses the question of why we observe altruistic behaviour in societies where such behaviour reduces the survival and reproductive success of the altruistic individual. Why, in effect, is altruism not eliminated by evolutionary forces? Kin selection theory suggests that, in the words of Pedone and Parisi, "Altruism can be preserved in a population if altruistic behaviors reduce the reproductive chances of the altruist but increase the reproductive chances of other individuals that are genetically related to the altruist." This proposition is tested in the paper by setting up an environment in which individuals learn to find food and to use food-growing tools. The individuals learn over their lifetimes through a neural network. In one set of simulations, individuals live alone and consume any food that they cause to grow. Over time, they learn to allocate their time between consumption and food growing in a manner which is appropriate for maintaining their energy levels. In a second set of experiments, agents live in groups. One set of groups has agents that are "genetically related" in the sense that they inherit their neural network weights from the same parents, allowing for some mutation. In the other set of groups, the agents are unrelated. Pedone and Parisi confirm that altruism survives in kinship groups but not, in these experiments, in groups of unrelated agents. In the latter groups, food production vanishes if food producers cannot appropriate the food they produce. Pedone and Parisi also simulated societies in which behavioural norms are such that all individuals are food producers. In that case, food production is a permanent feature of the groups but, if the groups are mixed - some producing food and others not - then food production is eliminated.
The particular environment that was specified for these simulation experiments was a grid in which food was distributed to random cells and tools were distributed to other random cells. If an agent moved to a cell with food, its energy level increased by one unit. When an agent moved to a cell with a tool, a new item of food was created on some other randomly determined cell and would be consumed by an agent that moved to or, presumably, was already in that cell. It is important to note that the behaviour of the agents in terms of consumption or production is specified independently of the environmental specification. Whether the evolution of behaviour is affected by the environment is another matter, but the design of the agents per se can be applied to a range of environmental situations. This, as we shall see presently, is not the case for an economist's specification of altruistic behaviour.
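As a concrete illustration, a minimal sketch of an environment of this kind follows. The grid size, quantities and replacement rule are illustrative assumptions rather than Pedone and Parisi's actual parameters, and a random walk stands in for the inherited neural network that actually selects the agents' moves.

```python
import random

GRID = 20  # toroidal grid; the size is an assumption for illustration

class World:
    """Food and food-growing tools scattered over random cells."""
    def __init__(self, n_food=40, n_tools=40):
        cells = [(x, y) for x in range(GRID) for y in range(GRID)]
        random.shuffle(cells)
        self.food = set(cells[:n_food])
        self.tools = set(cells[n_food:n_food + n_tools])

    def enter(self, agent, cell):
        if cell in self.food:
            self.food.discard(cell)
            agent.energy += 1  # moving onto food raises energy by one unit
        if cell in self.tools:
            # moving onto a tool grows a new item of food on some other
            # randomly determined cell
            candidates = [(x, y) for x in range(GRID) for y in range(GRID)
                          if (x, y) not in self.food and (x, y) != cell]
            self.food.add(random.choice(candidates))

class Agent:
    def __init__(self):
        self.energy = 0
        self.pos = (random.randrange(GRID), random.randrange(GRID))

    def step(self, world):
        # Pedone and Parisi's agents choose moves with an inherited neural
        # network; a random walk stands in for that controller here.
        dx, dy = random.choice([(0, 1), (0, -1), (1, 0), (-1, 0)])
        self.pos = ((self.pos[0] + dx) % GRID, (self.pos[1] + dy) % GRID)
        world.enter(self, self.pos)

world, agent = World(), Agent()
for _ in range(1000):
    agent.step(world)
print(agent.energy)
```

The point to notice is that the agent's behavioural repertoire is defined independently of the world's rules, so the same agent design could be dropped into quite different environments.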
For Caballé (going back in an unbroken line to Barro 1974), altruism is driven by the utility an individual receives from leaving a bequest to his or her heirs. The analysis is motivated by previous articles offering analytical models of the effects of tax rates on economic growth. His extension of that literature is to focus on bequests. This focus is motivated by an estimate that between 45 and 80 per cent of household wealth in the United States has been the subject of "intergenerational transfers". So Caballé wants to see if allowing for bequests establishes a relationship between tax rates on property income on the one hand and, on the other, the rate of economic growth.
Note that there is a single, unambiguous rate of growth in this model. This is because Caballé assumes "balanced growth" in which everything in the economy grows at the same rate. That isn't too hard to achieve since there is only one produced good in the economy, which serves as both capital and consumption good, and no agent differs in any way from any other. The number of agents must grow at the same rate as the stock and the output of the good. This is an overlapping generations (OLG) model in which everyone lives for two periods, overlapping with an older generation in the first period and a younger generation in the second. If you think I am overstating the restrictiveness of the specified environment, consider Caballé's own specification of the continuum of identical agents (1998, p. 94).
The identical nature of the firms and the presumption of equilibrium are set out just as explicitly (ibid., p. 96).
Note that equation (5) of the passage cited at p. 96 is implied by the equilibrium assumption. Note also that the behaviour of the agent is specified by the exogenously asserted utility function, and the resulting demands can only be determined as a property of the balanced growth equilibrium solution for the economy.
There is nothing in this article which constitutes an investigation of process. It is entirely a question of thinking up, out of one's own head, a process which can be imposed on a single agent working for a single firm using a single good to reproduce that same good for both production and consumption. The necessary characteristic of the behaviour for the paper to be publishable is that it should be compatible with a sort of social equilibrium the world has yet to observe. In fact, this article is not explaining anything at all. This could hardly be more different from the search by Pedone and Parisi for a process which is consistent with a theory that is quite independent of their model (kin selection theory) and which yields a result frequently observed in nature and society (altruism within species). Note, additionally, that the behavioural basis of altruism in the Caballé paper could not be taken out of the particular equilibrium framework and applied generally in the context of other environmental specifications.
The abstract of the Binmore et al. paper (p. 113) makes clear that its purpose is to prove that Rubinstein's results are not restricted to an environment in Nash equilibrium. The Rubinstein result, as is common in iterated game theory, was obtained by backward induction. That is, the analyst identifies the optimal result at some arbitrary end-point of the game, then finds the optimal state at the previous game point from the possible initial conditions of the final optimum, and so on back to some start point. Binmore et al. state that "this paper asks how much of Rubinstein's result can be preserved if we abandon backward-induction reasoning as well as the infinite divisibility of the surplus." The preservation of this result is said by the authors to be interesting because software agents face similar problems when they meet other software agents that are "badly programmed", in the sense (I think) that they are not rational in the way economic agents are assumed to be.
The agents in this game are finite automata selected by opposing metaplayers. Finite automata are sets of possible initial states, functions for transforming those initial states into final states and output functions determining the actions implied by the final states. The functions and states of any automaton are fixed. Certainly in the Binmore et al. paper, there is no learning or cognitive behaviour assumed which could change the states recognised by the automata (the initial states) nor the actions taken by the automata in any state (the transformation and output functions).
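For concreteness, here is a minimal sketch of such an automaton. The two-state tit-for-tat machine is a standard textbook example, not one drawn from the Binmore et al. paper.

```python
# A sketch of a fixed finite automaton (Moore machine): a set of states, a
# transition function driven by the opponent's last move, and an output
# function mapping states to actions. Nothing about the machine changes
# during play, just as in the Binmore et al. setting.

C, D = "cooperate", "defect"

class MooreMachine:
    def __init__(self, initial, transitions, outputs):
        self.state = initial
        self.transitions = transitions  # (state, opponent's move) -> next state
        self.outputs = outputs          # state -> action

    def act(self):
        return self.outputs[self.state]

    def observe(self, opponent_move):
        self.state = self.transitions[(self.state, opponent_move)]

def play(a, b, rounds=10):
    """Let two fixed automata play the repeated game."""
    history = []
    for _ in range(rounds):
        ma, mb = a.act(), b.act()
        a.observe(mb)
        b.observe(ma)
        history.append((ma, mb))
    return history

# Tit-for-tat: start cooperatively, then mirror the opponent's last move.
tft = {"initial": 0,
       "transitions": {(0, C): 0, (0, D): 1, (1, C): 0, (1, D): 1},
       "outputs": {0: C, 1: D}}
a, b = MooreMachine(**tft), MooreMachine(**tft)
print(play(a, b, rounds=5))  # two tit-for-tat machines cooperate throughout
```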
There are two points to be made about this paper. The first is that, like Caballé but unlike Pedone and Parisi, Binmore et al. are not seeking to identify any actual social process; they say as much in setting out the purpose of the paper.
The second point is that the suggestion that software agents would find a Nash equilibrium in a game against any arbitrarily programmed competing software agent is simply fanciful. No less fanciful is the suggestion that a metaplayer - presumably a human sending its softbot out onto the Internet to achieve some specified objective - would be able to design a finite automaton to deal with any automaton designed and programmed by any arbitrary competing metaplayer. These results do not seem to be related either to observed bargaining processes or to software engineering. Like Caballé's paper, this paper is extending within the broad framework of economic theory a previous set of results from the same body of theory. Agents are optimisers in an optimised world. What Binmore et al. actually prove is that the class of finite automata they are considering will exist in a kind of evolutionarily stable equilibrium just as Caballé's altruists will exist in a balanced growth equilibrium. The difference here is that Caballé proves his result with an algorithm in which the agent, in its multiple but identical manifestations, determines its optimum actions at a stroke, while Binmore et al. prove their result on the assumption of a round-robin tournament. Both are analytical artefacts with no useful real world referent.
While Binmore et al. do not specifically make this claim, their use of round-robin tournaments is frequently asserted to represent a dynamic process of the sort one might find in a competitive environment. This sort of argument is emphatically rejected by Tomas Klos ("Spatially Coevolving Automata Play the Repeated Prisoner's Dilemma", pp. 153-159) in the volume under review.
Although motivated by his criticism of Miller (1996), Klos also directly contradicts Binmore et al., arguing that agents in real social settings do not play against every other member of the population with equal likelihood but instead interact repeatedly with, and learn from, a limited set of neighbours. Each of these observations contravenes the definition of the round-robin tournament.
To avoid these characteristics of such tournaments, Klos conducts simulation experiments in which agents are located spatially and play iterated prisoners' dilemma games only against their immediate neighbours. The agents then change their strategies when the fitnesses of competitors are higher than their own as a result of play. Agents seek to imitate their more successful neighbours, although crossover and mutation do create the random variation on which genetic algorithms depend. Klos finds that quite simple finite automata (with a small number of internal states), playing against and learning from their neighbours, perform approximately as well as Miller's automata playing in a round-robin tournament with all possible initial states and with transition functions and output functions evolving as global genetic algorithms.
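A minimal sketch of this kind of spatial dynamic might look as follows. One-shot prisoner's dilemma strategies stand in for Klos's evolving automata to keep the sketch short, and the payoffs, neighbourhood and mutation rate are illustrative assumptions.

```python
import random

# Klos-style spatial coevolution in miniature: agents sit on a torus, play
# only their immediate neighbours and imitate fitter neighbours, with a
# little mutation to keep variation alive.

N = 20
PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

grid = [[random.choice("CD") for _ in range(N)] for _ in range(N)]

def neighbours(i, j):
    return [((i - 1) % N, j), ((i + 1) % N, j),
            (i, (j - 1) % N), (i, (j + 1) % N)]

def fitness(i, j):
    # total payoff from games against the four immediate neighbours only
    return sum(PAYOFF[(grid[i][j], grid[x][y])] for x, y in neighbours(i, j))

def step(mutation=0.01):
    scores = [[fitness(i, j) for j in range(N)] for i in range(N)]
    new = [row[:] for row in grid]
    for i in range(N):
        for j in range(N):
            # imitate the most successful neighbour if it outperformed us
            best = max(neighbours(i, j), key=lambda c: scores[c[0]][c[1]])
            if scores[best[0]][best[1]] > scores[i][j]:
                new[i][j] = grid[best[0]][best[1]]
            if random.random() < mutation:
                new[i][j] = random.choice("CD")
    grid[:] = new

for _ in range(50):
    step()
```

The essential contrast with the round-robin tournament is visible in `fitness` and `step`: both play and imitation are confined to the local neighbourhood rather than ranging over the whole population.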
A similar approach was taken by Xavier Vilà ("Adaptive Artificial Agents Play a Finitely Repeated Discrete Principal-Agent Game", pp. 437-456), who also used finite automata (Moore machines in particular) which play a limited number of rounds in a round-robin tournament. Vilà finds that his agents converge to a uniform set of automata (some principals and some agents) which are not too different from the analytical result with utility-maximising agents.
It seems likely to this reviewer that the latter result is going to turn out to be general and robust since the same result has been obtained in simulations of actual organisations and in empirically plausible representations of transition economies (Moss et al., 1995, 1997a, 1997b). In general terms, learning from those you can observe and who appear to be more successful seems a more effective learning procedure than optimisation by sampling randomly from the whole model space. The contribution to this volume by Chattoe and Gilbert ("A Simulation of Adaptation Mechanisms in Budgetary Decision Making", pp. 401-418, see below) also found that imitation of apparently successful behaviour improved the results of decisions. Whether or not the best attained results from selective imitation are as good as those implied by rational choice theory or by global search methods such as genetic algorithms, selective imitation will still be a more efficient and rewarding mechanism in conditions where equilibria do not in practice prevail - the world we live in, for example.
We conclude that mainstream economists - even those as diverse as Caballé on the one hand and Binmore and Samuelson on the other - are working within a closed set of theories (not open to other theoretical traditions) even when they claim to explain empirically observed phenomena. The editors' suggestion (pp. 8-11) that economics is becoming more open to ideas and issues originating in other disciplines reflects, I believe, a failure to understand the element of economics which makes it simply and irreconcilably different from the rest of the social sciences, with the exception of those elements of, for example, political science or management science which are applications of mainstream economics. That key differentia of economics is its wholly imperative basis: economic modellers always specify the behavioural process in order to determine the characteristics of the states that emerge. These states always have some equilibrium properties, such as the unchanging proportions of the economic system in balanced growth equilibrium or the unchanging strategies of the Binmore-Samuelson models of evolutionarily stable strategies. Modellers who are not in the mainstream economic tradition - including some economists - model in a more declarative framework, so that responses to sets of conditions are specified and behaviour emerges. That behaviour naturally brings about new states, and those states in turn condition further behaviour. Consequently, the interplay of behavioural processes and states leads to the emergence of behavioural patterns and, possibly, norms. The outcomes of such behaviour need not be equilibrium states (whether market or selection equilibria). Certainly the behaviour is not specified in relation to equilibrium states as typified by Caballé.
There are several papers in this volume which demonstrate clearly that social simulation can usefully inform economic analysis. Two of these papers are particularly telling - one simulating a market with properties known to prevail in an actual market and one simulating learning behaviour to determine more general and abstract relations.
The first of these papers, "Market Organisation" by Weisbuch, Kirman and Herreiner (pp. 221-240), has been circulating for a few years now and has already been widely cited in the agent-based economics literature. The authors use a model based on the Marseilles fish market to analyse the effects of different market organisations on transactions patterns. They also take into account the effects of staggered transactions that result, for example, when buyers purchase less fish in the morning because they have learned that supply conditions will be more favourable for them in the afternoon.
The other, more abstract, paper is Terna's contribution on agent-based computational economics ("A Laboratory for Agent Based Computational Economics: The Self-development of Consistency in Agents' Behaviour", pp. 73-88). Terna represents agents' cognitive processes as artificial neural networks. In one of his two reported models, agents start out bartering goods with different degrees of durability; Terna finds that one of them, the least durable, ends up being bartered for each of the other three while none of the other three is bartered for another. In effect, the least durable good of all becomes a kind of commodity money. This is a remarkable result when considered in the light of the conventional presumption in monetary economics that money must be, above all, highly durable. There is also the historical experience of the emergence of gold and silver, both as durable as they get, as media of exchange. The reasons for Terna's result turn on the particular assumptions about the nature of transactions costs and the assumption that agents minimise the costs of holding assets and have, in effect, no competing objectives.
Although Terna does not address the discrepancy between his results and the historical record, it is important to note that his results can be seen as the first (or at least the first published) step in a process of identifying the sort of behavioural processes that lead to the emergence of commodity money and, thence perhaps, to sets of institutions such as banks (and then central banks) that issue and maintain the value of fiduciary money. This is surely a value of social simulation that has not been given much emphasis in the volume under review - the interplay between simulations and the historical record. In building mainstream economic models, the modeller naturally tries a simple formulation, sees how the results compare with existing results, tweaks the model here and there and builds up a picture of the relationships between the assumptions and the equilibrium properties of the model. The econometric modeller tries a number of specifications of, for example, regression models and chooses the specification yielding the best fit. Thereafter, particularly in using such models for forecasting purposes, forecasts are changed (and usually improved) by changing the model in the light of forecaster judgement. This process was documented by Wallis (1993) and modelled by Moss, Artis and Ormerod (1994). The problem is that mainstream economic models do not support engagement between the model builder and the analysis of the processes which generate the observations, though the forecasting process does that in a non-formal and hence non-rigorous way.
In the case of Terna's model, we can ask whether the characteristics of the commodities on which he focused are the appropriate characteristics. In particular we can look at his specification of durability, the sources of transactions costs and so on. We can even look at the historical record of instances in which assets came to be media of exchange. Social simulations such as Terna's offer the opportunity to engage in an integrated analysis of economic as well as other social phenomena by appealing to cognitive science, psychological experiments, anthropological and other historical data.
Even where equilibrium is not a feature of the model or analysis, it appears that the habits of thought imposed by practice or training in mainstream economics lead some economists to take a very different view of the importance of plausibility in the specification of behaviour. An example of such a difference is found in the paper by Andersen ("An Evolutionary Approach to Structural Economic Dynamics", pp. 287-293), which looks at issues of structural economic change. Those of us who have been simulating transition economies have been concerned with changing behaviour in privatised enterprises and shifts in the composition of outputs. Andersen specifies a model with a single good produced by labour and "knowledge"; one type of agent (which is both producer and consumer); a lexicographic utility function; a factor supply called "workers" attached irrevocably to a firm, with all firms having the same number of workers; and nobody learning anything. It is hard to see what can be learned from this model about structural economic change.
There is, it must be said, one gem of a paper drawing on mainstream economics: "Market Organisations for Controlling Smart Matter" (pp. 241-257) by Guenther, Hogg and Huberman. The problem they address is the control of physical systems in an essentially unstable configuration. Their example is a set of balls balanced on sticks and connected to each other by springs. Standard physics gives them the physical properties of the system and the basis for a simulation of controllers to maintain the balls in an upright position. Each controller has a stock of energy which it can augment by purchasing more from other agents and, of course, reduce by using the energy to keep its ball upright when it starts to fall over. Each agent has a utility function determining its trades in energy at given prices. Positive utility is obtained from keeping the ball upright and from holding wealth. External power sources supply the energy to the controllers. Profits obtained by the power sources are distributed equally back to the controllers as purchasing power. Guenther et al. demonstrate that this market mechanism keeps all of the linked balls close to their upright positions with greater precision and less energy use than individual controllers not linked by the market mechanism.
This is precisely the sort of situation in which agents behaving as constrained optimisers do what is required of them to be in, or close to, equilibrium. For one thing there is only a single market with well behaved utility and production functions so that there are none of the discontinuities and multiple equilibria associated with general equilibrium models. Second, there is no uncertainty in the sense of events that could occur which have not been anticipated by the agents.
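The flavour of the mechanism can be conveyed by a toy sketch in which each controller's demand for energy rises with its ball's displacement and a single price clears the market. The dynamics and functional forms here are crude illustrative assumptions, far simpler than the physics and the utility functions of the paper.

```python
import random

# Toy market-based control in the spirit of Guenther, Hogg and Huberman:
# energy flows, via a market-clearing price, to the controllers whose balls
# lean furthest from upright.

class Controller:
    def __init__(self):
        self.displacement = random.uniform(-1.0, 1.0)  # ball's lean

    def demand(self, price):
        # willingness to buy energy grows with displacement, falls with price
        return abs(self.displacement) / price

    def correct(self, energy):
        # spending energy pushes the ball back towards upright
        self.displacement *= max(0.0, 1.0 - energy)

def clear_market(controllers, supply=1.0):
    # the price at which total demand equals the external energy supply
    total = sum(abs(c.displacement) for c in controllers)
    price = max(total, 1e-9) / supply
    for c in controllers:
        c.correct(c.demand(price))

controllers = [Controller() for _ in range(10)]
for _ in range(20):
    for c in controllers:
        c.displacement += random.gauss(0.0, 0.05)  # disturbances
    clear_market(controllers)
print([round(c.displacement, 3) for c in controllers])
```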
A further development they report is a comparison of different market structures. The market comprises actuators which get information about ball displacements from sensors. The market structure specifies the strength of the signal sent by each sensor to each actuator. Considering information only from a very narrow range of sensors (those closest to the actuator within an organisation) results in more power consumption and more disequilibrium than drawing information from a wider range of sensors, with the distance between sensor and actuator a function of organisational structure.
This paper demonstrates that mainstream economic market and organisational models do have their uses. Of course, there is no reason to believe that these uses are in the analysis of actual markets and organisations.
We have already looked at emergence as manifest in the papers by Terna, by Weisbuch et al., by Pedone and Parisi and by Klos. There are a number of other papers in this volume studying emergent behaviour. Conte and Paolucci, for example ("Tributes or Norms? The Context-dependent Rationality of Social Control", pp. 187-193) consider the circumstances in which aggressive behaviour can persistently maintain social control. Agents have several different approaches: some observe norms of non-aggression, some issue ultimata and some proceed strategically by threatening only the weaker and giving way to the stronger. Although the behaviour itself is not emergent, the characteristics of the society in terms of efficiency and fairness are.
There are several papers in which agent cognitive capacities are represented by neural networks. Pistolesi, Veneziano and Castelfranchi ("Global vs Local Social Search in the Formation of Coalitions", pp. 203-209) use this device to investigate social dependence networks, comparing conditions in which any agent can communicate with any other agent in the network against conditions in which agents' communication abilities are limited and therefore more local. Agents were located on a two-dimensional torus and were given resources to exchange with other agents. The pattern and social characteristics of the coalitions that emerged were different in each case.

Ballot, Merlateau and Meurs ("Personnel Policies, Long Term Unemployment and Growth: An Evolutionary Model", pp. 259-278) find that, when firms hire workers in order of efficiency, growth is smoother and unemployment lower. There is also less long-term unemployment. The model is an OLG model (as was Caballé's) but individuals do not know enough about their environment to maximise utility, although households do choose whether to become educated (giving up income while being educated) on the basis of a utility function and an estimate of likely earnings with and without education. Education gives them access to a wider range of jobs, some of which are more skilled and better paying than others. It is the firms that are represented by neural networks and so learn, and generalise their learning, over the simulation run. The model is extensive and contains elements from both mainstream economics and artificial intelligence. Macro results are generated by micro behaviour. And clearly, the authors were surprised by their results - a key index of the extent of emergence in our models.
Taking the range of papers in the volume apart from those particularly concerned with statistical analysis, it appears that the specification of the agents is the crucial distinction between simulations which support emergent individual and social behaviour and those which do not. All agents comprise perceptors, processors and effectors or, equivalently, sensors, processors and actuators (a schematic sketch of this decomposition follows below). There does not seem to be much consideration of the importance of the properties of the effectors: agents do whatever it is they intend to do in every paper that actually specifies agents (a couple of papers on macroeconomics do not). The range of perception has certainly been one issue, as we have seen in relation to the papers by Klos and by Pistolesi et al. More important for supporting emergence of behaviour is the nature of the processors. Neural networks as processors were used by Ballot et al., as noted above. Genetic algorithms were also used in several of the papers. Vilà's use of a GA has already been noted. Chattoe and Gilbert motivated their use of the GA by an analysis of interview data on budgeting in retired households. The particular form of GA they use is intended to capture realistic aspects of imitation and, therefore, social adaptation which, as noted above, generates better individual decisions than adaptation based on one's own experience alone.
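Schematically, and with names of my own rather than any drawn from the volume, that common decomposition is:

```python
# The sensor -> processor -> actuator decomposition common to the agents in
# the volume. The names are illustrative; in an emergence-supporting model
# the processor is adaptive (a neural network, say, or a GA-evolved rule),
# whereas the mainstream economics papers fix it as a constrained optimiser.

class Agent:
    def __init__(self, sense, process, act):
        self.sense = sense      # environment -> percepts (range of perception)
        self.process = process  # percepts -> intended action (may learn)
        self.act = act          # intention -> effect on the environment

    def step(self, environment):
        percepts = self.sense(environment)
        intention = self.process(percepts)
        return self.act(intention, environment)
```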
It was, in the opinion of this reviewer, a pity that a wider range of adaptive processors was not reported in agent specifications. It is nonetheless clear that emergence and adaptation are key elements in the developing social simulation literature, and these require representations of agent processors that entail learning and so change the range of events that can be perceived, the range of actions that can be effected and the relationships between perception and actuation. It is natural in this regard to note that the difference between the mainstream economics papers and the rest is that the former assume that everything can be perceived and that the processors are fixed.
Social simulation is a relatively new and still developing field. It encompasses a wide range of different approaches to the specification of agents and environments. The volume under review is a significant contribution to the developing toolset and concerns of the social simulation community. It will certainly support discussion, debate and controversy about the direction or, as suggested above, the independent directions which the various constituents of the social simulation community should take. The volume's coverage of the various social simulation approaches currently being developed is wide, but not complete. However, taken together with the earlier conference volume edited by Gilbert and Conte (1995), we have a fairly complete record of the state of social simulation in the mid 1990s. Such volumes are especially important in a field where, as Axelrod emphasised in his contribution, papers are published widely over journals of different disciplines. Journals such as JASSS may, and one hopes will, provide a common point of reference for members of the social simulation community interested in their widely differing applications. In the meantime, conference volumes such as that under review provide the canon of the field. This particular volume is a creditable contribution to that canon.
My colleague Bruce Edmonds and Edmund Chattoe offered a number of wise observations on an earlier draft. I also acknowledge with gratitude the financial support of the Economic and Social Research Council under contract number R000236179.
BARRO R. 1974. Are Government Bonds Net Wealth?, Journal of Political Economy, 82:1095-1117.
CARLEY K. and M. Prietula. 1994. ACTS Theory: Extending the Model of Bounded Rationality. In K. Carley and M. Prietula, editors, Computational Organization Theory, Lawrence Erlbaum Associates, Hillsdale, NJ.
GILBERT N. and R. Conte. 1995. Artificial Societies: The Computer Simulation of Social Life, UCL Press, London.
MILLER J. H. 1996. The Coevolution of Automata in the Repeated Prisoner's Dilemma, Journal of Economic Behavior and Organization, 29:87-112.
MOSS S., M. Artis and P. Ormerod. 1994. A Smart Automated Macroeconomic Forecasting System, Journal of Forecasting, 13:299-312.
MOSS S. and O. Kuznetsova. 1995. Modelling the Process of Market Emergence. In J. W. Owsinski and Z. Nahorski, editors, Modelling and Analysing Economies in Transition, MODEST, Warsaw, 125-138 and Report Number 95-12, Centre for Policy Modelling, Manchester Metropolitan University.
MOSS S., B. Edmonds and S. Wallis. 1997a. Validation and Verification of Computational Models with Multiple Cognitive Agents, Report Number 97-25, Centre for Policy Modelling, Manchester Metropolitan University.
MOSS S. and E.-M. Sent. 1997b. Boundedly versus Procedurally Rational Expectations, Report Number 97-30, Centre for Policy Modelling, Manchester Metropolitan University and forthcoming in A. Hughes Hallett and P. McAdam, editors, New Directions in Macroeconomic Modelling, Kluwer.
PARSONS T. 1937. The Structure of Social Action, Free Press, New York, NY.
RICHARDSON L. F. 1948. War Moods, Psychometrika, 13:147-174, 13:197-232.
WALLIS K. 1993. Comparing Macroeconometric Models, Economica, 60:225-237.
YE M. and K. Carley. 1995. Radar-Soar: Towards an Artificial Organisation Composed of Intelligent Agents, Journal of Mathematical Sociology, 20:219-246.