Pietro Terna (1998)
Journal of Artificial Societies and Social Simulation vol. 1, no. 2, <https://www.jasss.org/1/2/4.html>
To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary
Received: 6-Mar-1998 Published: 31-Mar-1998
Social scientists are not computer scientists, but their computing skills have to become better and better to cope with the growing field of social simulation and agent-based modelling techniques. A way to reduce the weight of software development is to employ generalised agent development tools, accepting both the limits necessarily present in the various packages and the subtle, dangerous differences between the concepts of agent used in computer science, artificial intelligence and the social sciences. Choosing tools based on the object-oriented paradigm that offer libraries of functions and graphic widgets is a good compromise. A product with this kind of capability is Swarm, developed at the Santa Fe Institute and freely available under the terms of the GNU license.
A small example of a model developed in Swarm is introduced, in order to show directly the possibilities arising from the use of these techniques, both as software libraries and as methodological guidelines. With simple agents - interacting in a Swarm context that handles both memory management and the simulation of time - we observe the emergence of chaotic sequences of transaction prices.
Herbert Simon is fond of arguing that the social sciences are, in fact, the hard sciences. For one, many crucially important social processes are complex. They are not neatly decomposable into separate subprocesses--economic, demographic, cultural, spatial--whose isolated analyses can be aggregated to give an adequate analysis of the social process as a whole. And yet, this is exactly how social science is organized, into more or less insular departments and journals of economics, demography, political science, and so forth. Of course, most social scientists would readily agree that these divisions are artificial. But, they would argue, there is no natural methodology for studying these processes together, as they coevolve.

The social sciences are also hard because certain kinds of controlled experimentation are hard. In particular, it is difficult to test hypotheses concerning the relationship of individual behaviors to macroscopic regularities, hypotheses of the form: If individuals behave in thus and such a way--that is, follow certain specific rules--then society as a whole will exhibit some particular property. How does the heterogeneous micro-world of individual behaviors generate the global macroscopic regularities of the society?
Another fundamental concern of most social scientists is that the rational actor--a perfectly informed individual with infinite computing capacity who maximizes a fixed (nonevolving) exogenous utility function--bears little relation to a human being. Yet, there has been no natural methodology for relaxing these assumptions about the individual.
Relatedly, it is standard practice in the social sciences to suppress real-world agent heterogeneity in model-building. This is done either explicitly, as in representative agent models in macroeconomics (Kirman, 1992), or implicitly, as when highly aggregate models are used to represent social processes. While such models can offer powerful insights, they "filter out" all consequences of heterogeneity. Few social scientists would deny that these consequences can be crucially important, but there has been no natural methodology for systematically studying highly heterogeneous populations.
Finally, it is fair to say that, by and large, social science, especially game theory and general equilibrium theory, has been preoccupied with static equilibria, and has essentially ignored time dynamics. Again, while granting the point, many social scientists would claim that there has been no natural methodology for studying nonequilibrium dynamics in social systems.
A Weak Notion of Agency. Perhaps the most general way in which the term agent is used is to denote a hardware or (more usually) software-based computer system that enjoys the following properties:
(i) autonomy: agents operate without the direct intervention of humans or others, and have some kind of control over their actions and internal state;
(ii) social ability: agents interact with other agents (and possibly humans) via some kind of agent-communication language (. . .);
(iii) reactivity: agents perceive their environment (which may be the physical world, a user via a graphical user interface, a collection of other agents, the INTERNET, or perhaps all of these combined), and respond in a timely fashion to changes that occur in it;
(iv) pro-activeness: agents do not simply act in response to their environment, they are able to exhibit goal-directed behaviour by taking the initiative.
A simple way of conceptualising an agent is thus as a kind of UNIX-like software process that exhibits the properties listed above. This weak notion of agency has found currency with a surprisingly wide range of researchers. For example, in mainstream computer science, the notion of an agent as a self-contained, concurrently executing software process, that encapsulates some state and is able to communicate with other agents via message passing, is seen as a natural development of the object-based concurrent programming paradigm (. . .) This weak notion of agency is also that used in the emerging discipline of agent-based software engineering: [Agents] communicate with their peers by exchanging messages in an expressive agent communication language. While agents can be as simple as subroutines, typically they are larger entities with some sort of persistent control.
A Stronger Notion of Agency. For some researchers - particularly those working in AI - the term 'agent' has a stronger and more specific meaning than that sketched out above. These researchers generally mean an agent to be a computer system that, in addition to having the properties identified above, is either conceptualised or implemented using concepts that are more usually applied to humans. For example, it is quite common in AI to characterise an agent using mentalistic notions, such as knowledge, belief, intention, and obligation (. . . ) Some AI researchers have gone further, and considered emotional agents (. . .) (Lest the reader suppose that this is just pointless anthropomorphism, it should be noted that there are good arguments in favour of designing and building agents in terms of human-like mental states . . .).
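To make the weak notion concrete, the sketch below renders an agent as an object that encapsulates its own state, reacts to messages coming from its environment and takes the initiative in a step of activity of its own. It is written in the Objective-C style used by Swarm and introduced in the next section; the class ReactiveAgent, its variables and its methods are invented for this illustration and are not part of any existing library.

#import <objc/Object.h>

/* A toy agent: encapsulated state, a reactive method and a pro-active one. */
@interface ReactiveAgent: Object
{
  double belief;   /* internal state, hidden from other agents */
  double target;   /* a simple goal the agent pursues */
}
- perceive: (double) signal;   /* reactivity: respond to changes in the environment */
- step;                        /* pro-activeness: act on the agent's own initiative */
@end

@implementation ReactiveAgent
- perceive: (double) signal
{
  belief = 0.9 * belief + 0.1 * signal;   /* revise internal state after a perception */
  return self;
}
- step
{
  if (belief < target)
    belief += 0.01;                       /* goal-directed adjustment, not triggered from outside */
  return self;
}
@end

int main (void)
{
  id anAgent = [ReactiveAgent new];
  [anAgent perceive: 0.5];   /* the environment (or another agent) sends a message */
  [anAgent step];            /* the agent then acts by its own rules */
  return 0;
}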
[targetObject messageArg1: var1 Arg2: var2]
where targetObject is the recipient of the message, messageArg1:Arg2: is the name of the message (the selector) to send to that object, and var1 and var2 are arguments passed along with the message.
Objective-C messages are keyword/value oriented, which is why the selector messageArg1:Arg2: is interspersed with the arguments. The idea of Swarm is to provide an execution context within which a large number of objects can "live their lives" and interact with one another in a distributed, concurrent manner.
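As a concrete illustration of this syntax (a minimal sketch in plain Objective-C; the class TraderAgent, its instance variables and the selector setPrice:Quantity: are invented for this example and do not belong to the Swarm libraries or to the model presented below), the following fragment declares an object that answers such a keyword message and shows the corresponding message send:

#import <objc/Object.h>

/* A toy receiver holding two pieces of internal state. */
@interface TraderAgent: Object
{
  double price;
  int quantity;
}
- setPrice: (double) aPrice Quantity: (int) aQuantity;
@end

@implementation TraderAgent
- setPrice: (double) aPrice Quantity: (int) aQuantity
{
  price = aPrice;       /* first argument, bound to the "setPrice:" keyword */
  quantity = aQuantity; /* second argument, bound to the "Quantity:" keyword */
  return self;          /* returning self, as Swarm code conventionally does */
}
@end

int main (void)
{
  id aTrader = [TraderAgent new];        /* the recipient of the message */
  [aTrader setPrice: 10.5 Quantity: 3];  /* the selector is setPrice:Quantity: */
  return 0;
}

The keyword parts of the selector are interleaved with the arguments at the call site, exactly as in the schematic messageArg1:Arg2: form above.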
In the context of the Swarm simulation system, the generic outline of an experimental procedure takes the following form (a minimal code sketch of the model side is given after the list).
i. Create an artificial universe replete with space, time, and objects that can be located, within reason, to certain "points" in the overall structure of space and time within the universe, and allow these objects to determine their own behavior according to their own rules and internal state in concert with sampling the state of the world, usually only sparsely.
ii. Create a number of objects which will serve to observe, record, and analyze data produced by the behavior of the objects in the artificial universe implemented in step i.
iii. Run the universe, moving both the simulation and observation objects forward in time under some explicit model of concurrency.
iv. Interact with the experiment via the data produced by the instrumentation objects to perform a series of controlled experimental runs of the system.
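To give an idea of how steps i and iii translate into code, the fragment below sketches the model-swarm side of this procedure, following the skeleton used in the Swarm tutorials: objects are created in buildObjects, a schedule of actions is built in buildActions, and the schedule is activated in activateIn:. The class name BookModelSwarm, the hypothetical Trader agent class and the step message are assumptions of this illustration, not the actual code of the model presented later; the Swarm header imports (which vary slightly across Swarm releases), the observer swarm and the main program are omitted.

/* Imports of the Swarm headers (Swarm, Schedule, List, activity) omitted. */

@interface BookModelSwarm: Swarm
{
  int numAgents;       /* how many agents live in the artificial universe */
  id agentList;        /* the objects of step i */
  id modelSchedule;    /* the explicit model of time of step iii */
}
- buildObjects;
- buildActions;
- activateIn: (id) swarmContext;
@end

@implementation BookModelSwarm

- buildObjects
{
  int i;

  [super buildObjects];
  agentList = [List create: [self getZone]];               /* step i: populate the universe */
  for (i = 0; i < numAgents; i++)
    [agentList addLast: [Trader create: [self getZone]]];  /* Trader is a hypothetical agent class */
  return self;
}

- buildActions
{
  [super buildActions];
  modelSchedule = [Schedule createBegin: [self getZone]];
  [modelSchedule setRepeatInterval: 1];                    /* repeat the same actions at every tick */
  modelSchedule = [modelSchedule createEnd];
  [modelSchedule at: 0 createActionForEach: agentList message: M(step)]; /* each agent acts by its own rules */
  return self;
}

- activateIn: (id) swarmContext
{
  [super activateIn: swarmContext];   /* step iii: run under Swarm's model of concurrency */
  [modelSchedule activateIn: self];
  return [self getActivity];
}

@end

The observation side of steps ii and iv is typically built in the same way, in a separate observer swarm whose own schedule drives the graphical probes and data-collection objects.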
The important part (. . .) is that the published paper includes enough detail about the experimental setup and how it was run so that other labs with access to the same equipment can recreate the experiment and test the repeatability of the results. This is hardly ever done (or even possible) in the context of experiments run in computers, and the crucial process of independent verification via replication of results is almost unheard of in computer simulation. One goal of Swarm is to bring simulation writing up to a higher level of expression, writing applications with reference to a standard set of simulation tools.
Our socioeconomic system is a complicated structure containing millions of interacting units, such as individuals, households, and firms. It is these units which actually make decisions about spending and saving, investing and producing, marrying and having children. It seems reasonable to expect that our predictions would be more successful if they were based on knowledge about these elemental decision-making units: how they behave, how they respond to changes in their situations, and how they interact. In comparison to agent-based modeling, micro-simulation has more of a top-down character, since it models behavior via equations statistically estimated from aggregate data, not as resulting from simple local rules.
BELTRATTI, A., Margarita, S. and Terna, P. 1996. Neural Networks for Economic and Financial Modelling. London: ITCP.
CONTE, R., Hegselmann, R. and Terna, P. (eds.) 1997. Simulating Social Phenomena. Berlin: Springer.
EPSTEIN, J. M. and Axtell, R. 1996. Growing Artificial Societies - Social Science from the Bottom Up. Washington, DC: Brookings Institution Press and Cambridge, MA: MIT Press.
GESSLER, N. 1997. Review of Growing Artificial Societies - Social Science from the Bottom Up. Artificial Life, 3: 237-242.
KIRMAN, A. 1992. Whom or What Does the Representative Individual Represent? Journal of Economic Perspectives, 6: 117-136.
MINAR, N., Burkhart, R., Langton, C. and Askenazi, M. 1996. The Swarm Simulation System: A Toolkit for Building Multi-agent Simulations. Santa Fe, NM: Santa Fe Institute.
RUSSELL, S. J. and Norvig, P. 1995. Artificial Intelligence - A Modern Approach. Upper Saddle River, NJ: Prentice Hall.