Reviewed by
Bruce Edmonds
Centre for Policy Modelling, Manchester Metropolitan University
Regardless of the existence and nature of indeterminacy at the micro-physical level, individuals and society are rarely, in practice, deterministic. That is to say, systems of interacting individuals that are very similar will often develop in different directions. If there were a level of determinism, then in some circumstances the more similar two social situations were, the more similar their outcomes would be. However, this is not true of most of the social phenomena we observe[1]. Thus, even in cases where there is a very large data sample and we are dealing with severely restricted behavioural options (such as voting in UK general elections), we half expect surprising outcomes[2]. One could even go as far as saying that the study of indeterminacy in social science is like a study of all non-pink things: what is ruled out (the deterministic) is so rare as to be the exception, and everything else is included.
Ironically, in social simulation we are using technology that is specifically designed to be deterministic (programs running on computers) in order to understand phenomena that are pervaded by indeterminism[3]. This is the reason that almost all social simulations utilise random number generators as part of their code[4]. We thus deliberately make our simulations indeterministic, at least partly because this improves their face-validity[5]. So one reason for adding indeterminacy into our simulations is to match observed indeterminacy.
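To make this concrete, here is a minimal sketch of my own (not anything from the book; the model, names and parameter values are all invented for illustration): a toy opinion model in which the only indeterminacy comes from a seeded pseudo-random number generator, so that runs of an otherwise deterministic program diverge.

    import random

    def run_once(seed, n_agents=50, steps=200, p_flip=0.1):
        # One run of a toy opinion model (illustrative only): each step a
        # random agent either copies a random other agent or, with
        # probability p_flip, flips to a random opinion.
        rng = random.Random(seed)
        opinions = [rng.choice([0, 1]) for _ in range(n_agents)]
        for _ in range(steps):
            i = rng.randrange(n_agents)
            if rng.random() < p_flip:
                opinions[i] = rng.choice([0, 1])  # the random "stand-in"
            else:
                opinions[i] = opinions[rng.randrange(n_agents)]
        return sum(opinions) / n_agents  # final share holding opinion 1

    # The same model, differing only in the generator's seed, can end up
    # with quite different opinion mixes:
    print([run_once(seed) for seed in range(5)])

The program itself remains entirely deterministic: repeating a run with the same seed reproduces its outcome exactly, while changing the seed can change where the model ends up.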
However, such indeterminacy is not the most common kind in social simulations. Rather it is of the following sort: when an element or process is either (a) unknown to the simulator or (b) judged to be irrelevant to it, then a random number generator is inserted as a "stand-in" for it in the simulation code. It is this stand-in that adds indeterminacy into the simulation outcomes. For example, in most agent decision-making algorithms there is some random element, even though there is no evidence that humans commonly use such randomness. Occasionally the safety of this substitution has some backing from the evidence (e.g. some data about the process outcomes looks random), but more commonly it is done without such backing. Presumably, in the latter case, it is simply assumed that since the stand-in is random its influence will also be random, and thus can be eliminated by averaging the results over enough independent runs. That this need not be the case in social situations can be seen where the outcome depends on an individual detecting or guessing a pattern: a situation where a guessed-at pattern is unknown to the simulator but not random may well have a different outcome to one where it is substituted by an effectively unguessable generator (which includes most pseudo-random number generators), as the sketch below illustrates.
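The following toy illustration is again my own (not from the book; the learner and its parameters are invented for the purpose): a simple pattern-guessing agent fares very differently against an unknown-but-patterned input than against a pseudo-random stand-in, so no amount of averaging over random runs would recover the patterned outcome.

    import itertools
    import random
    from collections import defaultdict

    def hit_rate(source, steps=1000):
        # Fraction of steps on which a simple order-1 Markov learner
        # correctly predicts the next symbol drawn from `source`: it
        # guesses that the symbol which most often followed the current
        # one so far will follow it again.
        follows = defaultdict(lambda: [0, 0])  # follows[prev][s]: times s followed prev
        prev = next(source)
        correct = 0
        for _ in range(steps):
            guess = 0 if follows[prev][0] >= follows[prev][1] else 1
            nxt = next(source)
            correct += guess == nxt
            follows[prev][nxt] += 1
            prev = nxt
        return correct / steps

    def random_bits():
        while True:
            yield random.randint(0, 1)

    patterned = itertools.cycle([0, 1])   # unknown to the simulator, but not random
    print(hit_rate(patterned))            # ~1.0: the pattern is quickly guessable
    print(hit_rate(random_bits()))        # ~0.5: the pseudo-random stand-in is not

Averaging any number of runs against the random stand-in still gives a hit rate of about 0.5, which is nothing like the near-1.0 outcome when the input is patterned, even though both inputs are equally unknown to the simulator.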
Such basic decisions as to the relevance of an input to a model and an output from it go to the heart of theory-making in the social sciences. In a case where all possible inputs are crucially relevant to all outputs of interest, modelling or theorising is hopeless. However, if some inputs are either irrelevant or unimportant to the outputs of concern, then it will be possible to swap these for an arbitrary input (e.g. a random one) and thus simplify the model to the connections between those that are left. The very feasibility of social simulation (in the sense of representing an observed social process well enough for the simulation to be useful in understanding or predicting aspects of that process) depends on the possibility of at least some such simplifications. Otherwise one is in a situation where the identifiable causes of any event or property include almost any of the possible inputs or conditions of that situation - a phenomenon Michael Wheeler called 'causal spread' (Wheeler and Clark 1999).
The main way of obtaining some limitation on causal spread is to restrict the intended scope of a simulation model to a recognisable "context". Thus, for example, the scope of a model of collective decision making might be restricted to occasions when everyone has a high probability of talking to everyone else and the different possible outcomes are valued similarly by all participants. In a sense the context identifies which factors (the "background" factors) are either constant or can be safely assumed to be irrelevant to the "foreground" features which are the target of the modelling. The heuristic of dividing our reasoning and learning about the world into separate contexts does seem to work, and is fundamental to many human cognitive abilities (Edmonds 1999). Most modelling, especially social simulation, occurs within a context and does not attempt to relate phenomena across contexts. It is the careful selection of context that can make social simulation feasible.
Within a context, there are possible sources of (in practice) indeterminacy due to chaos or other unpredictability of the processes there. There are also those sources of indeterminacy that come from outside the context - that is, they are a kind of "leakage" into the context from external factors. This extra-contextual leakage is a good characterisation of noise (Edmonds 2009). Dealing with in-context and extra-contextual indeterminacy requires different approaches, due to their different natures.
Unfortunately none of the issues above are dealt with in this book - in fact, indeterminacy itself is not really dealt with in this book. Rather, this is a collection of papers resulting from a series of seminars at the University of Pennsylvania, loosely organised around the topic. This is despite the fact that the word "indeterminacy" is in the book's title, appears in the title of almost every chapter, and the editor makes a big effort to link all the contributions to this theme in his introductory and concluding chapters.
Thus Chapter 3, "Indeterminacy and Basic Rationality" by Russell Hardin, is really an argument against the obsession in economics with reaching a particular equilibrium. Chapter 5, "Reliable Cribs: Decipherment, Learnability, and Indeterminacy" by Robin Clark, argues that communication is better understood as a socially situated process rather than some kind of "noisy telepathy". Chapter 6, "Vagueness, Indeterminacy, and Uncertainty" by Steven Gross, is concerned with vagueness and, in particular, the "sorites paradox" - it examines various methods of formalising vagueness (many-valued logic, fuzzy logic, etc.) and concludes that none of them solve the paradox. Chapter 9, "Function and Indeterminacy: Brain and Behavior" by Ruben Gur, Diego Contreras and Raquel Gur, argues that present techniques for directly measuring the brain are insufficient to determine the exact state of the neural network, and thus that either theoretical constraints or a statistical approach to identifying regularities are necessary. Chapter 11, "Context, Choice, and Issues of Perceived Determinism in Music" by Jay Reise, looks at the illusion of determinacy from the point of view of a listener. Chapter 12, "History and Indeterminacy: Making Sense of Pasts Imperfect" by Warren Breckman, surveys various conceptions of historical contingency, calling for a "historicisation" of these approaches themselves. Chapter 13, "Adaptive Planning in Dynamic Societies - Giving Sense to Futures Conditional" by Anthony Tomazinis, argues for a participatory approach to urban planning.
The remaining chapters do, at least, touch upon indeterminacy. Three of them deal with indeterminacy in the restricted sense of chaos. Chapter 7 ("Chaos, Complexity, and Indeterminism" by Vadim Batitsky and Zoltan Domotor) and Chapter 8 ("Structure and Indeterminacy in Dynamical Systems" by Zoltan Domotor) are closely linked in content. They distinguish between ontological and epistemic indeterminacy, basically arguing that only the latter is necessary (the former chapter by arguing that it is almost impossible to show that a process is chaotic, and the latter by showing that there is a meta-framework in which an indeterministic process can be formalised as a deterministic one). Chapter 10, "Process Unpredictability in Deterministic Systems" by Haim Bau and Yochanan Shachmurove, is a straightforward survey of chaos, adding the now-traditional hope that, since some chaotic outcomes result from fairly simple systems, many other systems will also turn out to have simple origins - which is like observing some water flowing down a straight pipe and then hoping that many observed flows of water can be modelled as a straight line.
The three remaining chapters address some aspects of indeterminacy in a broader sense. Chapter 2, "Indeterminacy and Freedom of the Will" by Paul Guyer, looks at the ethical consequences of indeterminism, arguing that although determinism is often taken as an indication that one cannot be responsible for anything, indeterminism is no basis for responsibility either. Chapter 4, "Interpretation and Indeterminacy" by Aryeh Botwinick, is a rather rambling piece claiming that the very concept of underdetermination (a variety of indeterminism) is itself underdetermined, and goes on to argue for more expansive forms of logic. Finally, Chapter 14, "Four (In)Determinabilities, Not One" by Klaus Krippendorff, considers determinability and concludes that there are at least four kinds: observational determinability - predictability from past observations; synthetic determinability - realisability from the available resources; hermeneutic determinability - usability and understandability within a community; and constitutive determinability - reconstitutability or viability of constitutional identity. This expands the concept of determinability in useful ways, but strays a considerable distance from the original topic.
Some books give one new insight into, and clarity about, a phenomenon of interest; this book does not. Instead it provides a series of views about a host of topics only loosely connected with indeterminacy and limited by its particular academic and geographic horizons[6].
2 This is not a contradiction in terms: it is the nature of a particular outcome that can be surprising, not the fact that it is surprising.
3 Whether this indeterminism is fundamental to social phenomena, or whether any underlying determinism is simply swamped by processes that are indeterministic in practice, is an interesting but irrelevant debate here.
4 The difference between pseudo-random and 'truly' random is another interesting but ultimately irrelevant issue, which I am not going to address here.
5 That is, they intuitively look and feel more like the phenomena they represent.
6 All the contributors have a close connection with the University of Pennsylvania and almost all are East Coast academics.
EDMONDS, B (2009) The nature of noise. In Squazzoni, F (Ed.), Epistemological Aspects of Computer Simulation in the Social Sciences. Lecture Notes in Artificial Intelligence, 5466, pp. 169-182.
WHEELER, M and CLARK, A (1999) Genic representation: reconciling content and causal complexity. British Journal for the Philosophy of Science, 50(1), pp. 103-135.