Wendelin Reich (2004)
Reasoning About Other Agents: a Plea for Logic-Based Methods
Journal of Artificial Societies and Social Simulation
vol. 7, no. 4
<https://www.jasss.org/7/4/4.html>
To cite articles published in the Journal of Artificial Societies and Social Simulation, reference the above information and include paragraph numbers if necessary
Received: 28-Dec-2003 Accepted: 05-Mar-2004 Published: 31-Oct-2004
Transcript 1
Agent 1: "(My price is) $500"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $350"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $300"

Transcript 2
Agent 1: "(My price is) $500"
Agent 2: "(I offer) $300"
Agent 1: "(My price is) $450"
Agent 2: "(I offer) $320"
Agent 1: "(My price is) $320"
[Table with columns "Hypotheses for Transcript 1" and "Hypotheses for Transcript 2"; the table body did not survive extraction.]
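Exchanges of this kind can be generated by a simple concession model. The sketch below is our own illustration, not the article's: the seller (Agent 1) closes a fixed fraction of the gap to the buyer's standing offer each round, the buyer closes a (possibly zero) fraction of the remaining gap, and the exchange ends when the seller meets the offer. The function name and the rates are illustrative assumptions; with other concession rules, patterns like those in the two transcripts emerge.

```python
def bargain(seller_start, buyer_start, seller_rate, buyer_rate, rounds=10):
    """Simulate an alternating-offer exchange (illustrative model).

    Each round the seller concedes `seller_rate` of the gap to the
    buyer's offer, then the buyer concedes `buyer_rate` of the
    remaining gap. Returns the transcript as (speaker, amount) pairs.
    """
    price, offer = seller_start, buyer_start
    transcript = [("Agent 1", price), ("Agent 2", offer)]
    for _ in range(rounds):
        price = round(price - seller_rate * (price - offer))
        transcript.append(("Agent 1", price))
        if price <= offer:  # deal reached: seller meets the standing offer
            transcript[-1] = ("Agent 1", offer)
            break
        offer = round(offer + buyer_rate * (price - offer))
        transcript.append(("Agent 2", offer))
    return transcript
```

With `bargain(500, 300, 0.5, 0.0)` the seller's prices fall monotonically from $500 toward a non-conceding buyer's $300, much as in Transcript 1; varying the two rates yields the mutual-concession shape of Transcript 2.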
(1) Parametric input sensitivity. The values that a model does or does not process in the form of parameters sensitize or desensitize it to variations in its application or execution context. The exact mathematical form of these parameters defines the shape, the range and the complexity such variations may take.
(2) Range and mathematical structure of output values. The internal logic of the model and the specific details of its implementation constrain the range of, and the possible or enforced mathematical relationships between, its output values. In principle, it is always possible to construct input-output models that are functionally equivalent (i.e., mapping identical input to identical output) yet based on different transformative mechanisms. The remaining four criteria provide guidance in the process of selecting between such models.
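The claim about functional equivalence can be made concrete with a toy example of our own (not the article's): two implementations that map identical input to identical output through entirely different transformative mechanisms, one a closed-form formula, the other an explicit enumeration.

```python
def triangular_formula(n: int) -> int:
    """Closed-form mechanism: one arithmetic step."""
    return n * (n + 1) // 2

def triangular_iterative(n: int) -> int:
    """Enumerative mechanism: n successive additions."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total

# Functionally equivalent: identical input -> identical output,
# yet the internal mechanisms differ, and with them understandability,
# changeability and execution cost -- which is exactly why criteria
# (3)-(5) below are needed to choose between such models.
assert all(triangular_formula(n) == triangular_iterative(n) for n in range(100))
```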
(3*) Degree of realism. The implementation details of the transformative mechanism defined by a model represent the model's ontology (its "worldview"). It is sometimes argued that, other things being equal, a tight isomorphism between the transformative mechanism and the modeled domain is preferable. In practice, two alternative techniques almost never fulfill the other-things-being-equal clause; therefore, we consider this criterion too weak to warrant application in subsequent sections. Specifically, as some readers may expect that we will criticize non-logic-based representations of social meta-reasoning for their "lack of realism", we state beforehand that this is not the case. From a psychological point of view, such criticism would be debatable at any rate.
(3) Understandability. As an alternative to evaluating the degree of realism built into a model, it does seem reasonable to assess whether the transformative mechanism of the model is easily or at least generally understandable by the modeler. Whatever else "understanding" a model's transformative mechanism may mean, it is clear that it involves being able to explain in broad outline how variations in the modeled domain translate into variations in the output of the model. Thus, a model is only understandable if it allows us to establish a (possibly complex or far-fetched) isomorphism between model and modeled domain. This means that the criterion of understandability is actually a subjectivized form of the criterion of realism. However, in contrast to this latter criterion, understandability affects directly how suitable the model is for explaining empirical observations with reference to the modeled domain.
(4) Changeability. As mentioned, any model is arbitrarily changeable in a trivial sense, because it is always possible to construct an alternative but functionally equivalent model. Nonetheless, two equivalent models may well diverge with respect to the ease with which the modeler can make smaller changes to the model's internal structure: to better understand this structure, to experiment with the model's behavior, to correct the model, or to adapt it to variations in (or new information about) the modeled domain. Some models may support certain types of changes whereas alternative models support other types - in short, changeability is not a property of the model as such but is relative to the specific adaptation that needs to be carried out. Thus, for the comparisons that follow, we will keep in mind that in order to compare two models with respect to their changeability, one must have an idea of what changes are most likely to occur.
(5) Implementability and executability. In ABSS, we generally want to implement the model in an available programming language or modeling environment and run it through a series of simulations - both to demonstrate the correctness of the model and to experiment with its behavior (Macy and Willer 2002: p. 149). Although modern personal computers are, in practice, powerful enough to fulfill virtually all the computational needs arising in ABSS, some formal representations are complex enough that implementing suitable software from scratch is impractical. Similarly, off-the-peg programs are usually not available for every technique or calculus. Therefore, available or easily implementable software is a pragmatic advantage that can be difficult to disregard when choosing among alternative techniques.
2 Any continuous function can be approximated with arbitrary precision by a multilayer perceptron with a single hidden layer; two hidden layers are sufficient to approximate any discontinuous function (Haykin 1999: p. 209).
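The single-hidden-layer case of this approximation claim can be illustrated constructively. The sketch below is our own, and it substitutes ReLU units for the sigmoidal units in Haykin's statement: a one-hidden-layer ReLU network computes a piecewise-linear function, and the piecewise-linear interpolant of a continuous f on an interval is exactly such a network, so refining the knot grid drives the error toward zero.

```python
import math

def relu(x):
    return max(0.0, x)

def mlp_interpolant(f, a, b, n_units):
    """Build a one-hidden-layer ReLU network interpolating f at
    n_units + 1 equally spaced knots on [a, b]; returns a callable."""
    h = (b - a) / n_units
    knots = [a + i * h for i in range(n_units + 1)]
    slopes = [(f(knots[i + 1]) - f(knots[i])) / h for i in range(n_units)]
    # One hidden ReLU per interior knot; output weights are slope changes.
    coeffs = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n_units)]
    bias = f(a)
    def g(x):
        return bias + sum(c * relu(x - k) for c, k in zip(coeffs, knots))
    return g

# Approximate sin on [0, pi] with 50 hidden units.
g = mlp_interpolant(math.sin, 0.0, math.pi, 50)
err = max(abs(g(i * math.pi / 500) - math.sin(i * math.pi / 500))
          for i in range(501))
# For smooth f the error shrinks as O(1 / n_units**2).
```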
3 Instead of providing links that may soon become invalid, we suggest that the interested reader enter any of these names into one of the Internet's better search engines - it is likely that the first hit will be the right link.
4 We have used LeanTAP for developing software for ABSS and will be most happy to share our experiences with readers.
BILLINGS D, DAVIDSON A, SCHAEFFER J and SZAFRON D (2002). The Challenge of Poker. Artificial Intelligence, 134. pp. 201-240.
CARPENTER J P (2000). Evolutionary Models of Bargaining: Comparing Agent-based Computational and Analytical Approaches to Understanding Convention Evolution. Computational Economics, 19. pp. 25-49.
DIGNUM F and SONENBERG L (2004). A Dialogical Argument for the Usefulness of Logic in MAS, RASTA 2003. Berlin: Springer.
EDMONDS B (2004a). Comments on "A Dialogical Argument for the Usefulness of Logic in MAS", RASTA 2003. Berlin: Springer.
EDMONDS B (2004b). How Formal Logic Can Fail to be Useful for Modelling or Designing MAS, RASTA 2003. Berlin: Springer.
EDMONDS B and MOSS S (2001). The Importance of Representing Cognitive Processes in Multi-agent Models. In DORFFNER G, BISCHOF H and HORNIK K (Eds.), ICANN 2001 (pp. 759-766). Berlin: Springer.
EDMONDS B, MOSS S and WALLIS S (1996). Logic, Reasoning and A Programming Language for Simulating Economic and Business Processes with Artificially Intelligent Agents. In EIN-DOR P (Ed.), Artificial Intelligence in Economics and Management (pp. 221-230). Boston: Kluwer.
FAGIN R, HALPERN J Y, MOSES Y and VARDI M Y (1995). Reasoning about Knowledge. Cambridge, MA: MIT Press.
FARATIN P, SIERRA C and JENNINGS N R (2002). Using Similarity Criteria to Make Issue Trade-offs in Automated Negotiations. Artificial Intelligence, 142. pp. 205-237.
GIGERENZER G and TODD P M (1999). Simple Heuristics That Make Us Smart. New York/Oxford: Oxford Univ. Press.
GMYTRASIEWICZ P J and DURFEE E H (1995). A Rigorous, Operational Formalization of Recursive Modeling, Proceedings of the First International Conference on Multi-Agent Systems (pp. 125-132). Menlo Park: AAAI Press/The MIT Press.
GRICE H P (1989). Meaning. In GRICE H P (Ed.), Studies in the Way of Words (pp. 213-223). Cambridge, MA: Harvard Univ. Press.
GUMPERZ J J (1982). Discourse Strategies. Cambridge: Cambridge Univ. Press.
GUMPERZ J J (1995). Mutual Inferencing in Conversation. In MARKOV I, GRAUMANN C F and FOPPA K (Eds.), Mutualities in Dialogue (pp. 101-123). Cambridge: Cambridge Univ. Press.
HARRIS P (1996). Desires, Beliefs, and Language. In CARRUTHERS P and SMITH P K (Eds.), Theories of Theories of Mind (pp. 200-220). Cambridge: Cambridge Univ. Press.
HAYKIN S (1999). Neural Networks: A Comprehensive Foundation. Englewood Cliffs, NJ: Prentice-Hall.
HEAP S H and VAROUFAKIS Y (1995). Game Theory: A Critical Introduction. London: Routledge.
HOEK W V D (2001). Logical Foundations of Agent-based Computing. In LUCK M (Ed.), ACAI 2001 (pp. 50-73). Berlin: Springer.
HUBER M J, DURFEE E H and WELLMAN M P (1994). The Automated Mapping of Plans for Plan Recognition. In MNTARAS R L D and POOLE D (Eds.), UAI '94: Proceedings of the Tenth Annual Conference on Uncertainty in Artificial Intelligence (pp. 344-351). San Francisco: Morgan Kaufmann.
LEVINSON S C (2000). Presumptive Meanings: The Theory of Generalized Conversational Implicature. Cambridge, MA: MIT Press.
MACY M and WILLER R (2002). From Factors to Actors: Computational Sociology and Agent-based Modeling. Annual Review of Sociology, 28. pp. 143-166.
MAUDET N (2003). Negotiating Dialogue Games. Autonomous Agents and Multi-Agent Systems, 7. pp. 229-233.
MOSS S (2001). Game Theory: Limitations and An Alternative. Journal of Artificial Societies and Social Simulation, 4(2). https://www.jasss.org/4/2/2.html.
OWEN G (1995). Game Theory. San Diego: Academic Press.
REICH W (2003). Dialogue and Shared Knowledge: How Verbal Interaction Renders Mental States Socially Observable. Uppsala University.
ROMP G (1997). Game Theory: Introduction and Applications. New York/Oxford: Oxford Univ. Press.
RUSSELL S J and NORVIG P (2003). Artificial Intelligence: A Modern Approach. Englewood Cliffs, NJ: Prentice-Hall.
SURYADI D and GMYTRASIEWICZ P J (1999). Learning Models of Other Agents using Influence Diagrams, Proceedings of the Seventh International Conference on User Modeling (pp. 223-232). Wien/New York: Springer.
THOYER S, MORARDET S, RIO P, SIMON L, GOODHUE R and RAUSSER G (2001). A Bargaining Model to Simulate Negotiations between Water Users. Journal of Artificial Societies and Social Simulation, 4(2). https://www.jasss.org/4/2/6.html.
WOOLDRIDGE M (2000). Reasoning About Rational Agents. Cambridge, MA: MIT Press.
© Copyright Journal of Artificial Societies and Social Simulation, [2004]