Michael Barber, Philippe Blanchard, Eva Buchinger, Bruno Cessac and Ludwig Streit (2006)
Expectation-Driven Interaction: a Model Based on Luhmann's Contingency Approach
Journal of Artificial Societies and Social Simulation
vol. 9, no. 4
<https://www.jasss.org/9/4/5.html>
Received: 29-Aug-2005 Accepted: 14-Sep-2006 Published: 31-Oct-2006
These concepts will be integrated into a model that explores interaction strategies using different types of interconnected memories. The interconnectedness of the memories is a precondition for obtaining agents “capable of acting.”
That information be measured by entropy is, after all, natural when we remember that information, in communication theory, is associated with the amount of freedom of choice we have in constructing messages. Thus for a communication source one can say, just as he would also say it of a thermodynamic ensemble, ‘This situation is highly organized, it is not characterized by a large degree of randomness or of choice—that is to say, the information (or the entropy) is low.’ (Weaver 1949, pg. 13)
By information we mean an event that selects system states. This is possible only for structures that delimit and presort possibilities. Information presupposes structure, yet is not itself a structure, but rather an event that actualizes the use of structures.... Time itself, in other words, demands that meaning and information must be distinguished, although all meaning reproduction occurs via information (and to this extent can be called information processing), and all information has meaning.... a history of meaning has already consolidated structures that we treat as self-evident today. (Luhmann 1995, pg. 67)
According to social systems theory, the historically evolved general meaning structure is represented on the micro level (i.e., psychic systems, agent level) in the form of the personal life-world (Luhmann 1995, pg. 70). A personal meaning-world represents a structurally pre-selected repertoire of possible references. Although the repertoire of possibilities is limited, selection is necessary to produce information. Meaning structure provides a mental map for selection but does not replace it. Altogether, the one is not possible without the other: information production presupposes meaning structure, and meaning is actualized through information production.
Within the present version of the ME model, agents do not have the option to reject a message or to refuse to answer. Their “freedom” is incorporated in the process of message selection. Agents can be distinguished by the number of messages they are able to use and by their selection strategies. The ongoing exchange of messages creates an interaction sequence with a limited—but possibly high—number of steps.
In social systems, expectations are the temporal form in which structures develop. But as structures of social systems expectations acquire social relevance and thus suitability only if, on their part, they can be anticipated. Only in this way can situations with double contingency be ordered. Expectations must become reflexive: it must be able to relate to itself, not only in the sense of a diffuse accompanying consciousness but so that it knows it is anticipated as anticipating. This is how expectation can order a social field that includes more than one participant. Ego must be able to anticipate what alter anticipates of him to make his own anticipations and behavior agree with alter’s anticipation. (Luhmann 1995, pg. 303)
| (1) |
| (2) |
We use similar notation for all the memories (and other probabilities) in this work.
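For concreteness, the memories can be pictured as N x N probability matrices over (message, response) pairs, matching the matrix plots in Figures 1-7. The following Python sketch fixes one possible representation; the class name, the uniform initialization, and the storage as joint (rather than conditional) probabilities are assumptions of the sketch, not specifications taken from the published equations.

```python
import numpy as np

N = 5  # illustrative number of distinct messages

def uniform_joint(n):
    """Uniform joint probability over all n * n (message, response) pairs."""
    return np.full((n, n), 1.0 / (n * n))

class AgentMemories:
    """The three interconnected memories of one agent (sketch).

    Each memory is an n x n matrix of probabilities over
    (received message, response) pairs: rows index the received
    message, columns the chosen response. Conditional response
    probabilities can be obtained by normalizing the rows.
    """
    def __init__(self, n=N):
        self.response_disposition = uniform_joint(n)  # updated only after accepted sequences (Eq. (17))
        self.ego_memory = uniform_joint(n)            # record of the agent's own responses
        self.alter_memory = uniform_joint(n)          # record of the partner's responses
```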
| (3) |
for all and . In Eq. (3), we assume that the memory has an infinite capacity, able to exactly treat any number of stimulus-response pairs. A natural and desirable modification is to consider memories with a finite capacity.
| (4) |
for all and . The memory of a particular message transmission decays exponentially.
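The finite-capacity case can be illustrated with an exponentially weighted update, in which the weight of a transmission observed k steps earlier shrinks like λ^k. The exact update and normalization of Eq. (4) are not reproduced here, so the Python sketch below fixes one plausible form; the infinite-capacity case of Eq. (3) would instead correspond to an unweighted average over all observed pairs.

```python
import numpy as np

def update_memory(memory, stimulus, response, lam):
    """Exponentially weighted update of a joint-probability memory (sketch).

    memory   : N x N matrix of probabilities over (stimulus, response) pairs
    stimulus : index of the message that was received
    response : index of the reply that was given
    lam      : decay factor in (0, 1); an observation made k steps ago keeps
               a weight proportional to lam**k, so old transmissions decay
               exponentially.
    """
    memory = lam * memory                     # old observations lose weight
    memory[stimulus, response] += 1.0 - lam   # the newest pair enters with weight 1 - lam
    return memory / memory.sum()              # keep the entries a probability matrix
```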
| (5) |
However, Eq. (5) holds only if the agent memories are both infinite or . The corresponding relation holds as well when the memories are both infinite or .
| (6) |
The base of the logarithm is traditionally taken as 2, measuring the entropy in bits, but in this work we take the base to be the number of different messages. With this choice, the entropy takes on values from a fixed interval, regardless of the number of messages.
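For reference, the entropy of Eq. (6) with the logarithm taken to base N can be computed by rescaling the natural-log entropy; a minimal helper (the flattening of matrices into vectors is an implementation convenience of this sketch):

```python
import numpy as np

def entropy_base_n(p, n):
    """Shannon entropy of the distribution p, using logarithms to base n.

    Zero entries contribute nothing to the sum. With n equal to the number
    of distinct messages, a distribution over the n single messages has
    entropy in [0, 1], and a joint distribution over the n * n message
    pairs has entropy in [0, 2].
    """
    p = np.asarray(p, dtype=float).ravel()
    nonzero = p[p > 0]
    return float(-np.sum(nonzero * np.log(nonzero)) / np.log(n))
```

For example, entropy_base_n(np.full(5, 0.2), 5) returns 1.0, the maximal value for a distribution over five messages.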
| (7) |
The parameters c1(A), c2(A), and c3(A) reflect the relative importance of the three factors discussed above, while c4(A) is an offset that provides a randomizing element. This randomizing element gives a mechanism for, e.g., introducing novel messages or transitions; it also plays an important role on mathematical grounds (see the appendix).
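The weighted combination in Eq. (7) can be sketched as follows. The factor vectors f1, f2, and f3 stand in for the three terms of Eq. (7), whose definitions are not reproduced here, and the clipping of negative weights is a safeguard of the sketch rather than part of the model, which instead relies on the offset term (see the appendix).

```python
import numpy as np

def selection_probability(f1, f2, f3, c1, c2, c3, c4):
    """Combine weighted factors into a message-selection probability (sketch).

    f1, f2, f3 : length-N factor vectors standing in for the terms of Eq. (7)
    c1, c2, c3 : relative weights of the three factors
    c4         : offset giving every message a baseline, randomizing weight
    """
    weights = c1 * np.asarray(f1) + c2 * np.asarray(f2) + c3 * np.asarray(f3) + c4
    weights = np.clip(weights, 0.0, None)   # sketch-level guard against negative weights
    return weights / weights.sum()          # normalize to a probability vector
```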
| (8) |
| (9) |
| (10) |
| (11) |
| (12) |
With the marginal probability of initial messages and the conditional probability for responses , stochastic simulations of the model system are relatively straightforward to implement programmatically.
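Such a simulation can be sketched in a few lines. The array names, the fixed sequence length, and the use of conditional probability matrices that stay constant during the sequence (in the model they are rebuilt from the evolving memories) are illustrative assumptions of the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sequence(p_initial, cond_a, cond_b, length):
    """Sample one interaction sequence of alternating messages (sketch).

    p_initial : length-N marginal probability of the initial message
    cond_a    : N x N matrix, cond_a[i, j] = P(A responds j | A received i)
    cond_b    : N x N matrix, the analogous conditional for agent B
    length    : total number of messages to generate
    """
    n = len(p_initial)
    messages = [rng.choice(n, p=p_initial)]   # agent B selects the initial message
    responder_is_a = True                     # agent A replies first
    while len(messages) < length:
        cond = cond_a if responder_is_a else cond_b
        messages.append(rng.choice(n, p=cond[messages[-1]]))
        responder_is_a = not responder_is_a
    return messages
```

In a full simulation, the sampled pairs would then feed the memory updates described above after each exchange.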
We address these briefly below. Many other questions are, of course, possible.
Figure 1. Block-structured response dispositions for (a) agent A and (b) agent B. The size of each circle is proportional to the value of the corresponding entry.
| (13) |
The constraint is in no way fundamental, but is useful for graphical presentation of simulation results.
Figure 2. Evolution of the distance between the probability vector of agent A at time t and the first mode of U(∞)A|B U(∞)B|A, as the number of interaction sequences increases.
Figure 3. Evolution of the joint entropy of the alter and ego memories with infinite (λ = 1) and finite (λ = 0.99) memories.
Figure 4. Average asymptotic memories for agent A after 128000 steps, with infinite memories. The matrices shown here are calculated by averaging over 10 initial conditions. The sizes of the red circles are proportional to the corresponding matrix elements, while the sizes of the blue squares are proportional to the mean square deviations.
Figure 5. Average asymptotic memories for agent A after 128000 steps, with finite memories (λ = 0.99). The matrices shown here are calculated by averaging over 10 initial conditions. The sizes of the red circles are proportional to the corresponding matrix elements, while the sizes of the blue squares are proportional to the mean square deviations.
Figure 6. Asymptotic joint entropy for the alter memory of agent A, with (a) infinite memories (λ = 1) and (b) finite memories (λ = 0.99). The plane represents c3 = 0. The colored lines in the c1-c2 plane are level lines for the joint entropy, while the black line shows where the c3 = 0 plane intersects the c1-c2 plane.
Figure 7. Banded response disposition. The size of each circle is proportional to the value of the corresponding entry.
| (14) |
The value of the distance defined in Eq. (14) always lies in a bounded interval.
Figure 8. Rates of change for the alter memory of the student agent approaching the response disposition of the teacher agent, shown for two values of the target distance in panels (a) and (b). The colored lines in the c1-c2 plane are level lines for the rates.
| (15) |
for all and . Note that Eq. (15) utilizes all of the messages exchanged, regardless of whether the agent ego and alter memories are infinite (i.e., λ = 1) or undergo “forgetting” (i.e., λ < 1). This is due to an assumed period of reflection, in which details of the interaction can be considered at greater length.
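Because every exchanged message enters with equal weight, the quantity used in Eq. (15) can be sketched as a simple empirical frequency over consecutive (message, response) pairs. Whether the count is restricted to the pairs in which the updating agent was the responder is not recoverable from this excerpt, so the pairing convention below is an assumption.

```python
import numpy as np

def empirical_pair_frequencies(messages, n):
    """Relative frequency of consecutive (received message, response) pairs.

    messages : list of at least two message indices from one sequence
    n        : number of distinct messages

    All exchanged messages enter with equal weight, independent of the
    decay parameters used for the ego and alter memories.
    """
    counts = np.zeros((n, n))
    for received, response in zip(messages[:-1], messages[1:]):
        counts[received, response] += 1.0
    return counts / counts.sum()
```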
| (16) |
Since both of its arguments are probability distributions, the distance must lie in a bounded interval. The value of the distance must be below an acceptance threshold that reflects the absorptive capacity of agent A, limiting which interaction sequences agent A accepts.
| (17) |
where the update rate lies between zero and one. The update rule in Eq. (17) is applied if and only if the distance falls below the acceptance threshold.
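Putting Eqs. (16) and (17) together, the acceptance test and the update of the response disposition can be sketched as follows. The particular distance used here (a Euclidean norm of the difference) and the convex-combination form of the update are assumptions of the sketch, not the paper's exact definitions.

```python
import numpy as np

def maybe_update_disposition(disposition, observed, threshold, rate):
    """Update the response disposition only for acceptable sequences (sketch).

    disposition : current N x N response-disposition matrix
    observed    : N x N empirical pair frequencies from the sequence
    threshold   : acceptance threshold (absorptive capacity of the agent)
    rate        : update rate between zero and one
    """
    distance = np.linalg.norm(disposition - observed)   # assumed distance; Eq. (16) may differ
    if distance >= threshold:
        return disposition                               # sequence rejected: no change
    return (1.0 - rate) * disposition + rate * observed  # assumed convex move toward the observation
```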
| (18) |
where is a settable parameter and is the distance defined in Eq. (14). The affinities can be viewed as the weights in a graph describing the agent network.
2. For long interaction sequences, the response disposition could be updated periodically during the sequence, but less frequently than the ego and alter memories.
| (19) |
where is
| (20) |
Note that this term does not depend on . The normalization factor is given by
| (21) |
and does not depend on time when . Moreover,
| (22) |
It will be useful in the following to write in matrix form, so that
| (23) |
with a corresponding equation for the . The matrix is the uniform conditional probability, with the form
| (24) |
We use the notation for the matrix with the element given by .
| (25) |
Thus,
| (26) |
Since agent B selects the initial message, we have . As well, the joint probability of the sequential messages and is
| (27) |
| (28) |
for , where
| (29) |
with being the indicator function ( has the same form; see section 3.3.1). For the first step, the initial pair is drawn using the response disposition as described in the text. Call the corresponding initial probability .
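Concretely, the probability assigned to a finite sequence by Eqs. (27)–(32) is the probability of the opening pair multiplied by the alternating conditional response probabilities. The sketch below assumes fixed conditional matrices, as in the stationary regime discussed later, and an alternation convention in which agent B opens and agent A replies first; both are assumptions made for illustration.

```python
import numpy as np

def sequence_probability(messages, p_initial_pair, cond_a, cond_b):
    """Probability of a finite message sequence (sketch with fixed conditionals).

    messages       : list of message indices, starting with agent B's opening message
    p_initial_pair : N x N probability of the opening (message, reply) pair
    cond_a, cond_b : N x N conditional response probabilities of agents A and B
    """
    prob = p_initial_pair[messages[0], messages[1]]  # opening message and A's first reply
    responder_is_a = False                           # the next reply comes from agent B
    for previous, current in zip(messages[1:-1], messages[2:]):
        cond = cond_a if responder_is_a else cond_b
        prob *= cond[previous, current]
        responder_is_a = not responder_is_a
    return prob
```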
| (30) |
and, ,
| (31) |
From this relation, we can define a probability on the cylinders by
| (32) |
This measure then extends to the space of trajectories by Kolmogorov’s extension theorem. It depends on the probability of choosing the first symbol (and hence on the response disposition of the starting agent). A stationary state is then a shift-invariant measure on the set of infinite sequences. We have not yet been able to find rigorous conditions ensuring the existence of a stationary state, but some arguments are given below. In what follows, we assume this existence.
| (33) |
which is the uniform probability vector, corresponding to maximal entropy. Consequently, is a projector onto .
| (34) |
and is the transpose of . It follows that the uncertainty term has a 0 eigenvalue with multiplicity and an eigenvalue with corresponding eigenvector . It is also apparent that, for any probability vector , we have .
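If the noise term is taken to be the uniform conditional probability matrix of Eq. (24), i.e. the matrix whose entries all equal 1/N, the projection property is easy to verify numerically: the matrix maps every probability vector to the uniform vector and is idempotent. A small check (array names are illustrative):

```python
import numpy as np

n = 5
uniform_matrix = np.full((n, n), 1.0 / n)   # every entry equals 1/N
uniform_vector = np.full(n, 1.0 / n)

rng = np.random.default_rng(1)
p = rng.random(n)
p /= p.sum()                                # an arbitrary probability vector

assert np.allclose(uniform_matrix @ p, uniform_vector)               # image is the uniform vector
assert np.allclose(uniform_matrix @ uniform_matrix, uniform_matrix)  # idempotent, hence a projector
```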
| (35) |
The expression in Eq. (35) warrants several remarks. Recall that all the vectors above have positive entries. Therefore the noise term tends to “push” in the direction of the vector of maximal entropy, with the effect of increasing the entropy whatever the initial probability and the values of the coefficients. The uncertainty term plays a somewhat similar role in the sense that it also has its image on a particular vector. However, this vector is not static, instead depending on the evolution via the alter memory. Further, the corresponding coefficient may have either a positive or a negative value. A positive value increases the contribution of this vector, while a negative value decreases it. Consequently, we expect drastic changes in the model evolution when we change the sign of this coefficient; see section 4.2 and especially Fig. 6 for a striking demonstration of these changes.
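The entropy-increasing effect of the noise term can also be checked directly: mixing any probability vector with the uniform vector never decreases its entropy. A quick numerical illustration, repeating the base-N entropy helper sketched earlier so the snippet runs on its own:

```python
import numpy as np

def entropy_base_n(p, n):
    p = np.asarray(p, dtype=float).ravel()
    nonzero = p[p > 0]
    return float(-np.sum(nonzero * np.log(nonzero)) / np.log(n))

n = 5
uniform = np.full(n, 1.0 / n)
rng = np.random.default_rng(2)
for _ in range(1000):
    p = rng.random(n)
    p /= p.sum()                          # arbitrary probability vector
    a = rng.random()                      # mixing weight between 0 and 1
    mixed = (1.0 - a) * p + a * uniform   # pushed toward maximal entropy
    assert entropy_base_n(mixed, n) >= entropy_base_n(p, n) - 1e-12
```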
| (36) |
| (37) |
Thus, the converge to a limit , where
| (38) |
From Eq. (27), we have . Hence,
| (39) |
| (40) |
and
| (41) |
| (42) |
It follows that the asymptotic probability is an eigenvector of corresponding to the eigenvalue 1. We will call this eigenvector the first mode of the corresponding matrix. Therefore, the marginal ego memory of agent A converges to the first mode of . A numerical example is provided in section 4.2.2.
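Numerically, the first mode can be obtained by power iteration on the matrix product. The sketch below assumes the product is supplied as a single column-stochastic matrix with strictly positive entries, so that by the Perron-Frobenius theorem the eigenvalue 1 is dominant and the iteration converges; transpose first if the conditional probabilities are stored row-wise.

```python
import numpy as np

def first_mode(matrix, tol=1e-12, max_iter=100_000):
    """Leading eigenvector (eigenvalue 1) of a column-stochastic matrix,
    obtained by power iteration and normalized to a probability vector."""
    n = matrix.shape[0]
    p = np.full(n, 1.0 / n)               # start from the uniform vector
    for _ in range(max_iter):
        p_next = matrix @ p
        p_next /= p_next.sum()            # guard against numerical drift
        if np.max(np.abs(p_next - p)) < tol:
            return p_next
        p = p_next
    return p
```

The result can then be compared with the simulated marginal ego memory of agent A, as in Figure 2.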
| (43) |
| (44) |
Therefore, using the relation , we have
| (45) |
| (46) |
| (47) |
With this definition, Eqs. (45) and (46) become:
| (48) |
| (49) |
We next substitute Eq. (49) into Eq. (48). After some manipulation, we obtain
| (50) |
which uncouples the expression for from that for . In some sense, Eq. (50) provides a solution of the model with two agents, since it captures the asymptotic behavior of the ego and alter memories (provided the stationary regime exists). However, the solution to Eq. (50) is difficult to obtain in the general case and depends on all of the parameters. Below, we discuss a few specific situations.
| (51) |
When is small, (∞) A∣B becomes a nonlinear function of the conditional response disposition for agent B, so that
| (52) |
An explicit, unique solution exists in this case.
| (53) |
The right hand side is therefore independent of and . It is thus constant and corresponds to a uniform . Hence, using Eq. (43), the asymptotic selection probability is also uniform and the asymptotic marginal probability of messages is the uniform probability distribution, as described in section A.5. Consistent with the choice of the coefficients, has maximal entropy.
ARNOLDI J (2001) Niklas Luhmann: an introduction. Theory, Culture & Society, 18(1):1–13.
BAECKER D (2001) Why systems? Theory, Culture & Society, 18(1):59–74.
BLANCHARD Ph, Krueger A, Krueger T and Martin P (2005) The epidemics of corruption. http://arxiv.org/physics/0505031. Submitted to Phys. Rev. E.
DITTRICH P, Kron T and Banzhaf W (2003) On the scalability of social order: Modeling the problem of double and multi contingency following Luhmann. JASSS, 6(1). https://www.jasss.org/6/1/3.html.
FORTUNATO S and Stauffer D (2005) Computer simulations of opinions. In Albeverio S, Jentsch V, and Kantz H, editors, Extreme Events in Nature and Society. Springer Verlag, Berlin-Heidelberg. http://arxiv.org/cond-mat/0501730.
LUHMANN N (1984) Soziale Systeme. Suhrkamp.
LUHMANN N (1990) The improbability of communication. In Essays on Self-Reference, chapter 4, pages 86–98. Columbia University Press, New York, NY.
LUHMANN N (1995) Social Systems. Stanford University Press.
LUHMANN N (2004) Einführung in die Systemtheorie. Carl-Auer-Systeme Verlag, second edition.
MAILLARD G (2003) Chaînes à liaisons complètes et mesures de Gibbs unidimensionnelles. PhD thesis, Rouen, France.
SHANNON C E (1948) A mathematical theory of communication. The Bell System Technical Journal, 27:379–423, 623–656. http://cm.bell-labs.com/cm/ms/what/shannonday/paper.html.
STAUFFER D (2003) How many different parties can join into one stable government? http://arxiv.org/cond-mat/0307352. Preprint.
STAUFFER D, Hohnisch M and Pittnauer S (2004) The coevolution of individual economic characteristics and socioeconomic networks. http://arxiv.org/cond-mat/0402670. Preprint.
WEAVER W (1949) Some recent contributions to the mathematical theory of communication. In Shannon C E and Weaver W, editors, The mathematical theory of communication. University of Illinois Press, Urbana.
WEISBUCH G (2004) Bounded confidence and social networks. Eur. Phys. J. B, 38:339–343. DOI: 10.1140/epjb/e2004-00126-9. http://arxiv.org/cond-mat/0311279.
WEISBUCH G, Deffuant G and Amblard F (2005) Persuasion dynamics. Physica A: Statistical and Theoretical Physics. DOI: 10.1016/j.physa.2005.01.054. http://arxiv.org/cond-mat/0410200. In press.
© Copyright Journal of Artificial Societies and Social Simulation, [2006]