Jaime Simão Sichman (1998)
Journal of Artificial Societies and Social Simulation vol. 1, no. 2, <https://www.jasss.org/1/2/3.html>
To cite articles published in the Journal of Artificial Societies and Social Simulation, please reference the above information and include paragraph numbers if necessary
Received: 14-Jan-1998 Accepted: 15-Feb-1998 Published: 31-Mar-1998
Principle 1 (Non-Benevolence): agents are not presumed to help each other; they decide autonomously whether or not to cooperate with others.
Principle 2 (Sincerity): agents do not try to exploit each other; they never offer erroneous information deliberately and always communicate information in which they believe.
Principle 3 (Self-Knowledge): agents have a complete and correct representation of themselves: their goals, their expertise etc. However, agents may have beliefs about others that are either incorrect or incomplete.
Principle 4 (Consistency): agents do not maintain contradictory beliefs about others. Once an inconsistency is detected, they revise their beliefs in order to reestablish a consistent state.
Agent | Goals | Actions | Plans
---|---|---|---
ag5 | write_mas_paper | write_mas_section | write_mas_paper() := write_mas_section(), process_latex().
 | write_ss_mas_paper | analyse_mas_paper | write_ss_mas_paper() := write_ss_section(), write_mas_section(), process_latex().
 | review_oop_paper | analyse_oop_paper | review_oop_paper() := analyse_oop_paper().
 | | | review_mas_paper() := analyse_mas_paper().
ag6 | write_tel_paper | write_tel_section | write_tel_paper() := write_tel_section(), process_latex().
 | review_sig_paper | analyse_tel_paper | review_sig_paper() := analyse_sig_paper().
 | review_se_paper | process_latex | review_se_paper() := analyse_se_paper().
 | | | review_tel_paper() := analyse_tel_paper().
ag7 | write_sig_paper | write_sig_section | write_sig_paper() := write_sig_section(), process_latex().
 | review_tel_paper | analyse_sig_paper | review_tel_paper() := analyse_tel_paper().
 | review_se_paper | process_latex | review_se_paper() := analyse_se_paper().
 | | | review_sig_paper() := analyse_sig_paper().
ag8 | write_mas_paper | process_latex | ---
ag9 | write_ss_mas_paper | --- | ---
Agents ag6, ag7 and ag8 are well acquainted with the LaTeX language, while agents ag5 and ag9 are not. The external description of this society, containing the agents' goals, actions and plans, is shown in table 1.
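To make the structure of an external description entry concrete, the sketch below encodes a fragment of table 1 in Python. The class and field names are ours and purely illustrative, not the ones used in DEPINT, and resources are omitted as in the example.

```python
from dataclasses import dataclass, field

# A plan is a goal name plus the sequence of actions that achieves it,
# e.g. write_mas_paper() := write_mas_section(), process_latex().
@dataclass
class Plan:
    goal: str
    actions: list

# One external description entry: what the reasoning agent believes about
# a given agent's goals, actions and plans.
@dataclass
class Entry:
    goals: list = field(default_factory=list)
    actions: list = field(default_factory=list)
    plans: list = field(default_factory=list)

# Fragment of table 1, as believed by some reasoning agent.
external_description = {
    "ag5": Entry(
        goals=["write_mas_paper", "write_ss_mas_paper", "review_oop_paper"],
        actions=["write_mas_section", "analyse_mas_paper", "analyse_oop_paper"],
        plans=[
            Plan("write_mas_paper", ["write_mas_section", "process_latex"]),
            Plan("write_ss_mas_paper",
                 ["write_ss_section", "write_mas_section", "process_latex"]),
            Plan("review_oop_paper", ["analyse_oop_paper"]),
            Plan("review_mas_paper", ["analyse_mas_paper"]),
        ],
    ),
    "ag8": Entry(goals=["write_mas_paper"], actions=["process_latex"]),
    "ag9": Entry(goals=["write_ss_mas_paper"]),
}
```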
ag6 <ag6>
---------- write_tel_paper
|---------- write_tel_paper:=write_tel_section(),
|          |           process_latex().
|          |---------- A-AUTONOMOUS
|          |----------
|
review_sig_paper
|---------- review_sig_paper:=analyse_sig_paper().
|          |---------- analyse_sig_paper
|          |---------- ag7
|          |----------
|
review_se_paper
|---------- review_se_paper:=analyse_se_paper().
|---------- analyse_se_paper
|---------- UNKNOWN
|----------
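A dependence network such as the one above is derived from the external description: for each goal and plan of the reasoning agent, every action it cannot perform itself is annotated with the agents believed able to perform it, or marked UNKNOWN when there are none; a plan whose actions the agent can all perform is marked A-AUTONOMOUS. The sketch below is a simplified reconstruction of this computation; the data layout and function name are ours, not DEPINT's.

```python
# ext_desc maps agent -> {"actions": [...], "plans": {goal: [actions]}}.
def dependence_network(me, ext_desc):
    my = ext_desc[me]
    network = {}
    for goal, actions in my["plans"].items():
        needed = {}
        for action in actions:
            if action in my["actions"]:
                continue                                   # I can do this myself
            providers = [ag for ag, entry in ext_desc.items()
                         if ag != me and action in entry["actions"]]
            needed[action] = providers or ["UNKNOWN"]
        network[goal] = needed or "A-AUTONOMOUS"           # no external action needed
    return network

ext_desc = {
    "ag6": {"actions": ["write_tel_section", "analyse_tel_paper", "process_latex"],
            "plans": {"write_tel_paper": ["write_tel_section", "process_latex"],
                      "review_sig_paper": ["analyse_sig_paper"],
                      "review_se_paper": ["analyse_se_paper"]}},
    "ag7": {"actions": ["write_sig_section", "analyse_sig_paper", "process_latex"],
            "plans": {}},
}
print(dependence_network("ag6", ext_desc))
# write_tel_paper -> A-AUTONOMOUS, review_sig_paper -> analyse_sig_paper: [ag7],
# review_se_paper -> analyse_se_paper: [UNKNOWN]  (cf. the network above)
```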
Agent | Goal | G-SIT
---|---|---
ag5 | write_mas_paper | DEP
 | write_ss_mas_paper | DEP
 | review_oop_paper | AUT
ag6 | write_tel_paper | AUT
 | review_sig_paper | DEP
 | review_se_paper | DEP
ag7 | write_sig_paper | AUT
 | review_tel_paper | DEP
 | review_se_paper | DEP
ag8 | write_mas_paper | NP
ag9 | write_ss_mas_paper | NP
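The goal situations (G-SIT) in the table above follow directly from the agent's own entry in the external description: NP when the agent knows no plan for the goal, AUT when some known plan uses only actions the agent can perform itself, and DEP when plans exist but every one of them requires an action from another agent. Whether a DEP goal is also achievable depends on whether some other agent is believed able to perform the missing actions, as the simulation traces below illustrate. A minimal sketch of the classification, with our own naming and data layout:

```python
def goal_situation(goal, my_actions, my_plans):
    # my_plans: list of (goal, [actions]) pairs, as in table 1.
    plans = [acts for g, acts in my_plans if g == goal]
    if not plans:
        return "NP"                                            # no known plan
    if any(all(a in my_actions for a in acts) for acts in plans):
        return "AUT"                                           # fully executable alone
    return "DEP"                                               # needs another agent

# ag5's view of itself (cf. table 1):
ag5_actions = {"write_mas_section", "analyse_mas_paper", "analyse_oop_paper"}
ag5_plans = [("write_mas_paper", ["write_mas_section", "process_latex"]),
             ("review_oop_paper", ["analyse_oop_paper"])]
print(goal_situation("write_mas_paper", ag5_actions, ag5_plans))   # DEP
print(goal_situation("review_oop_paper", ag5_actions, ag5_plans))  # AUT
print(goal_situation("write_ss_mas_paper", set(), []))             # NP (e.g. ag9)
```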
me | goal | D-SIT w.r.t. ag5 | D-SIT w.r.t. ag6 | D-SIT w.r.t. ag7
---|---|---|---|---
ag5 | write_mas_paper | -- | UD | UD
 | write_ss_mas_paper | -- | UD | UD
ag6 | review_sig_paper | IND | -- | MBRD
 | review_se_paper | IND | -- | IND
ag7 | review_tel_paper | IND | MBRD | --
 | review_se_paper | IND | IND | --
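A simplified reading of these D-SIT labels: IND means the reasoning agent does not depend on the other agent for the goal; UD that it depends on the other but believes the other does not depend on it; the remaining labels combine mutual dependence (MD, both depend on each other for the same goal) or reciprocal dependence (RD, for different goals) with whether that belief is mutually believed (MB, supported by the other agent's own plans) or only locally believed (LB, inferred from the reasoning agent's own plans). In the traces below, each candidate partner is listed with its D-SIT and, when there is one, the partner's goal the reasoning agent can help with and the action it would offer in exchange (e.g. "(ag8) LBMD write_mas_paper write_mas_section"), or with "UD NONE NONE" when it has nothing to offer. The helper below only packages this reading; the predicates it takes are assumptions, not DEPINT code.

```python
def d_sit(i_depend, other_depends_same_goal, other_depends_other_goal, via_other_plans):
    if not i_depend:
        return "IND"
    believed = "MB" if via_other_plans else "LB"
    if other_depends_same_goal:
        return believed + "MD"          # mutual dependence (same goal)
    if other_depends_other_goal:
        return believed + "RD"          # reciprocal dependence (different goals)
    return "UD"                         # unilateral dependence

# ag5 w.r.t. ag6 for write_mas_paper: ag5 needs process_latex, ag6 needs nothing back.
print(d_sit(True, False, False, True))   # UD   (cf. table 3)
# ag6 w.r.t. ag7 for review_sig_paper: ag7 reciprocally needs analyse_tel_paper,
# according to ag7's own plans.
print(d_sit(True, False, True, True))    # MBRD (cf. table 3)
# ag5 w.r.t. ag8 for write_mas_paper: ag8 has no plans, so ag5 uses its own plan
# to infer that ag8 needs write_mas_section from it.
print(d_sit(True, True, False, False))   # LBMD (cf. the trace below)
```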
The <communication> field defines the implementation aspects of the adjacent distributed system layer: the identification of the sender and receiver, the message identification etc.
The <multi-agents> field is linked specifically to the multi-agent dimension. Based on speech act theory (Searle, 1969), it defines the type, nature and illocutionary force of the message and the identification of the interaction protocol being used (Demazeau, 1995). For the type of interaction, the primitives proposed in Gaspar (1991) were used: request, answer and inform. For the illocutionary force, a subset of the communication tones proposed in Campbell and D'Inverno (1990) was used, ranging from commanding (maximal priority) to informing, which characterizes a simple information exchange. For the nature of the interaction (Boissier, 1993), the status of the information being sent was specified in terms of dec (goals), ada (plans), com (actions) and obs (working hypothesis).
Finally, the <application> field must be instantiated for each application, containing the terms of the application domain.
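As an illustration only, the sketch below shows one possible in-memory representation of such a three-field message. The class and attribute names are ours; only the value sets (request/answer/inform; dec/ada/com/obs) and the protocol name DEPINTPROPOSITION come from the text and the traces below.

```python
from dataclasses import dataclass

@dataclass
class CommunicationField:          # distributed system layer
    sender: str
    receiver: str
    message_id: int

@dataclass
class MultiAgentField:             # multi-agent layer (speech acts)
    msg_type: str                  # "request", "answer" or "inform"
    nature: str                    # "dec", "ada", "com" or "obs"
    force: str                     # illocutionary force, from informing to commanding
    protocol: str                  # interaction protocol identifier

@dataclass
class Message:
    communication: CommunicationField
    multi_agents: MultiAgentField
    application: dict              # domain-dependent content

# Example: ag5 proposes a coalition to ag6 for goal write_mas_paper.
proposal = Message(
    CommunicationField(sender="ag5", receiver="ag6", message_id=13892),
    MultiAgentField(msg_type="request", nature="dec",
                    force="inform", protocol="DEPINTPROPOSITION"),
    {"goal": "write_mas_paper", "needed_action": "process_latex"},
)
```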
After sending a proposal, the agent waits until he receives a reply. This reply may be of three different types: an acceptance, when the receiver agrees to form the proposed coalition; a refusal, when the receiver is not interested in what is being proposed; or a revision, when the receiver detects that the proposal relies on incorrect beliefs about him and asks the sender to revise them.
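A minimal sketch of how the proposing agent might react to each of these replies is given below; the Reply structure and function name are illustrative, and only the three reply types come from the protocol. The simulation traces that follow show DEPINT itself going through these cases.

```python
from collections import namedtuple

Reply = namedtuple("Reply", "kind content")   # kind: "acceptance" | "refusal" | "revision"

def handle_reply(reply, partners, committed_partner, revise_beliefs):
    """Proposer's reaction to a partner's answer (illustrative sketch)."""
    if reply.kind == "acceptance":
        return "coalition_formed"
    if reply.kind == "refusal":
        partners.remove(committed_partner)     # drop this partner and try the next one
        return "try_next_partner"
    if reply.kind == "revision":
        revise_beliefs(reply.content)          # correct the external description first
        return "recompute_dependence_network"
    raise ValueError("unexpected reply kind: " + reply.kind)
```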
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): y
===== Reasoning about goals ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|          |           process_latex().
|          |---------- process_latex
|          |---------- UNKNOWN
|          |----------
|
review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|          |---------- A-AUTONOMOUS
|          |----------
|
write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** UNKNOWN
|          |----------
|
process_latex
|********** UNKNOWN
|----------
My current list of possible goals is :
write_mas_paper(20) non achievable
review_oop_paper(10) achievable
write_ss_mas_paper(30) non achievable
===== Deciding about goals ...
The goal selected is : review_oop_paper (10)
===== Reasoning about plans ...
My dependence network is:
ag5 <ag5>
---------- review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|---------- A-AUTONOMOUS
|----------
My current list of possible plans is:
review_oop_paper:=analyse_oop_paper().(10) feasible
===== Deciding about plans ...
The plan selected is : review_oop_paper:=analyse_oop_paper(). (10)
===== Reasoning about partners ...
My dependence network is:
ag5 <ag5>
---------- review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|---------- A-AUTONOMOUS
|----------
My goal situation is AUT
I do not need any actions in the committed plan
===== Deciding about partners ...
I am autonomous for the committed plan, no need of partners
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): y
===== Reasoning about goals ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|          |           process_latex().
|          |---------- process_latex
|          |---------- ag6
|          |----------
|          |           ag7
|          |----------
|          |
review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|          |---------- A-AUTONOMOUS
|          |----------
|
write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** UNKNOWN
|          |----------
|
process_latex
|********** ag6
|----------
|           ag7
|----------
My current list of possible goals is :
write_mas_paper(20) achievable
review_oop_paper(10) achievable
write_ss_mas_paper(30) non achievable
===== Deciding about goals ...
The goal selected is : write_mas_paper (20)
===== Reasoning about plans ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|           process_latex().
|---------- process_latex
|---------- ag6
|----------
|           ag7
|----------
My current list of possible plans is:
write_mas_paper:=write_mas_section(), process_latex().(20) feasible
===== Deciding about plans ...
The plan selected is : write_mas_paper:=write_mas_section(), process_latex(). (20)
===== Reasoning about partners ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|           process_latex().
|---------- process_latex
|---------- ag6
|----------
|           ag7
|----------
My goal situation is DEP
My needed action is process_latex
My current list of partners is :
(ag6) UD NONE NONE
(ag7) UD NONE NONE
===== Deciding about partners ...
The partner selected is : (ag6) UD NONE NONE
===== Sending a message ...
---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec,put,moi,proposal,
There is only one possible transition in the protocol
===== Trying to receive a message ...
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): n
Do you want the agent to leave the society? (y/n): n
===== Inferring properties about other agents ...
Do you want the agent to infer in this cycle? (y/n): n
===== Perceiving properties of other agents ...
Do you want the agent to perceive in this cycle? (y/n): n
===== Trying to receive a message ...
The message received is:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_mas_paper process_latex UD NONE NONE )
===== Reasoning about messages ...
I have received a proposal of coalition:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_mas_paper process_latex UD NONE NONE )
The partner has not offered any goal
===== Deciding about proposals ...
I will refuse the proposal, because there is nothing being proposed to me
===== Sending a message ...
Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,
The transition chosen is !error&&!best_option
===== Trying to receive a message ...
The message received is:
( REFUSAL < ag6 polaris.imag.fr 13893 > )
===== Reasoning about messages ...
The partner has refused to form a coalition
===== Reasoning about partners ...
The committed partner has refused to form a coalition
Removing the partner (ag6) from the list of possible partners
My current list of partners is :
(ag7) UD NONE NONE
===== Deciding about partners ...
The partner selected is : (ag7) UD NONE NONE
===== Sending a message ...
---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec put moi proposal
There is only one possible transition in the protocol
===== Trying to receive a message ...
The message received is:
( REFUSAL < ag7 polaris.imag.fr 13894 > )
===== Reasoning about messages ...
The partner has refused to form a coalition
===== Reasoning about partners ...
The committed partner has refused to form a coalition
Removing the partner (ag7) from the list of possible partners
My current list of possible partners is empty
===== Deciding about partners ...
There are no more partners for the committed plan
===== Reasoning about plans ...
The committed plan is no longer feasible
Removing the plan write_mas_paper:=write_mas_section(), process_latex(). from the list of possible plans
My current list of possible plans is empty
===== Deciding about plans ...
I do not have any more plans to achieve the committed goal
===== Reasoning about goals ...
The committed goal is no longer achievable
Removing the goal write_mas_paper from the list of possible goals
My current list of possible goals is :
review_oop_paper(10) achievable
===== Deciding about goals ...
The goal selected is : review_oop_paper (10)
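The backtracking just shown (over partners, then plans, then goals) can be summarised by the sketch below; the data layout and helper functions are illustrative assumptions, not DEPINT code.

```python
def select_goal(goals, plans_for, partners_for, ask_partner):
    # goals: dict goal -> priority; plans_for(goal): list of plans;
    # partners_for(plan): candidate partners for the plan's missing actions
    # (empty if the plan needs none); ask_partner: sends a proposal and
    # returns True on acceptance.
    while goals:
        goal = max(goals, key=goals.get)            # highest-priority goal
        for plan in list(plans_for(goal)):
            partners = list(partners_for(plan))
            if not partners:
                return goal, plan, None             # autonomous plan, no partner needed
            for partner in partners:
                if ask_partner(partner, goal, plan):
                    return goal, plan, partner      # coalition formed
            # all partners refused: this plan is no longer feasible
        # no feasible plan left: the committed goal is no longer achievable
        del goals[goal]
    return None
```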
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): y
===== Reasoning about goals ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|          |           process_latex().
|          |---------- process_latex
|          |---------- ag6
|          |----------
|          |           ag7
|          |----------
|          |           ag8
|          |----------
|
review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|          |---------- A-AUTONOMOUS
|          |----------
|
write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** UNKNOWN
|          |----------
|
process_latex
|********** ag6
|----------
|           ag7
|----------
|           ag8
|----------
My current list of possible goals is :
write_mas_paper(20) achievable
review_oop_paper(10) achievable
write_ss_mas_paper(30) non achievable
===== Deciding about goals ...
The goal selected is : write_mas_paper (20)
===== Reasoning about plans ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|           process_latex().
|---------- process_latex
|---------- ag6
|----------
|           ag7
|----------
|           ag8
|----------
My current list of possible plans is:
write_mas_paper:=write_mas_section(), process_latex().(20) feasible
===== Deciding about plans ...
The plan selected is : write_mas_paper:=write_mas_section(), process_latex(). (20)
===== Reasoning about partners ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|           process_latex().
|---------- process_latex
|---------- ag6
|----------
|           ag7
|----------
|           ag8
|----------
My goal situation is DEP
My needed action is process_latex
My current list of partners is :
(ag6) UD NONE NONE
(ag7) UD NONE NONE
(ag8) LBMD write_mas_paper write_mas_section
===== Deciding about partners ...
The partner selected is : (ag8) LBMD write_mas_paper write_mas_section
===== Sending a message ...
---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec put moi proposal
There is only one possible transition in the protocol
===== Trying to receive a message ...
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): n
Do you want the agent to leave the society? (y/n): n
===== Inferring properties about other agents ...
Do you want the agent to infer in this cycle? (y/n): n
===== Perceiving properties of other agents ...
Do you want the agent to perceive in this cycle? (y/n): n
===== Trying to receive a message ...
The message received is:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_mas_paper process_latex LBMD write_mas_paper write_mas_section )
===== Reasoning about messages ...
I have received a proposal of coalition:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_mas_paper process_latex LBMD write_mas_paper write_mas_section )
My dependence network is:
ag8 <ag8>
---------- write_mas_paper (10)
|---------- NO-PLANS
|----------
My goal situation is NP
===== Deciding about proposals ...
I will accept the proposal, because I do not have a plan for this goal
===== Sending a message ...
Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,
The transition chosen is !error&&best_option
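The receiver's decision in these traces can be summarised as follows: ask for a revision when the proposal relies on an action the receiver does not actually have, refuse when nothing relevant is being offered, and accept when the offered goal is one the receiver cannot achieve alone. A sketch of this rule, with illustrative field names and helper predicates:

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    goal: str            # goal the proposer wants to achieve
    needed_action: str   # action the proposer needs from me
    offered_goal: str    # goal of mine the proposer claims to help with (or None)
    offered_action: str  # action the proposer offers to perform for me (or None)

def evaluate_proposal(p, my_actions, my_goals, i_have_a_plan_for):
    if p.needed_action not in my_actions:
        # the proposer believes something incorrect about my actions
        return "revision"
    if p.offered_goal is None or p.offered_goal not in my_goals:
        # nothing (relevant) is being proposed to me
        return "refusal"
    if not i_have_a_plan_for(p.offered_goal):
        # I cannot achieve the offered goal alone, so the exchange is worthwhile
        return "acceptance"
    return "refusal"   # simplification: other cases depend on the agent's best option
```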
===== Trying to receive a message ...
The message received is:
( ACCEPTANCE < ag8 polaris.imag.fr 13895 > )
===== Reasoning about messages ...
*** The partner has accepted to form a coalition ***
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): n
Do you want the agent to leave the society? (y/n): n
===== Inferring properties about other agents ...
Do you want the agent to infer in this cycle? (y/n): y
Type the name of the agent to be selected: ag9
Inference may be about goals, actions, resources or plans
Type the selected option (G/A/R/P): a
The current actions of agent < ag9 polaris.imag.fr 14015 > are:
Entries may be inserted or removed
Type the selected option (I/R): i
Type the INCOMPLETE ACTION: write_ss_section
Type the cost of the ACTION: 10
===== Reasoning about the others ...
I must revise the following information:
( (ag9) ACTION INCOMPLETE write_ss_section )
===== Deciding about the others ...
Incomplete information is always updated
===== Revising information about the Others ...
Updating the external description
Action write_ss_section was included in the external description entry of agent < ag9 polaris.imag.fr 14015 >
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): y
===== Reasoning about goals ...
My dependence network is:
ag5 <ag5>
---------- write_mas_paper (20)
|---------- write_mas_paper:=write_mas_section(),
|          |           process_latex().
|          |---------- process_latex
|          |---------- ag9
|          |----------
|
review_oop_paper (10)
|---------- review_oop_paper:=analyse_oop_paper().
|          |---------- A-AUTONOMOUS
|          |----------
|
write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** ag9
|          |----------
|
process_latex
|********** ag9
|----------
My current list of possible goals is :
write_mas_paper(20) achievable
review_oop_paper(10) achievable
write_ss_mas_paper(30) achievable
===== Deciding about goals ...
The goal selected is : write_ss_mas_paper (30)
===== Reasoning about plans ...
My dependence network is:
ag5 <ag5>
---------- write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** ag9
|          |----------
|
process_latex
|********** ag9
|----------
My current list of possible plans is:
write_ss_mas_paper:=write_ss_section(), write_mas_section(), process_latex().(30) feasible
===== Deciding about plans ...
The plan selected is : write_ss_mas_paper:=write_ss_section(), write_mas_section(), process_latex(). (30)
===== Reasoning about partners ...
My dependence network is:
ag5 <ag5>
---------- write_ss_mas_paper (30)
|---------- write_ss_mas_paper:=write_ss_section(),
|           write_mas_section(),
|           process_latex().
|---------- write_ss_section
|********** ag9
|          |----------
|
process_latex
|********** ag9
|----------
My goal situation is DEP
My needed action is write_ss_section
My current list of partners is :
(ag9) LBMD write_ss_mas_paper write_mas_section
===== Deciding about partners ...
The partner selected is : (ag9) LBMD write_ss_mas_paper write_mas_section
===== Sending a message ...
---> Etape : conv1 DEPINTPROPOSITION init receive "5.5.92"
---> Destinataire : you
---> Type : request
---> Ressource : matter=dec,put,moi,proposal,
There is only one possible transition in the protocol
===== Trying to receive a message ...
===== Initial state ...
Do you want the agent to be active in this cycle? (y/n): n
Do you want the agent to leave the society? (y/n): n
===== Inferring properties about other agents ...
Do you want the agent to infer in this cycle? (y/n): n
===== Perceiving properties of other agents ...
Do you want the agent to perceive in this cycle? (y/n): n
===== Trying to receive a message ...
The message received is:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_ss_mas_paper write_ss_section LBMD write_ss_mas_paper write_mas_section )
===== Reasoning about messages ...
I have received a proposal of coalition:
( PROPOSAL < ag5 polaris.imag.fr 13892 > (write_ss_mas_paper write_ss_section LBMD write_ss_mas_paper write_mas_section )
My dependence network is:
ag9 <ag9>
---------- write_ss_mas_paper (10)
|---------- NO-PLANS
|----------
My goal situation is NP
===== Deciding about proposals ...
I will refuse the proposal, because I do not have the needed action
===== Sending a message ...
Si error
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=obs,put,moi,revision,
Si !error&&!best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,refusal,
Si !error&&best_option
---> Etape : conv1 DEPINTPROPOSITION init end "5.5.92"
---> Destinataire : you
---> Type : answer
---> Ressource : matter=dec,put,moi,acceptance,
The transition chosen is error
Adopting this criterion of context choice, ag5 will prefer to drop action write_ss_section from the external description entry of ag9, because the new information source is more credible than the previous one. This procedure is shown next:
===== Trying to receive a message ...
The message received is:
( REVISION < ag9 polaris.imag.fr 14015 > ( (ag9) ACTION INCORRECT write_ss_section )
===== Reasoning about messages ...
The partner has asked me to do a revision
===== Reasoning about the others ...
I must revise the following information:
( (ag9) ACTION INCORRECT write_ss_section )
===== Deciding about the others ...
Topic is (ag9)
Previous source was: <Inference>
New source is <Communication(ag9)>
New source is preferable
===== Revising information about the Others ...
Updating the external description
Action write_ss_section was removed from the external description entry of agent < ag9 polaris.imag.fr 14015 >
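The decision above applies the criterion of context choice: a belief communicated by the very agent it concerns is considered more credible than one the reasoning agent has inferred itself. A minimal sketch of this preference, with an illustrative ranking of information sources:

```python
# external_description: dict mapping agent name -> list of believed actions.
# The numeric ranking below is an assumption used only for illustration;
# other sources (e.g. perception) would be ordered by the same criterion.
SOURCE_RANK = {"inference": 0, "communication": 1}

def prefer_new_source(topic_agent, old_source, new_source):
    # A communicated belief coming from topic_agent itself is always preferred.
    if new_source == ("communication", topic_agent):
        return True
    return SOURCE_RANK[new_source[0]] > SOURCE_RANK[old_source[0]]

def revise(external_description, topic_agent, action, old_source, new_source):
    # Drop the incorrect action if the new source wins, as ag5 does for ag9.
    if prefer_new_source(topic_agent, old_source, new_source):
        external_description[topic_agent].remove(action)

ext = {"ag9": ["write_ss_section"]}
revise(ext, "ag9", "write_ss_section",
       old_source=("inference", None), new_source=("communication", "ag9"))
print(ext)   # {'ag9': []}
```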
1 We call an MAS 'open' when agents may enter or leave the society at any moment, without any global control. This framework is developed to cope with this kind of system.
2 For simplicity's sake, the notion of resource will not be used in this example.
3 For simplicity's sake, the term a-autonomous is here used as a synonym of autonomous. A more comprehensive definition of these terms may be found in Sichman (1995).
4 In this framework, agents do not perform on-line planning; they use predefined plans, in a case-based reasoning style.
5 One must remember that the external description is a private structure and, as a consequence, agents may have incorrect beliefs about others.
6 The term mutual belief used in this context does not denote the notion found in the literature (for example, Levesque et al. 1990). Here it denotes the fact that the reasoning agent believes that his partner is also aware of their bilateral dependence relation.
7 Currently, only some parts of this development environment are fully implemented.
8 The extension to multi-partners is quite simple and is detailed in Sichman (1995).
9 Currently, there is no domain level inference in the system. Both the inference and perception mechanisms are restricted to the information stored in the external description.
10 For the sake of clarity, the results of this simulation phase are not presented.
11 This fact is justified by the non-benevolence principle (P1).
AXELROD, R. 1984. The Evolution of Cooperation. New York: Basic Books.
BOISSIER, Olivier. 1993 (January). Problème du Contrôle dans un Système Integré de Vision. Utilisation d'un Système Multi-Agents. Thèse de Doctorat, Institut National Polytechnique de Grenoble, Grenoble, France.
BOISSIER, Olivier and Demazeau, Yves. 1994 (August). ASIC: An Architecture for Social and Individual Control and its Application to Computer Vision. Pages 107-118 of: Proceedings of the 6th European Workshop on Modelling Autonomous Agents in a Multi-Agent World.
CAMPBELL, John A. and D'Inverno, Mark P. 1990. Knowledge Interchange Protocols. Pages 63-80 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. Amsterdam, NL: Elsevier Science Publishers B. V.
CARDOZO, Eleri and Sichman, Jaime Simão and Demazeau, Yves. 1993 (November). Using the active object model to implement multi-agent systems. Pages 70-77 of: Proceedings of the 5th IEEE International Conference on Tools with Artificial Intelligence.
CARLE, Patrice and Collinot, Anne and Zeghal, Karim. 1994 (December). Concevoir des Organisations: La Méthode Cassiopée. In: Actes de la 3ème Journée Systèmes Multi-Agents du PRC-GDR Intelligence Artificielle.
CASTELFRANCHI, Cristiano. 1990. Social Power: A Point Missed in Multi-Agent, DAI and HCI. Pages 49-62 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. Amsterdam, NL: Elsevier Science Publishers B. V.
CASTELFRANCHI, Cristiano and Miceli, Maria and Cesta, Amedeo. 1992. Dependence Relations Among Autonomous Agents. Pages 215-227 of: Werner, Eric and Demazeau, Yves (eds.), Decentralized A. I. 3. Amsterdam, NL: Elsevier Science Publishers B. V.
CONTE, Rosaria and Sichman, Jaime Simão. 1995. DEPNET: How to benefit from social dependence. Journal of Mathematical Sociology, 20(2-3), 161-177.
CONTE, Rosaria and Castelfranchi, Cristiano. 1992 (April). Mind is not Enough: Precognitive Bases of Social Interaction. Pages 93-110 of: Proceedings of the 1992 Symposium on Simulating Societies.
DEMAZEAU, Yves and Boissier, Olivier and Koning, Jean-Luc. 1994 (October). Using interaction protocols to control vision systems. In: Proceedings of the IEEE International Conference on Systems, Man and Cybernetics.
DEMAZEAU, Yves. 1995 (March). From interactions to collective behaviour in agent-based systems. In: Proceedings of the 1st European Conference on Cognitive Science.
GASPAR, Graça. 1991. Communication and Belief Changes in a Society of Agents: Towards a Formal Model of an Autonomous Agent. Pages 245-255 of: Demazeau, Yves and Muller, Jean-Pierre (eds.), Decentralized A. I. 2. Amsterdam, NL: Elsevier Science Publishers B. V.
LEVESQUE, Hector J. and Cohen, Philip R. and Nunes, José H. T. 1990. On acting together. Pages 94-99 of: Proceedings of the 8th National Conference on Artificial Intelligence. Boston: Morgan Kaufmann Publishers, Inc.
LUCE, R. D. and Raiffa, H. 1957. Games and Decisions: Introduction and Critical Survey. John Wiley & Sons Ltd.
MINSKY, Naftaly H. 1989 (April). The Imposition of Protocols over Open Distributed Systems. Technical report LCSR-TR-154. Laboratory for Computer Science Research, Rutgers University, New Jersey, USA.
POPULAIRE, Philippe and Boissier, Olivier and Sichman, Jaime Simão. 1993 (April). Description et Implementation de Protocoles de Communication en Univers Multi-Agents. Pages 241-252 of: Actes des 1ères Journées Francophones Intelligence Artificielle Distribuée & Systèmes Multi-Agents.
SEARLE, John. 1969. Speech Acts. Cambridge University Press.
SICHMAN, Jaime Simão. 1995. Du Raisonnement Social Chez les Agents: Une Approche Fondée sur la Théorie de la Dépendance. Thèse de Doctorat, Institut National Polytechnique de Grenoble, Grenoble, France.
SICHMAN, Jaime Simão. 1996 (October). On achievable goals and feasible plans in open multi-agent systems. Pages 16-30 of: Proceedings of the 1st Ibero-American Workshop on DAI/MAS.
SICHMAN, Jaime Simão and Demazeau, Yves. 1995. Exploiting Social Reasoning to Deal with Agency Level Inconsistency. Pages 352-359 of: Proceedings of the 1st International Conference on Multi-Agent Systems. San Francisco, USA: MIT Press.
SICHMAN, Jaime Simão and Demazeau, Yves. 1995. Exploiting Social Reasoning to Enhance Adaptation in Open Multi-Agent Systems. Pages 253-263 of: Wainer, Jacques and Carvalho, Ariadne (eds.), Advances in AI. Lecture Notes in Artificial Intelligence, vol. 991. Berlin, DE: Springer-Verlag.
SICHMAN, Jaime Simão and Demazeau, Yves. 1996. A model for the decision phase of autonomous belief revision in open multi-agent systems. Journal of the Brazilian Computer Society, 3(1), 40-50.
SKVORETZ, John and Willer, David. 1993. Exclusion and power: A test of four theories of power in exchange networks. American Sociological Review, 58(December), 801-818.
SMITH, Reid G. 1980. The contract net protocol: High-level communication and control in a distributed problem solver. IEEE Transactions on Computers, 29(12), 1104-1113.
WOOLDRIDGE, Michael and Jennings, Nicholas R. 1994 (August). Towards a Theory of Cooperative Problem Solving. Pages 15-26 of: Proceedings of the 6th European Workshop on Modelling Autonomous Agents in a Multi-Agent World.
YU, Eric S. K. and Mylopoulos, John. 1993. An Actor Dependency Model of Organizational Work with Application to Business Process Reengineering. Pages 258-268 of: Proceedings of the Conference on Organizational Computing Systems (COOCS'93). Milpitas, CA: ACM Press.