This
survey is part of a requirements analysis activity in the SimCog
project. The SimCog research project (Simulation of Cognitive Agents) aims to
develop a generic agent-based simulation platform. The project's wider goals are: (i)
to propose a reference model for the requirements specification of
multi-agent-based platforms, promoting the identification of shared
requirements in the scientific community, and addressing these requirements to
the development of integrated projects; (ii) to analyse existing methods and
assess methodological and epistemological questions in the intersection of
agent-based simulation, computer science and social/natural sciences; (iii) to
inform the specification and the development of the SimCog platform.
The role of the survey is to advance the elicitation of technical-operational
and high-level requirements for multi-agent-based simulation (MABS) platforms,
and to identify general methodological principles that will guide the
development of agent-based simulations.
THE QUESTIONNAIRE
This
questionnaire is directed to researchers who use multi-agent-based simulation
in various scientific fields, from social and natural science to mathematical,
cognitive and computer science. The final results will be available at our
site, and your privacy will be preserved.
The
following sections compose the first module of a two-module questionnaire.
This first module carries out a requirements analysis of multi-agent-based
simulation platforms. The second module focuses on general methodological and
epistemological topics in agent-based simulation, and it will be available at
our site soon. All those who answer the first module will be invited to answer
the second.
In
the following sections, you will find six possible choices for each question.
We believe it will be easy for you to answer. We expect that it will
require approximately 20 minutes of your time. All descriptions and
questions are self-contained. Nevertheless, if you want further information
please consult this paper.
Thank
you for your attention!
The SimCog team.
PERSONAL INFORMATION
(*) Required Field
*Name:
*Email:
*Institution:
* Please choose below how you came to know about the survey:
Other
A1)
Your interest in agent-based simulation (can be more than one):
Engineering
Industrial
Research
Policy
Education
Business
A2)
Intended use of simulation (can be more than one):
To model and simulate artificial societies that do not necessarily
reference a concrete target or specific theory about the real world,
but only some proposed idea of an abstract nature.
To model socio-cognitive or sociological theories and implement
computational animations of logical formalisms, in order to refine/extend
social theories and check their consistency.
To model and simulate concrete social systems based on direct observation
and statistical data, in order to understand social and institutional
processes and phenomena.
To model and simulate multi-agent systems to explore multi-agent system
requirements and intended behaviours, for use in real environments
and general-purpose engineering.
Others:
REQUIREMENTS ANALYSIS INFORMATION
A requirement is a feature of a system or a description of something the
system is capable of doing in order to reach its objective. In this
questionnaire, each requirement is classified into two categories: functional
(denoted with F) and non-functional (denoted with NF). Functional
requirements describe the behaviour and the services that the user expects
a system to provide. Non-functional requirements describe restrictions,
qualities and/or benefits associated with the system. We have grouped the
requirements in five different classes of facilities, called Technological,
Domain, Development, Analysis and Exploration. If you want to know more
about this categorization you may consult the documents at our website,
but that is not necessary to
answer this questionnaire.
You should classify each requirement according to the following options:
- Imperative: if you consider that a
platform is useless without this requirement, or that its absence can
inhibit crucial modelling or implementation activities.
- Important: if you consider that the
requirement is very useful and important.
- Desirable: if you consider that the requirement may
be useful, but it is still reasonably trouble-free to model and
implement informative simulations without it.
- Undesirable: if you consider that the requirement is
undesirable.
- Domain Dependent: if you
consider that the requirement is important only for some specific
domain.
- Not Necessary: if
you consider that the requirement does not significantly affect (or
does not affect at all) the modelling and implementation process of
simulations.
TECHNOLOGICAL FACILITIES
R1 MANAGE AGENTS LIFE CYCLE (F)
A platform should provide services to control the life cycle of
agents, considering a range of agent states, such as active,
suspended, sleeping, inactive and killed.
Related With: Manage Mobility, Launch Agents, Develop Agent
Architectures.
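As an illustration only, the following Python sketch (all names are hypothetical, not part of any existing platform) shows one way such a life-cycle service could track agents and enforce legal state transitions.

from enum import Enum, auto

class AgentState(Enum):
    ACTIVE = auto()
    SUSPENDED = auto()
    SLEEPING = auto()
    INACTIVE = auto()
    KILLED = auto()

# Allowed transitions; a real platform would expose richer policies.
TRANSITIONS = {
    AgentState.ACTIVE:    {AgentState.SUSPENDED, AgentState.SLEEPING,
                           AgentState.INACTIVE, AgentState.KILLED},
    AgentState.SUSPENDED: {AgentState.ACTIVE, AgentState.KILLED},
    AgentState.SLEEPING:  {AgentState.ACTIVE, AgentState.KILLED},
    AgentState.INACTIVE:  {AgentState.ACTIVE, AgentState.KILLED},
    AgentState.KILLED:    set(),
}

class LifeCycleManager:
    """Hypothetical life-cycle service: tracks and changes agent states."""
    def __init__(self):
        self._states = {}                      # agent id -> AgentState

    def register(self, agent_id):
        self._states[agent_id] = AgentState.ACTIVE

    def set_state(self, agent_id, new_state):
        current = self._states[agent_id]
        if new_state not in TRANSITIONS[current]:
            raise ValueError(f"illegal transition {current} -> {new_state}")
        self._states[agent_id] = new_state
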
R2 MANAGE COMMUNICATION (F)
A platform should provide services for asynchronous and synchronous
message passing between agents.
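As a minimal sketch of what such a service might look like (the MessageBus name is hypothetical; only Python's standard queue module is assumed), asynchronous passing can be modelled with per-agent mailboxes and synchronous exchange with a blocking receive.

import queue

class MessageBus:
    """Hypothetical messaging service: asynchronous delivery via
    per-agent mailboxes; synchronous exchange as send + blocking receive."""
    def __init__(self):
        self._mailboxes = {}                   # agent id -> queue of messages

    def register(self, agent_id):
        self._mailboxes[agent_id] = queue.Queue()

    def send(self, sender, receiver, content):
        # Asynchronous: the sender does not wait for the receiver.
        self._mailboxes[receiver].put((sender, content))

    def receive(self, agent_id, block=True, timeout=None):
        # A blocking receive gives a simple synchronous rendezvous.
        return self._mailboxes[agent_id].get(block=block, timeout=timeout)
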
R3 MANAGE SCHEDULING (F)
A platform should support controlled simulations and allow
repeatability. To this end it should provide (i) libraries including
at least one commonly used scheduling technique, such as discrete-time
simulation or event-based simulation; (ii) mechanisms to cluster
agents in groups and apply different scheduling techniques to each
group.
Related With: Manage
Communication, Integrate Controlled and Non-Controlled
Environments, Develop Agent
Architectures.
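The sketch below illustrates, with hypothetical names, a discrete-time scheduler that uses a fixed random seed for repeatability and allows different activation regimes per agent group.

import random

class DiscreteTimeScheduler:
    """Hypothetical sketch: discrete-time scheduling with agent groups."""
    def __init__(self, seed=42):
        self.rng = random.Random(seed)         # fixed seed -> repeatable runs
        self.groups = {}                       # group name -> list of agents

    def add(self, group, agent):
        self.groups.setdefault(group, []).append(agent)

    def step(self):
        # Each group may use a different activation regime; here the
        # "shuffled" group is activated in random order, others in order.
        for name, agents in self.groups.items():
            order = list(agents)
            if name == "shuffled":
                self.rng.shuffle(order)
            for agent in order:
                agent.step()                   # each agent defines step()

    def run(self, ticks):
        for _ in range(ticks):
            self.step()
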
R4 MANAGE SECURITY (F)
This requirement is related to the distributed execution of
simulations. For instance, the platform should prevent the
acceptance of messages that come from non-authorized agents or
components. To comply with this requirement a platform should provide
architectures and services such as (i) adequate low-level agent
architectures; (ii) libraries of encryption algorithms and public key
infrastructures.
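As one possible illustration (not a prescription of the platform's security architecture), the sketch below uses a shared-key HMAC, via Python's standard hmac module, to reject messages that do not come from a component holding the key; a full solution would rely on public key infrastructures.

import hmac, hashlib

SECRET = b"shared-simulation-key"              # placeholder secret for the sketch

def sign(payload: bytes) -> str:
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def accept(payload: bytes, signature: str) -> bool:
    # Reject messages whose signature does not match, i.e. messages
    # that do not originate from an authorised component.
    return hmac.compare_digest(sign(payload), signature)
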
MANAGE MOBILITY (F)
Mobility defines the agents' ability to navigate within electronic
communication networks. To comply with this requirement a platform should
provide services such as (i) the transmission and reception of agents
through the network; (ii) persistence mechanisms to maintain an agent's
state while travelling.
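A minimal sketch of the persistence side of this requirement, assuming Python's standard pickle module and a hypothetical MobileAgent class: the agent's state is serialised before transmission and restored at the destination host.

import pickle

class MobileAgent:
    """Hypothetical mobile agent: its state travels with it."""
    def __init__(self, name, beliefs):
        self.name = name
        self.beliefs = beliefs

def serialise(agent: MobileAgent) -> bytes:
    # State captured before transmitting the agent over the network.
    return pickle.dumps(agent)

def restore(data: bytes) -> MobileAgent:
    # State restored on arrival at the destination host.
    return pickle.loads(data)
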
MODEL SCALABILITY (NF)
Roughly, scalability assesses how well the system does useful work as the
size and/or complexity of the system increases. Ideally, a platform should
efficiently process large-scale systems in a reasonable amount of time, and
according to the users' needs. Some metrics and mechanisms to assess and
control scalability are: (i) the number of agents that the system is able
to control with reasonable qualitative results; (ii) an ordered increase in
the number of agents, for instance by clustering agents in groups
associated with logical or physical patterns. To fulfil this requirement a
platform should be structured so as to maintain and/or improve its
performance as the size, complexity and workload of its components
increase.
LAUNCH AGENTS (F)
A platform should provide agent templates related to different ways of
launching agents, such as threads, applets and objects. For instance,
platforms that do not provide multi-threaded agents may hinder the
modelling and simulation of distributed systems.
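The following hypothetical template sketches the thread-based option, with each agent running its behaviour loop in its own thread.

import threading, time

class ThreadedAgent(threading.Thread):
    """Hypothetical template for launching an agent as a thread."""
    def __init__(self, name):
        super().__init__(daemon=True)
        self.name = name
        self._running = True

    def run(self):
        while self._running:
            # Domain-dependent behaviour would go here.
            time.sleep(0.1)

    def stop(self):
        self._running = False

agents = [ThreadedAgent(f"agent-{i}") for i in range(3)]
for a in agents:
    a.start()
for a in agents:
    a.stop()
for a in agents:
    a.join()
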
MANAGE INTENTIONAL FAILURES (F)
From a technical-operational point of view there are two classes of
intentional failures that can be manipulated in a simulation. The first
class, called operational failures, concerns disturbances in the
technical-operational infrastructure (corrupted messages, server failures,
etc.). The second, called logical failures, manipulates patterns of
behaviour that can be viewed as dysfunctional exceptions in the simulated
system. Operational failures can be used to build specific scenarios and
serve as the basis for building more general logical failures. Logical
failures are strongly domain dependent, and the user may have to engage in
further implementation work in order to use them. The platform should
offer (i) libraries to manipulate basic operational failures; (ii)
mechanisms to store and search templates of logical failures created by
users.
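As a sketch of a basic operational-failure library (names and rates are hypothetical), the injector below drops or corrupts a configurable fraction of messages passing through it.

import random

class FailureInjector:
    """Hypothetical operational-failure library: drops or corrupts a
    configurable fraction of messages."""
    def __init__(self, drop_rate=0.05, corrupt_rate=0.05, seed=1):
        self.drop_rate = drop_rate
        self.corrupt_rate = corrupt_rate
        self.rng = random.Random(seed)

    def deliver(self, message: str):
        r = self.rng.random()
        if r < self.drop_rate:
            return None                        # message lost
        if r < self.drop_rate + self.corrupt_rate:
            return message[::-1]               # crude corruption
        return message
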
INTEGRATE CONTROLLED AND NON-CONTROLLED ENVIRONMENTS (F)
Typically, the simulation environment must be totally controlled: every
event in the simulation world must be performed under the control of the
simulator. These situations characterize what we call controlled
environments. There may be cases where the agents can (or must) perform
actions outside the controlled environment, in real environments. To
support this functionality a platform should offer agent architectures
that separate the agent's domain-dependent behaviour from the simulator
design patterns. Also, in order to keep the simulation consistent and
guarantee a good level of repeatability, when agents are running in
non-controlled environments the simulator should be notified of some of
their events and update its local view accordingly.
MODEL THE PLATFORM EXECUTION MODE (NF)
To support simulations that demand high processing power and/or a large
number of components, a platform should be executable on different
processors and machines, in a distributed way. The platform should be
designed with a non-centralized architecture, so that the simulation
components can be distributed across different physical hosts and/or
processors. Issues like network protocols and synchronization are crucial
in such environments.
PROVIDE GRAPHICAL REPRESENTATION OF DOMAIN(S) (NF)
A platform should offer user-friendly graphical interfaces to facilitate
the construction of graphical representations of domain(s), taking into
consideration the environment, the agents and their interactions.
Moreover, it should provide help systems to guide and direct the
interaction between the platform and its users.
GUARANTEE INDEPENDENCE FROM THE SIMULATOR (F)
The platform should provide agent architectures independent of the
simulator design patterns. After the verification and validation
processes, the agents should be easily decoupled from the simulator and be
ready to be deployed in real environments.
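One possible way to achieve this decoupling, sketched below with hypothetical names, is to write the agent's domain-dependent behaviour against an abstract environment interface, so that simulator-side and real-world adapters can be swapped without touching the behaviour itself.

class Behaviour:
    """Domain-dependent behaviour, written against an abstract environment."""
    def act(self, environment):
        percept = environment.sense()
        return f"reacting to {percept}"

class SimulatedEnvironment:
    """Simulator-side adapter."""
    def sense(self):
        return "simulated percept"

class RealEnvironment:
    """Real-world adapter: the same Behaviour can be deployed unchanged."""
    def sense(self):
        return "sensor reading"

behaviour = Behaviour()
print(behaviour.act(SimulatedEnvironment()))
print(behaviour.act(RealEnvironment()))
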
USE ORGANISATIONAL ABSTRACTIONS (F)
Organisational abstractions are MAS components that explicitly structure
an organisation. The platform should provide services to represent an
organisation, bringing together organisational abstractions such as roles,
groups and multiple societies.
USE GROUPS (F)
Groups define different aggregates of
agents according to modularity, encapsulation and organisational
principles. The platform should support the creation and management
of agent collections, clustered around common relations that can be
defined by the users.
USE ROLES (F)
A role is what an agent is expected to
do in the organisation, in an autonomous way or in cooperation with
other agents. A platform should provide services to define roles in
terms of functions, activities or responsibilities. It should also
be possible to define their interaction protocols.
USE ORGANISATIONAL RULES (F)
Organisations should take into account a set of constraints to model their
global behaviours. Such constraints are called organisational rules. For
instance, an organisational rule in a football team specifies that the
players must respect the coach. The platform should provide services to
express specific relations and/or constraints between roles, between
protocols, and between roles and protocols.
USE MULTIPLE SOCIETIES (F)
In the real world we have the ability to create explicit organisational
structures, such as other agents, institutions or even new societies (for
example, artificial agent societies), and to observe and reason about
them. From the observer's point of view, an artificial society may be seen
as an aggregate of agents and organisations, which coexist and interact
with each other through social events. The concept of society in
agent-based simulation is rarely specified as an explicit structural and
relational entity; it is usually defined implicitly, in formal or informal
terms, through the inclusiveness of agents and organisational entities.
This tendency complicates the design of artificial agents that are able to
observe and reason about other societies, particularly if the environment
is composed of multiple interacting social spaces and levels of
abstraction. Although some approaches have used models that explicitly
define multiple societies, the concept of society in those models is still
reducible to a group, where agents are viewed simultaneously as actors and
as non-neutral observers in a given society. Therefore, the role of opaque
artificial observer is not explicitly assigned to agents, but remains
implicitly reserved for the system designer. The platform should provide
primitives to instantiate topologies of multiple societies and to
instantiate opaque social spaces (see requirement R26) that can be used as
neutral observation windows onto other societies and social spaces.
USE ONTOLOGIES (F)
Ontologies can be used for many purposes, such as interoperability, reuse,
information search, and knowledge sharing and acquisition. When using
ontologies to assist
simulation modelling, the platform should provide: (i) knowledge
bases containing the ontologies; (ii) a search engine to manipulate
the ontologies; (iii) a browser to query the search engine and
visualize the results; (iv) mechanisms to relate the ontologies with
implementation components.
ADOPT ONTOLOGICAL COMMITMENT (NF)
The use of ontologies may require the establishment of an ontological
commitment. An ontological commitment is an agreement to use the shared
vocabulary in a coherent and consistent way. This commitment allows access
to heterogeneous sources of information that would otherwise be
unintelligible. To comply with this requirement a platform should be
designed with the sharing of ontologies between the simulation components
in mind.
PROVIDE TRANSLATION MECHANISMS (F)
Translation
mechanisms are necessary when agents use different protocols and/or
languages, and need to communicate with each other. Translation may
be implemented through different ontologies related to common blocks
of information shared by agents. To provide this functionality the
platform should offer
services such as: (i) knowledge bases containing the ontologies;
(ii) tools that translate ontologies to operational data.
OBSERVE BEHAVIOURAL EVENTS (F)
Behavioural events are the agents' social events that can be observed by
an external observer (e.g., message passing, creation/destruction of
agents, database access). The platform should offer mechanisms to select
specific points (observation windows) at which to observe behavioural
events.
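A minimal sketch of an observation window, with hypothetical names: events are recorded only for the behavioural event types the user selected.

class ObservationWindow:
    """Hypothetical observation window: records only the selected
    behavioural event types."""
    def __init__(self, event_types):
        self.event_types = set(event_types)
        self.log = []

    def notify(self, event_type, data):
        if event_type in self.event_types:
            self.log.append((event_type, data))

window = ObservationWindow({"message_sent", "agent_created"})
window.notify("message_sent", {"from": "a1", "to": "a2"})
window.notify("db_access", {"table": "stocks"})    # ignored: not selected
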
OBSERVE COGNITIVE EVENTS (F)
Cognitive events concern events in the agents' internal architectures. The
observation of these events may seem counter-intuitive under the usual
agent paradigm, but it is indispensable for analysing structured
simulations. For example, one may want to analyse the effect of some
social event on the recipient agent's mental states. The platform should
offer mechanisms to access the agents' internal mechanisms, in order to
trigger specific observation methods. In order to provide structured
observation of cognitive events the platform should comply with another
requirement, called Cognitive Reflectivity (see R27).
DEFINE SCENARIOS (F)
The platform should provide services to
configure/assemble different scenarios considering characteristics
like type and quantity of agents.
CONTROL TRACKING (F)
The platform should provide services to follow the execution of
simulations, in both graphical and batch modes (storing data for later
analysis and manipulation). It should also provide services to initialise,
interrupt, temporarily suspend and terminate simulations. While a
simulation is suspended, the platform should permit observation of (see
R17 and R18) and intervention in (see R24 and R25) behavioural and
cognitive events.
PROVIDE DATA ANALYSIS (F)
The platform should define technical indicators and provide
decision-support tools (e.g., graphical and statistical packages) to work
in more depth with the generated data.
PROVIDE SENSITIVITY ANALYSIS (F)
The platform should offer services to implement controlled variations of
simulation parameters, and provide graphical and statistical packages to
assess the relationship between parameter variations and changes in the
output data.
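As a sketch of such a service (run_simulation is a hypothetical placeholder for a real model run), a controlled parameter sweep can be expressed as a cross product of parameter values whose outputs are collected for later statistical analysis.

from itertools import product

def run_simulation(num_agents, noise):
    # Placeholder for a real simulation run returning an output measure.
    return num_agents * (1.0 - noise)

# Controlled variation of two hypothetical parameters.
results = {}
for num_agents, noise in product([10, 100, 1000], [0.0, 0.1, 0.2]):
    results[(num_agents, noise)] = run_simulation(num_agents, noise)

for params, output in sorted(results.items()):
    print(params, "->", output)
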
PROVIDE GRAPHICAL INTERFACE (NF)
The platform should provide user-friendly graphical interfaces to assist
debugging processes and observation activities, helping the researcher to
understand the dynamic behaviour of a simulation.
INTERVENE IN BEHAVIOURAL EVENTS (F)
Behavioural events are the agents' social events that can be observed by
an external observer. The platform should offer mechanisms to select
specific points at which to intervene in behavioural events. The
intervention should permit the suppression, modification or creation of
behavioural events. For instance, one may want to modify the content or
the intended recipient of behavioural events (e.g. messages), or even to
prevent their arrival at the intended recipients. These experiments
provide the means to analyse functional effects in the simulation,
independently of the agents' internal representations that give rise to
those or other events.
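The sketch below, with hypothetical names, shows an intervention point whose user-defined rules may suppress, modify or redirect messages before delivery.

class InterventionPoint:
    """Hypothetical intervention point: user rules may suppress, modify
    or redirect messages before delivery."""
    def __init__(self, rules):
        self.rules = rules                     # list of callables

    def process(self, message):
        for rule in self.rules:
            message = rule(message)
            if message is None:                # suppressed
                return None
        return message

def redirect_to_auditor(message):
    if message["performative"] == "request":
        message = dict(message, to="auditor")  # change the intended recipient
    return message

point = InterventionPoint([redirect_to_auditor])
print(point.process({"performative": "request", "to": "a2", "content": "bid"}))
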
INTERVENE IN COGNITIVE EVENTS (F)
The platform should offer mechanisms to intervene in the agents' internal
mechanisms, for instance, program variables, the order of method
invocation and the agents' beliefs. This may alter the order of invocation
and the nature of behavioural events. In order to provide structured
interventions the platform should comply with another requirement, called
Cognitive Reflectivity (see R27).
MANAGE SOCIAL OPACITY (F)
The problem of social opacity concerns the organisational conditions under
which it is possible to control the transfer of cognitive information
between agents in different social spaces, for instance between different
multiple societies (see R14.D). Social opacity is therefore related to
organisational borders. The platform should provide the means to
instantiate different topologies of opaque social spaces in a dynamic way.
This is useful for simulating agents that have the ability to instantiate
and observe given models of other artificial agents and societies,
allowing the simulation of agents that reason autonomously about the
heterogeneity of different models of societies at various levels of
observation. However, while the observed agents and societies must be
visible to the observer agent, the observer agent and its societies must
be opaque to the observed agents. The platform should provide
organisational ingredients and services to instantiate multiple societies
(see R14.D) and opaque social spaces.
PROVIDE MODELS OF COGNITIVE REFLECTIVITY (F)
Cognitive reflectivity refers to the identification of the cognitive
structures and internal procedures of agents at run time. The agent
architecture templates (see R12) should provide adequate models of
cognitive reflectivity, allowing the user or other system agents to
observe and intervene in the simulated agents' cognitive events at run
time. Thus, different models of cognitive reflectivity should be provided
together with generic agent architectures (see R12).
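As an illustrative sketch (the agent and method names are hypothetical), a reflective interface can expose an agent's mental state for observation and provide hooks for cognitive intervention at run time.

class BDIAgent:
    """Hypothetical agent with a reflective interface over its mental state."""
    def __init__(self):
        self.beliefs = {"price": 10}
        self.intentions = ["buy"]

    # Reflective interface: expose cognitive structures at run time.
    def inspect(self):
        return {"beliefs": dict(self.beliefs),
                "intentions": list(self.intentions)}

    # Cognitive intervention hook: modify a belief at run time.
    def revise_belief(self, key, value):
        self.beliefs[key] = value

agent = BDIAgent()
print(agent.inspect())
agent.revise_belief("price", 12)
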