Reviewed by
Jaime Simão Sichman and Jomi Fred Hübner
Laboratório de Técnicas Inteligentes (LTI), Escola Politécnica, Universidade de São Paulo, Brazil.
This volume comprises a collection of papers which present key conceptual foundations in agent-based computing, and whose main goal is to show various examples of commercial and industrial applications that use this paradigm. The book arose from a series of industrially oriented seminars that were held between 1995 and 1997 under the auspices of UNICOM, a seminar organising company based in London. Authors at the seminars were invited to write up their presentations as chapters. These were reviewed and subsequently edited. In addition, there were one or two specially invited chapters. The aim was to get genuine industrial contributions rather than the academic perspectives on the field prevailing at the time. Several applications in quite different domains are presented.
The book contains sixteen chapters, which are divided into three parts: introductory papers (chapters 1 to 3), vision papers (chapters 4 to 7) and systems and their applications (chapters 8 to 16).
In the first chapter, Jennings and Wooldridge ("Applications of Intelligent Agents") introduce the reader to the main concepts of agent(-based) systems. They present some domain characteristics that indicate where agent-based solutions will be appropriate. They then describe examples of industrial, commercial, medical and entertainment applications where agent technology has been successfully applied. Some difficulties concerning the development of these systems are also discussed, covering the main software development phases: requirements specification, design, implementation and testing. The chapter concludes by summarising the contents of the remainder of the book.
Nwana and Ndumu ("A Brief Introduction to Software Agent Technology") present a typology of agents whose aim is to provide both a better understanding and a classification of existing agent-based software. Basically, they define seven different types of agents: collaborative, interface, mobile, information, reactive, hybrid and smart agents. They also argue that some of these types could be seen as characteristics in a multi-dimensional space, thereby enabling the development of heterogeneous agent systems. For each of these types, they present its main motivation and benefits, as well as some application examples.
Laufmann ("Agent Software for Near-Term Success in Distributed Applications") takes a complementary approach. He presents a simple conceptual model for a human problem-solving agent, consisting of three skill categories: communication/interaction, task knowledge/behaviour and general knowledge/behaviour. The first category involves the use of shared protocols and languages to achieve co-ordination when interacting with other agents. The second category can be characterised by the performance of one or more task-specific activities. The third category is defined by the ability to carry out task-independent behaviours, such as co-operative planning, reasoning about internal capabilities and negotiating for services. The author then presents three different perspectives on agents, namely agents as communicating software modules, agents as automated personal assistants, and agents as co-operative problem solvers, with the emphasis placed respectively on the first, second and third categories described above. A functional model of a coarse-grained agent, based on these categories, is then presented and discussed.
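To make the three skill categories concrete, a minimal sketch in Python (our own illustration, not code from the chapter; all class and method names are hypothetical) might read as follows:

```python
# A minimal, hypothetical rendering of Laufmann's three skill categories as a
# layered agent interface. The names are our own, not the chapter's.

class CoarseGrainedAgent:
    def communicate(self, message, protocol="shared-protocol"):
        # Category 1: interaction via shared protocols and languages.
        print(f"[{protocol}] {message}")

    def perform_task(self, task):
        # Category 2: one or more task-specific activities.
        return f"result of {task}"

    def deliberate(self, goal, peers):
        # Category 3: task-independent behaviour, e.g. co-operative planning
        # or negotiating with other agents for services.
        for peer in peers:
            peer.communicate(f"requesting help with {goal}")


if __name__ == "__main__":
    a, b = CoarseGrainedAgent(), CoarseGrainedAgent()
    a.deliberate("plan a delivery", [b])
```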
The second part of the book starts with a chapter written by Janca and Gilbert ("Practical Design of Intelligent Agent Systems"), in which the authors classify agents into two broad kinds: user interface agents and process agents. User interface agents act on behalf of users, translating their wishes into actions that are executed by other parts of the application or even in the network. Process agents, on the other hand, are responsible for translating the users' requests into operational actions, determining the best information sources, choosing between alternatives and negotiating with other agents. The state of the market for agent-enabled software and/or services is then described, and some of its limitations are presented, such as self-contained (niche) applications, proprietary formats and the absence of standards. An agent model is then presented which has re-usable parts, leading to a "plug-and-play" programming style.
Guilfoyle ("Vendors of Intelligent Agent Technologies: A Market Overview") presents a very interesting overview of the potential market areas where agent technology may have an impact, such as email/groupware applications, PDAs (Personal Digital Assistants), user interfaces, business desktop, workflow, network management, information retrieval, custom solutions and development tools. She argues that in the information technology industry, organisations form a value chain consisting of three basic roles: technology developers, application/product developers and end users. These roles are maintained in the agent technology market. Some companies may even play two distinct roles, for instance when developing a tool only in order to build a custom solution for a client. A selection of vendor activities involved in these three roles is presented in the appendix to the article.
Foss ("Brokering the Info-Underworld") discusses in detail the notion of information brokerage, and how intelligent agents can be seen as an intermediate service linking information consumers and providers. A basic model for information brokerage is presented, which makes use of the traditional roles of clients, suppliers and brokers. Among the services provided by brokers are information search and retrieval, service selection, access/hand-off to other brokers, market research, negotiation, and subscription to nets and webs. The author shows that agent technology is highly suitable for use in the development of the brokerage model.
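To picture the client/supplier/broker roles in the simplest possible terms, the following Python sketch (our own illustration; none of the names come from the chapter) shows a broker performing service selection and handing off when it knows no suitable supplier:

```python
# Illustrative sketch of the client/supplier/broker roles (not the chapter's
# model): the broker registers suppliers by topic and performs service
# selection, handing off to another broker when no supplier is known.

class Broker:
    def __init__(self):
        self.suppliers = {}                     # topic -> supplier names

    def register(self, supplier, topics):
        for topic in topics:
            self.suppliers.setdefault(topic, []).append(supplier)

    def select(self, topic):
        return self.suppliers.get(topic, ["hand-off to another broker"])


broker = Broker()
broker.register("market-data-ltd", ["share prices", "exchange rates"])
print(broker.select("share prices"))            # ['market-data-ltd']
print(broker.select("weather forecasts"))       # ['hand-off to another broker']
```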
Kearney ("Personal Agents: A Walk on the Client Side") discusses an important class of agents: personal digital assistants (PDAs). The role of agents in personal electronics is presented, as well as a sequence of innovations in personal agent-based products that are likely to reach the marketplace. The most basic agents are those that simply automate some actions, like filtering emails. These are already available. The most complex agents are called ubiquitous. These form a dynamic, adaptive, self-organising global information system. Some important issues in the construction of such systems are also discussed. These include infrastructure requirements, standardisation efforts and interoperability.
The third part of the book presents fielded applications in diverse domains: telecommunications, control, manufacturing, finance and business processes.
The problem that motivates Georgeff and Rao ("Rational Software Agents: From Theory to Practice") is the application of traditional software technologies and methodologies to the development of complex real-time systems. The authors claim that BDI ("Belief, Desire, Intention") architectures are necessary in order to develop such systems. Generally, the system needs beliefs to represent the state of the world, and these beliefs come from continuous, imprecise and incomplete perception. As the system's specific purposes may change over time, the system needs to know its own objectives and desires. When trying to achieve these objectives, the system must create a sequence of actions that cannot be changed as often as the environment changes. The system therefore needs to be committed to a certain sequence; it needs to intend this sequence. The authors present a BDI architecture that was used in the development of three different applications. The first is an air traffic management system, known as OASIS, which has been operational at Sydney airport since 1995. The second is a business process management system (SPOC) which helps customer service representatives to answer customer enquiries. A third system (SWARMM) models air combat.
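The argument is easiest to see as a control loop. The sketch below is a drastically simplified, generic BDI loop written only to illustrate the description above; it is not the authors' implementation, and all callback names are hypothetical.

```python
# A drastically simplified, generic BDI control loop (illustration only, not
# the authors' implementation; all callback names are hypothetical).

def bdi_loop(perceive, generate_desires, make_plan, still_viable, act, done):
    beliefs = {}
    intention = []                                   # current plan of actions
    while not done(beliefs):
        beliefs.update(perceive())                   # revise beliefs from perception
        desires = generate_desires(beliefs)          # candidate objectives
        if not intention or not still_viable(intention, beliefs):
            intention = make_plan(beliefs, desires)  # commit to a new plan
        if intention:
            act(intention.pop(0), beliefs)           # execute the next step
```

The commitment test captures exactly the point made above: the plan is reconsidered only when it is no longer viable, not on every environmental change.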
Burmeister et al. ("Agent-Oriented Techniques for Traffic and Manufacturing Applications: Progress Report") present two applications of the COSY agent architecture to traffic and manufacturing domains. Both domains seem ideally suited to Agent Oriented Techniques (AOT) because of the existence of geographically and functionally distributed data, sub-systems with a high degree of autonomy and heterogeneity, complex interaction patterns among the sub-systems and highly dynamic changes in the system. In the traffic domain, for example, the freight logistics area involves sub-tasks such as route planning, maintenance planning and crew scheduling. Normally each task could be solved using operational research techniques, but these methods are not effective in solving all tasks together when there are dynamic constraints. The authors model the system using two agent roles: company agents are responsible for the allocation of transport orders to their own trucks or to those of other companies, while truck agents do the route planning. The advantages of AOT are the ability to model co-operation among company agents and the autonomy of truck agents in dealing with their local dynamic environments. In the manufacturing domain, the main contribution of AOT is to generate flexible and robust behaviour in the system. For instance, the system can dynamically receive incoming orders or reorganise itself if the production process is rearranged.
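One plausible way to picture the company/truck split is a simple announce-bid-award round in the spirit of the contract net. The sketch below is our illustration only, not the COSY implementation, and all names and cost figures are invented.

```python
# Illustration only (not the COSY implementation): a company agent announces
# a transport order, trucks bid with a cost estimate, and the cheapest bidder
# is awarded the order and plans its own route.

class TruckAgent:
    def __init__(self, name, position):
        self.name, self.position = name, position

    def bid(self, order):
        return abs(self.position - order["pickup"])   # crude cost estimate

    def accept(self, order):
        print(f"{self.name} plans a route for order {order['id']}")


class CompanyAgent:
    def __init__(self, trucks):
        self.trucks = trucks

    def allocate(self, order):
        winner = min(self.trucks, key=lambda t: t.bid(order))
        winner.accept(order)


CompanyAgent([TruckAgent("truck-1", 0), TruckAgent("truck-2", 7)]).allocate(
    {"id": 42, "pickup": 5})
```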
The main purpose of Haugeneder and Steiner ("Co-operating Agents: Concepts and Applications") is to demonstrate how a group of autonomous agents can act together to solve a common problem. They define both an agent architecture and a multi-agent language called MAI2L; together these comprise the MECCA (Multi-Agent Environment for Constructing Co-operative Applications) environment. The agent architecture consists of three components: body, head and communicator. Basically, an agent's body represents its domain-dependent skills, its head represents its problem-solving functionality (such as goal activation, planning and scheduling), while its communicator enables the agent to exchange messages with other agents. The co-operation process is basically the distribution of goals, plans and tasks among agents, with a common semantics used for the message exchanges. Two applications are detailed. The first is a calendar management assistant where each human user of the system has a personal assistant (PA) agent that knows its user's goals, plans, capabilities, responsibilities, preferences and so on. Using this information, the PA can schedule appointments when dealing with other PAs. The second example consists of a personalised traffic guidance system with two functions: car park allocation and route guidance. The model is based on a market-like competitive scenario, in which cars compete for car parks and vice versa, each entity being ruled by its own private strategy. Three different types of agents are defined: driver's assistant, parking assignment and traffic guidance. The driver's assistant takes the initiative in negotiating with the parking assignment agent to obtain a parking space meeting its needs. The parking assignment agent negotiates based on its knowledge of individual resources. Finally, the traffic guidance agent has information about the route topology and provides services like route planning.
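The body/head/communicator split can be pictured as three small components wired into each agent. The Python sketch below is our own illustrative rendering, not MECCA or MAI2L code, and the calendar example is simplified from the chapter's description.

```python
# Illustrative rendering of the body/head/communicator architecture
# (our sketch, not MECCA/MAI2L code).

class Body:
    # domain-dependent skills, e.g. writing an entry into a calendar
    def execute(self, task):
        return f"executed: {task}"

class Head:
    # problem-solving functionality: goal activation, planning, scheduling
    def plan(self, goal):
        return [f"step 1 of {goal}", f"step 2 of {goal}"]

class Communicator:
    # message exchange with other agents
    def send(self, recipient, message):
        recipient.inbox.append(message)

class Agent:
    def __init__(self, name):
        self.name = name
        self.body, self.head, self.comm = Body(), Head(), Communicator()
        self.inbox = []

    def pursue(self, goal):
        for task in self.head.plan(goal):
            print(self.name, self.body.execute(task))


pa1, pa2 = Agent("PA-alice"), Agent("PA-bob")
pa1.comm.send(pa2, "propose meeting on Tuesday at 10:00")
pa2.pursue("schedule the proposed meeting")
```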
Weihmayer and Velthuijsen ("Intelligent Agents in Telecommunications") present a brief survey of DAI use in telecommunications, including some of the authors' specific experiences with these applications. They divide applications into the following groups: transmission and switching, network control and management, service management, network design and process support. Basically, the DAI techniques most used in this domain are blackboard systems and the contract net protocol. Despite the clear need and rationale for DAI in telecommunications, the authors identify the most common problems in applying DAI to this domain as the inadequate infrastructure of public networks and the lack of maturity in DAI. However, they suggest that there are good opportunities to apply DAI in network management and operations support, especially using multiple expert systems with some knowledge sharing.
Huhns and Singh ("Managing Heterogeneous Transaction Workflow with Co-operating Agents") are motivated by a practical problem: how to integrate huge and complex enterprise information systems, where information exists in a variety of forms, at different locations and on different computers. Typically, when modelling these systems one can hardly propose a single unified model because it is easier for each team to work autonomously using its own model. Another difficulty arises when joint enterprises are created, because the development of a single model can take a lot of time. On the other hand, maintaining different autonomous models may lead to semantic inconsistencies (for instance, one model may use the term "single" whereas another may use "unmarried" to denote the same situation). The authors propose to create a common ontology (expressed in Cyc) and to establish mappings between each model and the common ontology. In this way, for n different models one must supply only n mappings (one per model), rather than a separate translation for every pair of models. Each application calls an agent that translates its local language into the common ontology.
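The pay-off of the common ontology is that translation always goes through the shared vocabulary. A toy Python illustration of the "single"/"unmarried" example above (ours, not the authors' Cyc-based system; the model and term names are invented):

```python
# Toy illustration (not the authors' Cyc-based system): each local model maps
# its terms to and from a common ontology, so n models need only n mappings
# rather than a translator for every pair of models.

COMMON_TERM = "unmarried-person"                 # term in the shared ontology

to_common   = {"model_a": {"single": COMMON_TERM},
               "model_b": {"unmarried": COMMON_TERM}}
from_common = {"model_a": {COMMON_TERM: "single"},
               "model_b": {COMMON_TERM: "unmarried"}}

def translate(term, source, target):
    shared = to_common[source].get(term, term)       # local term -> ontology
    return from_common[target].get(shared, shared)   # ontology -> local term

print(translate("single", "model_a", "model_b"))     # -> unmarried
```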
Plu ("Software Technologies for Building Agent Based Systems in Telecommunication Networks") presents some agent-oriented techniques for integrating different services in telecommunications. In this domain, services are provided by autonomous companies, which use different technologies and which are usually embedded in distributed and heterogeneous computing environments. The author starts by analysing several dimensions along which one may consider agents as a new paradigm for software: autonomy, trustworthiness, distribution, communication and temporal continuity. The notions of agents and objects are compared, and the role of the former in open systems is detailed. In order to motivate the use of agents in the telecommunications domain, the issue of integration for heterogeneous services is then addressed. A brief overview of the RM-ODP (Open Distributed Processing Reference Model) is presented. This model, created by ISO (the International Organisation for Standardisation), provides a framework for describing distributed systems which consists of five viewpoints: enterprise, information, computational, engineering and technology. The author details how agents may be used within each of these viewpoints. From the enterprise viewpoint, agents may represent internal policies, such as incurring, fulfilling or waiving obligations to other agents or gaining/losing permission to perform certain actions; this representation is implemented by defining roles and contracts. From the information viewpoint, the main goal of agents is to ensure the interoperability of data from the various heterogeneous systems, using standards like ARPA's KQML/KIF (Knowledge Query and Manipulation Language/Knowledge Interchange Format) or ISO's GDMO (Guidelines for the Definition of Managed Objects). When discussing the computational and engineering viewpoints, the author shows how agents can be used in a distributed implementation, using either RPC/CORBA or mobile agent approaches.
Sycara, Zeng and Decker ("Intelligent Agents in Portfolio Management") present the first financial application, a continuous portfolio management multi-agent system. The tasks defined in this application are learning the user's profile, collecting information for the user and suggesting reallocations. In order to execute these tasks, a set of task and information agents is assigned to each user, and these agents collaborate to achieve the user's goals. As an example, a task agent may perform some technical or fundamental analysis of the current situation, while an information agent collects information from stock prices or the market. The user's agents dynamically form a top-down organisational structure, in which one portfolio manager agent controls several task agents side by side with some information agents. This kind of organisation has the following advantages: the information agent level isolates problems concerning the information sources (such as information quality, filtering out irrelevant information and integrating information from heterogeneous sources), while the task agents are responsible for activating the relevant information agents and co-ordinating them. Information gathering is one of the reasons for applying multi-agent systems techniques in this domain, since there is a huge volume of complex and dynamically changing information. As an example, a particular asset in the portfolio may no longer meet the user's needs as time passes. Information handled by the information agents is modelled as a series of cases. The case base contains cases of successful and unsuccessful information gathering episodes. A new case is created after each information gathering cycle, based on the user's goals, the characterisation of the situation, the information sources, the information retrieval plan skeleton adopted, the effectiveness of this information retrieval and its potential failures. The system uses case-based reasoning techniques to deal with this information.
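The case structure described above lends itself to a very small case-based retrieval sketch. The following is our own hypothetical illustration, not the authors' system; the case contents and feature names are invented.

```python
# Hypothetical sketch of case-based retrieval over past information-gathering
# episodes (not the authors' system). A case records the goal, situation
# features, sources, plan skeleton and outcome; retrieval returns the most
# similar successful case.

cases = [
    {"goal": "reassess asset", "features": {"volatile", "technology"},
     "sources": ["news feed", "stock quotes"], "plan": "skeleton-A", "success": True},
    {"goal": "reassess asset", "features": {"stable", "utility"},
     "sources": ["annual report"], "plan": "skeleton-B", "success": False},
]

def retrieve(goal, features):
    candidates = [(len(features & c["features"]), c)
                  for c in cases if c["goal"] == goal and c["success"]]
    return max(candidates, key=lambda pair: pair[0])[1] if candidates else None

best = retrieve("reassess asset", {"volatile", "banking"})
print(best["plan"] if best else "no similar case")    # -> skeleton-A
```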
Goldberg and Senator ("The FinCEN AI System: Finding Financial Crimes in a Large Database of Cash Transactions") present a second application in the financial sector. The authors detail the FAIS system, a data mining application that uses a large input space of cash transactions in order to infer potential money laundering activities. The authors discuss some current limitations of the system. As they intend to enlarge its range of applicability, thus requiring a more co-operative computing paradigm, they argue that agent-based technology may be useful. The possible co-operating agents would be responsible for classification, transformation, evaluation, linking and monitoring tasks.
Wenger and Probst ("Adding Value with Intelligent Agents in Financial Services") describe a third financial application. Their main goal is to show the utility of the agent paradigm in building corporate information systems in the financial domain. The authors present an interesting analogy between manufacturing and information processing: in the same way that goods are made from materials, information is made from data, and, by analogy with manufactured goods, information is the added value in financial systems. Examples of human co-operative behaviour in the financial domain are presented: sales of mortgages, corporate financial management and portfolio management. The authors then present their agent architecture, based on the blackboard model, in which agents store and collect information and wait until certain conditions hold in order to start processing. The FBSM-E (Fractal Business Service Modelling Environment) is presented, in which organisational processes, tasks and agents can be specified. The use of this environment is illustrated with real examples, showing some advantages over traditional approaches.
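The blackboard idea (agents that post information and fire only when their activation conditions hold) can be sketched in a few lines. The example below is our own illustration, with an invented mortgage rule and figures; it is not the FBSM-E environment.

```python
# A minimal blackboard sketch in the spirit of the description above (not the
# FBSM-E environment; the mortgage rule and all figures are invented).

def mortgage_rater(bb):
    # activation condition: both pieces of information are on the blackboard
    if "customer_income" in bb and "property_price" in bb and "offer" not in bb:
        bb["offer"] = min(bb["customer_income"] * 4, bb["property_price"] * 0.8)

def run(agents, bb):
    changed = True
    while changed:                 # keep firing agents until the board settles
        before = dict(bb)
        for agent in agents:
            agent(bb)
        changed = (bb != before)

blackboard = {"customer_income": 50_000, "property_price": 300_000}
run([mortgage_rater], blackboard)
print(blackboard["offer"])         # -> 200000
```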
In summary, the book is a very interesting source of information for those who wish to learn the basic concepts of agent technology. It is even more useful to those interested in real-world applications, particularly in the industrial, commercial and financial domains. These applications demonstrate that agent technology is neither just a new fashion in information technology, nor a technique whose interest is restricted to academia. The book helps to show that agent technology is rather a powerful new paradigm that enables better modelling, design and implementation of some classes of applications, especially those where issues such as interoperability, autonomy, integration of heterogeneous systems and openness play a crucial role.
© Copyright Journal of Artificial Societies and Social Simulation, 1999