
Intelligent Agent Technology Research and Development

Category: Technology


Description

Published by
World Scientific Publishing Co. Pte. Ltd.
P O Box 128, Farrer Road, Singapore 912805
USA office: Suite 1B, 1060 Main Street, River Edge, NJ 07661
UK office: 57 Shelton Street, Covent Garden, London WC2H 9HE


British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library.


INTELLIGENT AGENT TECHNOLOGY
Research and Development
Copyright © 2001 by World Scientific Publishing Co. Pte. Ltd.
All rights reserved. This book, or parts thereof, may not be reproduced in any form or by any means, electronic or mechanical, including photocopying, recording or any information storage and retrieval system now known or to be invented, without written permission from the Publisher.

For photocopying of material in this volume, please pay a copying fee through the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, USA. In this case permission to photocopy is not required from the publisher.

ISBN 981-02-4706-0

Printed in Singapore by World Scientific Printers (S) Pte Ltd

PREFACE

Intelligent Agent Technology is concerned with the development of autonomous computational or physical entities capable of perceiving, reasoning, adapting, learning, cooperating, and delegating in a dynamic environment. It is one of the most promising areas of research and development in information technology, computer science, and engineering today.
This book is an attempt to capture the essence of the current state of the art in intelligent agent technology and to identify the new challenges and opportunities that it is or will be facing. It contains the papers accepted for presentation at The Second Asia-Pacific Conference on Intelligent Agent Technology (IAT '01), held in Maebashi, Japan, October 23-26, 2001. The second meeting in the IAT conference series follows the success of IAT '99 held in Hong Kong in 1999. IAT '01 brought together researchers and practitioners to share their original research results and practical development experiences in intelligent agent technology. The most important feature of this conference was that it emphasized a multi-faceted, holistic view of this emerging technology, from its computational foundations, in terms of models, methodologies, and tools for developing a variety of embodiments of agent-based systems, to its practical impact on tackling real-world problems.
Much work has gone into the preparation of the IAT '01 technical program: Original, high-quality papers were solicited for various aspects of theories, applications, and case studies related to agent technologies. 134 full papers were submitted from 32 countries and regions of all continents. Each submitted paper was reviewed by at least three experts on the basis of technical soundness, relevance, originality, significance, and clarity. Based on the review reports, 25 regular papers (19%) and 40 short papers were accepted for presentation and publication.
This book is structured into six chapters according to the main conference sessions:
Chapter 1. Formal Agent Theories
Chapter 2. Computational Architecture and Infrastructure
Chapter 3. Learning and Adaptation
Chapter 4. Knowledge Discovery and Data Mining Agents
Chapter 5. Distributed Intelligence
Chapter 6. Agent-Based Applications

In addition to the above chapters, this book also includes the abstracts or papers of the IAT '01 keynote/invited talks by Benjamin W. Wah, Toyoaki Nishida, Zbigniew W. Ras, Andrzej Skowron, and Katia Sycara, which provide different perspectives on Intelligent Agent Technology.

Transcript

Proceedings of the 2nd Asia-Pacific Conference on IAT

INTELLIGENT AGENT TECHNOLOGY
Research and Development

Editors:
Ning Zhong (Maebashi Institute of Technology, Japan)
Jiming Liu (Hong Kong Baptist University)
Setsuo Ohsuga (Waseda University, Japan)
Jeffrey Bradshaw (University of West Florida, USA)

World Scientific: New Jersey, London, Singapore, Hong Kong
We wish to express our gratitude to all members of the Conference Committee and the International Advisory Board for their instrumental and unfailing support. IAT '01 has a very exciting program with a number of features, ranging from technical sessions and invited talks to agent demos and social programs. All of this work would not have been possible without the generous dedication of the Program Committee members and the external reviewers in reviewing the papers submitted to IAT '01, of our invited speakers, Benjamin W. Wah, Toyoaki Nishida, Zbigniew W. Ras, Andrzej Skowron, and Katia Sycara, in preparing and presenting their very stimulating talks, and of Jianchang Mao (Demos & Exhibits Chair) in soliciting demo proposals and setting up the program. We thank them for their strong support.

The conference Web support team at the Knowledge Information Systems Laboratory, Maebashi Institute of Technology did a terrific job of putting together and maintaining the home page for the conference, as well as building a software system, namely cyber-chair, which serves as an intelligent agent and interface among organizers, program committee members, and authors/attendees. We would like to thank Juzhen Dong, Muneaki Ohsima, and Norichika Hayazaki of the conference Web support team for their dedication and hard work.

IAT '01 could not have taken place without the great team effort of the Local Organizing Committee and the support of Maebashi Institute of Technology and the Maebashi Convention Bureau. Our special thanks go to Nobuo Otani (Local Organizing Chair), Sean M. Reedy, Masaaki Sakurai, Kanehisa Sekine, and Yoshitsugu Kakemoto (the Local Organizing Committee members) for their enormous efforts in planning and arranging the logistics of the conference, from registration/payment handling, venue preparation, and accommodation booking to banquet/social program organization.

We are very grateful to the IAT '01 sponsors: ACM SIGART, Maebashi Institute of Technology, Maebashi Convention Bureau, Maebashi City Government, Gunma Prefecture Government, The Japan Research Institute, Limited, United States Air Force Office of Scientific Research, Asian Office of Aerospace Research and Development, United States Army Research Office in Far East, and Web Intelligence Laboratory, Inc. for their generous support. We thank ACM SIGWEB, SIGCHI, the Japanese Society for Artificial Intelligence, JSAI SIGFAI, SIGKBS, and IEICE SIGKBSE for their cooperation with IAT '01. Last but not least, we thank Ms. Lakshmi Narayanan of World Scientific for her help in coordinating the publication of this book.
October 2001

Ning Zhong and Jiming Liu
Program Committee Chairs

Setsuo Ohsuga and Jeffrey Bradshaw
General Conference Chairs


CONFERENCE ORGANIZATION

General Chairs:
Setsuo Ohsuga (Waseda U., Japan)
Jeffrey Bradshaw (Inst. H&M Cognition, USA)

Program Chairs:
Ning Zhong (Maebashi Inst. Technology, Japan)
Jiming Liu (Hong Kong Baptist U.)

Demos and Exhibits Chair:
Jianchang Mao (Verity Inc., USA)

Local Organizing Chair:
Nobuo Otani (Maebashi Inst. Technology, Japan)

International Advisory Board:
Jeffrey M. Bradshaw (Inst. H&M Cognition, USA), Michele L. D. Gaudreault (US AOARD), Daniel T. Ling (Microsoft Corp., USA), Jiming Liu (Hong Kong Baptist U.), Jianchang Mao (Verity Inc., USA), Hiroshi Motoda (Osaka U., Japan), Setsuo Ohsuga (Waseda U., Japan), Patrick S. P. Wang (Northeastern U., USA), Yiyu Yao (U. Regina, Canada), Jie Yang (U. Science & Technology of China), Ning Zhong (Maebashi Inst. Technology, Japan), Jan Zytkow (U. North Carolina, USA)

Local Organizing Committee:
Masahiko Satori (Maebashi Inst. Technology, Japan), Tadaomi Miyazaki (Maebashi Inst. Technology, Japan), Sean M. Reedy (Maebashi Inst. Technology, Japan), Toshio Kawamura (Maebashi Convention Bureau), Masaaki Sakurai (Maebashi Convention Bureau), Kanehisa Sekine (Maebashi Convention Bureau), Midori Asaka (IPA, Japan), Yoshitsugu Kakemoto (JRI, Limited, Japan)

Program Committee:
K. Suzanne Barber (U. Texas-Austin, USA), Guy Boy (EURISCO, France), Cristiano Castelfranchi (CNR, Italy), Kerstin Dautenhahn (U. Hertfordshire, UK), Edmund H. Durfee (U. Michigan, USA), E. A. Edmonds (Loughborough U., UK), Tim Finin (UMBC, USA), Adam Maria Gadomski (ENEA, Italy), Scott Goodwin (U. Regina, Canada), Vladimir Gorodetsky (Russian Academy of Sci.), Mark Greaves (The Boeing Company, USA), Barbara Hayes-Roth (Stanford U., USA), Michael Huhns (U. South Carolina, USA), Keniti Ida (Maebashi Inst. Technology, Japan), Toru Ishida (Kyoto U., Japan), Lakhmi Jain (U. South Australia), Stefan J. Johansson (U. Karlskrona, Sweden), Qun Jin (U. Aizu, Japan), Juntae Kim (Dongguk U., Korea), David Kinny (U. Melbourne, Australia), Matthias Klusch (German Research Center for AI), Sarit Kraus (U. Maryland, USA), Danny B. Lange (General Magic, Inc., USA), Jimmy Ho Man Lee (Chinese U. Hong Kong), Jiming Liu (Hong Kong Baptist U.), Mike Luck (U. Southampton, UK), Helen Meng (Chinese U. Hong Kong), Joerg Mueller (Siemens, Germany), Hideyuki Nakashima (ETL, Japan), Wee-Keong Ng (Nanyang Tech. U., Singapore), Katsumi Nitta (Tokyo Inst. Technology, Japan), Yoshikuni Onozato (Gunma U., Japan), Tuncer Oren (Marmara Research Center, Turkey), Ichiro Osawa (ETL, Japan), Sun Park (Rutgers U., USA), Van Parunak (ERIM, USA), Zbigniew W. Ras (U. North Carolina, USA), Eugene Santos (U. Connecticut, USA), Zhongzhi Shi (Chinese Academy of Sciences), Carles Sierra (Scientific Research Council, Spain), Kwang M. Sim (Chinese U. Hong Kong), Andrzej Skowron (Warsaw U., Poland), Ron Sun (U. Missouri-Columbia, USA), Niranjan Suri (U. West Florida, USA), Takao Terano (U. Tsukuba, Japan), Demetri Terzopoulos (U. Toronto, Canada), Huaglory Tianfield (Glasgow Caledonian U., UK), David Wolpert (NASA Ames Research Center), Jinglong Wu (Kagawa U., Japan), Takahira Yamaguchi (Shizuoka U., Japan), Kazumasa Yokota (Okayama Prefectural U., Japan), Eric Yu (U. Toronto, Canada), P. C. Yuen (Hong Kong Baptist U.), Chengqi Zhang (Deakin U., Australia), Ning Zhong (Maebashi Inst. Technology, Japan)
TABLE OF CONTENTS

Preface v
Conference Organization vii

Invited Talks
Intelligent Agents for Market-Trend Prediction (Benjamin W. Wah) 2
Social Intelligence Design for Knowledge Creating Communities (Toyoaki Nishida) 3
Query Answering Based on Distributed Knowledge Mining (Zbigniew W. Ras) 17
Approximate Reasoning by Agents in Distributed Environments (Andrzej Skowron) 28
Multi-Agent Infrastructure for Agent Interoperation in Open Computational Environments (Katia Sycara) 40

Chapter 1. Formal Agent Theories
SPY: A Multi-Agent Model Yielding Semantic Properties (F. Buccafurri, D. Rosaci, G. M. L. Sarne, L. Palopoli) 44
ABT with Asynchronous Reordering (Marius-Calin Silaghi, Djamila Sam-Haroud, Boi Faltings) 54
Social Rationality and Cooperation (Guido Boella) 64
Belief Revision in Type Theory (Tijn Borghuis, Fairouz Kamareddine, Rob Nederpelt) 69
Heterogeneous BDI Agents II: Circumspect Agents (Maria Fasli) 74
A Preference-Driven Approach to Designing Agent Systems (Stefan J. Johansson, Johan Kummeneje) 80
Agent Consumer Reports: of the Agents, by the Agents, and for the Agents (Xiaocheng Luan, Yun Peng, Timothy Finin) 86
Logical Formalizations Built on Game-Theoretic Argument about Commitments (Lamber Royakkers, Vincent Buskens) 92
Asynchronous Consistency Maintenance (Marius-Calin Silaghi, Djamila Sam-Haroud, Boi Faltings) 98

Chapter 2. Computational Architecture and Infrastructure
Reasoning about Mutual-Belief among Multiple Cooperative Agents (Wenpin Jiao) 104
Portable Resource Control for Mobile Multi-Agent Systems in JAVA (Walter Binder, Jarle G. Hulaas, Alex Villazon, Rory G. Vidal) 114
An Agent-Based Mobile E-Commerce Service Platform for Forestry and Agriculture (Matthias Klusch, Andreas Gerber) 119
An Itinerary Scripting Language for Mobile Agents in Enterprise Applications (Seng Wai Loke, Arkady Zaslavsky, Brian Yap, Joseph Fonseka) 124
Intelligent Agents for Mobile Commerce Services (Mihhail Matskin) 129
A New Concept of Agent Architecture in Agentspace (T. Nowak, S. Ambroszkiewicz) 134
21st Century Systems, Inc.'s Agent Enabled Decision Guide Environment (AEDGE™) (Plamen V. Petrov, Alexander D. Stoyen, Jeffrey D. Hicks, Gregory J. Myers) 139
Proactiveness and Effective Observer Mechanisms in Intelligent Agents (Jon Plumley, Kuo-Ming Chao, Rachid Anane, Nick Godwin) 144

Chapter 3. Learning and Adaptation
Parrondo Strategies for Artificial Traders (Magnus Boman, Stefan J. Johansson, David Lyback) 150
BDI Multi-Agent Learning Based on First-Order Induction of Logical Decision Trees (Alejandro Guerra Hernandez, Amal El-Fallah Seghrouchni, Henry Soldano) 160
Evolutionary Behaviors of Competitive Agents in Dilemma Situation (Tin Tin Naing, Lifeng He, Atsuko Mutoh, Tsuyoshi Nakamura, Hidenori Itoh) 170
A Strategy for Creating Initial Data on Active Learning of Multi-Layer Perceptron (Kazunori Iwata, Naohiro Ishii) 180
Equilibrium Selection in a Sequential Multi-Issue Bargaining Model with Evolutionary Agents (Norberto Eiji Nawa, Katsunori Shimohara, Osamu Katai) 190
Affect and Agent Control: Experiments with Simple Affective States (Matthias Scheutz, Aaron Sloman) 200
Meta-Learning Processes in Multi-Agent Systems (Ron Sun) 210
Scalability and the Evolution of Normative Behavior (Jorg Wellner, Sigmar Papendick, Werner Dilger) 220
Thinking-Learning by Argument (Aladdin Ayesh) 230
Evolution of a Foraging Model with Many Individuals by Kin-selection (Kazue Kinoshita, Atsuko Mutoh, Tsuyoshi Nakamura, Hidenori Itoh) 235
The Use of Emergent Behaviour in a Multi-Agent System to Drive Self-Adaptation at the Interface (Peter Marshall, Sue Greenwood) 240
A Biologically Inspired Four Legged Robot That Exhibits Some Natural Walking Behaviours (S. Peng, G. R. Cole, C. P. Lam) 245

Chapter 4. Knowledge Discovery and Data Mining Agents
CM-RELVIEW: A Tool for Causal Reasoning in Multi-Agent Environments (Brahim Chaib-Draa) 252
User's Ontology-Based Autonomous Interface Agents (Tarek Helmy, Satoshi Amamiya, Makoto Amamiya) 264
Integration and Reuse of Heterogeneous XML DTDs for Information Agents (Euna Jeong, Chun-Nan Hsu) 274
Virtual Museum's Assistant (Osvaldo Cairo, Ana Aldeco, M. E. Algorri) 284
Index Based Document Classification with CC4 Neural Networks (Enhong Chen, Zhengya Zhang, Xufa Wang, Jie Yang) 289
Price Watcher Agent for E-Commerce (Simon Fong, Aixin Sun, Kin Keong Wong) 294
Automated Information Extraction from Web Pages Using an Interactive Learning Agent (Jugal K. Kalita, Paritosh Rohilla) 300
An Intelligent Agent with Structured Pattern Matching for a Virtual Representative (Seung-ik Lee, Sung-Bae Cho) 305
A Calendar Management Agent with Fuzzy Logic (Wayne Wobcke) 310
XML Based Multi-Agent Collaboration for Active Digital Libraries (Yanyan Yang, Omer F. Rana, David W. Walker, Roy Williams, Giovanni Aloisio) 315

Chapter 5. Distributed Intelligence
An Intelligent Channel Allocation Scheme for Mobile Networks: An Application of Agent Technology (Eliane L. Bodanese, Laurie G. Cuthbert) 322
An Atomic Approach to Agent-Based Imagery and Geospatial Problem Solving (James J. Nolan, Robert Simon, Arun K. Sood) 334
Model-Based Creation of Agents and Distribution of Problem Solving (Katsuaki Tanaka, Setsuo Ohsuga) 344
A Distributed Algorithm for Coalition Formation Among E-Commerce Agents (Guillaume Vauvert, Amal El Fallah-Seghrouchni) 355
Optimal Reward Functions in Distributed Reinforcement Learning (David H. Wolpert, Kagan Tumer) 365
Polygonal Approximation of Planar Digital Curves Using Ant System (Peng-Yeng Yin) 375
A Biological View on Information Ecosystems (Bengt Carlsson, Paul Davidsson) 385
The CoDAC Collaboration Framework (K. W. Ng, T. O. Lee) 390
A Multi-Agent Approach to Modelling Interaction in Human Mathematical Reasoning (Alison Pease, Simon Colton, Alan Smaill, John Lee) 395
Secure Asynchronous Search (Marius-Calin Silaghi, Djamila Sam-Haroud, Boi Faltings) 400
Foundations of Market-Driven Agents: An Adaptation of Zeuthen's Bargaining Model (Kwang Mong Sim, Chung Yu Choi) 405

Chapter 6. Agent-Based Applications
Kavanah: An Active User Interface Information Retrieval Application (Eugene Santos Jr., Hien Nguyen, Scott M. Brown) 412
iJADE WeatherMAN - A Multi-Agent Fuzzy-Neuro Network Based Weather Prediction System (Raymond Lee, James Liu, Jane You) 424
Acquaintance Models in Coalition Planning for Humanitarian Relief Operation (Michal Pechoucek, Vladimir Marik, Jaroslav Barta) 434
Agent Negotiation in a Virtual Marketplace (Walid S. Saba, Pratap R. Sathi) 444
Modeling User Preferences to Facilitate More Automated and Accurate Transaction Brokering within Heterogeneous Multi-Agent Electronic Markets (G. Tewari, P. Maes, A. Berkovich, V. Gabovich) 454
Attitude Based Agents in E-Commerce Applications (S. Au, N. Parameswaran) 464
Organizing Internet Agents According to a Hierarchy of Information Domains (Sylvie Cazalens, Philippe Lamarre) 469
Introducing User Preference Modeling for Meeting Scheduling (Hon Wai Chun, Rebecca Y. M. Wong) 474
Executive Attentional Control in Autonomous Robotic Agents (Jason Garforth, Anthony Meehan, Sue McHale) 479
Implementation and Analysis of Mobile Agents in a Simulation Environment for Fieldbus Systems (R. Hunstock, U. Ruckert, T. Hanna) 484
Evaluating Believability in an Interactive Narrative (Jarmo Laaksolahti, Per Persson, Carolina Palo) 490
iJADE Stock Predictor - An Intelligent Multi-Agent Based Time Series Stock Prediction System (Raymond S. T. Lee, James N. K. Liu) 495
Approximate Sensor Fusion in a Navigation Agent (J. F. Peters, S. Ramanna, M. Borkowski, A. Skowron) 500
Simulating Day-Ahead Trading in Electricity Markets with Agents (Max Scheidt, Hans-Jurgen Sebastian) 505
Using Mobile Agents to Update and Maintain Course Materials on Students' Computers in Internet-Based Distance Education (Hongxue Wang, Pete Holt) 510

Author Index 515


INVITED TALKS

INTELLIGENT AGENTS FOR MARKET-TREND PREDICTION

BENJAMIN W. WAH
Department of Electrical and Computer Engineering and the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign, Urbana, IL 61801, USA
http://manip.crhc.uiuc.edu
(2001 IEEE Computer Society President)

In this presentation we discuss the role of intelligent agents in market-trend predictions. Market-trend data, such as stock-market data, are characterized by non-stationary time series that may depend on non-numeric and non-quantifiable measures. The prediction of market trends, therefore, should consist of prediction of non-stationary time series and the abstraction and integration of non-numeric information in prediction. In this talk, we survey various prediction techniques for and mining of market-trend data. We propose to use intelligent agents in the abstraction of non-numeric information, the decomposition of non-stationary time series into multiple stationary time series, and the prediction of trends using artificial neural networks. Finally, we illustrate our techniques in predicting stock-market data.
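The decompose-then-predict pipeline sketched in the abstract can be illustrated in a few lines. The toy example below is not from the talk: it uses first differencing as the stationarizing decomposition and a least-squares autoregressive model as a simple stand-in for the artificial neural network predictor; all data and function names are invented for illustration.

```python
import numpy as np

def fit_ar(series, order=3):
    """Fit an AR(order) model to a (roughly) stationary series by least squares."""
    X = np.column_stack([series[i:len(series) - order + i] for i in range(order)])
    y = series[order:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def predict_next_price(prices, order=3):
    """Difference the non-stationary series, forecast the next increment, undo differencing."""
    increments = np.diff(prices)           # first differences are often near-stationary
    coef = fit_ar(increments, order)
    next_inc = increments[-order:] @ coef  # one-step-ahead forecast of the increment
    return prices[-1] + next_inc

prices = np.array([100.0, 101.2, 100.8, 102.0, 103.1, 102.7, 104.0, 105.2, 104.9, 106.1])
print(round(float(predict_next_price(prices)), 2))
```

In Wah's framework the differencing step would be replaced by a decomposition into multiple stationary series and the AR model by a trained neural network, with intelligent agents additionally abstracting non-numeric information.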
SOCIAL INTELLIGENCE DESIGN FOR KNOWLEDGE CREATING COMMUNITIES

TOYOAKI NISHIDA
Department of Information and Communication Engineering, Graduate School of Information Science and Technology, The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo 113-8656, Japan
nishida@kc.t.u-tokyo.ac.jp

Communities play an important role in knowledge creation by providing people with opportunities to continually learn from others, find partners to collaborate with, and demonstrate the significance of their disciplines. In education or business, it is relatively easy to find typical examples of knowledge creating communities for sharing and exchanging specialized knowledge among knowledge workers. In other domains such as NPOs or local communities, people are naturally practicing mutual learning, and invaluable knowledge is built as a result, even if knowledge creation is not deemed a primary goal of the community. In this paper, I present an interdisciplinary approach to augmenting the community knowledge creating process by integrating insights from social psychology, cognitive psychology, and advanced information technology. I emphasize the role of conversations and stories as a means of establishing a common background in a community. I describe several systems that primarily use the conversational modality to mediate community communication. Among others, EgoChat allows the user to make conversation with virtualized egos responding on behalf of other users. It allows the user to take the initiative by interrupting the conversation and changing its flow. Voice Cafe allows artifacts to make conversation with people or other artifacts. It stimulates creative thinking by bringing about utterances from the physical object's point of view, which might be strikingly different from humans' view. These engineering approaches should be tightly coupled with sociological and cognitive approaches, to predict and assess the effects of community communication mediation systems on the human society. I discuss issues on designing a constructive framework of interaction for achieving practical goals without being caught by known pathological pitfalls of group interactions.

1 Introduction

The agent technology plays a diverse role in the networked society. On the one hand, agents may be intelligent surrogates that work on behalf of the user. This type of agent includes intelligent brokers that seek the best match between service providers and consumers, intelligent traders that buy and sell goods on behalf of the user, intelligent decision makers that negotiate contracts for the user, and so on. Alternatively, agents may be embodied conversational interfaces that entertain the user. This type of agent is becoming popular in agent portals on the Internet, or as artificial pets in the entertainment and amusement domain.

In this paper, I discuss issues in applying the agent technology to the development of a social information service for mediating communication among people. From this perspective, the central issue is designing and understanding a world where people and agents cohabit, rather than inventing a system of artifacts. We will not be able to innovate a totally new kingdom of artificial agents apart from the human society; we have to carefully embed the agent system in the existing human society. This means that we need to understand more about humans and the human society to better design an embedded system. We need to pay much attention to the effects the technology brings about on the human society. We need to make every effort to have the proposal accepted by the human community. In contrast, we need not insist on the novelty of the technology or a pedagogical issue of whether the artifact can be called an agent. Let us call this field social intelligence design in general.

Research on social intelligence design involves such issues as how new technologies induce the emergence of a new language and lifestyle. For example, interactive multimedia websites are a new medium and maybe even a new language, with interesting new conventions, and increasing adaptation to the support of communities.
Japanese teenagers have developed a new language for use originally with beepers and now with mobile phones. These are both new mainstream real-world developments that should be studied further, and could probably give some valuable insights. The theme of "social intelligence" is really an angle on the support of groups in pursuit of their goals, whether that is medical knowledge, stock trading, or teenage gossip.

I focus on community support systems to shed light on key aspects of social intelligence design. The goal of a community support system is to facilitate the formation and maintenance of human and knowledge networks to support activities in a community. Examples of community support systems include socially intelligent agents that mediate people in getting to know and communicate with each other, a collaborative virtual environment for large-scale discussions, personalized agents for helping cross-cultural communication, and interactive community media for augmenting community awareness and memory, to name just a few.

I emphasize the role of stories and conversations as a means of establishing a common background in a community. Stories allow us to put pieces of information into an intelligible structure. Conversations give us an opportunity to examine information from various angles and search for a good story structure. In some community support systems, story-telling agents play a central role. It should be noted that their significance depends more on the contents of the stories than on the conversation mechanism. I also emphasize the empirical aspects of social intelligence design. Engineering approaches should be tightly coupled with sociological and cognitive approaches, to predict and assess the effects of community communication mediation systems on the human society. I show how psychological approaches are applied to the design and evaluation of community support systems.

2 Communities and Social Intelligence

Social intelligence design is distinguished from most other conventional engineering disciplines in that we have to be strongly aware of the human society as a target. For this reason, I first take a look at the nature of my target, i.e., communities, in this section.

A community is a group of people loosely coupled by a shared interest or environment. More formal discussion can be found in the sociology literature. For example, Smith defines a community as follows: "Generically, a community can be understood as a set of on-going social relations bound together by a common interest or shared circumstance. As a result, communities may be intentional or unintentional; a community's participants may purposely join together or be thrust into membership by circumstance. Intentional communities are of particular interest because they raise more questions about the reasons and causes for their emergence than do unintentional ones" [21].

Traditional communities were local communities, characterized by locality and a shared living environment. The advent of a global information network has not only considerably relaxed the spatial constraints under which communities are built, but also provided new opportunities for existing communities.
Typical networked communities include:
• communities of interest, in which people are tied by a shared interest;
• communities of practice, in which a group of people work together and share a common work practice; and
• enhanced local communities or smart communities, which result from enhancing communication and information sharing facilities in existing local communities.

Schlichter contrasts communities with groups and teams [23]. He characterizes communities as sets of people who share something but who do not necessarily know each other or interact on a personal basis. In contrast, groups are sets of people who know each other but who do not necessarily cooperate, while teams are sets of people who are cooperating to achieve a common goal. In educational environments, a lecture class may be regarded as a community, a discussion group as a group, and a learning group as a team.

Recently, communities have received increasing attention in the context of knowledge management and distance learning. A community provides its members with opportunities to continually learn from others, find partners to collaborate with, and demonstrate the significance of their disciplines. In education or business, it is relatively easy to find examples of communities oriented towards knowledge creation by sharing and exchanging specialized knowledge among knowledge workers. In other domains such as NPOs or local communities, people are naturally practicing mutual learning, and invaluable knowledge is built as a result, even if knowledge creation is not deemed a primary goal of the community.

We consider that community knowledge creation is essentially a co-evolution of human and knowledge networks [16, 17]. By human network, I mean a collection of people connected by various social relations, such as acquaintance or partnership. A human network is considered to embody tacit knowledge that may be shared in a community but may not be explicitly spoken. In contrast, a knowledge network is a collection of documents or files connected explicitly by hyperlinks or implicitly by references. A knowledge network explicitly describes the shared knowledge and interests in a community. A knowledge network enables people with a common interest to get to know each other, resulting in an extension of the human network. A human network, in turn, helps new ideas grow through intimate discussions. It facilitates the extension of the knowledge network through the publication of new knowledge. Thus, a synergetic cycle of human and knowledge networks will lead to a successful community.

A more elaborate characterization of human and knowledge networks is proposed by Contractor [3]. He pointed out that observed knowledge networks are different from the cognitive networks that each individual possesses as a cognitive perception of the network. He proposes to distinguish between: (i) knowledge networks that represent the extent to which the same or disparate knowledge is distributed among the various members of the group, and (ii) cognitive knowledge networks that represent individuals' cognitive perceptions of "who knows what" within the group.
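Contractor's distinction can be made concrete with a toy data structure. The sketch below is illustrative only (the names and data are invented, not from his paper): it contrasts the actual distribution of expertise in a group with one member's cognitive perception of "who knows what", and computes what that member's perception misses.

```python
# Actual "who knows what" in a small community (a knowledge network).
knowledge = {
    "ann": {"data mining", "agents"},
    "bob": {"agents", "networks"},
    "kim": {"networks"},
}

# Each member's perception of who knows what (cognitive knowledge networks).
perceived = {
    "ann": {"ann": {"data mining", "agents"}, "bob": {"agents"}, "kim": set()},
    "bob": {"ann": {"data mining"}, "bob": {"agents", "networks"}, "kim": {"networks"}},
}

def perception_gap(member):
    """Topics present in the community that are invisible in this member's cognitive network."""
    known = set().union(*knowledge.values())
    seen = set().union(*perceived[member].values())
    return known - seen

print(perception_gap("ann"))   # {'networks'}: ann is unaware anyone knows networks
```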
In order to understand the dynamics of community knowledge, Contractor proposes to observe five types of network data: (i) a communication network of actors based on existing tasks and project links between them, (ii) a knowledge network based on actors providing an inventory of their skills and expertise, (iii) a knowledge network of actors based on links between their web sites, (iv) a knowledge network of actors based on common links from their web sites, and (v) a knowledge network based on similarity in content between different actors' web sites.

It should be noted that not all kinds of interaction in a community bring about fruitful results. In social psychology, various pathological pitfalls are known about group interactions. A notorious example is flaming, an endless slander battle on the net, which is rare in face-to-face communication. Flaming blocks discussions among community members, possibly resulting in destructive damage to a community. False consensus is another undesirable phenomenon. It results from "a spiral of silence", or a "bandwagon effect", for instance, in which false cognition is socially amplified.

3 Community Support Systems

The role of community support systems is to support community activities by providing a communication channel for community members. Community support systems are built on top of this communication channel and are expected to help community members (i) exchange awareness with other members, (ii) explore human and knowledge networks, (iii) build community knowledge, (iv) organize public events, (v) form a group/team for collaborative work, (vi) negotiate with others, and (vii) discuss public issues and make decisions about the community. Community support systems provide rather long-range, bottom-up communicative functions in the background of daily life. This feature is contrastive with groupware, which emphasizes more task-driven, short-range collaboration, although awareness is equally emphasized. In the rest of this section, I will discuss the first three functions.

3.1 Helping to Exchange Awareness with Other Members

Most networked communities are based on intentional participation, based on a common interest for instance. Compared with mission-oriented groups where participants are incorporated in a certain work structure, the degree of necessity to exchange awareness is relatively low in networked communities. Participants tend to become silent unless a mechanism is provided for lowering the cost of exchanging awareness with other members. In order to support awareness, Schlichter uses spatial metaphors such as rooms or hallways in "The Lecture 2000", a computational environment for supporting a learning community.

FaintPop supports a light-weight, acknowledge-only mode of communication [19]. The major design goal of FaintPop is to communicate the sense of connectedness, not to perform informative functions. FaintPop is a communication device similar to a photo frame. Small photos or icons of the user's colleagues are displayed in the frame, through which the user can communicate with other users using simple touch actions. Three types of touching are permitted: a tap to communicate a neutral feeling, a pet a positive feeling, and a hit a negative feeling. The user can communicate her/his feeling towards her/his colleagues by using these three types of touching, and other community members can observe it.
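FaintPop's three touch types amount to a tiny, observable communication protocol. The following sketch is purely illustrative (the class and method names are invented, not FaintPop's actual implementation): touches map to feelings, and every frame in the community observes each communicated feeling.

```python
from dataclasses import dataclass, field

TOUCH_FEELING = {"tap": "neutral", "pet": "positive", "hit": "negative"}

@dataclass
class FaintPopFrame:
    user: str
    observers: list = field(default_factory=list)   # other members' frames

    def touch(self, colleague, kind):
        """Communicate a feeling toward a colleague; all observers can see it."""
        feeling = TOUCH_FEELING[kind]
        for frame in self.observers:
            frame.notice(self.user, colleague, feeling)

    def notice(self, sender, target, feeling):
        print(f"{self.user} sees: {sender} feels {feeling} toward {target}")

a, b = FaintPopFrame("ann"), FaintPopFrame("bob")
a.observers.append(b)
a.touch("kim", "pet")   # bob sees: ann feels positive toward kim
```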
Sumi proposes to use an interest-based information distribution system, which pushes information to interested users rather than passively waiting for requests from users [24]. Voice Cafe [8] allows artifacts to make conversation with people or other artifacts (Figure 1). It stimulates creative thinking by bringing about utterances from the physical object's point of view, which might be strikingly different from humans' view. Each Voice Cafe artifact consists of a physical object and a conversational agent. It can communicate with community members by exchanging gossip, or small talk about members' conditions, schedules, thoughts and opinions, and so on.

Figure 1. The conceptual framework of Voice Cafe.

By listening to the gossip, members can gain awareness of other people at the small-talk level.

3.2 Helping to Explore Human and Knowledge Networks

This facility helps the user find human and knowledge resources in a community. Social matchmaking is frequently used to locate people on the Internet who share some similar interests and to enable the automatic formation of interest groups. Social matchmaking calculates the distance between users by referring to their user profiles. A major motivation behind social matchmaking is to address situations where finding an expert is difficult and time consuming; people are often working on similar projects without realizing it, or people feel socially isolated because nobody around them seems to share the same interest.

Yenta [4] is a multi-agent matchmaking system that can automatically determine user interests and operate in a completely decentralized, peer-to-peer fashion. Yenta is a persistent agent that uses referrals to find other agents, build clusters of like-minded agents, and introduce users to each other. Special care is paid to protecting user privacy. Silhouettell [20] combines awareness support and social matchmaking to bridge between informal and formal meetings. It projects the location of participants on the screen as shadows, and facilitates conversation by presenting Web pages that are inferred to be common to the participants. Referral Web [11] integrates recommendations and search through the concept of a social network. It helps the user discover her/his relationship to the best human experts for a given topic. It gathers all information from public sources, which removes the cost of information posting and registration. It can also explain to the user why each link in the referral chain appeared.
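At its core, social matchmaking of the kind Yenta performs reduces to computing distances between user profiles. The sketch below is a minimal illustration under assumed simplifications (bag-of-interests profiles and Jaccard similarity; Yenta's actual decentralized clustering and referral protocol are far more elaborate, and all names here are invented).

```python
def jaccard(a, b):
    """Similarity of two interest sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

profiles = {
    "ann": {"agents", "data mining", "go"},
    "bob": {"agents", "data mining", "networks"},
    "kim": {"gardening", "go"},
}

def best_match(user, threshold=0.2):
    """Introduce the user to the most similar other member, if similar enough."""
    scores = {other: jaccard(profiles[user], p)
              for other, p in profiles.items() if other != user}
    match = max(scores, key=scores.get)
    return match if scores[match] >= threshold else None

print(best_match("ann"))   # 'bob': shared interest in agents and data mining
```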
It accumulates information about a person and allows her/his colleague to access the information by following an ordinary spoken-language conversation mode, not by 10 going up and down a complex directory in search for possibly existent information, or by deliberately issuing commands for information retrieval. In addition, virtualized ego embodies tacit and non-verbal knowledge about the person so that more subtle messages such as attitude can be communicated. As is also the case with Voice Cafe, we take a conversation-centered approach in designing intelligent systems and capturing intelligence itself. Conversation plays varieties of roles in human societies. It not only allows people to exchange information, but it also helps them create new ideas or manage human relations. In our approach, more emphasis is placed on creating, exchanging, reorganizing, and utilizing conversational contents in knowledge creation, rather than implementing intelligent agents or yet-another human interface. 3.3 Helping to Build Community Knowledge The third function of a community support system is for helping community members build a shared knowledge. Nonaka and Takeuchi pointed out that the community knowledge is built by a spiral of interactions between explicit and tacit knowledge [18]. They suggest that the process of knowledge creation is more important than the body of knowledge, for people often find more value in communities that evolve as a result of learning. This implies that more emphasis should be placed on supporting interactions or the emergent aspect of community knowledge [13] in community support systems. The Public Opinion Channel (POC) [15, 16, 7] is a community-wide interactive broadcasting system (Figure 3). A POC continuously collects messages from people in a community and feeds edited messages back to them. POC is not intended to be a system that broadcasts public opinions per se. Instead, it is intended to broadcast miscellaneous information that can serve as a basis of public opinion formation. A POC repeats a cycle consisting of call-for-opinion followed by one or more repetition of responding by the community members and summarization by the POC system. In the initial call-for-opinion message, the POC system specifies a focus of discussion. Alternatively, people may also initiate discussion by submitting a topic. Then, interested community members may respond with messages. In principle, messages are not limited to pure opinions. Instead, they may include questions, stories, findings, jokes, proposals, and all other message types. The POC system may combine these messages, generate a story, and broadcast it to the community. The POC system may issue a progress report based on responses from community members. The process proceeds with altering subjects. A POC brings about ecology of ever evolving stories. People can access to the story pool at anytime by an on-demand-type access means. Another thing I would like to emphasize here is that the POC broadcasting can be embedded in the ambient environment, just like a radio broadcasting, so that people need not pay much attention at all times. 11 Community Broadcasting Servers (POC Servers) POC Clients We have to discuss of the ethics of cyborg .^j Advertising/or Opinions ' Community B Broadcasting Opinions . cyborg nil be Community C Figure 3. The Conceptual framework of Public Opinion Channel (POC). 
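The POC cycle just described (call-for-opinion, member responses, summarization, broadcast) can be read as a small event loop. The sketch below is an illustrative rendering of that cycle, not the actual POC implementation; every name, the summarization stand-in, and the random silence model are assumptions.

```python
import random

def summarize(messages):
    # Stand-in for the POC's story generation from collected messages.
    return " / ".join(messages[:3]) + (" ..." if len(messages) > 3 else "")

def poc_round(topic, community, rounds=2):
    """One call-for-opinion followed by repeated respond/summarize cycles."""
    print(f"POC broadcast: call for opinions on '{topic}'")
    story = ""
    for _ in range(rounds):
        responses = [member(topic, story) for member in community]
        responses = [r for r in responses if r]   # members may stay silent
        story = summarize(responses)
        print(f"POC broadcast: {story}")          # progress report to the community
    return story

def member_factory(name, opinion):
    def respond(topic, story_so_far):
        return f"{name}: {opinion}" if random.random() > 0.3 else None
    return respond

community = [member_factory("ann", "we need clearer rules"),
             member_factory("bob", "minority views get lost"),
             member_factory("kim", "keep it anonymous")]
random.seed(1)
poc_round("community website redesign", community)
```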
Compared with existing mass media, a POC has various advantages. Computational support and network connectivity enable a large amount of responses to be analyzed on the fly, allowing real-time interactive stories to be generated. In particular, a combination of statistical computation and semantic processing permits minority opinions to be reflected in the structure of public opinion. We believe that a POC also contributes to community knowledge building and public discussion.

4 Social Intelligence Design

Social Intelligence Design is a new discipline aimed at understanding and supporting social intelligence, i.e., intelligence collectively exhibited by (natural/artificial) agents to dynamically organize members' activities into a coherent one by exploiting or innovating the social structure. Social intelligence models intelligence as a phenomenon emerging from the way agents, either natural or artificial, interact with each other. Research into community support systems is concerned with the engineering aspects of Social Intelligence Design. Meanwhile, investigation into the sociological and cognitive aspects is equally or sometimes more important. The engineering approach should be tightly coupled with sociology, psychology, and other disciplines closely related to the study of humans and human society. Thus, Social Intelligence Design involves not only designing artifacts but also designing and understanding the social structure in which the artifacts are embedded. (These arguments are inspired by discussion at the JSAI-Synsophy International Workshop on Social Intelligence Design, Matsue, Japan, May 21-22, 2001.)

Social intelligence design gives some new life to Agent Technology and Artificial Intelligence research in general, in that humans are an integral part of the big picture: the focus shifts from building artifacts with problem solving or learning capabilities to designing a framework of interaction that leads to the creation of new knowledge and relationships among participants. An interdisciplinary study integrating insights from Artificial Intelligence, Human-Computer Interaction, Social and Cognitive Sciences, Media Studies, and other related disciplines is necessary to predict and assess the effects of social intelligence augmentation systems on the human society from sociological and cognitive viewpoints. Promising application domains include collaborative environments, e-learning, knowledge management, community support systems, symbiosis of humans and artifacts, crisis management, and digital democracy.

The engineering side of Social Intelligence Design involves not only community support systems but also systems that range from group/team-oriented collaboration support systems [5] to large-scale online-discussion support systems such as Bubble, used in IBM's WorldJam trial [26]. The humanity side of Social Intelligence Design involves the design and assessment of social intelligence. In the rest of this section, I will overview a couple of research efforts on this side.

4.1 Social Intelligence Design from a Social Psychological View

Azechi points out that two factors hinder dynamic knowledge interaction in a networked community. One is the essential disposition of a group that prevents effective cooperation, known particularly as groupthink and hostility to out-groups. Groupthink [9] refers to the phenomenon in which collective creativity does not exceed individual creativity. Hostility to out-groups means that a group member easily develops hostility towards out-groups [23, 25].
This phenomenon is closely related to stereotyping, in which some stigmata produce wrong inferences about an outsider's behavior pattern and personality. The other factor is a new concept called escape from information, which means the tendency of people living in a mass-consumption society to make themselves the same as others and to avoid choosing and expressing information themselves.

Azechi classifies the content of a message into dry and wet information [1]. Dry information primarily contains logical linguistic information and constitutes the core of a message. It may be an assertion, a question, a view, an opinion, or any other statement that is logically constituted by fact. In contrast, wet information is meta-information incidental to the contents of the message. It is mainly nonlinguistic information and is not affected by the dry information. He argues that community-wide discussion for achieving some practical goal should be made only with dry information; otherwise rational discussion will be hindered due to the pathology of a group, and a fruitful outcome cannot be expected at all. He suggests that the following five functions should be incorporated into a POC: (i) protecting personal information, (ii) consideration of minority opinions, (iii) equal opportunities for presenters, (iv) positive reciprocity, and (v) tutoring. Such discussion is taken as a valuable insight that can be used to design public discussion support systems in general. In addition, he argues, based on an investigation with a POC prototype, that the user's motivation is often discouraged by trash information, that setting up a clear purpose for using the tool will encourage the user, and that informing the user of the size of the user group will increase motivation [2].

Matsumura addresses consensus formation in networked communities and points out the importance of minority opinions in group decision making [10]. Based on social psychological experiments on minority opinions, he has found that (i) minority members tend to overestimate the number of other members who share the same attitude, (ii) minority members tend to underestimate the attitude of other members, and (iii) minority members who underestimate the proportion of the minority's opinion tend to lose the intention to act. Such inaccuracy in the cognition of opinion distribution is called the false consensus effect. These observations should be taken into account in designing discussion support systems so that useful discussions can be expected by reflecting minority opinions. He discusses the pros and cons of using anonymous messages, which will obscure the real distribution of opinions. The good news is that anonymity will not discourage minority members by the fact that they are in the minority. The bad news is that it may cause incorrect cognition about the distribution of opinions.

4.2 Evaluations of Social Intelligence

Social Intelligence Design is certainly an empirical study. We have to repeat the design-implement-evaluate cycle until we reach better systems. Network Analysis is a powerful means of evaluating or comparing empirical data. It provides us with a means for calculating various aspects of a given network in terms of centrality, density, or cohesion. By comparing those features from one network against those from another, we can describe the similarity and difference in quantitative terms.
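Two of the measures just mentioned, density and degree centrality, are simple to compute. The following self-contained sketch (with an invented toy network, purely for illustration) shows both for an undirected communication network stored as adjacency sets.

```python
def degree_centrality(adj):
    """Degree centrality: fraction of the other n-1 nodes each node is tied to."""
    n = len(adj)
    return {v: len(nbrs) / (n - 1) for v, nbrs in adj.items()}

def density(adj):
    """Density: actual edges divided by the n(n-1)/2 possible edges."""
    n = len(adj)
    edges = sum(len(nbrs) for nbrs in adj.values()) / 2
    return edges / (n * (n - 1) / 2)

# Toy communication network among four community members.
adj = {
    "ann": {"bob", "kim"},
    "bob": {"ann", "kim", "lee"},
    "kim": {"ann", "bob"},
    "lee": {"bob"},
}
print(degree_centrality(adj))   # bob is the most central member (1.0)
print(density(adj))             # 4 of 6 possible ties exist: 0.666...
```

Comparing such figures across two logs, say before and after deploying a tool, is exactly the kind of quantitative comparison the text describes.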
Fujihara has applied Network Analysis to a log collected from experiments with a POC prototype over several months to see if the POC actually facilitates community knowledge creation [6]. Fujihara points out the importance of setting up an appropriate control condition for formalizing the results of experiments. It will allow the effects of tools to be measured and compared against a baseline condition. He also suggests that multiple methods should be combined to gain a reliable result. Methods of evaluation may fall into the following three types:
• analysis of users' subjective estimations and introspection that can be collected through questionnaires and interviews,
• experimental methods where experimental conditions are operated systematically, and
• analysis of data collected by observing users' behavior in ordinary and natural settings.

5 Concluding Remarks

In this paper, I have discussed issues related to Social Intelligence Design. By social intelligence I mean intelligence collectively exhibited by (natural/artificial) agents to dynamically organize members' activities into a coherent one by exploiting or innovating the social structure. The central issue here is designing and understanding a world where people and agents cohabit, rather than inventing a system of artifacts. I have overviewed community support systems as an example of the engineering aspects of Social Intelligence Design. I have also shown some psychological approaches related to the design and evaluation stages of Social Intelligence Design. The agent technology has a large potential for augmenting social intelligence, provided that special care is taken to embed artifacts into the human society.

References

1. Azechi, S., Social psychological approach to knowledge-creating community, in: Nishida, T. (ed.), Dynamic Knowledge Interaction, pp. 15-57, CRC Press LLC, 2000.
2. Azechi, S., Motivation for showing opinion on public opinion channel: a case study, to be presented at KES-2001, Osaka, Japan, September 6-8, 2001.
3. Contractor, N., Bishop, A., and Zink, D., PrairieKNOW: a tool to assist the study, creation, and growth of community networks, in: Bullinger, H.-J. and Ziegler, J. (eds.), Human-Computer Interaction, Volume 2, Erlbaum, Hillsdale, pp. 447-451, 1999.
4. Foner, L. N., Political Artifacts and Personal Privacy: The Yenta Multi-Agent Distributed Matchmaking System, Ph.D. Dissertation, MIT, 1999.
5. Fruchter, R., Bricks, bits & interaction, presented at the JSAI-Synsophy International Workshop on Social Intelligence Design, Matsue, Japan, May 21-22, 2001.
6. Fujihara, N., How to evaluate application of conversational intelligence, to be presented at KES-2001, Osaka, Japan, September 6-8, 2001.
7. Fukuhara, T., Nishida, T., and Uemura, S., Public Opinion Channel: a system for augmenting social intelligence of a community, presented at the JSAI-Synsophy International Workshop on Social Intelligence Design, Matsue, Japan, May 21-22, 2001.
8. Fukuhara, T., Nishida, T., and Uemura, S., Voice Cafe: conversational support system in a group, to be presented at KES-2001, Osaka, Japan, September 6-8, 2001.
9. Janis, I. L., Groupthink, 2nd Edition, Houghton Mifflin Company, Boston, 1982.
10. Matsumura, K., Consensus formation process in network community, to be presented at KES-2001, Osaka, Japan, September 6-8, 2001.
11. Kautz, H., Selman, B., and Shah, M., Referral Web: combining social networks and collaborative filtering, Communications of the ACM, 40 (3), pp. 63-65, 1997.
12. Kubota, H., Nishida, T., and Koda, T., Exchanging tacit community knowledge by talking-virtualized-egos, in: Proceedings of Agent 2000, pp. 285-292, 2000.
13. Nakata, K., Knowledge as social medium, New Generation Computing, Vol. 17, No. 4, pp. 395-405, 1999.
14. Nishida, T., Facilitating community knowledge evolution by talking virtualized egos, in: Bullinger, H.-J. and Ziegler, J. (eds.), Human-Computer Interaction, Volume 2, Lawrence Erlbaum Associates, pp. 437-441, 1999.
15. Nishida, T., Fujihara, N., Azechi, S., Sumi, K., and Hirata, T., Public Opinion Channel for communities in the information age, New Generation Computing, Vol. 17, No. 4, pp. 417-427, 1999.
16. Nishida, T. (ed.), Dynamic Knowledge Interaction, CRC Press LLC, 2000.
17. Nishida, T., Towards dynamic knowledge interaction, Keynote Paper, in: Proc. KES-2000, pp. 1-12, 2000.
18. Nonaka, I. and Takeuchi, H., The Knowledge-Creating Company: How Japanese Companies Create the Dynamics of Innovation, Oxford University Press, New York, 1995.
19. Ohguro, T., FaintPop: in touch with the social relationships, presented at the JSAI-Synsophy International Workshop on Social Intelligence Design, Matsue, Japan, May 21-22, 2001.
20. Okamoto, M., Isbister, K., Nakanishi, H., and Ishida, T., Supporting cross-cultural communication in real-world encounters, The 8th International Conference on Human-Computer Interaction (HCI-99), Volume 2, pp. 442-446, 1999.
21. Smith, M. A., Voices from the WELL: the logic of the virtual commons, http://www.usyd.edu.au/su/social/papers/virtcomm.htm.
22. Sherif, M., Harvey, O. J., White, B. J., Hood, W. R., and Sherif, C. W., Intergroup Conflict and Cooperation: The Robbers' Cave Experiment, Institute of Group Relations, University of Oklahoma, Norman, 1966.
23. Schlichter, J., Koch, M., and Xu, C., Awareness — the common link between groupware and community support systems, in: Ishida, T. (ed.), Community Computing and Support Systems — Social Interaction in Networked Communities, LNCS 1519, Springer, Berlin, pp. 77-93, 1998.
24. Sumi, K. and Nishida, T., Context-aware and personalized communication support system, IEEE Intelligent Systems, in press, 2001.
25. Tajfel, H. and Turner, J. C., The social identity theory of intergroup behavior, in: Worchel, S. and Austin, W. G. (eds.), Psychology of Intergroup Relations, 2nd Edition, Nelson-Hall, pp. 7-24, 1986.
26. Thomas, J. C., Collaborative innovation tools, presented at the JSAI-Synsophy International Workshop on Social Intelligence Design, Matsue, Japan, May 21-22, 2001.


QUERY ANSWERING BASED ON DISTRIBUTED KNOWLEDGE MINING

ZBIGNIEW W. RAS
University of North Carolina, Department of Computer Science, Charlotte, N.C. 28223, USA
E-mail: ras@uncc.edu

Traditional query processing provides exact answers to queries. It usually requires that users fully understand the database structure and content to issue a query. Due to the complexity of database applications, so-called global queries can be posed which traditional query answering systems cannot handle. In this paper a query answering system based on distributed data mining is presented to rectify these problems.

1 Introduction

In many fields, such as medicine, banking, and education, similar databases are kept at many sites.
Each database stores information about local events and uses attributes suitable for a local task, but since the local situations are similar, the majority of attributes are compatible among databases. Still, an attribute may be missing in one database while it occurs in many others, and missing attributes lead to problems. A user may issue a query to a local database S1 in search of objects that match a desired description, only to realize that one component a1 of that description is missing in S1, so that the query cannot be answered. The same query may work in other databases, but the user is interested in identifying suitable objects only in S1.

Clearly, the task of integrating established database systems is complicated not only by the differences between the sets of attributes but also by differences in the structure and semantics of data. We call such systems heterogeneous. The notion of an intermediate model, proposed by Maluf and Wiederhold [1], is very useful in dealing with the heterogeneity problem, because it describes the database content at a relatively high level of abstraction, sufficient to guarantee a homogeneous representation of all databases. The discovery layers and action layers introduced in this paper can be used for a similar purpose. A discovery layer contains rules extracted from a database. An actions layer contains so-called action rules (see Ras and Wieczorkowska [7]) showing what minimal changes in a database are needed to re-classify some of its objects.

2 Distributed Knowledge Systems

In this section, we recall the notion of an information system and a distributed information system (DIS). Next, we introduce local queries and give their standard semantics. Finally, we show the structure of discovery layers and action layers.

By an information system we mean S = (X, A, V), where X is a finite set of objects, A is a finite set of attributes, and V = ∪{Va : a ∈ A} is the set of their values. We assume that:

• Va, Vb are disjoint for any a, b ∈ A such that a ≠ b,
• a : X → Va is a function for every a ∈ A.

Instead of a, we may write a[S] to denote that a is an attribute in S.

By a distributed information system we mean a pair DS = ({Si}i∈I, L) where:

• I is a set of sites,
• Si = (Xi, Ai, Vi) is an information system for any i ∈ I,
• L is a symmetric, binary relation on the set I.

A distributed information system DS = ({Si}i∈I, L) is consistent if the following condition holds:

(∀i)(∀j)(∀x ∈ Xi ∩ Xj)(∀a ∈ Ai ∩ Aj) (a[Si](x) = a[Sj](x)).

In the remainder of this paper we assume that DS = ({Si}i∈I, L) is consistent. Also, we assume that Sj = (Xj, Aj, Vj), where Vj = ∪{Vja : a ∈ Aj}, for any j ∈ I. We use A to denote the set of all attributes in DS, A = ∪{Aj : j ∈ I}. Also, by V we mean ∪{Vj : j ∈ I}.
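To make these definitions concrete, here is a minimal Python sketch of an information system and the DIS consistency check; the class and function names are illustrative choices, not taken from the paper.

    from dataclasses import dataclass

    @dataclass
    class InfoSystem:
        # S = (X, A, V): 'value' maps (object, attribute) -> attribute value,
        # i.e. value[(x, a)] plays the role of a[S](x).
        objects: set
        attributes: set
        value: dict

    def consistent(systems):
        # DIS consistency: any two systems agree on every shared object
        # and every shared attribute.
        systems = list(systems)
        for idx, si in enumerate(systems):
            for sj in systems[idx + 1:]:
                for x in si.objects & sj.objects:
                    for a in si.attributes & sj.attributes:
                        if si.value.get((x, a)) != sj.value.get((x, a)):
                            return False
        return True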
Before we introduce the notion of a discovery layer, we begin with a definition of s(i)-terms and their standard interpretation Mi in DS = ({Sj}j∈I, L), where Sj = (Xj, Aj, Vj) and Vj = ∪{Vja : a ∈ Aj}, for any j ∈ I.

By a set of s(i)-terms (also called a set of local queries for site i) we mean a least set Ti such that:

• 0, 1 ∈ Ti,
• w ∈ Ti for any w ∈ Vi,
• if t1, t2 ∈ Ti, then (t1 + t2), (t1 * t2), ~t1 ∈ Ti.

By a set of s(i)-formulas we mean a least set Fi such that:

• if t1, t2 ∈ Ti, then (t1 = t2) ∈ Fi.

The definition of DS-terms (also called global queries) and DS-formulas is quite similar (we only replace Ti by ∪{Ti : i ∈ I} and Fi by F in the two definitions above). We say that:

• an s(i)-term t is primitive if it is of the form ∏{w : w ∈ Ui} for some Ui ⊆ Vi,
• an s(i)-term t = ∏{w : w ∈ Ui}, where Ui ⊆ Vi, is simple if Ui ∩ Via is a singleton set for any a ∈ Ai,
• an s(i)-term is in disjunctive normal form (DNF) if t = ∑{tj : j ∈ J} where each tj is primitive.

Similar definitions hold for DS-terms. Clearly, it is easy to give an example of a local query. The expression

select * from Flights where airline = "Delta" and departure_time = "morning" and departure_airport = "Charlotte"

is an example of a non-local query (DS-term) in a database Flights(airline, departure_time, arrival_time, departure_airport, arrival_airport).

The semantics of s(i)-terms is defined by the standard interpretation Mi in a distributed information system DS = ({Sj}j∈I, L) as follows:

• Mi(0) = ∅, Mi(1) = Xi,
• Mi(w) = {x ∈ Xi : a(x) = w}, for any w ∈ Via,
• if t1, t2 are s(i)-terms, then Mi(t1 + t2) = Mi(t1) ∪ Mi(t2), Mi(t1 * t2) = Mi(t1) ∩ Mi(t2), Mi(~t1) = Xi − Mi(t1),
• Mi(t1 = t2) = (if Mi(t1) = Mi(t2) then T else F), where T stands for True and F for False.

A sound and complete axiomatization of the above semantics is quite standard and is given, for instance, in a paper by Ras [6].

Now, we are ready to introduce the notion of (k,i)-rules, for any i ∈ I. We use them to form a discovery layer at site i ∈ I. By a (k,i)-rule in DS = ({Sj}j∈I, L), k, i ∈ I, we mean a triple (c, t, s) such that:

• c ∈ Vk − Vi,
• t, s are s(k)-terms in DNF and they both belong to Tk ∩ Ti,
• Mk(t) ⊆ Mk(c) ⊆ Mk(t + s).

For any (k,i)-rule (c, t, s) in DS = ({Sj}j∈I, L), we say that:

• (t → c) is a k-certain rule in DS,
• (t + s → c) is a k-possible rule in DS.

Let us assume that r1 = (c1, t1, s1), r2 = (c2, t2, s2) are (k,i)-rules. We say that r1, r2 are strongly consistent if either c1, c2 are values of two different attributes in Sk or a DNF form equivalent to t1 * t2 does not contain simple conjuncts.

Now, we are ready to define a discovery layer Dki. Its elements can be seen as approximate descriptions of values of attributes from Vk − Vi in terms of values of attributes from Vk ∩ Vi. To be more precise, we say that Dki is a set of (k,i)-rules such that: if (c, t, s) ∈ Dki and t1 = ~(t + s), then (~c, t1, s) ∈ Dki. By a discovery layer for site i, denoted by Di, we mean any subset of ∪{Dki : (k,i) ∈ L}.
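The interpretation Mi and the (k,i)-rule condition translate directly into code. Below is a hedged sketch building on the InfoSystem class above; the term encoding (nested tuples) and all names are my own illustrative choices.

    def interpret(term, s):
        # Standard interpretation M_i of local query terms over InfoSystem s.
        # Terms: 0, 1, an attribute value w, ('+', t1, t2), ('*', t1, t2),
        # or ('~', t1).
        if term == 0:
            return set()
        if term == 1:
            return set(s.objects)
        if isinstance(term, tuple):
            op = term[0]
            if op == '+':
                return interpret(term[1], s) | interpret(term[2], s)
            if op == '*':
                return interpret(term[1], s) & interpret(term[2], s)
            if op == '~':
                return set(s.objects) - interpret(term[1], s)
        # term is a value w: all objects x with a(x) = w for some attribute a
        return {x for x in s.objects
                if any(s.value.get((x, a)) == term for a in s.attributes)}

    def is_ki_rule(c, t, s_term, system):
        # Check the inclusion chain M_k(t) <= M_k(c) <= M_k(t + s).
        mt, mc = interpret(t, system), interpret(c, system)
        return mt <= mc and mc <= (mt | interpret(s_term, system))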
3 Actions Layer

In this section we introduce the notion of an actions layer, which is a basic part of a distributed knowledge system (DKS).

Information systems can be seen as decision tables. In any decision table, a partition of the set of attributes into conditions and decisions is given. Additionally, we assume that the set of conditions is partitioned into stable conditions and flexible conditions. An attribute a ∈ A is called stable for the set X if its values assigned to objects from X cannot be changed in time; otherwise, it is called flexible. Date of birth is an example of a stable attribute; the interest rate on a customer account is an example of a flexible attribute. For simplicity, we consider decision tables with only one decision. We adopt the following definition: a decision table is any information system of the form S = (X, A1 ∪ A2 ∪ {d}, V), where d ∉ A1 ∪ A2 is a distinguished attribute called the decision. The elements of A1 are called stable conditions, whereas the elements of A2 ∪ {d} are called flexible conditions.

The goal is to change the values of attributes in A2 for some objects in X so that the values of the attribute d for these objects may change as well. Rules defining d in terms of A1 ∪ A2 are extracted from S and used to discover new rules called action rules [7]. These new rules provide suggestions for the re-classification of objects from S in terms of the attribute d; this can be done because d is flexible.

Now, let us assume that (a, v → w) denotes the fact that the value of attribute a has been changed from v to w. Similarly, the term (a, v → w)(x) means that a(x) = v has been changed to a(x) = w; in other words, the property (a, v) of object x has been changed to the property (a, w).

Assume now that S = (X, A1 ∪ A2 ∪ {d}, V) is a decision table, where A1 is a set of stable attributes and A2 is a set of flexible attributes. Assume that rules r1, r2 have been extracted from S, that r1 and r2 agree on A1, and that d(r1) = k1, d(r2) = k2 with k1 < k2. Also, assume that (b1, b2, ..., bp) is the list of all attributes in Dom(r1) ∩ Dom(r2) ∩ A2 on which r1, r2 differ, with r1(b1) = v1, r1(b2) = v2, ..., r1(bp) = vp and r2(b1) = w1, r2(b2) = w2, ..., r2(bp) = wp.

By an (r1, r2)-action rule on x ∈ X we mean a statement:

[(b1, v1 → w1) ∧ (b2, v2 → w2) ∧ ... ∧ (bp, vp → wp)](x) ⇒ [(d, k1) → (d, k2)](x).

If the value of the above rule is true on x, then the rule is valid for x; otherwise it is false. The actions layer for a site i, denoted by Acti, contains the (r1, r2)-action rules constructed from rules r1, r2 in a discovery layer Di.
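As a hedged illustration of how an (r1, r2)-action rule could be assembled from two extracted rules, consider the following sketch; rules are represented as plain dictionaries, and the helper names are invented for this example.

    def action_rule(r1, r2, stable, flexible, decision='d'):
        # Build an (r1, r2)-action rule from two rules that agree on all
        # shared stable attributes; returns the list of required value
        # changes plus the expected decision shift, or None.
        common = (set(r1) & set(r2)) - {decision}
        if any(r1[a] != r2[a] for a in common & set(stable)):
            return None                      # must agree on stable part
        changes = [(b, r1[b], r2[b])         # (b, v -> w) for flexible b
                   for b in common & set(flexible) if r1[b] != r2[b]]
        return changes, (r1[decision], r2[decision])

    # Example: raising the interest rate may move a customer's class.
    r1 = {'dob': 1970, 'rate': 'low', 'd': 'leaves'}
    r2 = {'dob': 1970, 'rate': 'high', 'd': 'stays'}
    print(action_rule(r1, r2, stable=['dob'], flexible=['rate']))
    # -> ([('rate', 'low', 'high')], ('leaves', 'stays'))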
4 Distributed Knowledge System

In this section, we introduce the notion of a distributed knowledge system. By a Distributed Knowledge System (DKS) we mean DS = ({(Si, Di, Acti)}i∈I, L), where ({Si}i∈I, L) is a distributed information system, Di = ∪{Dki : (k,i) ∈ L} is a discovery layer and Acti is an actions layer, for i ∈ I.

Figure 1 shows the basic architecture of DKS: each site couples a database with a discovery layer (filled by mining), an actions layer, operational semantics, and a query answering system (QAS) that handles global queries; the sites are connected by knowledge exchange. The operational semantics reflects the dynamic nature of the definitions of attribute values in a query (see Ras and Zytkow [8]).

Figure 2 shows the part of QAS responsible for query transformation: a global query is converted by a transformation engine, based on logical axioms and the operational semantics Ni, into a local query at site i (lower approximation). This part of QAS can be replaced by the rough transformation engine shown in Figure 3, which is based on logical axioms and the operational semantics Ni and Ji and produces local queries at site i for both the lower and the upper approximation (Query Rough Answering System, QRAS).

If for each non-local attribute we collect rules from many sites of DKS and then resolve all inconsistencies among them (see Ras [5]), then the local confidence in the resulting operational definitions is high, since they represent a consensus of many sites.

Assume now that Ni is the standard interpretation of global queries as introduced, for instance, in Ras [6]. It corresponds to a pessimistic approach to the evaluation of global queries because of the way the non-local attribute values are interpreted (their lower approximation is taken). We can replace Ni by a new interpretation Ji representing an optimistic approach to the evaluation of global queries. Namely, we define:

• Ji(w) = X − Ni(~w),
• Ji(~w) = X − Ni(w),
• Ji(t) = Ni(t) for any other t.

In the optimistic approach to query evaluation, the upper approximation of the non-local terms w, ~w is taken. Following this line of thought, we can propose a rough operational semantics Ri defined as Ri(t) = [Ni(t), Ji(t)] for any global query t. The rough operational semantics combines the natural advantages of both Ni and Ji. Clearly, if the interpretations Ni and Ji of a term t give us the same sets of objects, then both approximations (lower and upper) are semantically equal.

5 Query Answering Based on Reducts

In this section we recall the notion of a reduct (see Pawlak [3]) and show how it can be used to improve the query answering process in DKS.

Let us assume that S = (X, A, V) is an information system and V = ∪{Va : a ∈ A}. Let B ⊆ A. We say that x, y ∈ X are indiscernible by B, denoted x ≈B y, if (∀a ∈ B)[a(x) = a(y)]. Now, assume that both B1, B2 are subsets of A. We say that B1 depends on B2 if ≈B2 ⊆ ≈B1. Also, we say that B1 is a covering of B2 if B2 depends on B1 and B1 is minimal. By a reduct of A in S (for simplicity, an A-reduct of S) we mean any covering of A.

Example. Assume the following scenario:

• S1 = (X1, {c, d, e, g}, V1), S2 = (X2, {a, b, c, d, f}, V2), S3 = (X3, {b, e, g, h}, V3) are information systems,
• a user submits a query q = q(c, e, f) to the query answering system QAS associated with the system S1,
• the systems S1, S2, S3 are parts of DKS.

Attribute f is non-local for the system S1, so the query answering system associated with S1 has to contact other sites of DKS requesting a definition of f in terms of {d, c, e, g}. Such a request is denoted by < f : d, c, e, g >. Assume that the system S2 is contacted. The definition of f extracted from S2 may involve only the attributes {d, c, e, g} ∩ {a, b, c, d, f} = {c, d}. There are three f-reducts (coverings of f) in S2: {a, b}, {a, c}, {b, c}. The optimal f-reduct is the one with the minimal number of elements outside {c, d}. Let us assume that {b, c} is chosen as an optimal f-reduct in S2. Then, the definition of f in terms of the attributes {b, c} will be extracted from S2, and the query answering system of S2 will contact other sites of DKS requesting a definition of b (which is non-local for S1) in terms of the attributes {d, c, e, g}. If a definition of b is found, it is sent to the QAS of site 1.

Figure 4 illustrates the process of resolving query q in the example above: rules defining f in terms of the covering {b, c} (chosen as optimal among the coverings {a, b}, {a, c}, {b, c} of f) are extracted at site 2, and the covering {e} is then chosen as optimal among the coverings {e}, {g, h} of b. We will use the graph in Figure 5 to represent visually the fact that R[i] is an a-reduct at site i containing attribute b.

Let us adopt the following definition. By a < a1, A >-linear set of reducts we mean a set {< ai, R[i] > : 1 ≤ i ≤ k} such that:

• ai ∉ A, for any 1 ≤ i ≤ k,
• ai+1 ∈ R[i], for any 1 ≤ i ≤ k − 1,
• R[i] is an ai-reduct at site i and card(R[i] − A) = 1, for any 1 ≤ i ≤ k − 1,
• R[k] ⊆ A.
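A brute-force computation of the coverings (reducts) used in this example is easy to sketch in Python. This is an illustrative, exponential-time implementation of the indiscernibility definitions above, not the paper's algorithm.

    from itertools import combinations

    def ind_classes(s, attrs):
        # Partition of the objects induced by indiscernibility w.r.t. 'attrs'.
        groups = {}
        for x in s.objects:
            sig = tuple(s.value.get((x, a)) for a in sorted(attrs))
            groups.setdefault(sig, set()).add(x)
        return set(frozenset(g) for g in groups.values())

    def refines(s, b1, b2):
        # B2 depends on B1 iff the partition of B1 refines that of B2.
        coarse = ind_classes(s, b2)
        return all(any(g <= c for c in coarse) for g in ind_classes(s, b1))

    def coverings(s, target):
        # All minimal attribute subsets B whose indiscernibility relation
        # is contained in that of {target} (the coverings of 'target').
        rest = sorted(s.attributes - {target})
        found = []
        for r in range(1, len(rest) + 1):
            for b in combinations(rest, r):
                if refines(s, set(b), {target}) and \
                   not any(set(c) <= set(b) for c in found):
                    found.append(b)
        return found

Because subsets are enumerated by increasing size, any non-minimal candidate is rejected by the subset test against previously found coverings.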
(Figure 5 depicts a reduct R[i] at site i as a node labelled R[i] containing the attribute b. Figure 6 visually represents a < a1, A >-linear set of reducts: R[1], R[2], ..., R[k−1] each have a minimal number of attributes outside A, while R[k] is a subset of A.)

Clearly, the existence of a < a, A >-linear set of reducts is sufficient for the attribute a to be definable in DKS. The existence of a < a, A >-directed set of reducts (defined below) is necessary for the attribute a to be definable in DKS. By a < a1, A >-directed set of reducts we mean a smallest, non-empty set {< ai, R[i], Si > : 1 ≤ i ≤ k} such that:

• ai ∉ A, for any 1 ≤ i ≤ k,
• Si is a site of DKS, for any 1 ≤ i ≤ k,
• R[i] is an ai-reduct at site Si, for any 1 ≤ i ≤ k,
• every attribute a ∈ ∪{R[i] : 1 ≤ i ≤ k} − A is equal to aj for some 1 ≤ j ≤ k.

QAS searches for a minimal < a1, A >-directed set of reducts, to guarantee the smallest number of steps needed to learn the definition of attribute a1 while keeping the confidence in what we learn the highest.

6 Conclusion

The query answering system for DKS can handle two types of queries:

Queries asking for all objects at a site i which satisfy a given description (any attributes are allowed to be used here). In such a case, the query answering system will search for operational definitions of all attributes not existing at the site i before it can process the query locally.

Queries asking for actions which have to be undertaken in order to change the classification of some objects at site i. Such queries can be processed entirely at site i or moved for remote processing to other sites of DKS. In the latter case, operational definitions of all attributes from the site i in terms of attributes from another site are needed. This problem will be the topic of a separate paper.

References

1. Maluf, D., Wiederhold, G., "Abstraction of representation for interoperation", in Proceedings of the Tenth International Symposium on Methodologies for Intelligent Systems, LNCS/LNAI, No. 1325, Springer-Verlag, 1997, 441-455.
2. Navathe, S., Donahoo, M., "Towards intelligent integration of heterogeneous information sources", in Proceedings of the Sixth International Workshop on Database Re-engineering and Interoperability, 1995.
3. Pawlak, Z., "Rough classification", in International Journal of Man-Machine Studies, Vol. 20, 1984, 469-483.
4. Prodromidis, A.L., Stolfo, S., "Mining databases with different schemas: Integrating incompatible classifiers", in Proceedings of the Fourth Intern. Conf. on Knowledge Discovery and Data Mining, AAAI Press, 1998, 314-318.
5. Ras, Z., "Dictionaries in a distributed knowledge-based system", in Concurrent Engineering: Research and Applications, Conference Proceedings, Pittsburgh, Penn., Concurrent Technologies Corporation, 1994, 383-390.
6. Ras, Z., "Resolving queries through cooperation in multi-agent systems", in Rough Sets and Data Mining (Eds. T.Y. Lin, N. Cercone), Kluwer Academic Publishers, 1997, 239-258.
7. Ras, Z., Wieczorkowska, A., "Action Rules: how to increase profit of a company", in Principles of Data Mining and Knowledge Discovery (Eds. D.A. Zighed, J. Komorowski, J. Zytkow), Proceedings of PKDD'00, Lyon, France, LNCS/LNAI, No. 1910, Springer-Verlag, 2000, 587-592.
8. Ras, Z., Zytkow, J., "Mining for attribute definitions in a distributed two-layered DB system", Journal of Intelligent Information Systems, Kluwer, Vol. 14, No. 2/3, 2000, 115-130.
9. Ras, Z., Zytkow, J., "Discovery of equations to augment the shared operational semantics in distributed autonomous DB systems", in PAKDD'99 Proceedings, LNCS/LNAI, No. 1574, Springer-Verlag, 1999, 453-463.
10. Zytkow, J.M., Zhu, J., Zembowicz, R., "Operational definition refinement: a discovery process", in Proceedings of the Tenth National Conference on Artificial Intelligence, The AAAI Press, 1992, 76-81.

APPROXIMATE REASONING BY AGENTS IN DISTRIBUTED ENVIRONMENTS

ANDRZEJ SKOWRON
Institute of Mathematics, Warsaw University, Banacha 2, 02-097 Warsaw, Poland
E-mail: skowron@mimuw.edu.pl

Solving complex problems by multi-agent systems in distributed environments requires new approximate reasoning methods based on new computing paradigms. One such recently emerging computing paradigm is Granular Computing. Granular computations are performed on information granules representing vague and complex concepts delivered by agents engaged in tasks such as knowledge representation, communication with other agents, and reasoning. In this paper, we present an outline of foundations for information granule calculi and methods for inducing relevant information granule constructions from data and background knowledge. These constructions can be interpreted as approximate reasoning schemes. The proposed methodology of approximate reasoning has been developed for solving complex problems in areas such as the identification of objects by autonomous systems, web mining, or sensor fusion.

1 Introduction

Information processing in intelligent systems, in particular in multi-agent systems, needs new soft-computing paradigms. The solutions derived by such systems should satisfy a given specification not necessarily exactly, but up to a satisfactory degree. One can expect that, due to such relaxed specification constraints, robust solutions for complex problems can be derived efficiently by methods based on these new paradigms.

One such recently emerging paradigm is Granular Computing, based on rough set and rough mereological approaches (see, e.g., Zadeh and Kacprzyk [33], Zhong et al. [35], Lin [9], Polkowski and Skowron [22, 25, 26], Skowron and Stepaniuk [29], Nguyen et al. [11], Skowron [27]), as a way to achieve Computing with Words (see, e.g., Zadeh [32, 34], Zadeh and Kacprzyk [33]). Granular computations are performed on information granules representing vague and complex concepts delivered by agents engaged in, for example, knowledge representation, communication with other agents, and reasoning. Our approach is related to logical aspects of perception (see, e.g., Zadeh [34]).

Specifications of complex tasks are often formulated in words, phrases, or more complex texts of a natural language. Hence, the following main problem arises: whether and how an information granule sufficiently close, in a sense, to the target information granule Gt representing the task specification can be constructed from input information granules (e.g., those representing sensor measurements).

One of the important problems is the construction of an interface allowing knowledge acquisition agents (KA-agents) to acquire knowledge from customer-agents (CA-agents), who specify a task.
The aim is to induce a satisfactory approximation Gk of the target information granule Gt in the language of KA-agents, i.e., an information granule Gk sufficiently close to (or included in) the target information granule Gt. Hence, tools are needed for expressing inclusion and proximity (closeness) of information granules, measured by a degree of proximity. For this purpose we use rough sets (see, e.g., Pawlak [14], Komorowski et al. [7]) and rough mereology (see, e.g., Polkowski and Skowron [18, 20, 22]). The interface construction should be supported by background knowledge (in particular, by an ontology of concepts) and by experimental data.

An information granule G sufficiently close to the information granule Gk delivered by KA-agents should be constructed from input information granules (representing, e.g., sensor measurements). In the search for the granule G, relevant operations and inclusion (closeness) measures on information granules should be discovered and used. The granule G is constructed from basic components defined by information granule calculi. Any such calculus consists of components such as (i) elementary input information granules, (ii) operations on information granules, (iii) relations of inclusion and proximity, measured by the proximity degree, between information granules, and (iv) schemes of information granule construction, which can be treated as approximate reasoning schemes (AR-schemes, for short) on information granules.

Elementary information granules, together with inclusion and proximity relations between such granules, are primitive constructs in granule construction. Higher level constructs, like information granules and related inclusion (closeness) relations, can be defined from previously constructed lower level constructs using relevant operations.

Fusion operations are important operations on information granules. They are based on negotiation schemes for resolving conflicts between the agents delivering the arguments of the operations. More complex operations are defined by robust AR-schemes. Such schemes are obtained by approximate reasoning rules and methods for their composition, dependent on the available data and background knowledge. The robustness of AR-schemes means that the closeness (inclusion) of the constructed granules is preserved to a satisfactory degree under small deviations of the input granules (or of the operation parameters used for the granule construction). The robustness of the target construction can be deduced from the robustness of its sub-constructions, if some constraints on composition are satisfied. Robust AR-schemes should be extracted from experimental (e.g., sensory) data and/or background knowledge rather than by classical deduction mechanisms.

The AR-schemes are parameterized. Relevant information granules are constructed by tuning AR-scheme parameters. There are several kinds of parameters tuned in the process of searching for relevant information granules. Some of them come from the approximation spaces of agents; they make it possible to obtain a proper generalization degree of the granule constructed in inductive reasoning. Other parameters are related to agent teams and are used to tune measures of inclusion (closeness) between information granules and to tune the propagation mechanisms of the inclusion (closeness) degrees along the AR-schemes.
The AR-schemes in multi-agent systems can be treated as higher order neural networks, called rough neural networks, performing operations on information granules instead of numbers. One of the main problems of the new Rough Neurocomputing paradigm is to develop methods for inducing rough neural networks.

In this paper, we outline an approach to the above mentioned problems. Our approach is based on the foundations of a calculus on information granules developed by means of rough set and rough mereological approaches. Its aim is to create a methodology and tools for solving a wide class of complex problems, ranging from the identification of road traffic situations by an unmanned aerial vehicle (see, e.g., the web page of the WITAS project [31]) to problems of text data mining on the Internet (see, e.g., Skowron [27], Kargupta and Chan [6]).

2 Information Granule Systems

In this section, we present a basic notion for our approach, i.e., the information granule system. Any such system S consists of a set of elementary granules E together with an operation {·} making collections of granules from finite sets of granules. A finite subset of the set generated from elementary granules using this operation is fixed. This subset is extended by means of other operations on information granules producing new information granules. Moreover, a family of relations with the intended meaning to be a part to a degree between information granules is distinguished. Degrees of inclusion are also treated as information granules. The degree structure is described by a relation to be an exact part.

More formally, an information granule system is any tuple

S = (E, {E}, H, O, ν, {νp}p∈H)    (1)

where

1. E is a finite set of elementary granules;
2. {E} is a finite subset of Pω(E) = E ∪ P(E) ∪ P(E ∪ P(E)) ∪ ..., where P(X) denotes the powerset of X;
3. H is a finite set of granule inclusion degrees with a binary relation ν ⊆ H × H to be an (exact) part; ν defines on H a structure used to compare the degrees by assuming, e.g., p ≤ q if and only if ν(q, p);
4. O is a set of (partial) operations used to construct new granules from {E}; by means of operations from O, the set {E} is extended to the set G(S) ⊆ Pω(E) of granules generated from {E} using operations from O;
5. νp ⊆ G(S) × G(S) is a binary relation to be a part to a degree at least p between information granules from G(S).

One can consider the following examples of the set E of elementary granules: (i) a set of descriptors of the form (a, v), where a ∈ A and v ∈ Va for some finite attribute set A and value sets Va, and (ii) a set of descriptor conjunctions. The set {E} consists of granules constructed by means of an operation {·} making collections from already constructed granules. Examples of such granules are tolerance granules created by means of a similarity (tolerance) relation between elementary granules, decision rules, sets of decision rules, sets of decision rules with guards, information systems, or decision tables (see, e.g., Polkowski and Skowron [22], Skowron and Stepaniuk [29], Skowron [27]). The most interesting class of information granules consists of information granules specified in natural language and their approximations by means of experimental data tables and background knowledge.

As an example of the set H of granule inclusion degrees, one can consider the set of binary sequences of a fixed length with the relation ν to be a part defined by the lexicographical order.
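For intuition, here is a minimal Python sketch of one concrete instance of these definitions: elementary granules as sets of (attribute, value) descriptors over a data table, with an inclusion degree computed from the objects supporting each granule. The encoding is an illustrative assumption, not the paper's formalism.

    # Elementary granule: a frozenset of (attribute, value) descriptors.
    def support(granule, table):
        # Objects of 'table' (dict: object -> {attribute: value})
        # satisfying every descriptor of the granule.
        return {x for x, row in table.items()
                if all(row.get(a) == v for a, v in granule)}

    def inclusion_degree(g1, g2, table):
        # Degree to which g1 is a part of g2: fraction of g1's
        # supporting objects that also support g2 (1.0 if empty).
        s1, s2 = support(g1, table), support(g2, table)
        return len(s1 & s2) / len(s1) if s1 else 1.0

    table = {1: {'color': 'red', 'size': 'big'},
             2: {'color': 'red', 'size': 'small'},
             3: {'color': 'blue', 'size': 'big'}}
    red = frozenset({('color', 'red')})
    big = frozenset({('size', 'big')})
    print(inclusion_degree(red, big, table))   # 0.5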
This degree structure can be used to measure the inclusion degree between granule sequences or to measure the matching degree between granules representing classified objects and granules describing the left hand sides of decision rules in simple classifiers (see, e.g., Polkowski and Skowron [25]). However, one can consider more complex degree granules by taking, as the degree of inclusion of a granule g1 in a granule g2, the granule being a collection of common parts of the two granules g1 and g2.

Agents are able to extract local approximate reasoning schemes, called productions, from such representations. Algorithmic methods for extracting such productions from data are discussed in papers by Polkowski and Skowron [19], Skowron [27], and Skowron and Stepaniuk [30]. The left hand side of each production (in the simplest case) is of the form

(st1(ag), (ε1(1), ..., εr(1))), ..., (stk(ag), (ε1(k), ..., εr(k)))    (2)

and the right hand side is of the form

(st(ag), (ε1, ..., εr))    (3)

for some positive integers k, r. Such a production (see Figure 1) represents information about an operation o that can be performed by an agent ag; in the production, k denotes the arity of the operation. The operation o represented by the production transforms the standard (prototype) input information granules st1(ag), ..., stk(ag) into the standard (prototype) information granule st(ag). Moreover, if input information granules g1, ..., gk are included in (close to) st1(ag), ..., stk(ag) to the degrees listed on the left hand side, then the result of the operation o on the information granules g1, ..., gk is included in (close to) the standard st(ag) to the degree guaranteed by the right hand side (see Figure 1). Standard (prototype) granules can be interpreted in different ways. In particular, they can correspond to concept names in natural language.

(Figure 1: Productions and AR-schemes.)

The sample productions in Figure 1 are basic components of a reasoning system related to the agent set Ag. An important property of such productions is their robustness [22, 23, 24, 26]. In the general case, i.e., when it is necessary to use approximation spaces, the AR-schemes can be interpreted as rough neural networks. In the case where standards are interpreted as concept names in natural language and a reasoning scheme in natural language over such standards is given, the corresponding rough neural network represents a cluster of reasoning constructions approximately following (in other information granule systems) the reasoning given in natural language.

Let us observe that AR-schemes are not classical proofs defined by means of deductive systems. They are approximate reasoning schemes discovered from data and background knowledge. The notion of classical proof is replaced by derivations defining AR-schemes, i.e., derivations satisfying some constraints, and deductive systems are replaced by the production systems of agents linked by approximation spaces, communication strategies, and a mechanism of derivation of AR-schemes. This revision of classical logical notions seems to be important for solving complex problems in multi-agent systems.
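As an illustration of how inclusion degrees might propagate along an AR-scheme, here is a hedged Python sketch: a production guarantees an output degree whenever its inputs meet given thresholds, and a tree of productions propagates degrees bottom-up. The rule used here (take the strongest satisfied guarantee) is my own simplification, not the calculus defined in the paper.

    class Production:
        # A parameterized production: each case pairs a tuple of input
        # thresholds with the output degree it guarantees.
        def __init__(self, cases):
            self.cases = sorted(cases, key=lambda c: c[1], reverse=True)

        def fire(self, degrees):
            # Best guaranteed output degree given actual input degrees.
            for thresholds, out in self.cases:
                if all(d >= t for d, t in zip(degrees, thresholds)):
                    return out
            return 0.0

    # An AR-scheme as a tree: leaves are measured inclusion degrees,
    # internal nodes are productions applied to their children's degrees.
    def propagate(node):
        if isinstance(node, float):
            return node
        production, children = node
        return production.fire([propagate(c) for c in children])

    p = Production([((0.9, 0.9), 0.8), ((0.6, 0.6), 0.5)])
    scheme = (p, [0.95, (p, [0.92, 0.95])])
    print(propagate(scheme))   # inner production yields 0.8, outer 0.5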
4 Conclusions and Some Directions for Further Research

We have outlined a methodology for approximate reasoning in distributed multi-agent systems, based on rough mereology. Several research directions are related to the discussed AR-schemes and rough neural networks. We close with a list of such directions, together with examples of problems.

1. Developing foundations for information granule systems. Certainly, more work is still needed to develop solid foundations for the synthesis and analysis of information granule systems. In particular, methods for the construction of hierarchical information granule systems and methods for the representation of such systems should be developed.

2. Algorithmic methods for inducing parameterized productions. Some methods have already been reported, such as the discovery of rough mereological connectives from data (see, e.g., Polkowski and Skowron [19]) or methods based on decomposition (see, e.g., Polkowski and Skowron [20], Skowron [27], Skowron and Stepaniuk [30], Peters et al. [16]). However, these are only initial steps toward algorithmic methods for inducing parameterized productions from data. One interesting problem is to determine how such productions can be extracted from data and background knowledge. A method in this direction has been proposed in a paper by Doherty et al. [3].

3. Algorithmic methods for synthesis of AR-schemes. It has been observed (see, e.g., Polkowski and Skowron [20, 25]) that problems of negotiation and conflict resolution are of great importance for the synthesis of AR-schemes. The problem arises, e.g., when we are searching, in a given set of agents, for a granule sufficiently included in or close to a given one. These agents, often working with different systems of information granules, can derive different granules, and their fusion will be necessary to obtain the relevant output granule. In the fusion process, negotiations and conflict resolution are necessary. Much more work should be done in this direction, building on the existing results on negotiation and conflict resolution. In particular, Boolean reasoning methods seem to be promising (Polkowski and Skowron [20]). Another problem is related to the size of production sets. These sets can be very large, and it is important to develop learning methods for extracting small candidate production sets from huge production sets when extending temporary derivations. For solving this kind of problem, methods for clustering productions should be developed to reduce the size of production sets. Moreover, dialog and cooperation strategies between agents can help to reduce the search space for the necessary extensions of temporary derivations.

4. Algorithmic methods for learning in rough neural networks. A basic problem in rough neural networks is the selection of relevant approximation spaces and parameter tuning. One can also examine to what extent the existing methods for classical neural networks can be used for learning in rough neural networks. However, it seems that new approaches and methods for learning in rough neural networks should be developed to deal with real-life applications. In particular, this is due to the fact that high quality approximations of concepts can often be obtained only through dialog and negotiation processes among agents, in which the concept approximation is constructed gradually. Hence, learning methods for rough neural networks based on dialog, negotiations, and conflict resolution should be developed. In some cases, one can directly use rough set and Boolean reasoning methods (see, e.g., Skowron and Stepaniuk [29]). However, more advanced cases need new methods. In particular, hybrid methods based on rough and fuzzy approaches can bring new results (Pal and Skowron [12]).

5. Fusion methods in rough neurons. A basic problem in rough neurons is the fusion of the inputs (information) derived from information granules.
This fusion makes it possible to contribute to the construction of new granules. In the case where the granule constructed by a rough neuron consists of characteristic signal values measured by relevant sensors, a step toward solving the fusion problem can be found in Pawlak et al. [17].

6. Adaptive methods. Certainly, adaptive methods for the discovery of productions and for the learning of AR-schemes and rough neural networks should be developed (Koza [8]).

7. Discovery of multi-agent systems relevant for given problems. Quite often, the agents and the communication methods among them are not given a priori with the problem specification, and a challenge is to develop methods for the discovery of multi-agent system structures relevant for given problems, in particular methods for the discovery of relevant communication protocols.

8. Construction of multi-agent systems for complex real-life problems. Challenging problems are related to applying the presented methodology to real-life problems such as the control of autonomous systems (see, e.g., the web page of the WITAS project [31]), web mining problems (see, e.g., Kargupta and Chan [6], Skowron [27]), sensor fusion (see, e.g., Brooks et al. [1], Peters et al. [15, 17]), or spatial reasoning (see, e.g., Escrig [2], Düntsch [4]).

9. Evolutionary methods. For all of the above methods it is necessary to develop evolutionary methods for searching for (semi-)optimal solutions (Koza [8]).

10. Parallel algorithms. The discussed problems are of high computational complexity. Parallel algorithms searching for AR-schemes, and methods for their hardware implementation, constitute one important research direction.

Acknowledgements. I would like to thank Professor Lech Polkowski for the years of close cooperation on rough mereology, Professor Jaroslaw Stepaniuk for the cooperation on information granule models, and Professor James F. Peters for the cooperation on sensor fusion methods, insightful comments, and a number of clarifying discussions on the presented paper. The research has been supported by the State Committee for Scientific Research of the Republic of Poland (KBN), research grant 8 T11C 025 19, and by a Wallenberg Foundation grant.

References

1. R.R. Brooks et al., Multi-Sensor Fusion (Prentice-Hall PTR, Upper Saddle River, NJ, 1998).
2. M.T. Escrig et al., Qualitative Spatial Reasoning: Theory and Practice (IOS Press, Amsterdam, 1998).
3. P. Doherty et al., Combining Rough and Crisp Knowledge in Deductive Databases (submitted).
4. I. Düntsch ed., Fundamenta Informaticae (special issue on Spatial Reasoning, in print).
5. M.N. Huhns et al. eds., Readings in Agents (Morgan Kaufmann, San Mateo, 1998).
6. H. Kargupta and Ph. Chan, Advances in Distributed and Parallel Knowledge Discovery (AAAI Press/MIT Press, Cambridge, 2001).
7. J. Komorowski et al., in Rough Fuzzy Hybridization: A New Trend in Decision-Making, eds. S.K. Pal and A. Skowron (Springer-Verlag, Singapore, 1998).
8. J.R. Koza, Genetic Programming II: Automatic Discovery of Reusable Programs (MIT Press, Cambridge, MA, 1994).
9. T.Y. Lin, in [21].
10. T.M. Mitchell, Machine Learning (McGraw-Hill, Portland, 1997).
11. H.S. Nguyen et al., Computational Intelligence (in print).
12. S.K. Pal and A. Skowron eds., Rough-Fuzzy Hybridization: A New Trend in Decision Making (Springer-Verlag, Singapore, 1999).
13. S.K. Pal et al. eds., Rough-Neuro Computing, in Neurocomputing 36 (2001).
14. Z. Pawlak, Rough Sets: Theoretical Aspects of Reasoning about Data (Kluwer Academic Publishers, Dordrecht, 1991).
15. J.F. Peters, S. Ramanna et al., in Proc. of Int. Fuzzy Systems Association World Congress (IFSA'01), Vancouver, July 2001 (to appear).
16. J.F. Peters, A. Skowron et al., in Proc. of Int. Fuzzy Systems Association World Congress (IFSA'01), Vancouver, July 2001 (to appear).
17. Z. Pawlak et al., Bulletin of International Rough Set Society 5(1-2), 177 (2001).
18. L. Polkowski and A. Skowron, International J. Approximate Reasoning 15(4), 333 (1996).
19. L. Polkowski and A. Skowron, in Critical Technology: Proc. of the Third World Congress on Expert Systems, eds. J.K. Lee et al. (Cognizant Communication Corporation, New York, 1996).
20. L. Polkowski and A. Skowron, Information Sciences, An International Journal 104(1-2), 129 (1998).
21. L. Polkowski and A. Skowron eds., Rough Sets in Knowledge Discovery, 1-2 (Physica-Verlag / Springer-Verlag, Heidelberg, 1998).
22. L. Polkowski and A. Skowron, in [33].
23. L. Polkowski and A. Skowron, in Grammar Systems for Multiagent Systems, eds. G. Paun and A. Salomaa (Gordon and Breach Science Publishers, Amsterdam, 1999).
24. L. Polkowski and A. Skowron, in Rough Sets: New Developments in Knowledge Discovery in Information Systems, eds. L. Polkowski, T.Y. Lin, S. Tsumoto (Physica-Verlag / Springer-Verlag, Heidelberg, 2000).
25. L. Polkowski and A. Skowron, in Proc. of RSCTC'2000, LNAI 2005 (Springer-Verlag, Berlin, 2001) (in print).
26. L. Polkowski and A. Skowron, Computational Intelligence (2001) (in print).
27. A. Skowron, Bulletin of International Rough Set Society 5(1-2), 9 (2001).
28. A. Skowron et al., Bulletin of the International Rough Set Society 3(4), 147 (1999).
29. A. Skowron and J. Stepaniuk, International Journal of Intelligent Systems 16(1), 57 (2001).
30. A. Skowron et al., Bulletin of International Rough Set Society 5(1-2), 135 (2001).
31. WITAS project web page: http://www.ida.liu.se/ext/witas/eng.html.
32. L.A. Zadeh, IEEE Trans. on Fuzzy Systems 4, 103 (1996).
33. L.A. Zadeh and J. Kacprzyk eds., Computing with Words in Information/Intelligent Systems 1-2 (Physica-Verlag, Heidelberg, 1999).
34. L.A. Zadeh, AI Magazine 22(1), 73 (2001).
35. N. Zhong et al. eds., New Directions in Rough Sets, Data Mining, and Granular-Soft Computing, Proc. of RSFDGrC'99, LNAI 1711 (Springer-Verlag, Berlin, 1999).

MULTI-AGENT INFRASTRUCTURE FOR AGENT INTEROPERATION IN OPEN COMPUTATIONAL ENVIRONMENTS

KATIA SYCARA
The Robotics Institute, School of Computer Science, Carnegie Mellon University, USA
E-mail: katia@cs.cmu.edu
http://www.cs.cmu.edu/~softagents/

Multi-agent Systems (MASs) are becoming increasingly important: as a scientific discipline, as a software engineering paradigm, and as a commercially viable and innovative technology. Despite the considerable research that has gone into the formation of theories, scientific principles, and guidelines for MASs, there is relatively little experience with building, fielding, and routinely using them. To make such routine use possible, a stable, widely used, widely accessible, and extensible MAS infrastructure is crucial. Various standards bodies (e.g., FIPA) are attempting to define standards for various aspects of MAS infrastructure, such as Agent Communication Languages. However, there is no coherent account of what constitutes a MAS infrastructure, what functionality it supports, what characteristics it should have in order to enable various value-added abilities, and what relations it may have with, and requirements it may impose on, the design and structure of single agents.
Another equally important aspect of MASs that operate in an open world like the Internet, where communication links, information sources, services, and agents can appear and disappear dynamically, is the issue of the discovery and interoperation of agents. White pages and yellow page registries of companies, for example, have been proposed and implemented (e.g., Yahoo business categories) for human understandability. We have coined the term middle agents [Decker & Sycara, IJCAI-97] to describe various agent intermediaries that can act as brokers and discovery services for agents on the Internet. These domain-independent intermediaries facilitate the finding and matching of agents and services with desirable functionalities (e.g., an agent that finds weather information).

Such intermediaries are starting to have appeal for industry. For example, industrial organizations (e.g., SUN) are developing and making accessible software that could constitute a part of a MAS infrastructure, such as JINI for service discovery. Protocols such as UDDI (www.uddi.org) and SOAP (www.soapware.org), and languages such as WSDL (www.wsdl.org), ebXML, and e-speak, are receiving increased visibility.

In this talk, we will present a model of MAS infrastructure and our implemented RETSINA system, which is an example of the general infrastructure model. We will also discuss various applications that we have implemented using RETSINA.

CHAPTER 1
FORMAL AGENT THEORIES

SPY: A MULTI-AGENT MODEL YIELDING SEMANTIC PROPERTIES

F. BUCCAFURRI, D. ROSACI, G. M. L. SARNE, L. PALOPOLI
DIMET, Universita "Mediterranea" di Reggio Calabria, Via Graziella Loc. Feo di Vito, 89100 Reggio Calabria (Italy)
E-mail: {bucca,rosaci,sarne,palopoli}@ing.unirc.it

In Multi-Agent Systems the main goal is to provide fruitful cooperation among agents in order to enrich the support given to user activities. Cooperation can be implemented in many ways, depending on how the local knowledge of agents is represented, and in general consists in providing the user with an integrated view of the individual knowledge bases. The main difficulty, however, is determining which agents are promising candidates for a fruitful cooperation among the (possibly large) universe of agents operating in the net. This paper gives a contribution in this context by proposing a formal framework for representing cooperation in multi-agent networks. Semantic properties are represented by coefficients, and adaptive algorithms permit the computation of a set of suggested agents for cooperation. Actual choices of the users modify internal parameters in such a way that the next suggestions are closer to the users' expectations.

1 Introduction

Coordinating the activities of multiple agents is a basic task for the viability of any system in which such agents coexist. Each agent in an agent community does not have to learn only by its own discovery; it can also learn through cooperation with other agents, by sharing individually learned knowledge. Indeed, cooperation is often considered one of the key concepts of agent communities [6, 7]. Researchers in Intelligent Agent Systems have recognized that learning and adaptation are essential mechanisms by which agents can evolve coordinated behaviours that meet the knowledge of the interest domain and the requirements of the individual agents [3, 10]. In order to realize such cooperation, some techniques developed in the field of Machine Learning have been introduced in various multi-agent systems [8, 5, 4].
Such techniques open, on the one hand, the possibility of integrating individual agent knowledge in order to acquire an enhanced knowledge of the environment. On the other hand, they face the problem of determining which agents are promising candidates for suitable knowledge integration, taking into account situations of the kind mentioned above. In this context, this paper describes a new multi-agent model, called SPY, able to inform the individual agent of a multi-agent network about which agents are the most appropriate to be contacted for possible knowledge integration. The main contributions of this paper are the following. (1) We point out which properties can be considered important for driving the integration of knowledge coming from non-local agents, and we give a formal model in which such properties are represented as quantitative information by means of a number of real coefficients. (2) We propose an adaptive method for determining, for a given agent a of a multi-agent net, the most appropriate agents to cooperate with a. Such a method is adaptive in the sense that it takes into account some reactive properties of users and, as such, its result depends on their behaviour. (3) On the basis of this model, we design a strategy for supporting the cooperation of agents operating in a multi-agent network. The first step consists in providing the user with a number of agent lists, each containing the most appropriate agents for cooperation, from which the user can choose the agents she/he wants to contact for supporting her/his activity. The multiplicity of such choice lists reflects the multiplicity of the properties that can be used as preference criteria. Users are free to use the suggested lists only partially, or can ignore them. In any case, the user's behaviour induces a modification of some coefficients (describing reactive properties) in such a way that the lists suggested in the future are (hopefully) closer to real user needs. Therefore, the system learns from the user's behaviour how to provide the users with suggestions meeting their expectations as much as possible. (4) Finally, we design the architecture of a system implementing the above agent cooperation model.(a)

Throughout the paper we refer to a given set of agents A of cardinality n, and we suppose that all agents in A can cooperate with each other. Thus we can see the set A as an undirected complete graph of agents whose arcs represent possible cooperation. W.l.o.g., we identify the agents in A with the cardinal numbers {1, ..., n}.

2 The Knowledge Bases

2.1 An Ontology for describing the domain of interest

Since we consider only homogeneous agents, we suppose that a unique environment is associated with our agent net. We represent such an environment in our model by a set of objects. For the rest of the section we consider a set of objects O as given.

A domain D on O is a set of classes of objects. We suppose that a suitable semantics underlying the classification provided in a domain is given. The notion of domain is formally defined next.

Definition 2.1 A Domain on O, denoted by D, is a subset of 2^O such that both: (1) ∀o ∈ O, {o} ∈ D, and (2) there exists an element r of D, called the root, such that ∀o ∈ O, o ∈ r. Elements of D are called classes.

(a) This paper is a short version of the full report [2]. For space limitations, the system architecture is not illustrated in this paper, and theorems are provided without proofs. A detailed description of the system, as well as proofs of the theorems, can be found in [2].
In words, a domain is a set of object classes containing a root class collecting all the objects of O and, for each object o of O, the singleton class {o}. Throughout the rest of the section, we assume a domain D on O as given.

Set containment induces a partial ordering among the elements of a domain D. A domain D plus this ordering is called an ontology.

Definition 2.2 An ontology on D, denoted by OD, is a partially ordered set (D, ⊆). The ontology graph of OD is a directed graph G(OD) = (N, A), where N (the set of nodes) coincides with D and A (the set of arcs) is the binary relation obtained as the transitive reduction of the relation ⊆ of OD.(b) The node r of G(OD), where r is the root of D, is called the root of G(OD).

Note that, as a consequence of item (2) of Definition 2.1, every other node is reachable from the root r of G(OD). Further, by item (1) of Definition 2.1, the nodes of G(OD) with out-degree 0 coincide with the singleton classes of D. An ontology based on a generalization hierarchy is suitable for representing many real-world situations, like the topics in web engines and in web sites, the items in e-commerce, the staff hierarchy of an organization, and so on. It is worth noting that this is not the only possible choice for representing the environment of agents. Indeed, in different contexts, such as semi-structured data in web documents, other kinds of ontologies may be better suited (for example, OEM-graphs [1], SDR-networks [9], etc.).

2.2 The Local Knowledge Base

The ontology represents the common knowledge about the environment in which the agents work. However, each agent may have a partial view of the ontology, representing the portion of the world that the user monitored by the agent selects by her/his activity. Inside this portion of the ontology, different priorities for the classes can be inferred by exploiting user behaviour. This is encoded in the notion of the Local Knowledge Base (LKB, for short), defined next.

Definition 2.3 Given an ontology OD on D and an agent a, a Local Knowledge Base LKBa (of a on OD) is a pair (Ga, pa), such that: (i) Ga = (Na, Aa) is a subgraph of G(OD) (i.e., Na ⊆ D, Aa ⊆ A) containing the root r of G(OD) and such that each n ∈ Na is reachable in Ga from r, and (ii) pa is a function, called the priority function, assigning to each arc (i, j) of Ga a real weight ranging from 0 to 1 such that:

p(i, j) = cij / Σ{cik : k ∈ Adj(i)},

where Adj(i) is the set of nodes adjacent to i and, for each k ∈ Adj(i), cik counts how many times the user of a has selected an object (i.e., a leaf node) through a path selection including the arc (i, k). Note that the coefficients cik on a path of nodes r, i1, ..., is are updated only when the leaf node is, corresponding to a single object of the domain, is selected. The root r of G(OD) is also called the root of LKBa.

A Local Knowledge Base, representing the local view of the agent, is thus obtained by extracting from the ontology graph a subgraph including all the classes accessed by the user (and thus at least the root node). Moreover, the arcs of the graph so obtained are weighted so as to assign the highest priority to the most accessed classes.

(b) (A, B) is in the transitive reduction of ⊆ iff A ⊂ B and there is no class C such that A ⊂ C and C ⊂ B.
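A hedged sketch of Definition 2.3 in Python: arc counters are incremented along the path each time a leaf object is selected, and priorities are the normalized counts. The class and method names are invented for this illustration.

    from collections import defaultdict

    class LKB:
        def __init__(self):
            self.count = defaultdict(int)    # c_ik per arc (i, k)

        def select(self, path):
            # The user reaches a leaf object along 'path' (root first);
            # every traversed arc has its counter incremented.
            for i, k in zip(path, path[1:]):
                self.count[(i, k)] += 1

        def priority(self, i, j):
            # p(i, j) = c_ij / sum of c_ik over the arcs leaving i.
            total = sum(c for (h, _), c in self.count.items() if h == i)
            return self.count[(i, j)] / total if total else 0.0

    lkb = LKB()
    lkb.select(['root', 'sports', 'tennis'])
    lkb.select(['root', 'music'])
    print(lkb.priority('root', 'sports'))    # 0.5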
3 Extraction of the Semantic Properties and Integration of LKBs

Besides his/her local agent, each user looks at the other agents of the net as a source of potentially interesting information for enriching the support to his/her activity. Interest in agents can be defined by considering some semantic properties. Such properties, useful for driving users' choices, are of two types: (i) local properties, taking into account the information stored in the LKBs, and (ii) global properties, merging local properties with external knowledge extracted from the general context. An important feature of the model is that the merge performed in the construction of global properties is based on adaptive learning involving some parameters that take user behaviour into account. In other words, global properties exploit an important kind of properties (encoded by some parameters) directly reflecting the reactions of users to system advice. We call such additional properties reactive properties. Next we describe the set of properties used in the model.

3.1 Local properties: Similarity

The only local property we consider is the property we call similarity between two agents i and j, representing a measure of the similarity of the two corresponding LKBs. Such a coefficient is a real value ranging from 0 to 1.

Definition 3.1 Let i and j be two agents. Let G^i = (N^i, A^i) and G^j = (N^j, A^j) be the two graphs of their LKBs. Let p^i and p^j be the corresponding priority functions. We define the similarity Sij between i and j as

Sij = 1 − (1 / |A^i ∪ A^j|) · Σ{γhk : (h, k) ∈ A^i ∪ A^j},

where γhk = |p^i(h, k) − p^j(h, k)| if (h, k) ∈ A^i ∩ A^j, and γhk = 1 otherwise.

Observe that the term (1 / |A^i ∪ A^j|) · Σ{γhk : (h, k) ∈ A^i ∪ A^j} in the expression defining Sij represents a distance between the agents i and j. Such a distance is defined as the mean of a number of contributions γhk, each corresponding to an arc (h, k) belonging to the set A^i ∪ A^j. For common arcs of the two LKBs, that is, arcs belonging to the intersection of A^i and A^j, γhk is the difference (in absolute value) between the respective priority functions (note that such a difference is a real value ranging from 0 to 1). In words, common arcs can be viewed as "homologous" arcs, and their distance measures how much these arcs differ in terms of weight. For the remaining arcs (h, k) ∉ A^i ∩ A^j, we assign the value 1 to the coefficient γhk. Indeed, an arc belonging to A^i but not to A^j has no "homologous" arc in the LKB graph of the agent j (and vice versa), and thus its contribution to the overall distance must be set to the maximum value, that is, 1.
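Definition 3.1 translates into a few lines of Python; here each LKB is assumed to be given as a dict mapping arcs (h, k) to priorities, which is an illustrative encoding rather than the paper's.

    def similarity(p_i, p_j):
        # S_ij = 1 - mean over arcs of |p_i - p_j| (common arcs)
        # or 1 (arcs present in only one LKB).
        arcs = set(p_i) | set(p_j)
        if not arcs:
            return 1.0
        dist = sum(abs(p_i[a] - p_j[a]) if a in p_i and a in p_j else 1.0
                   for a in arcs) / len(arcs)
        return 1.0 - dist

    p1 = {('root', 'sports'): 0.5, ('root', 'music'): 0.5}
    p2 = {('root', 'sports'): 0.7, ('root', 'cinema'): 0.3}
    print(similarity(p1, p2))   # 1 - (0.2 + 1 + 1)/3 = 0.2666...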
3.2 Global Properties: Interest and Attractiveness

Recall that global properties merge local properties with knowledge extracted from the context. In this section we introduce the notion of the interest coefficient, representing a measure of the global properties of a given agent as perceived by another one. Hence, for a pair of agents i and j, the interest coefficient, besides the similarity between i and j, must also take into account knowledge extracted from the context. But which kind of contextual knowledge has to be considered meaningful? The choice we make in our model is the following: the knowledge extracted from the context, used by the agent i for defining the interest coefficient Iij w.r.t. another agent j, is a measure of the global interest of all the other agents (different from i) w.r.t. the agent j, that is, a measure of a sort of attractiveness of the agent j as perceived by the agent i.

Recalling that the interest, besides the contextual knowledge, must take into account also the local knowledge (i.e., the similarity), the above definition of contextual knowledge leads us to require that, for each i ∈ A \ {j}:

Iij = φij(Sij, μij({Ikj | k ≠ i}))    (1)

where μij and φij are suitable functions yielding real values from 0 to 1. In particular, μij returns a measure of the attractiveness of the agent j detected by the agent i from the values of the interest coefficients of all the agents (different from i) w.r.t. j, while φij combines such a measure with the similarity Sij. Clearly, the function φij also plays the role of weighing the importance, for the agent i, of the local knowledge w.r.t. the contextual one.

For μij and φij (where i and j are two agents) we adopt in our model the following choices: (i) μij is the function computing the mean of the interest coefficients w.r.t. j of all the agents different from i and j, and (ii) φij is a function computing a linear combination of the similarity coefficient between i and j and the attractiveness of j w.r.t. i. Applying the above definitions of μij and φij, (1) becomes the following linear system:

Iij = ψij · (Pi · Sij + (1 − Pi) · (1 / (n − 2)) · Σ{Ikj : k ∈ A \ {i, j}}), for each i ∈ A \ {j}    (2)

where ψij and Pi, for each i ∈ A \ {j}, are adaptive parameters ranging from 0 to 1, representing a measure of the reactive properties that we suppose are learned from the user behaviour. ψij plays the role of a reducing factor, filtering the advice of the system on the basis of the user behaviour, while Pi measures the importance that the user gives to the local knowledge (similarity) w.r.t. the contextual one. Note that both ψij and Pi can be estimated once the reactive properties are defined; we deal with this issue in the next section. Thus, given an agent j, any value assignment to the interest coefficients of all the other agents w.r.t. j must satisfy (2). The next theorem ensures the existence and uniqueness of such a value assignment, for every value of the parameters occurring in (2).

Theorem 3.1 Given an agent j ∈ A and a set of real coefficients {Pi, ψij, Sij | i ∈ A \ {j}}, with Pi ∈ [0,1], ψij ∈ [0,1], Sij ∈ [0,1], Pk ≠ 0 for some k, and ψrs ≠ 1 for some r, s, there exists a unique (n−1)-tuple of real values S = (I1j, ..., I(j−1)j, I(j+1)j, ..., Inj) satisfying (2), with Iij ∈ (0,1) for each Iij occurring in S.

The above result allows us to define the interest coefficient list of an agent j as the unique solution of (2).

Definition 3.2 Given an agent j ∈ A, the interest coefficient list of j is the unique (n−1)-tuple of real values (I1j, ..., I(j−1)j, I(j+1)j, ..., Inj) satisfying (2). Given an agent i ≠ j, the interest coefficient of i w.r.t. j is the value Iij occurring in the interest coefficient list of j.

Besides the interest property, from the knowledge of the interest coefficient lists agents can exploit a second type of property. Indeed, an agent can compare different agents on the basis of their attractiveness coefficient, representing the component of the interest capturing only the contextual knowledge.

Definition 3.3 Given a pair of agents i, j ∈ A, the attractiveness of j perceived by i is the real coefficient Aij (ranging from 0 to 1) defined as:

Aij = (1 / (n − 2)) · Σ{Ikj : k ∈ A \ {i, j}},

where (I1j, ..., I(j−1)j, I(j+1)j, ..., Inj) is the interest coefficient list of the agent j.
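Since system (2) is linear with coefficients bounded by the parameters above, the interest coefficient list can be approximated by simple fixed-point iteration. The following Python sketch assumes n ≥ 3 and illustrative data structures (dicts keyed by agent identifiers); it is a numerical sketch, not the paper's algorithm, which relies on the uniqueness result of Theorem 3.1.

    def interest_list(j, agents, S, P, psi, iters=500):
        # Iterate I_ij = psi_ij * (P_i * S_ij + (1 - P_i) * mean_k I_kj)
        # over k in agents \ {i, j}, starting from an arbitrary point.
        others = [i for i in agents if i != j]
        I = {i: 0.5 for i in others}
        for _ in range(iters):
            I = {i: psi[i][j] * (P[i] * S[i][j] +
                                 (1 - P[i]) * sum(I[k] for k in others
                                                  if k != i) / (len(others) - 1))
                 for i in others}
        return I

    def attractiveness(i, interest):
        # A_ij: mean interest w.r.t. j of all agents other than i and j
        # ('interest' is the coefficient list of j, keyed by agent).
        rest = [v for k, v in interest.items() if k != i]
        return sum(rest) / len(rest)

    agents = [1, 2, 3, 4]
    S = {i: {j: 0.6 for j in agents} for i in agents}
    P = {i: 0.5 for i in agents}
    psi = {i: {j: 0.9 for j in agents} for i in agents}
    print(interest_list(4, agents, S, P, psi))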
Suppose the similarities between $i$ and every other agent are known, as well as both the interest coefficient of $i$ w.r.t. every other agent and the attractiveness of all the agents as perceived by $i$. As previously discussed, such values can be effectively computed once a number of parameters are set (actually, they can be suitably initialized, and their updating can be learnt from the behaviour of the user, as we shall explain in the sequel). Thus, three lists of agents can be presented to the user associated with the agent $i$, each associated with one property among similarity, interest and attractiveness. We denote these lists $L_S(i)$, $L_I(i)$, and $L_A(i)$. $L_S(i)$ ($L_I(i)$, $L_A(i)$, resp.) is the list of the $n-1$ agents $j$ (different from $i$) ordered by decreasing similarity (interest, attractiveness, resp.) coefficient $S_{ij}$ ($I_{ij}$, $A_{ij}$, resp.). When the user of $i$ chooses an agent $j$ from the list $L_S(i)$ ($L_I(i)$, $L_A(i)$, resp.), it means that she/he perceived only the property of similarity (interest, attractiveness, resp.) about the agent $j$. From the choices of the users, useful knowledge can thus be drawn, which is potentially usable as feedback for correcting the advice given to them. This issue is discussed in the next section.

3.4 Reactive Properties

By reactive properties we mean properties describing the reactions of users to the suggestions received from the system at a given time, which must be taken into account for adapting future responses of the system. We implement such adaptation of the system to the user behaviour by including in the interest coefficient definition (see Section 3.2) some specific coefficients that are automatically updated while the system runs. In this section we describe both the role of such coefficients and the rules defining their adaptation to the user behaviour. Recall that, given a pair of agents $i$ and $j$, for defining the interest coefficient $I_{ij}$, two parameters $P_i$ and $\psi_{ij}$ must be set. They are real parameters ranging from 0 to 1. $P_i$ encodes the preference property and is called the preference coefficient of the agent $i$, while $\psi_{ij}$ is the product $B_{ij} \cdot C_{ij}$ of the benevolence coefficient $B_{ij}$ and the consent coefficient $C_{ij}$, resp., of $i$ w.r.t. $j$. Given an agent $i$, we denote by $L_S(i)$, $L_I(i)$, and $L_A(i)$ the three choice lists presented to the user of agent $i$ by the system.

The Preference Property. It is described by a real coefficient ranging from 0 to 1, denoted by $P_i$ and called the preference coefficient. The property measures how much, for an agent $i$, the similarity property is more important than the attractiveness property in defining global properties. It is easily recognized that in the definition of interest given in Section 3.2 the coefficient $P_i$ plays just this role. Now we define how the coefficient $P_i$ is updated. Suppose that at a given time the user of the agent $i$ makes a selection of agents. Denote by $SS_i$ ($SI_i$, $SA_i$, resp.) the set of the agents that the user has selected from the list $L_S(i)$ ($L_I(i)$, $L_A(i)$, resp.). We interpret the behaviour of the user in the following way. The choice of an agent from a list, say $L_S(i)$ (resp., $L_A(i)$), means that the user relies on the associated property, say similarity (resp., attractiveness). We can then interpret the former choice as an implicit suggestion from the user to set the coefficient $P_i$ to 1, and the latter as an implicit suggestion to set this value to 0. In case the user chooses from the list $L_I(i)$, we infer that the user accepts the current value of the coefficient $P_i$.
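Deriving the three choice lists from the coefficients is then a matter of sorting. A minimal sketch (the coefficient dictionaries mirror those of the previous sketches and are illustrative):

    def choice_lists(i, agents, S, I, A):
        """Return (L_S, L_I, L_A): the agents other than i, each list
        ordered by decreasing similarity, interest and attractiveness."""
        others = [j for j in agents if j != i]
        L_S = sorted(others, key=lambda j: S[(i, j)], reverse=True)
        L_I = sorted(others, key=lambda j: I[(i, j)], reverse=True)
        L_A = sorted(others, key=lambda j: A[(i, j)], reverse=True)
        return L_S, L_I, L_A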
Taking into account the above observations, the updating of $P_i$ after a selection step is defined as:

$P_i = \frac{1}{2} \cdot \left( \frac{|SS_i| + P_i \cdot |SI_i|}{|SS_i| + |SI_i| + |SA_i|} + P_i \right)$

This updating is obtained by computing the average between the old value of $P_i$ and a new contribution corresponding to the mean of the "suggested" values for $P_i$ (1 for each agent selected from $L_S(i)$, the current $P_i$ for each agent selected from $L_I(i)$, and 0 for each agent selected from $L_A(i)$). Observe that computing the mean with the old value allows us to keep memory of the past, avoiding drastic changes of the coefficient.

The Benevolence Property. This property measures a sort of availability of the agent $j$ from which a user $i$ requires to share knowledge. Such a property is used in order to weight the interest of $i$ w.r.t. $j$. For instance, an agent $j$ that recently, and several times, has denied collaboration to $i$ should become of little interest for $i$. The parameter encoding such knowledge is called the benevolence coefficient, denoted by $B_{ij}$, and takes real values ranging from 0 to 1. $B_{ij} = 0$ (resp., $B_{ij} = 1$) means the agent $j$ is completely unavailable (resp., available) to fulfill the requests of $i$. The response of $j$ to the requests of $i$ updates the value of $B_{ij}$ according to the following rules: $B_{ij} = \min(1, B_{ij} + \delta)$ if $j$ grants the request of $i$, and $B_{ij} = \max(0, B_{ij} - \delta)$ if $j$ denies the request of $i$, where $\delta$ is a (reasonably small) positive real value.

The Consent Property. This property describes how much the user of an agent $i$ trusts the suggestions of the system regarding another agent $j$ made on the basis of the interest property. The coefficient associated with this property is denoted by $C_{ij}$ and is called the consent coefficient. The updating rules defining how to adapt the coefficients $C_{ij}$ after a user selection step take into account only the portion of the selection performed on the list $L_I(i)$. Indeed, from this portion of the user selection, we can draw information about the opinion of the user of the suggestions provided by the system. For instance, if the user of $i$ completely trusts the system's capability of providing, through the list $L_I(i)$, the agents best suited for cooperation, she/he will choose exactly the first $k$ agents appearing in $L_I(i)$, where $k$ is the size of the portion of her/his selection extracted from $L_I(i)$. This is not in general the case; that is, some of the $k$ agents chosen from $L_I(i)$ may not occur among the first $k$ agents of $L_I(i)$. We defined the updating rules by taking into account the above observations, according to the following idea: every agent $h$ chosen by the user from $L_I(i)$ produces a gain in the consent coefficient $C_{ih}$ if $h$ is a candidate proposed by the system for selection, and produces an attenuation of $C_{ih}$ otherwise. More formally, given an agent $i$ and a selection $S_i$ (set of agents) extracted by the user of $i$ from $L_I(i)$, for each $h \in S_i$: $C_{ih} = \min(1, C_{ih} + \delta)$ if $h$ appears among the first $|S_i|$ elements of $L_I(i)$, and $C_{ih} = \max(0, C_{ih} - \delta)$ otherwise, where $\delta$ is a (reasonably small) positive real value.

3.5 Integration of Local Knowledge Bases

Cooperation between two agents is implemented in our model by the integration of their LKBs. Thus, the user of an agent $i$ who has selected an agent $j$ from one of the three choice lists can exploit the cooperation of $j$ by consulting the Integrated Knowledge Base, obtained by integrating the LKB of $i$ with the LKB of $j$. It is worth noting that in our model the integration technique is orthogonal w.r.t. the process of selecting the most appropriate agents for cooperation, even if it is necessary for implementing such a cooperation.
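The three updating rules can be collected into a few small functions. A minimal sketch under the reconstructions above (the value of δ and the data layout are illustrative):

    def update_preference(P_i, n_sim, n_int, n_attr):
        """Average the old P_i with the mean of the 'suggested' values:
        1 per choice from L_S(i), P_i per choice from L_I(i), and 0 per
        choice from L_A(i)."""
        total = n_sim + n_int + n_attr
        if total == 0:
            return P_i  # no selection was made, nothing to learn
        suggested = (n_sim * 1.0 + n_int * P_i) / total
        return 0.5 * (suggested + P_i)

    def update_benevolence(B_ij, granted, delta=0.05):
        """Raise B_ij on granted requests, lower it on denials, clipped to [0, 1]."""
        return min(1.0, B_ij + delta) if granted else max(0.0, B_ij - delta)

    def update_consent(C_i, selection, L_I, delta=0.05):
        """Reward agents chosen from the top |selection| of L_I(i); penalize the rest."""
        top = set(L_I[:len(selection)])
        for h in selection:
            if h in top:
                C_i[h] = min(1.0, C_i[h] + delta)
            else:
                C_i[h] = max(0.0, C_i[h] - delta)
        return C_i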
In order to concentrate the attention of the reader on the main contribution of this paper, consisting in the extraction of semantic properties and their adaptation to the user behaviour, and due to space limitations, we omit a detailed description of the integration process, which can be found in 2.

4 Discussion and Conclusion

We presented a formal framework for representing cooperation among agents in a multi-agent environment. The model is based on the extraction of some semantic properties capturing both local and contextual knowledge about agents. Such properties, encoded by suitable coefficients, guide users in selecting from the agent net the most promising candidate agents for fruitful cooperation. User choices are exploited as feedback for adapting the coefficients in such a way that a trade-off is obtained between similarity and attractiveness, on the one hand, and agent congestion and user dissatisfaction, on the other hand. As examples, consider: (i) An agent $a$ with high similarity and low attractiveness as perceived by another agent $b$. The user of $b$ can decide to contact a less similar, but more attractive, agent $c$; this means that the current similarity does not fully satisfy $b$. Since $b$ has chosen $c$, it will probably make choices more similar to those of $c$ than to those of $a$, and the similarity between $a$ and $b$ will decrease, coherently with the dissatisfaction of the user. (ii) An agent $a$ with high interest and low similarity (or low attractiveness) as perceived by another agent $b$. The user of $b$ can decide to contact a less interesting, but more similar (or more attractive), agent $c$. As a consequence, the interest for $a$ perceived by $b$ will decrease, due to the decrease of the consent coefficient $C_{ba}$. (iii) An agent $a$ with high interest and high attractiveness as perceived by another agent $b$. The user of $b$ knows that high attractiveness probably means a long waiting time for obtaining answers from $a$, and can decide to contact a less interesting agent $c$. As a consequence, the interest of $b$ for $a$ will decrease.

References

1. S. Abiteboul. Querying semi-structured data. In Proc. of the Int. Conf. on Database Theory (ICDT'97), LNCS, Springer, 1-18, Delphi, 1997.
2. F. Buccafurri, L. Palopoli, D. Rosaci, G.M.L. Sarne. The SPY system. TR 1/01, DIMET, Univ. Reggio Calabria, Italy, 2001.
3. C. Byrne and P. Edwards. Collaborating to refine knowledge. In Workshop on Agents that Learn from Other Agents (ML'95), USA, 1995.
4. A. Chavez, A. Moukas and P. Maes. Challenger: a multi-agent system for distributed resource allocation. In Proc. of the 1st Int. Conf. on Autonomous Agents (Agents'97), ACM Press, 323-331, New York, 1997.
5. Y. S. Choi and S. I. Yoo. Multi-agent web information retrieval: a neural network based approach. In Proc. of the 1st Int. Symposium IDA'99, ed. Berthold, Springer, Amsterdam, 1999.
6. J.E. Doran, S. Franklin, N.R. Jennings and T.J. Norman. On cooperation in multi-agent systems. The Knowledge Engineering Review 12, 3 (1997).
7. M. Fisher, J. Muller, M. Schroeder, G. Staniford and G. Wagner. Methodological foundations for agent-based systems. The Knowledge Engineering Review 12, 3 (1997).
8. A. Moukas and P. Maes. Amalthaea: an evolving multi-agent information filtering and discovery system for the WWW. Autonomous Agents and Multi-agent Systems 1, 1 (1998).
9. L. Palopoli, G. Terracina and D. Ursino. A graph-based approach for extracting terminological properties of elements. In Proc. of the Int. Conf. on Data Engineering (ICDE'01), Heidelberg, 2001.
10. S. Sen.
A foundational principle for promoting cooperative behavior among self-interested agents. In Proc. of the 1st Int. Conf. on Multi-Agent Systems, MIT Press, 1995.

ABT WITH ASYNCHRONOUS REORDERING

MARIUS-CALIN SILAGHI, DJAMILA SAM-HAROUD, AND BOI FALTINGS

Swiss Federal Institute of Technology Lausanne, 1015 Ecublens, Switzerland
{Marius.Silaghi,Djamila.Haroud,Boi.Faltings}@epfl.ch

Existing Distributed Constraint Satisfaction (DisCSP) frameworks can model problems where a) variables and/or b) constraints are distributed among agents. Asynchronous Backtracking (ABT) is the first asynchronous complete algorithm for solving DisCSPs of type a. The order on variables is well known as an important issue for constraint satisfaction. Previous polynomial space asynchronous algorithms require a static order on their variables for completeness. We show how agents can asynchronously and concurrently propose reorderings in ABT while maintaining the completeness of the algorithm with polynomial space complexity.

1 Introduction

Distributed combinatorial problems can be modeled using the general framework of Distributed Constraint Satisfaction (DisCSP). A DisCSP is defined in 1 as: a set of agents, $A_1, \ldots, A_n$, where each agent $A_i$ controls exactly one distinct variable $x_i$ and each agent knows all constraint predicates relevant to its variable. The case with more variables in an agent can be obtained quite easily from here. Asynchronous Backtracking (ABT) 1 is the first complete and asynchronous search algorithm for DisCSPs. A simple modification was mentioned in 1 to allow for a version with polynomial space complexity.

The completeness of ABT is ensured with the help of a static order imposed on agents. So far, no asynchronous search algorithm has offered the possibility to perform reordering without losing either the completeness or the polynomial space property. In this paper we describe a technique that allows the agents to asynchronously and concurrently propose changes to their order. We then prove that, using a special type of markers, the completeness of the search is ensured with polynomial space complexity.

This is the first asynchronous search algorithm that allows for asynchronous dynamic reordering while being complete and having a polynomial space complexity. Here we have built on ABT since it is an algorithm easier to describe than its subsequent extensions. The technique can nevertheless be integrated in a straightforward manner in most extensions of ABT. 2

2 Related Work

The first complete asynchronous search algorithm for DisCSPs is Asynchronous Backtracking (ABT) 1. For simplicity, but without severe loss of generality, the approach in 1 considers that each agent maintains only one variable. More complex definitions were given later 3,4. Other definitions of DisCSPs 5,6,7 have considered the case where the interest in constraints is distributed among agents. 6 proposes versions that fit the structure of a real problem (the nurse transportation problem). The Asynchronous Aggregation Search (AAS) 7 algorithm actually extends ABT to the case where the same variable can be instantiated by several agents and an agent may not know all constraint predicates relevant to its variables. AAS offers the possibility to aggregate several branches of the search. An aggregation technique for DisCSPs was then presented in 8 and allows for a simple understanding of the privacy/efficiency mechanisms.
The strong impact of the ordering of the variables on distributed search has so far been addressed in 9,6,10.

3 Asynchronous Backtracking (ABT)

In asynchronous backtracking, the agents run concurrently and asynchronously. Each agent instantiates its variable and communicates the variable value to the relevant agents. Since here we do not assume FIFO channels, in our version a local counter, $c_{x_i}$, is incremented each time a new instantiation is proposed, and its current value tags each generated assignment.

Definition 1 (Assignment) An assignment for a variable $x_i$ is a tuple $(x_i, v, c)$ where $v$ is a value from the domain of $x_i$ and $c$ is the tag value.

Among two assignments for the same variable, the one with the higher tag (attached value of the counter) is the newest. A static order is imposed on agents and we assume that $A_i$ has the $i$-th position in this order. If $i > j$ then $A_i$ has a lower priority than $A_j$, and $A_j$ has a higher priority than $A_i$.

Rule 1 (Constraint-Evaluating-Agent) Each constraint $C$ is evaluated by the lowest priority agent whose variable is involved in $C$.

Each agent holds a list of outgoing links represented by a set of agents. Links are associated with constraints. ABT assumes that every link is directed from the value-sending agent to the constraint-evaluating agent.

Definition 2 (Agent_View) The agent_view of an agent $A_i$ is a set containing the newest assignments received by $A_i$ for distinct variables.

Based on their constraints, the agents perform inferences concerning the assignments in their agent_view. By inference the agents generate new constraints called nogoods.

Definition 3 (Nogood) A nogood has the form $\neg N$, where $N$ is a set of assignments.

An add-link message announces $A_i$ that the sender $A_j$ owns constraints involving $x_i$. $A_i$ inserts $A_j$ in its outgoing links and answers with an ok?.

The agents start by instantiating their variables concurrently and send ok? messages to announce their assignment to all agents with lower priority in their outgoing links. The agents answer received messages according to Algorithm 1 (in plain ABT, except for the pseudo-code delimited by '*', which belongs to ABTR). 4

Definition 4 (Valid assignment) An assignment $(x, v_1, c_1)$ known by an agent $A_i$ is valid for $A_i$ as long as no assignment $(x, v_2, c_2)$, $c_2 > c_1$, is received. A nogood is invalid if it contains invalid assignments.

The next property is mentioned in 1 and is also implied by Theorem 1, presented later.

Property 1 If only one nogood is stored per value, then ABT has polynomial space complexity in each agent, $O(dn)$, while maintaining its completeness and termination properties; $d$ is the domain size and $n$ is the number of agents.

4 Histories

Now we introduce a marking technique that allows for the definition of a total order among the proposals made concurrently and asynchronously by a set of ordered agents on a shared resource (e.g. an order).

Definition 5 A proposal source for a resource $\mathcal{R}$ is an entity (e.g. an abstract agent) that can make specific proposals concerning the allocation (or valuation) of $\mathcal{R}$. We consider that an order $\prec$ is defined on proposal sources. The proposal sources with lower position according to $\prec$ have a higher priority. The proposal source for $\mathcal{R}$ with position $k$ is noted $P_k^{\mathcal{R}}$, $k \geq \kappa$, where $\kappa$ is the first position.

Definition 6 A conflict resource is a resource for which several agents can make proposals in a concurrent and asynchronous manner.

Each proposal source $P_k^{\mathcal{R}}$ maintains a counter $C_k^{\mathcal{R}}$ for the conflict resource $\mathcal{R}$. The markers involved in our marking technique for ordered proposal sources are called histories.
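The same kind of tagging underlies Definitions 1, 2 and 4 of the previous section: each assignment carries a counter value and only the newest survives. A minimal Python sketch of that bookkeeping (the class layout is an illustrative assumption, not the authors' implementation):

    class AgentView:
        """Holds the newest received assignment (x, v, c) per variable (Defs. 1, 2)."""

        def __init__(self):
            self.newest = {}  # variable -> (value, counter)

        def receive(self, x, v, c):
            """Store (x, v, c) if it is newer than what is held; return True if stored."""
            if x not in self.newest or c > self.newest[x][1]:
                self.newest[x] = (v, c)
                return True
            return False  # an older (invalid) assignment; ignored

        def is_valid(self, x, v, c):
            """Def. 4: valid while no higher-tagged assignment for x has arrived."""
            return self.newest.get(x) == (v, c)

        def nogood_valid(self, nogood):
            """A nogood is invalid if it contains any invalid assignment."""
            return all(self.is_valid(x, v, c) for (x, v, c) in nogood)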
Figure 1. Simple scenarios with messages for proposals on a resource, x.

Definition 7 A history is a chain $h$ of pairs $|a{:}b|$ that can be associated to a proposal for $\mathcal{R}$. A pair $p = |a{:}b|$ in $h$ signals that a proposal for $\mathcal{R}$ was made by $P_a^{\mathcal{R}}$ when its counter $C_a^{\mathcal{R}}$ had the value $b$, and that it knew the prefix of $p$ in $h$. An order $\propto$ (read "precedes") is defined on pairs such that $|i_1{:}l_1| \propto |i_2{:}l_2|$ if either $i_1 < i_2$, or $i_1 = i_2$ and $l_1 > l_2$.

Definition 8 A history $h_1$ is newer than a history $h_2$ if a lexicographic comparison on them, using the order $\propto$ on pairs, decides that $h_1$ precedes $h_2$.

$P_k^{\mathcal{R}}$ builds a history for a new proposal on $\mathcal{R}$ by prefixing to the pair $|k{:}\mathrm{value}(C_k^{\mathcal{R}})|$ the newest history that it knows for a proposal on $\mathcal{R}$ made by any $P_a^{\mathcal{R}}$, $a < k$.

5 ABT with Asynchronous Reordering (ABTR)

Definition 11 (Known order) An ordering known by $R^i$ (respectively $A^i$) is the order $o$ with the newest history among those proposed by the agents $R^k$, $0 \leq k \leq i$. Given two histories $h_1$ and $h_2$, their reordering position is the position of the highest priority agent that is reordered between $h_1$ and $h_2$.

New optional messages for reordering are: heuristic messages for heuristic-dependent data, and reorder messages announcing a new ordering, $(o, h)$. An agent $R^i$ announces its proposed order $o$ by sending reorder messages to all agents $A^k(o)$, $k > i$, and to all agents $R^k$, $k > i$. Each agent $A^i$ and each agent $R^i$ has to store a set of orderings denoted $C_{ord}$. $C_{ord}$ contains the ordering with the newest history that was received from each $R^j$, $j \leq i$.(b) When a message is received which contains an order with a history $h$ that is newer than the history $h^*$ of $C_{ord}$, let the reordering position of $h$ and $h^*$ be $l_r$. The assignments for the variables $x^k$, $k \geq l_r$, are invalidated.(c)

The agents $R^i$ modify the ordering in a random manner or according to special strategies appropriate for a given problem.(d) Sometimes it is possible to assume that the agents want to collaborate in order to decide an ordering.(e) The heuristic messages are intended to offer data for reordering proposals. The parameters depend on the reordering heuristic used. The heuristic messages can be sent by any agent to the agents $R^k$, and may only be sent by an agent to $R^k$ within a bounded time, $t_h$, after having received a new assignment for $x^k$.(f)

(b) The agents absent from the ordering in a nogood are typically not needed by $A^i$; $A^i$ receives them when it receives the corresponding reorder message. (c) Alternative rule: $A^i$ can keep valid the assignments of the new variables $x^k$, $i \geq k > l_r$, but broadcasts $x^i$ again. (d) E.g., first the agents forming a coalition with $R^i$. (e) This can aim to improve the efficiency of the search. Since ABT performs forward checking, it may be possible to design useful heuristics. (f) In 11 we explain how $R^i$ can redelegate itself.

At a certain moment, due to message delays, there can be several entities believing that they are delegated to act for $R^i$ based on the ordering they know. However, any other agent can coherently discriminate among
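Histories are compared pairwise and lexicographically under the order of Definition 7. A minimal Python sketch, encoding a history as a list of (source, counter) pairs; the tie-break when one history is a proper prefix of the other is our assumption, not stated explicitly in the recovered text:

    def pair_precedes(p1, p2):
        """|i1:l1| precedes |i2:l2| iff i1 < i2, or i1 == i2 and l1 > l2 (Def. 7)."""
        (i1, l1), (i2, l2) = p1, p2
        return i1 < i2 or (i1 == i2 and l1 > l2)

    def newer(h1, h2):
        """Def. 8: h1 is newer than h2 if the lexicographic comparison of the
        pair chains, under pair_precedes, decides that h1 precedes h2."""
        for p1, p2 in zip(h1, h2):
            if p1 != p2:
                return pair_precedes(p1, p2)
        return len(h1) > len(h2)  # assumption: an extension of the same chain is newer

    # A proposal tagged by the highest priority source outranks the others:
    print(newer([(0, 1)], [(1, 3)]))  # True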
when received (nogood, Aj,nogood * (o, h) *) do *if(-.getOrder((o,/i))) return*; //ABTR; *discard nogood if it contains invalid assignments else*; //ABTR; when {xk,dk,Ck), where Xk is not connected, is contained in nogood send add-link to Ak\ add (xk,dk,Ck) to agent-view, add nogood to nogood-list; add other new assignments to agent-view; old-value <— current-value; check_agent_view; when old-value = current-value send (ok?,(:Ej,cwrren£-«a/Me,cXi),known_order(j4j)) to A,; end do. procedure check_agent_view do when agent_view and current-value are not consistent if no value in Di is consistent with agent.view then backtrack; else select d £ Di where agent-view and d are consistent; current-value <— d; cXi++; O <— known_order(Aj); send (ok?,(xj,d,cXi),(9) to lower priority agents in outgoing links; end end do. procedure backtrack do nogoods <— {V | V = inconsistent subset of agent-view}; when an empty set is an element of nogoods; broadcast that there is no solution, and terminate this algorithm; for every V G nogoods; select (xj,dj*,cx,*) where Xj has the lowest priority in V; send (nogood, Xi,V,Oj) to AJ; remove (xj,dj*,cx.*) from agentjview; check_agent_view; end do. function getOrderf(o,/i); -> bool //ABTR when h is invalidated by the history of Cord then return false; when not newer h than Cord then add (o, h) to Cord; return true; I <— reorder position for h and the history of Cord; invalidate assignments for x\ j > I (alternative6*); add (o,h) to Cord; end. Algorithm 1: Procedures for Receiving Messages in ABT and ABTR. 61 A1/A1/R1 A2/A2/R°/R2 A3/A3/K3 Ai/RV^1 A2/K°/A3/R2 Ai/YO-/Al A2/R°/A3/R2 A3/A1/R1 A3/A1/R1 _ok?(xi,l,l)(Ai,A2,A3)_ -ok?(x2,2,l)(Ai,A2,A3)- _nogood-.((ii, 1, l)(x2, 2, l»(i4i, A2)_> . reorder (Ai, A3, A2)\l . reorder (A3, Ai, A2)\0 .reorder (Ai,A3,A2)\l . reorder (A3, Ai, A2)|0 -.ok?(x3,l,2){A3,AuA2)\0: 1|_ _ok?(x3,l,2)(A3,Ai,A2)|0:l|_ A3 A3 A2 A3 A3 A2 Ai Ax A2 Figure 2. Simplified example for ABTR with random reordering. Rl delegations are done implicitely by adopting the convention "A* is delegated to act for Rl". Left column: Ai/Ai/R11 /R12... shows the roles played by Ai when the message is sent. In bold is shown the capacity in which the agent Aj sends the message. The addlink message in not shown. messages from simultaneous Rls using the histories that Rls generate. The Rl themselves coherently agree when the corresponding orders are received. The delegation of Rl, i > 0 from a physical entity to another poses no problem of information transfer since the counter Cf'rder of Rl is reset on this event. For simplicity, in the example in Figure 2 we describe the case where the activity of Rl is always performed by the agent believing itself to be A1. R% can send a reorder message within time tr after an assignment is made by A1 since a heuristic message is implicitely transmitted from A1 to Rl. We also consider that A2 is delegated to act as R°. R° and R1 propose one random ordering each, asynchronously. The receivers discriminate based on histories that the order from R° is the newest. The known assignments and nogood are discarded. In the end, the known order for A3 is (A3, Ai, A2)\0 : 1|. By quiescence of a group of agents we mean that none of them will re­ ceive or generate any valid nogoods, new valid assignments, reorder messages or addlink messages. Property 2 In finite time tl either a solution or failure is detected, or all the agents A^,0 tl0, it will be abandoned when the first valid nogood is received (if one is received in finite time). 
All the nogoods received after $t_0^i + n\tau$ are valid, since all the agents learn the last instantiations of the agents $A^k$, $k < i$, before $t_0^i + n\tau - \tau$. Therefore the number of possible incoming invalid nogoods for an assignment of $A^i$ is finite.

1. If one of the proposals is not refused by incoming nogoods, then, since the number of such nogoods is finite, the induction step is correct.

2. If all the proposals that $A^i$ can make after $t_0^i$ are refused, or if it cannot find any proposal, $A^i$ has to send a valid explicit nogood $\neg N$ to a higher priority agent $A^j$. If $\neg N$ is refused, then due to the premise of the induction step, $A^j$ either (2.b.i) finds an assignment and sends ok? messages, or (2.b.ii) announces failure by computing an empty nogood (induction proven). In case (i), since $\neg N$ was generated by $A^i$, $A^i$ is interested in all its variables (it has once sent an add-link to $A^j$), and it will be informed by $A^j$ of the modification by an ok? message. This contradicts the assumption that the last ok? message was received by $A^i$ at time $t_0^i$, and the induction step is proved. From here, the induction step is proven, since it was proven for all alternatives.

In conclusion, after $t_0^i$, within finite time, the agent $A^i$ either finds a solution and quiescence, or an empty nogood signals failure. $R^0$ is always fixed (or after $t_r$ in the version in 11) and the property is true for the empty set. The property is therefore proven by induction on $i$.

Theorem 1 ABTR is correct, complete and terminates.

Proof. Completeness: All the nogoods are generated by logical inference from existing constraints. Therefore, if a solution exists, no empty nogood can be generated. No infinite loop: This is a consequence of Property 2 for $i = n$. Correctness: All assignments are sent to all interested agents and stored there. At quiescence all the agents know the valid interesting assignments of all predecessors. If quiescence is reached without detecting an empty nogood, then according to Property 2, all the agents agree with their predecessors and the set of their assignments is a solution.

6 Conclusions

Reordering is a major issue in constraint satisfaction. All previous complete polynomial space asynchronous search algorithms for DisCSPs require a static order on the variables. We have presented an algorithm that allows for asynchronous reordering in ABT. This is the first asynchronous complete algorithm with polynomial space requirements that has the ability to concurrently and asynchronously reorder variables during search. Here we describe a random reordering heuristic that can be useful for special purposes (coalitions, special strategies). However, the algorithm offers a flexible mechanism (general purpose heuristic messages) that allows for implementing most other heuristics that can be believed useful for general or specific applications. Alternative implementations, alternatives to using histories, how to save effort across reordering, and efficient heuristics are described in 11.

References

1. M. Yokoo, E. H. Durfee, T. Ishida, and K. Kuwabara. The distributed CSP: Formalization and algorithms. IEEE TKDE, 10(5):673-685, 1998.
2. M.-C. Silaghi, D. Sam-Haroud, and B. Faltings. Asynchronous consistency maintenance with reordering. TR #01/360, EPFL, March 2001.
3. M. Yokoo and K. Hirayama. Distributed constraint satisfaction algorithm for complex local problems. In ICMAS'98, pages 372-379, 1998.
4. M. Yokoo. Distributed Constraint Satisfaction. Springer, 2001.
5. Y. Zhang and A. K. Mackworth. Parallel and distributed algorithms for finite CSPs. In Proc.
of the Symp. on PDP, pages 394-397, 1991.
6. G. Solotorevsky, E. Gudes, and A. Meisels. Distributed CSPs: a model and application. http://www.cs.bgu.ac.il/~am/papers.html, Oct 1997.
7. M.-C. Silaghi, D. Sam-Haroud, and B. Faltings. Asynchronous search with aggregations. In Proc. of AAAI 2000, pages 917-922, 2000.
8. P. Meseguer and M. A. Jimenez. Distributed forward checking. In Proc. of DCS, CP'00, 2000.
9. M. Yokoo. Asynchronous weak-commitment search for solving large-scale distributed CSPs. In ICMAS, pages 467-318, 1995.
10. Y. Hamadi and C. Bessiere. Backtracking in distributed constraint networks. In ECAI'98, pages 219-223, 1998.
11. M.-C. Silaghi, D. Sam-Haroud, and B. Faltings. Hybridizing ABT and AWC into a polynomial space, complete protocol with reordering. TR #01/364, EPFL, May 2001.

SOCIAL RATIONALITY AND COOPERATION

GUIDO BOELLA

Dipartimento di Informatica, Universita di Torino, C.so Svizzera 185, 10149 Torino, Italy. Email: guido@di.unito.it

The definitions of cooperation to shared plans and joint intentionality have traditionally included subsidiary goals: they aim at coordinating the part of the group's action which goes beyond the control of individual intentionality. In this paper, we present a definition of collective acting which explains the goals aiming at the group's coordination as a result of the interaction of goal adoption, group utility and recursive modeling of BDI agents.

1 Introduction

What is involved when a group of agents decides to do something together? Cohen and Levesque1 answered this question by observing that if the group's members were directed by single-agent intentions towards a shared plan, they would risk letting the group act in an irrational way; in fact, a member could realize that her task is impossible to achieve or that the shared goal has been satisfied: she would leave the group, while the remaining members could go on trying to pursue an impossible or already achieved goal. For this reason, Cohen and Levesque1 add to the definition of joint intention some subsidiary goals which must be satisfied just in case an agent believes that the joint intention has been satisfied, etc.: in all these cases, she should drop her main intention, but pursue the goal of making the remaining agents aware of the new situation.

We present an operational definition of joint intentionality based on the idea that subsidiary goals, which come into play at the end of the group activity, can be explained as a consequence of the interplay of three factors: the group's utility, goal adoption and anticipatory coordination. First of all, the rationality of the group's action is defined in terms of the benefit of the entire group and not only of the advantage of the single members. If the group's members followed only their individual intentions, they would achieve a better individual performance, since they would stop their action when it becomes just a waste of time and resources; but, in this way, they would leave the remaining agents working without any real achievement, thus reducing the group's overall performance. Second, making the remaining agents aware that some new situation holds, as prescribed by the definition of joint intention, is an instance of a more general behavior which Conte and Castelfranchi2 call goal adoption.
The goal of making the partners believe that the joint goal has been achieved or is impossible to achieve can be motivated by a similar attitude of agents: the agent is adopting a control goal of the partners, i.e., a goal which arises during the intentional execution of an action. Goal adoption, per se, does not imply doing anything for another agent's goals. An adopted goal is given as input to means-ends reasoning, but it still must undergo the deliberation process of the agent. It is selected as the actual intention only if the agent gains from its satisfaction the maximum advantage with respect to the other alternatives. We measure the advantage an agent gains in terms of the decision-theoretic concept of utility. In the case of collective action, as stated above, the utility that must be considered is a combination of the private utility of an agent with that of her partners in the shared plan.

Finally, if agents interact (in a cooperative or conflictual way) in a resource-bounded environment, when they have to measure the utility of their actions they cannot but take into account the effect of their actions on the other agents. In particular, they have to compute the utility not of the outcome of their actions, but the utility of the outcomes produced by the predicted reactions of their interactants. In the case of cooperation among agents, this means that an action must be chosen only after the agent has predicted what her partners can (or cannot) do afterwards and has computed the utility of the resulting situations for the entire group (anticipatory coordination, another brick of social rationality, according to Conte and Castelfranchi2).

2 The Definition of Cooperation and the Planning Algorithm

We assume that an agent has a set of preferences and goals and that she does planning in order to find a (possibly partial) plan which satisfies one or more of these goals and maximizes the agent's utility. The chosen plan constitutes the current (individual) intention of the agent. Then, the plan is executed in a reactive manner, i.e., monitoring effects and triggering replanning in case of failure or new information. Since a decision must be taken about which plan to choose, we need some techniques to balance the different possibilities: we adopted the decision-theoretic planner DRIPS described in Haddawy and Hanks3, which relates the notions of goals and planning to that of utility. In our definition of cooperation, a group $GR$ composed of agents $G_1, \ldots, G_n$ cooperates to a shared plan $\alpha$ for achieving goal

The recipe $R_{best,i,j}$ which maximizes the following formula is the one selected by agent $G_b$ for execution in $S'_{i,j}$ (its outcome is $S'_{i,j,best} = \{(p_{i,j,best,1}, S_{i,j,best,1}), \ldots, (p_{i,j,best,r}, S_{i,j,best,r})\}$):

$\sum_{S_{i,j,l,z} \in S'_{i,j,l}} p_{i,j,l,z} \cdot U_{GR}(S_{i,j,l,z})$

3. Expand each state $S'_{i,j}$ in $S'_i$ with the recipe $R_{best,i,e}$, where $S_{i,e}$ is the equivalence class in $S_i$ to which $S'_{i,j}$ belongs; for each $j$, the result is a set of (probability, state) pairs: $S''_{i,j} = \{(p''_{i,j,1}, S''_{i,j,1}), \ldots, (p''_{i,j,r}, S''_{i,j,r})\}$

4. Given the $n$ initial states $S'_{i,k}$ in $S'_i$, the probability of each state $S''_{i,k,x}$ is $p'_{i,k} \cdot p''_{i,x}$ (the latter depends on the probability distribution of the effects of $R_{best,i,e}$). Consequently, the expected utility of the initial states $S'_i$ is the corresponding probability-weighted sum of the group utilities of the resulting states.

3 Cooperation phenomena explained

Helpful behavior: helpful behavior (i.e., goal adoption) is provided not only when the group cannot otherwise proceed in its plan; also, it should not be provided indiscriminately. By exploiting the decision-theoretic paradigm, it is possible to keep apart the forms of help that are rational from those that appear to be just a waste of resources. If the effort of satisfying adopted goals conflicts with the ability of the agent to do her own part, then a decrease in the group utility is obtained, instead of an increase.

Communication: As a special case of goal adoption, it is possible to predict various forms of communication: they arise when the goal to be adopted is a control goal, that is, a goal of $G_b$ to know some information, like the truth value of constraints and preconditions of actions, and whether an action succeeded or not. The effect of a communicative action is to make such a goal true (at least with some probability). Communication is not always worth its cost for the group. The agent $G_a$ has to consider the cost of communicating with $G_b$. If communication is expensive, slow or unreliable, then it is not convenient for the group to waste resources in kindly communicating, since $G_b$ could discover the desired information in a less expensive manner.

Hierarchical groups: the consumption of resources need not be weighed in a uniform way for all members of the group; a sort of hierarchy in the group can be induced by weighing, in the multi-attribute utility function, resource consumption differently depending on the agent who executes an action.

Conflict avoidance: since agents share a group utility function and perform anticipatory coordination, they will (try to) avoid conflicts with other agents' intentions: performing an action that interferes with the plans of other team members decreases the utility of the whole team. When $G_a$ considers the possible developments of her partial plan, she examines what effects her action will have on the partners' plans. So the possible interferences are also weighed like any other cost that decreases the group utility: conflicts result in less preferred choices, but they are not necessarily ruled out.

Ending cooperation: when all members know that the top-level goal of the group has been achieved, or has become impossible or irrelevant, then no more utility can be obtained by any action other than terminating the group: in fact, termination gets higher utility by saving resources. Therefore, the shared plan is naturally ruled out, without the need of stipulating other explicit conditions for its termination.

References

1. P. R. Cohen and H. J. Levesque. Teamwork. Nous, 25:487-512, 1991.
2. R. Conte and C. Castelfranchi. Cognitive and Social Action. UCL Press, 1995.
3. P. Haddawy and S. Hanks.
Utility models for goal-directed, decision-theoretic planners. Computational Intelligence, 14:392-429, 1998.
4. P. R. Cohen and H. J. Levesque. Intention is choice with commitment. Artificial Intelligence, 42:213-261, 1990.
5. G. Boella. Cooperation among economically rational agents. PhD thesis, Universita di Torino, 2000.
6. I. Hideki and K. Hirofumi. Observability-based nested belief computation for multiagent systems. In N.R. Jennings and Y. Lesperance, editors, Intelligent Agents VI (ATAL-99). Springer-Verlag, 2000.

BELIEF REVISION IN TYPE THEORY

TIJN BORGHUIS†, FAIROUZ KAMAREDDINE‡, AND ROB NEDERPELT†

†Mathematics and Computing Science, Eindhoven Univ. of Technology, P.O. Box 513, 5600 MB Eindhoven, the Netherlands. Email: {v.a.j.borghuis,wsinrpn}@win.tue.nl
‡Computing and Electrical Engineering, Heriot-Watt Univ., Riccarton, Edinburgh EH14 4AS, Scotland. Email: fairouz@cee.hw.ac.uk

This paper explores belief revision for belief states in which an agent's beliefs, as well as his justifications for these beliefs, are explicitly represented in the context of type theory. This allows for a deductive perspective on belief revision which can be implemented using existing machinery for deductive reasoning.

1 Introduction

An agent who keeps expanding his belief state with new information may reach a stage where his beliefs have become inconsistent, and his belief state has to be adapted to regain consistency. In studying this problem of "belief revision", the justifications an agent has for his beliefs are not usually considered as first-class citizens. The two main approaches for dealing with belief revision (foundation and coherence theories5) represent justifications of beliefs implicitly (e.g. as relations between beliefs in foundations theory) rather than as objects in their own right which are explicitly represented in the formalisation of belief states and belief change operations. In this paper, we explore belief revision for belief states in which justifications are first-class citizens.

Our motivation for investigating belief revision along these lines stems from working on knowledge representation in type theory2 in the DenK project4. In this project a formal model was made of a specific communication situation, and based on this model, a human-computer interface was implemented. Both in the model and in the system, the belief states of agents were formalised as type-theoretical contexts. This means that an agent's beliefs are represented in a binary format, where one part of the expression is the proposition believed by the agent and the other the justification the agent has for this particular belief. Both parts are syntactic objects in their own right, and can be calculated upon by means of the rules of the type theory. This way of representing beliefs turns justifications into first-class citizens, and it proved to be very fruitful for the purposes of the project.

At that time, mechanisms for belief revision were not investigated, but it became clear that given this formalisation of belief states there is a straightforward deductive approach to the problem: since every belief is accompanied by its justification (and the rules operate on both), every inconsistency that surfaces in the agent's belief state has its own justification, containing the justifications of the beliefs that cause the inconsistency.

2 Type theory for knowledge representation

Judgements: The basic relation in type theory is the judgement $\Gamma \vdash a : T$ (read as 'term $a$ has type $T$ in context $\Gamma$').
Here '$a$' and '$T$' are both formulas written according to a well-defined syntax. $a : T$ is called a statement, whose subject is the term $a$. One also says that the term $a$ is an inhabitant of the type $T$. The context $\Gamma$ is a list of statements with variables as subjects, e.g. $x_1 : T_1, \ldots, x_n : T_n$. The judgement $\Gamma \vdash a : T$ can then be read as follows: "If $x_1$ has type $T_1$, ..., and $x_n$ has type $T_n$, then the term $a$ has type $T$". Note that $a$ may contain $x_1, \ldots, x_n$, so $a$ depends on $x_1$ to $x_n$. The set of subject variables $\{x_1, \ldots, x_n\}$ is called the domain of $\Gamma$.

Statements: The intuitive notion 'has type' has a direct counterpart in naive set theory, viz. 'is element of'. For example, the statement '$a : \mathbb{N}$' ('term $a$ has type $\mathbb{N}$'), assuming that $\mathbb{N}$ is a symbol representing the set of natural numbers, can be interpreted as '$a \in \mathbb{N}$' ('the object represented by $a$ is an element of the naturals'). The notion of having a type, however, is more general than the notion of set-theoretical elementhood. This is because a type $T$ can represent not only some kind of set, but also a proposition. In the latter representation, the statement $a : T$ expresses: '$a$ is (a term representing) a proof of the proposition $T$'. One speaks of 'propositions as types and proofs as terms' (abbreviated as PAT) in order to emphasize this usage of types.

Contexts: The context $\Gamma$ in a judgement $\Gamma \vdash a : T$ contains the 'prerequisites' necessary for establishing the statement $a : T$. In $\Gamma = x_1 : T_1, \ldots, x_n : T_n$, a statement $x_i : T_i$ can express many kinds of prerequisites, the simplest being:
1. $x_i$ is an element of the set $T_i$,
2. $T_i$ is an assumption (a proposition) and $x_i$ is its atomic justification.
However, in type theory there are different 'levels' of typing: a type can itself have a type. Statements expressing the typing of types deal with the well-formedness of these types. For the $T_i$ in 1. and 2. above, we can have:
1. $T_i : set$, to express that $T_i$ is a well-formed formula representing a set,
2. $T_i : prop$, to express that $T_i$ is well-formed, representing a proposition.
The last-mentioned statements can also be part of a context. So a context could look like: $T_1 : prop, T_2 : set, x_1 : T_1, x_2 : T_2$. The terms $set$ and $prop$ are examples of so-called sorts, predefined constants on which the type system is based. Every type system has a specific set of sorts, which we denote by $S$.

We identify three characteristics of knowledge which, according to us, should be taken into account in any attempt to formalize knowledge:
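The PAT reading can be made concrete in a modern proof assistant; the following Lean 4 sketch is our illustration, not a system used by the authors. A context of typed variables corresponds to a list of hypotheses, and an inhabitant of a proposition-type is a proof.

    -- Lean 4: propositions as types, proofs as terms (PAT).
    -- The hypotheses mirror the example context T1 : prop, T2 : set, x1 : T1, x2 : T2.
    example (T1 : Prop) (T2 : Type) (x1 : T1) (x2 : T2) : T1 := x1
    -- 'x1' is an explicit (atomic) justification inhabiting the proposition T1.

    -- Complex terms combine basic justifications, e.g. a proof of A ∧ B
    -- assembled from proofs of A and of B:
    example (A B : Prop) (ha : A) (hb : B) : A ∧ B := And.intro ha hb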
This may lead to simply adding the new information, but also to dismissing it (for instance because it is incomprehensible) or even to a reorganization of the existing knowledge. With these requirements, the traditional distinction between knowledge and belief disappears: there can be no knowledge which is true in any absolute sense, since an agent's knowledge depends on his subjective conceptualisation of the world. At best some pieces of knowledge turn out to be more reliable than others and some things can be agreed upon by more agents than others. There is a natural way to capture the above characteristics in type theory: • Subjectivity is captured by types: Each concept is formalized as a type, each instance of the concept is a term inhabiting this type. An agent's subjective ability to recognize something as an instance of a concept, is mirrored in the ability to judge that the corresponding term inhabits the corresponding type. Note that 'having a concept' is also subjective in the sense that different people may have formed different concepts in the course of time. This means that one agent can have a concept, whereas another agent has no comparable concept. And in case agents do have comparable concepts, they may differ in what they recognise as belonging to this concept. In case the type formalizing the concept is a 'set-type', this means that they may differ in what they regard as elements of the set (a rhododendron may be a tree for the one, but a shrub for the other). In case this type is a 'proposition-type', they may differ in what they accept as a justification for that proposition. • Justification is captured by terms: As said before, by the PAT-principle, justifications are first-class citizens, formalized in the type-theoretical syntax as terms. The fact that term a justifies proposition T, is expressed 72 as the statement a : T. The rules of type theory allow these terms to be combined into complex terms, which reflects that parts of knowledge may be a structured combination of more basic parts of knowledge. • Incrementality is captured by contexts: An agent's knowledge state can be formalized as a type-theoretical context. Addition of new information to the knowledge state can be formalized by adding statements to the context, dismissing information amounts to reducing the context. Infor­mation may only be added if it 'matches' an agent's knowledge state. In type theory, a statement can only extend a context if it obeys certain well-formedness restrictions. The knowledge state of an agent consists of 'everything he knows' at some instant. Given our characterization of knowledge, this means that everything in a knowledge state is formulated in terms of the agent's concepts. Hence: • Meaning fulness: An agent has formed his own, private concepts, and only things formulated by means of these concepts can be meaningful to him. Whether or not information coming from outside (by observation or communication) makes sense, depends on the concepts that are already available. (We assume that the entirety of concepts of an agent is fixed.) • Inhabitation: Whatever an agent knows about the world is recorded in a knowledge state in the form of meaningful expressions that he accepts. This includes expressions about which objects 'inhabit' the concepts, and which propositions hold, according to the agent. 
as the statement $a : T$. The rules of type theory allow these terms to be combined into complex terms, which reflects that parts of knowledge may be a structured combination of more basic parts of knowledge.

• Incrementality is captured by contexts: An agent's knowledge state can be formalized as a type-theoretical context. Addition of new information to the knowledge state can be formalized by adding statements to the context; dismissing information amounts to reducing the context. Information may only be added if it 'matches' the agent's knowledge state. In type theory, a statement can only extend a context if it obeys certain well-formedness restrictions.

The knowledge state of an agent consists of 'everything he knows' at some instant. Given our characterization of knowledge, this means that everything in a knowledge state is formulated in terms of the agent's concepts. Hence:

• Meaningfulness: An agent has formed his own, private concepts, and only things formulated by means of these concepts can be meaningful to him. Whether or not information coming from outside (by observation or communication) makes sense depends on the concepts that are already available. (We assume that the entirety of concepts of an agent is fixed.)

• Inhabitation: Whatever an agent knows about the world is recorded in a knowledge state in the form of meaningful expressions that he accepts. This includes expressions about which objects 'inhabit' the concepts, and which propositions hold, according to the agent.

If we take the following (very simple) context as representing an agent's knowledge state: $T_1 : prop, T_2 : set, x_1 : T_1, x_2 : T_2$, we can see:

• Meaningfulness is captured by statements of the form $T : prop$ or $T : set$. That is to say, in this example the agent has two concepts, viz. $T_1$, which is a proposition to him, and $T_2$, which is a set. At this stage, there are no other concepts, i.e. all sets and propositions which are not constructed out of $T_1$ and/or $T_2$ are not meaningful to him.

• Inhabitation is captured by statements of the form $x : T$, where $T$ is meaningful. In the example context, the inhabitant $x_1$ of $T_1$ represents the agent's justification for the holding of $T_1$, and the inhabitant $x_2$ of $T_2$ is an element of the set $T_2$ which is recognized as such by the agent.

'Everything an agent knows' at a certain instant can be divided into:

• Explicit knowledge, expressed by the statements in the context $\Gamma$. These are explicitly represented pieces of knowledge directly available to the agent.

• Implicit knowledge, expressed by statements derivable on the context $\Gamma$. These are consequences (obtained by inference) of the agent's explicit knowledge.

Hence, in a judgement of the form $\Gamma \vdash a : T$, the explicit knowledge can be found to the left of $\vdash$, and the implicit knowledge to the right of $\vdash$.

3 Concluding remarks

We explored the use of explicitly represented justifications in belief revision, where beliefs and belief states were represented respectively as type-theoretical statements and contexts (for details see 3). Justifications make it easy to identify the beliefs that cause inconsistency of the belief state and greatly simplify the handling of dependencies between beliefs. Our approach is applicable to agents with limited computational resources because it is deductive and we do not require that our theory of belief revision itself selects which beliefs have to be removed. This holds independently of the strength of the logic in which the belief change operations are cast: the mechanisms that were used to represent justifications and dependency relations between beliefs are at the heart of type theory, making our approach applicable a) to a large family of type systems, and hence, b) given the connections between type theory and logic, in a wide range of logics2. Our work has been implemented on the basis of a standard type-theoretic theorem prover, where the agent's belief state is represented as a type-theoretical context as described in this paper4.

Although we know of no work in the literature where justifications are explicitly represented, we show in 3 that our framework is related to: a) revision for belief bases and to Foundations Theory, but it does not suffer from the drawbacks usually associated with foundations theory, such as problems with disbelief propagation, circular justifications, and multiple justifications for the same belief; and b) the work of Hansson on semi-revision, whose notion of consolidation can be simulated in our framework and where new information is not automatically completely trusted.

References

1. Ahn, R., Borghuis, T., Communication Modelling and Context-Dependent Interpretation: an Integrated Approach. In: TYPES'98, LNCS 1657, Springer Verlag (1999), pp. 19-32.
2. Barendregt, H., Lambda calculi with types. In Handbook of Logic in Computer Science, Abramsky, Gabbay and Maibaum (eds.), Oxford University Press, Oxford (1992), pp. 117-309.
3. Borghuis, T., and Nederpelt, R., Belief Revision with Explicit Justifications, an Exploration in Type Theory.
CS-report 00-17, Eindhoven University of Technology, Dept. of Math. and Comp. Sc., NL (2000).
4. Bunt, H., Ahn, R., Beun, R-J., Borghuis, T., and Van Overveld, K., Multimodal Cooperation with the DenK System. In: Multimodal Human-Computer Interaction, Bunt, H., Beun, R-J., Borghuis, T. (eds.), Lecture Notes in Artificial Intelligence 1374, Springer Verlag (1998), pp. 39-67.
5. Gardenfors, P., The dynamics of belief systems: Foundations versus coherence theories, Revue Int. de Philosophie, 44 (1990), pp. 24-46.

HETEROGENEOUS BDI AGENTS II: CIRCUMSPECT AGENTS

MARIA FASLI

University of Essex, Department of Computer Science, Wivenhoe Park, Colchester CO4 3SQ, United Kingdom. Email: mfasli@essex.ac.uk

The study of formal theories of agents has received increasing attention, in particular within the context of the BDI paradigm. An interesting theoretical issue in this framework is defining notions of realism, that is, interrelations between the agent's beliefs, desires and intentions. Intuitively, each notion of realism characterises a different type of agent. In this paper we extend the BDI framework and propose notions of realism for capturing circumspect agents, that is, agents that are willing to adopt intentions only if they believe that these are achievable options. Three such notions of realism are presented, which are shown to have better features than the classical notion of strong realism.

1 Introduction

Formal theories of agents have attracted considerable attention within the agent community since they can be viewed as specifications for building such systems. The BDI framework developed by Rao and Georgeff3,4 is perhaps one of the most well-known theories of agents. An interesting theoretical issue in the BDI framework is defining notions of realism, that is, interrelations between the agent's attitudes, namely beliefs, desires and intentions. Different relations give rise to different types of agents, as in Rao and Georgeff4. Thus the notion of strong realism characterises a cautious agent, whereas realism characterises an enthusiastic one. This accords well with our intuition that heterogeneous agents are needed: different domains of application may require agents with different cognitive models. In this paper we propose notions of realism for capturing what we describe as circumspect agents. In particular, we present notions of realism that improve upon the characteristics of strong realism and thus come closer to the desiderata for reasoning BDI agents laid down by Bratman1 and Rao and Georgeff4.

The paper proceeds as follows: Next we describe the BDI paradigm along with the three original notions of realism. Due to space limitations we will only briefly describe the basic ideas; the reader is referred to Rao and Georgeff3,4 for full formal definitions. Then we discuss our intuitions and the basic ideas behind the concept of a circumspect agent, and we present three notions of realism for formalising such agents. A discussion on their evaluation is also included. The paper ends with the conclusions.

Figure 1. i) Strong Realism, ii) Realism, iii) Weak Realism

Table 1. Axioms for the notions of realism

Strong Realism: $Intend_i(\phi) \Rightarrow Des_i(\phi)$;  $Des_i(\phi) \Rightarrow Bel_i(\phi)$
Realism:        $Bel_i(\phi) \Rightarrow Des_i(\phi)$;  $Des_i(\phi) \Rightarrow Intend_i(\phi)$
Weak Realism:   $Intend_i(\phi) \Rightarrow \neg Des_i(\neg\phi)$;  $Intend_i(\phi) \Rightarrow \neg Bel_i(\neg\phi)$;  $Des_i(\phi) \Rightarrow \neg Bel_i(\neg\phi)$

2 The BDI Paradigm

In the BDI paradigm developed by Rao and Georgeff3,4, an agent's cognitive state is described in terms of both information attitudes and pro-attitudes.
Formally, a first order language is adopted which in addition includes three modal operators, $Bel_i$, $Des_i$, and $Intend_i$, for expressing beliefs, desires and intentions respectively. The framework also uses a temporal component based on CTL logic. Semantics is given in terms of possible worlds which are themselves branching time structures. The axiomatisation for beliefs is the standard KD45 modal system, while for desires and intentions the K and D systems are adopted respectively. Since an agent's cognitive state is expressed in terms of three attitudes, additional axioms are required that describe their interrelations. Three such ways of interrelating the three attitudes, also known as notions of realism, have been proposed by Rao and Georgeff4 (Figure 1).

In the first notion of realism, namely strong realism, if an agent intends to optionally achieve $\psi$, then it optionally desires it and also believes that $\psi$ is an achievable option. The set of belief-accessible worlds is a subset of the desire-accessible worlds, and the set of desire-accessible worlds is a subset of the intention-accessible worlds, as illustrated in Figure 1(i). Set containment corresponds to logical implication, and thus the relationships between sets of accessible worlds entail the axioms in Table 1.(a) According to strong realism the agent is very cautious, and only intends and desires propositions that it believes to be achievable. In realism, the set of intention-accessible worlds is a subset of the desire-accessible worlds, and the set of desire-accessible worlds is a subset of the belief-accessible worlds (Figure 1(ii)). The axioms are given in Table 1. An agent based on realism is an enthusiastic agent and believes that it can achieve its desires and intentions. Finally, in weak realism, the intersection of intention- and desire-, intention- and belief-, and belief- and desire-accessible worlds is not the empty set, as shown in Figure 1(iii). The axiom schemas for weak realism are provided in Table 1. The agent described by weak realism is a more balanced agent than the two other types of agents. The three different systems that result from the adoption of the corresponding axioms of realism will be called S-BDI, R-BDI and W-BDI respectively.

(a) Note: the application of these axioms is restricted to O-formulas3,4. This is also the case for the other notions of realism.

Bratman1 and Rao and Georgeff4 discussed several properties or conditions of rationality that a BDI system should satisfy. The first set of such properties is known as the Asymmetry Thesis, or the incompleteness and inconsistency principles; they hold pairwise between desires, beliefs, and intentions, and are listed in Table 2 along with their satisfaction in the basic systems. The second set is called the Consequential Closure principles; they are provided in Table 3 along with their satisfaction in the basic BDI systems.

Table 2. Asymmetry Thesis principles and their satisfaction in the basic BDI systems

 #   Name                 Formula                                                       S  R  W
 A1  I-B Inconsistency    $\vdash Intend_i(\phi) \Rightarrow \neg Bel_i(\neg\phi)$      T  T  T
 A2  I-B Incompleteness   $\not\vdash Intend_i(\phi) \Rightarrow Bel_i(\phi)$           F  T  T
 A3  I-D Incompleteness   $\not\vdash Intend_i(\phi) \Rightarrow Des_i(\phi)$           F  T  T
 A4  I-D Inconsistency    $\vdash Intend_i(\phi) \Rightarrow \neg Des_i(\neg\phi)$      T  T  T
 A5  B-D Incompleteness   $\not\vdash Bel_i(\phi) \Rightarrow Des_i(\phi)$              T  F  T
 A6  B-I Incompleteness   $\not\vdash Bel_i(\phi) \Rightarrow Intend_i(\phi)$           T  F  T
 A7  D-B Inconsistency    $\vdash Des_i(\phi) \Rightarrow \neg Bel_i(\neg\phi)$         T  T  T
 A8  D-I Incompleteness   $\not\vdash Des_i(\phi) \Rightarrow Intend_i(\phi)$           T  F  T
 A9  D-B Incompleteness   $\not\vdash Des_i(\phi) \Rightarrow Bel_i(\phi)$              F  T  T

Table 3. Consequential Closure principles and their satisfaction in the basic BDI systems

 #   Formula                                                                                   S  R  W
 C1  $Intend_i(\phi_1) \wedge Bel_i(\phi_1 \Rightarrow \phi_2) \wedge \neg Intend_i(\phi_2)$   T  F  T
 C2  $Intend_i(\phi_1) \wedge Des_i(\phi_1 \Rightarrow \phi_2) \wedge \neg Intend_i(\phi_2)$   T  F  T
 C3  $Des_i(\phi_1) \wedge Bel_i(\phi_1 \Rightarrow \phi_2) \wedge \neg Des_i(\phi_2)$         T  F  T
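The correspondence between set containment and the realism axioms can be sanity-checked on a toy possible-worlds model. A minimal Python sketch (worlds as integers and propositions as predicates are illustrative assumptions):

    def holds_in_all(worlds, prop):
        """A modal operator: prop holds in every accessible world."""
        return all(prop(w) for w in worlds)

    def strong_realism(B, D, I):
        """Belief worlds are a subset of desire worlds, which are a subset of intention worlds."""
        return B <= D <= I

    def realism(B, D, I):
        """Intention worlds are a subset of desire worlds, which are a subset of belief worlds."""
        return I <= D <= B

    def weak_realism(B, D, I):
        """Pairwise non-empty intersections of the three sets."""
        return bool(B & D) and bool(B & I) and bool(D & I)

    B, D, I = {1, 2}, {1, 2, 3}, {1, 2, 3, 4}
    phi = lambda w: w <= 4
    assert strong_realism(B, D, I)
    # Under B ⊆ D ⊆ I: Intend(φ) entails Des(φ), and Des(φ) entails Bel(φ).
    assert (not holds_in_all(I, phi)) or holds_in_all(D, phi)
    assert (not holds_in_all(D, phi)) or holds_in_all(B, phi)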
They are listed in Table 2 along with their satisfaction in the basic systems. The second set is called the Consequential Closure principles. They are provided in Table 3 along with their satisfaction in the basic BDI systems.

3 Circumspect Agents

Different domains of application for agents have different requirements, which need to be reflected in their conceptualisation, design and implementation. For instance, an agent that has to deal in the stock market may have a different reasoning model from an air-traffic controller agent that has as a responsibility the safe landing and take-off of aircraft. The stock market agent may be required to engage in transactions that involve risk, whereas the air-traffic controller agent needs to be strictly cautious and avoid decisions that involve uncertainty and risk. Thus the need for heterogeneous agents stems from the fact that the cognitive model of the agent may have to vary depending on the requirements of a particular application. In the BDI paradigm this need is addressed by adopting different realism constraints. For instance, the notion of strong realism characterises a cautious agent. However, strong realism describes only one possible way of relating the three sets of accessible worlds for capturing cautious agents. Moreover, it does not satisfy three Asymmetry Thesis principles, as shown in Table 2. Here we propose alternative constraints for characterising cautious or circumspect agents. A circumspect agent is not willing to take any risks; that is, we interpret circumspect agents in the BDI framework as agents that only intend a proposition if they believe it to be achievable in all accessible worlds. Although a number of notions of realism have been uncovered, only three will be presented due to space limitations.

According to our interpretation of circumspect agents, such an agent only intends to optionally achieve ψ if it believes in all respective accessible worlds that ψ is an achievable option. Thus, one basic restriction for such agents in terms of semantic conditions is that the set of belief-accessible worlds should be a subset of the intention-accessible worlds. Consequently the A2 principle will not be satisfied for such an agent. Therefore we will attempt to improve on the remaining principles, namely A3 and A9. In the first notion of realism, RC1-BDI, the set of belief-accessible worlds is a subset of the intention-accessible worlds, the intersection of desire- and intention-accessible worlds is not the empty set, and the intersection of belief- and desire-accessible worlds is not the empty set, as shown in Figure 2(i). Although we relax the requirement of strong realism in that an agent can have desires that it may not believe in all possible worlds to be achievable, the agent remains circumspect regarding its intentions and will only adopt intentions that it believes to be achievable options. The axioms imposed according to the set relations are given in Table 4 (b).

(b) Again the application of the realism axioms of Table 4 is restricted to O-formulas.

[Figure 2. i) RC1-BDI Realism, ii) RC2-BDI Realism, iii) RC3-BDI Realism: set-containment diagrams over the belief- (B), desire- (D) and intention- (I) accessible worlds.]

Table 4. Axioms for the notions of realism for circumspect agents

RC1-BDI:  Intend_i(φ) ⇒ Bel_i(φ);  Intend_i(φ) ⇒ ¬Des_i(¬φ);  Des_i(φ) ⇒ ¬Bel_i(¬φ)
RC2-BDI:  Intend_i(φ) ⇒ Bel_i(φ);  Des_i(φ) ⇒ Bel_i(φ);  Intend_i(φ) ⇒ ¬Des_i(¬φ)
RC3-BDI:  Intend_i(φ) ⇒ Bel_i(φ);  Intend_i(φ) ⇒ Des_i(φ);  Bel_i(φ) ⇒ ¬Des_i(¬φ)

The respective system, called RC1-BDI, consists of the basic BDI axiomatisation and the axioms for RC1-BDI realism.
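Since set containment among accessible worlds corresponds to logical implication here, reconstructions such as Table 4 can be sanity-checked mechanically. The following is a minimal illustrative sketch in Python, not from the paper: it flattens the branching-time, O-formula-restricted semantics into plain sets, treating a proposition as the set of worlds where it holds and an attitude as truth in all accessible worlds; the world sets B, D, I are hypothetical choices satisfying the RC1-BDI conditions.

    from itertools import chain, combinations

    def extensions(worlds):
        """All candidate propositions, i.e. all subsets of the world set."""
        ws = list(worlds)
        return [frozenset(c) for c in
                chain.from_iterable(combinations(ws, n) for n in range(len(ws) + 1))]

    def box(phi, acc):
        """Modal operator: phi holds in every world accessible via `acc`."""
        return acc <= phi

    def implication_holds(worlds, acc1, acc2, negated=False):
        """Check Op1(phi) => Op2(phi), or Op1(phi) => not Op2(not phi), for all phi."""
        for phi in extensions(worlds):
            if box(phi, acc1):
                ok = (not box(worlds - phi, acc2)) if negated else box(phi, acc2)
                if not ok:
                    return False
        return True

    # Hypothetical world sets meeting the RC1-BDI conditions:
    # B is a subset of I; D intersects I; D intersects B.
    W = frozenset("abcd")
    B, D, I = frozenset("ab"), frozenset("bc"), frozenset("abc")

    print(implication_holds(W, I, B))                # Intend(phi) => Bel(phi)       -> True
    print(implication_holds(W, I, D, negated=True))  # Intend(phi) => not Des(~phi)  -> True
    print(implication_holds(W, D, B, negated=True))  # Des(phi) => not Bel(~phi)     -> True

Changing B, D, I so that B is no longer a subset of I makes the first check fail, mirroring how the A2 principle separates the circumspect systems from the others.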
Another type of circumspect agent is depicted in Figure 2(ii). This agent believes that its desires and intentions are achievable options, although its intentions are loosely coupled with its desires. Thus, the set of belief-accessible worlds is a subset of the desire-accessible worlds, the set of belief-accessible worlds is a subset of the intention-accessible worlds as well, and the intersection of the intention- and desire-accessible worlds is not the empty set. The axioms are provided in Table 4 and the respective system is called RC2-BDI. A third variation is illustrated in Figure 2(iii). In this, both the set of belief- and the set of desire-accessible worlds are subsets of the intention-accessible worlds, while the intersection of the desire- and belief-accessible worlds is not the empty set. The axioms that are imposed according to these conditions are again provided in Table 4. This system is called RC3-BDI.

We now turn our attention to the evaluation of the proposed systems with regard to the desiderata for rational agents as suggested by Bratman [1] and Rao and Georgeff [4]. According to our interpretation and basic condition for circumspect agents, the A2 principle is not satisfiable in these systems. Comparing the three notions of realism with strong realism, we see that RC1-BDI, RC2-BDI and RC3-BDI provide certain improvements. In strong realism three of the Asymmetry Thesis principles (A2, A3 and A9) are not satisfied, whereas in RC1-BDI only one is not satisfied, and in RC2-BDI and RC3-BDI two of them are not. In all three systems the Consequential Closure principles are satisfied. In conclusion, the three proposed systems seem to come closer to the requirements for rational BDI agents than that of strong realism.

Table 5. Asymmetry Thesis and Consequential Closure in circumspect BDI agents

       A1   A2   A3   A4   A5   A6   A7   A8   A9   C1   C2   C3
RC1    T    F    T    T    T    T    T    T    T    T    T    T
RC2    T    F    T    T    T    T    T    T    F    T    T    T
RC3    T    F    F    T    T    T    T    T    T    T    T    T

4 Conclusions

The research presented in this paper has been motivated by the need to formalise heterogeneous agents, and in particular circumspect agents, in the BDI paradigm. A circumspect BDI agent will only adopt an intention to optionally achieve ψ if it believes that this is an achievable option. Three different notions of realism for circumspect agents were presented. These were shown to have better characteristics than the notion of strong realism. In the scope of this research and in the effort to investigate all the available options, additional notions of realism were uncovered. However, due to lack of space we only described those that seem to yield the most interesting properties. In contrast to circumspect agents one can consider bold agents. Such an agent can adopt an intention towards a proposition if it does not believe that the proposition is not an achievable option. The basic condition that seems to characterise such agents is: Intend_i(φ) ⇒ ¬Bel_i(¬φ). Notions of realism for such agents were explored elsewhere [2]. In conclusion, we believe that the research presented here comes one step closer towards heterogeneous BDI agents. Perhaps the most interesting aspect of this work is to consider real applications and investigate how real agents that correspond to these formal cognitive models can be built.

References
1. M.E. Bratman, Intentions, Plans, and Practical Reason. Harvard University Press (1987).
2. M. Fasli, Towards Heterogeneous BDI Agents I: Bold Agents. In Proceedings of the 14th International FLAIRS Conference, AAAI Press (2001).
3. A. Rao and M.
Georgeff, Modelling Rational Agents within a BDI-Architecture. In Proc. of the 2nd Int. Conf. on Principles of Knowledge Representation and Reasoning, pp. 473-484 (1991).
4. A. Rao and M. Georgeff, Decision Procedures of BDI Logics. Journal of Logic and Computation, 8(3):293-343 (1998).

A PREFERENCE-DRIVEN APPROACH TO DESIGNING AGENT SYSTEMS

STEFAN J. JOHANSSON
Department of Software Engineering and Computer Science, Blekinge Institute of Technology, S-372 25 Ronneby, Sweden
e-mail: sja@bth.se

JOHAN KUMMENEJE
Department of Computer and Systems Sciences, Stockholm University and the Royal Institute of Technology, S-164 42 Kista, Sweden
e-mail: johank@dsv.su.se

We present a preference-driven approach to the construction of agent systems, in which owners and designers of both the agents and the environments are recognized to influence the preferences of the agents in order to maximize their expected utilities. We propose some general guidelines for using preferences in the process of agent engineering and identify the need for future research in the area.

1 Introduction

One important issue of agency is control. We must not be enticed to believe that agents live in social and environmental isolation. They have been designed to fulfill the goals of their creators by interacting with their environment and other agents. Central to our point of view are the following concepts:

Definition 1 An agent owner (Ao) is the (human or artificial) agent that has the power to launch the agent, as well as make the decision whether the agent should be shut down or be assigned new preferences. The owner expresses its preferences to the agent, and gets it to work toward the given preferences.

Definition 2 An agent designer (AD) is the (human or artificial) agent that has designed (and possibly implemented) the control mechanism of an agent. By control, we mean the internal evaluation of the environment and the owner preferences.

Definition 3 A designer of an environment (ED) is the (human or artificial) agent that has designed and possibly implemented the rules and conditions under which agents are able to act in the environment.

Definition 4 An environment owner (Eo) is the (human or artificial) agent whose run-time preferences are reflected in the dynamics of the rules and the conditions under which agents are able to act in the environment.

We will try to clarify the role of each of these characters in the following sections. In the next section, we give some (artificial) examples of agent systems and also discuss how the different users and designers relate to their parts of the system. Section 3 discusses a real example of preference dynamics based on the simulated league in RoboCup, in which designers and users of both agents and environments act on the preferences of the others. We finish off with a section on discussion and future work.

2 The Meta-Design of a System

In an agent system, we may identify the following features: First, each of the agents has a set of dynamic preferences expressed by its owner as well as a set of static preferences decided at design level. Secondly, the agents may take into account preferences expressed by the designer and the owner of the environment. Thirdly, each of the agents optimizes its actions according to its preferences, its knowledge and its abilities, i.e. the agents are boundedly rational (more about bounded rationality is found in e.g. Boman [1]).
Fourthly, the actions of the agents influence the environment either directly or indirectly, and fifthly, changes occur in the environment as a result of the actions of the agents. These are the possibly observable side-effects of the system that the owner may benefit from, and possibly adjust its preferences according to.

The Agents' Choice of Actions: The assumption of bounded rationality is pragmatic in the sense that an agent may be unaware of the preferences, abilities, etc. of other agents. Suppose an agent were not boundedly rational; then it would deliberately be acting non-optimally with respect to its design objectives, which the user and designers would consider to give the best action. Instead, some other preferences must have been present, which is in contradiction with the fact that the only things that guide the behavior of an agent are the preferences of its owner and its designers and the state of the environment.

The Observations of the Owner: It is rarely the case that agents as such are the reason for running a system (exceptions are to be found e.g. in agent-based simulations [2]). Instead, what the owners of the agents are generally interested in is the side-effects of the actions of the agents. To illustrate this, imagine an office environment agent. The owner of this agent is interested in the result of the negotiations, i.e. that the local environment gets as close to the owner preferences as possible, not in the negotiation protocols used nor in how many agents it had to negotiate with.

[Figure 1. The different sources of valuation: owner preferences and environment preferences.]

Design Principles for Agent Environments: As an ED, the task is to implement the rules and conditions under which agents that act in the environment will be evaluated. At the same time as the ED has to design a (hopefully unambiguous) framework of rules, much effort must be put into the design of a system of punishments and rewards. The Eo then sets the rewards and the punishments for certain behaviors in a way that will lead the expected behavior to an acceptable behavioral equilibrium. If not, the result will be an environment in which the agents niche themselves in behaviors that are sub-optimal for the environment as a whole. We therefore suggest the following schematic guidelines for environment design and owner maintenance: (i) set up the conditions under which the agents are allowed to act in the environment; (ii) assign to each (class of) possible allowed state(s) a preference describing the estimated value of the state (from the perspective of the ED/Eo); and (iii) calculate the assignment of punishments and rewards of behaviors that, when implemented in the environment, will have its equilibrium in the preferred states. The complexity of the calculation of punishments and rewards is of course dependent on the complexity of the allowed actions. It is not our purpose to expound our ideas about how to calculate punishments and rewards here; instead we leave it for future work.

Design Principles for Agents: Each agent has a set of preferences, in which each preference is a measure of the importance that a certain goal is fulfilled. We can distinguish two types of these preferences: static and dynamic. The static preferences are the ones set at the designer level when the agents and the environments are implemented. The dynamic preferences are the ones set by the owners of the agents, and to some extent the owners of the environment, at run-time.
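To make the static/dynamic split concrete, here is a minimal sketch (Python; all names and numbers are hypothetical, not from the paper): static preferences are frozen in by the AD when the agent is implemented, dynamic ones are (re)assigned by the Ao at run-time through an owner interface, and the agent scores candidate actions against the combined preference set.

    from dataclasses import dataclass, field

    @dataclass
    class PreferenceDrivenAgent:
        static_prefs: dict                                  # goal -> importance, set by the AD
        dynamic_prefs: dict = field(default_factory=dict)   # goal -> importance, set by the Ao

        def set_owner_preferences(self, prefs):
            """Owner interface: the Ao expresses its run-time preferences."""
            self.dynamic_prefs = dict(prefs)

        def score(self, outcome):
            """Importance-weighted degree to which an outcome satisfies each goal."""
            prefs = {**self.static_prefs, **self.dynamic_prefs}  # owner overrides designer
            return sum(w * outcome.get(goal, 0.0) for goal, w in prefs.items())

        def choose(self, actions):
            """Pick the action whose expected outcome best matches the preferences."""
            return max(actions, key=lambda a: self.score(actions[a]))

    # A toy office-environment agent in the spirit of the example above:
    agent = PreferenceDrivenAgent(static_prefs={"stay_safe": 1.0})
    agent.set_owner_preferences({"room_temperature": 0.8})
    actions = {"heat": {"room_temperature": 0.9, "stay_safe": 0.7},
               "idle": {"room_temperature": 0.2, "stay_safe": 1.0}}
    print(agent.choose(actions))  # -> "heat" under these illustrative numbers

The dictionary merge is one simple way of letting run-time owner preferences override design-time ones; a real design would also have to weigh in the environment's rewards and punishments.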
We may expect a further development of the skills and abilities of the agents as the field of agent engineering matures. This means that they will be able to (if possible) exploit the weaknesses of the environments that they act in, as well as the weaknesses of other agents. Today these weaknesses are exploited manually through the expression of explicit owner preferences, but as the level of abstraction increases, we may expect this to be automated in such a way that the ADs provide skills that automagically find out the weak spots of the environment and use them for the agent's own purposes. A suggested set of guidelines for ADs is therefore to design/implement: (i) abilities to find out the rules and conditions of an environment (e.g. by look-up services, etc.); (ii) abilities to optimize the behavior with respect to: a) the actions possible to perform in the given environment, b) the expected rewards and punishments of different behaviors in the environment, and c) the preferences of the Ao; (iii) an interface to the Ao in which the Ao can express its preferences.

The Relation between the Agent and the Environment: It is possible to recognize two different types of relationships — between an agent and its environment, and between agents (i.e. communicative acts). Also, an agent may observe the effects of its own and other agents' actions, even though it may be hard or even impossible for the agent to draw any causal conclusions. If we take a closer look at what happens in the environment, the actions are performed under the assumption of the agent that the action was the best possible thing to do in order to reach its goals, expressed by its preferences, regardless of whether they are communicative or not. The agent must in all cases to some extent observe the external state of the environment and the other agents, but the distribution of computational attention between, for example, observing and acting is individual from agent to agent. This is typically a parameter that is determined at the designer level. For instance, an agent that relies on learning in order to perform well may be designed to be more observant than an agent that must be prepared for quick responses to changes in the preferences of its owner. This means that it is possible that one agent in one system collects all possible observations, while another agent only observes the actions performed by itself. A study of the trade-off between deliberation and action can be found in e.g. the work of Schut [3].

3 An Exemplification of Preferences

To exemplify our point, we use the student implementations of RoboCup teams at Linkoping University. The example, though somewhat artificial, clearly illustrates a number of occasions where the preferences of the environment designer, the agent designer, and the agent owner influence the development process. RoboCup can simply be described as robots playing soccer; however, we focus on the simulated league, as we avoid dealing with the ambiguity of the real world (more information on RoboCup and the simulated league is available in Kummeneje [4]). The designers of the server of the simulated league are in our example considered to be the environment designer. RoboSoc [5] is a base to ease the creation of new soccer-playing teams, and Heintz is thereby considered in our example to be the agent designer, while the students creating their teams are considered to be the agent owners.
The agent owners may or may not be aware of the preferences expressed in the simulation server and the RoboSoc platform; however, if they are aware of the preferences (and most likely any caveats), they might be able to use these preferences. For instance, in 1997 and 1998 the maximum ball speed was not limited, allowing a team to accelerate the ball to incredible speeds by simply passing the ball a number of times. After the discovery of this feature, the server was changed to have a fixed limit. We thereby recognize that the sets of preferences of the ED, AD, and Ao are not fixed, but dynamically changing over time. The preferences may also be viewed as being delicately intertwined.

4 Discussion and Concluding Remarks

The designer of the agent may be the same as the owner; however, more likely is that the future user of an agent system is someone who is not able to program the low-level algorithms, etc., but who prefers to use the agent at the service level. This will of course raise the issue of trust in agent design. How can we as users of an agent make sure that the agent we have launched to perform a certain task will do its best to serve us without putting the interests of the agent designer first? For instance, should we trust a flight-ticket-buying agent designed by someone on the payroll of a major airline company? Questions like this are important to ask if we as agent designers and representatives of the agent research community would like to deserve respect for what we are doing from the point of view of the users of our agents.

We have presented a perspective on agent systems based on preferences set by users and designers, and suggested general guidelines for the engineering of agents, as well as agent environments. From an evolutionary perspective, we may expect the agent designers to become better at taking other, external preferences into consideration, while the owners get less interested in how exactly the agent works and more keen on having their preferences satisfied. The environment designers will concentrate on setting up rules specific to the domain the environment is designed for. These rules will not be able to control what actions can be performed by which agents at what time. However, indirectly the punishments and the rewards of the environment will have a great impact on these matters. Even though this study includes a good example of the preference perspective in the domain of RoboCup, it is far too early to draw any extensive conclusions based on it, and we suggest that more effort be put into this promising area of research.

Acknowledgments

Stefan Johansson would like to thank the EC research programme IST-1999-10298 ALFEBIITE and the KK-foundation for funding and inspiration for this work [6]. The authors thank Paul Davidsson and Magnus Boman for comments.

References
1. M. Boman. What is rational agency? Technical Report 95-048, Department of Computer Systems Sciences, 1995. Internal Working Note.
2. H.J.E. Verhagen. Norm Autonomous Agents. PhD thesis, Department of Computer and Systems Sciences, Stockholm University and Royal Institute of Technology, 2000.
3. M. Schut. Intention reconsideration as discrete deliberation scheduling. In Proceedings of the 2001 AAAI Spring Symposium on Game Theoretic and Decision Theoretic Agents, Technical Report SS-01-03. AAAI Press, 2001.
4. Johan Kummeneje. RoboCup as a Means to Research, Education, and Dissemination, March 2001.
Licentiate thesis, Department of Computer and Systems Sciences, Stockholm University and the Royal Institute of Technology.
5. Fredrik Heintz. RoboSoc: a System for Developing RoboCup Agents for Educational Use. Master's thesis, Department of Computer and Information Science, Linkoping University, March 2000.
6. The ALFEBIITE home page, http://www.iis.ee.ic.ac.uk/alfebiite.

AGENT CONSUMER REPORTS: OF THE AGENTS, BY THE AGENTS, AND FOR THE AGENTS

XIAOCHENG LUAN, YUN PENG, AND TIMOTHY FININ
University of Maryland, Baltimore County, 22215 Overview Lane, Boyds, MD 20841, USA
E-mail: {XLUAN1, YPENG, FININ}@CS.UMBC.EDU

Service matching is critical in large, dynamic agent systems. While finding exact matches is always desirable as long as an agent knows what it wants, it is not always possible to find exact matches. Moreover, the selected agents (with exact matches) may or may not provide quality services. Some agents may be unwilling or unable to advertise their capability information at a sufficient level of detail, some might unknowingly advertise inaccurate information, while others might even purposefully provide misleading information. Our proposed solution to this problem is the agent "consumer reports". The broker agent will not only collect the information advertised by the service provider agents, but also learn about the experiences the consumer agents have had with their service providers. It might also hire some agents to test certain service providers to see how well they can do what they claim they are capable of doing. Agent consumer reports will then be built based on the information collected. The advanced level of agent consumer reports will also dynamically capture the probabilistic distribution of the services and use it to assess the probability of a match. We plan to extend LARKS and use it as our agent capability description language.

1 Introduction

Finding the right agent(s) for the right task (service) is critical to achieving agent cooperation in large, dynamic agent systems. A popular approach to this problem is to use a broker agent (which may also be called a matchmaker, or facilitator) to connect the service provider agents and the service consumer agents via service matching. Typically a broker agent recommends service providers based on the capabilities/services advertised by the service providers themselves. Matching methods have evolved from early, simple KQML-performative-based matching to syntax- and semantics-based matching, and from yes/no matches to matches with probabilities. However, we may still have problems, since some agents may be unwilling or unable to advertise their capability information at a sufficient level of detail; some might unknowingly advertise inaccurate information; while others might even purposefully provide misleading information. We have similar problems in the real world: we don't know whether the colorful, fancy, and even touching commercials are true or not. There is no perfect solution to this real-world problem, but consumer reports certainly help a lot (besides the justice system). Consumer reports are created using information from the manufacturer's specifications, consumers' feedback, and test results on the products. They provide guidance for consumers to choose the right product. We believe that this consumer reports approach should work for the agent world, too.
By following a simple brokering protocol (which will not be discussed here because of space limitations), the broker agent will not only collect the information advertised by the service provider agents, but also learn about the experiences the consumer agents have had with their service providers. It might also hire some agents to test certain service providers to see how well they can do what they claim they are capable of doing. Based on the collected information and the domain knowledge, consumer reports can be built to assist in service matching. Moreover, the broker agent can dynamically capture the probabilistic distribution of the agent services and use this information to assess the probability of a service match. Finally, our approach goes beyond the simple notion of a "reputation server" in that it discovers and refines a complex, symbolic model of a service provider's performance. The rest of this article is organized into two sections. In section 2, we describe how the agent consumer reports will be built, and we discuss some related issues in section 3.

2 Building Consumer Reports

In our model of an agent system, there are three types of agents: service provider agents, service consumer agents, and broker agents. A broker agent is the one responsible for building the agent consumer reports. To simplify the problem, but without loss of generality, we make the following assumptions: (1) all the agents (including the broker agent) in a system share a common domain ontology, and (2) the security and/or privacy issues are orthogonal to what we will discuss in this article.

2.1 Representation

We are extending the LARKS framework for use in describing an agent's capabilities. LARKS, the Language for Advertisement and Request for Knowledge Sharing, is an agent capability description language developed at CMU. It describes an agent's service by specifying the context, the data types, the input and output variables, and the input and output constraints. It also has a slot for the definition of the concepts used in the description. The matchmaking scheme in LARKS is relatively flexible and powerful. It has five filters, each of which addresses the matching process from a different perspective. "Context matching" determines if two descriptions are in the same or a similar context; "profile comparison", "similarity matching", and "signature matching" are used to check if two descriptions syntactically match; "semantic matching" checks if the input/output constraints of a pair of descriptions logically match. Based on the needs of a specific application domain, these filters can be combined to achieve different types/levels of matching. Since LARKS doesn't provide mechanisms for describing the "ratings" of an agent service, we plan to extend LARKS so that, besides the 7 standard slots described above, a description will also have zero or more "CR" (Consumer Reports) slots. These slots (if any) are typically domain dependent, and will be used to describe the strength of various aspects of the service provided by some specific agent. For example, the integer sort service description can have some CR slots (the last two slots below) as shown in Figure 1.

    Context:            Sort
    Types:
    Input:              xs: ListOf Integer;
    Output:             ys: ListOf Integer;
    InConstraints:      le(length(xs), 100);
    OutConstraints:     before(x,y,ys) <- ge(x,y); in(x,ys) <- in(x,xs);
    ConcDescriptions:
    PriceIndex:         2 (10 is best)
    ResponseTimeIndex:  1 (10 is best)

Figure 1. Capability description for integer sort, with CR slots.
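Read as a data structure, the description in Figure 1 might look as follows. This is a hypothetical rendering in Python (LARKS defines its own concrete syntax), and the small helper anticipates the default consumer-reports filter for integer-typed slots discussed next.

    # Hypothetical in-memory rendering of the Figure 1 description: the seven
    # standard LARKS slots plus the two domain-dependent CR slots.
    integer_sort = {
        "Context":          "Sort",
        "Types":            None,
        "Input":            "xs: ListOf Integer;",
        "Output":           "ys: ListOf Integer;",
        "InConstraints":    ["le(length(xs), 100);"],
        "OutConstraints":   ["before(x,y,ys) <- ge(x,y);", "in(x,ys) <- in(x,xs);"],
        "ConcDescriptions": None,
        "CR":               {"PriceIndex": 2, "ResponseTimeIndex": 1},  # 10 is best
    }

    def default_cr_filter(candidates, slot):
        """Order candidate descriptions by an integer-typed CR slot, best first."""
        return sorted(candidates, key=lambda d: d.get("CR", {}).get(slot, 0), reverse=True)

    # Applied after the five LARKS filters have narrowed the candidate set:
    best_first = default_cr_filter([integer_sort], "ResponseTimeIndex")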
Basically, we will add another type of filter, the consumer reports filter, to handle the CR-related slots. Since these slots are usually domain dependent, the evaluation and comparison of these slots might need to be done in a domain-dependent way. A default CR filter can be provided, e.g., to compare integer-typed slots. The system will allow customized CR filters to be plugged in to handle the CR slots in a domain-dependent way during matchmaking or comparison. It is recommended that the consumer reports filter be applied after all the other designated filters have been applied. The CR filter will then be used to pick the best one(s) from all the candidates. Please note that while we plan to extend LARKS and use its service/capability description language and its matching filters, we think the approach proposed here is applicable to other representations or systems as well.

2.2 Building Consumer Reports

The consumer reports are built based on the information the broker collects about the service provider agents. The information comes from various channels: the feedback from service consumer agents; testing results (relevant agents can be asked or "hired" to test the service provider agents, when appropriate); the service descriptions advertised by the service provider agents; the domain knowledge; etc. If the broker also performs task brokering (in which the broker receives a query, finds an appropriate agent, forwards the query to that agent, and passes the result back to the requesting agent), the requests and the results are useful sources for learning too. The building of consumer reports is more than just collecting feedback data and assigning ratings. There are two levels of consumer reports: the basic level and the advanced level. The basic level simply assigns ratings to each relevant CR slot of the original service descriptions based on the information collected. The advanced level, however, goes beyond the originally advertised service descriptions. It might also rate the sub-classes and super-classes of the advertised service class, and it captures the probabilistic distribution of the services. Let's use an example to illustrate the basic idea. Consider selling televisions as a service with three sub-service classes: selling traditional TVs, selling HD-ready TVs, and selling HDTVs. Suppose the broker discovered that 85% of the advertisements/requests are about traditional TVs, 8% are about HD-ready TVs, and the rest (7%) are about HDTVs. Then if an agent requests a recommendation on a "selling TV" service, the broker would be able to recommend a traditional TV seller with pretty high confidence, or recommend an HD-ready TV seller or an HDTV seller with low confidence (if there is no better choice). Five years later, the distribution of the three sub-service classes might change to 30%, 20%, and 50% respectively. The broker agent will then be able to dynamically capture the changes in the probabilistic distribution and change its matching criteria accordingly. On the other hand, while most of the TV sellers (those who advertise that they sell TVs) sell traditional TVs, not that many TV sellers sell HDTVs. So, based on the probabilistic distribution, the broker agent would be more confident in recommending a TV seller if the request is about traditional TVs, while it would be less confident (in recommending a TV seller) if the request is about HDTVs.
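The TV example can be paraphrased as simple frequency counting. The sketch below (Python; the class and method names are hypothetical, not from the paper) records advertisements/requests per sub-service class and turns the observed distribution into a confidence value for a partial match.

    from collections import Counter

    class BrokerStats:
        """Toy model of the broker's dynamically captured service distribution."""

        def __init__(self):
            self.counts = Counter()

        def observe(self, service_class, n=1):
            """Record advertisements or recommendation requests for a sub-class."""
            self.counts[service_class] += n

        def confidence(self, service_class):
            """Estimated probability that a generic provider matches this sub-class."""
            total = sum(self.counts.values())
            return self.counts[service_class] / total if total else 0.0

    broker = BrokerStats()
    broker.observe("traditional_tv", 85)
    broker.observe("hd_ready_tv", 8)
    broker.observe("hdtv", 7)

    # A request for a generic "selling TV" service: the broker recommends the
    # sub-class match with the highest estimated probability of success.
    print(broker.confidence("traditional_tv"))  # 0.85 with this toy data

Re-running the counts with the 30%/20%/50% figures from the "five years later" scenario shifts the recommendation toward HDTV sellers, which is the dynamic adaptation the text describes.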
When computing the probabilistic distributions, we consider both how many sub-classes a service class has and the frequency of the advertisements and recommendation requests on that service. Moreover, the feedback from the consumer agents will also be taken into account. In large, heterogeneous agent systems, while exact service matches are always desirable (as long as you know what you want), it is not always possible to find exact matches. Therefore, it is important for the broker agent to learn the probabilistic distribution of the services so as to identify the partial matches that have a higher probability of success.

3 Discussion

This paper presents some preliminary concepts and plans for an adaptive service broker which learns and refines a model of a service provider's performance. Although we have touched on a number of issues, significant additional issues remain, as does a concrete implementation. The related issues not addressed here include (but are not limited to) the security issue, the privacy issue, the fairness issue, and the ontology issue. We believe that the security issue and the privacy issue are orthogonal to what we have discussed here. The fairness issue is more closely related. Though we believe that in general the agent consumer reports provide a basis for better service matching, the ratings on specific services may not always be "accurate": the evaluation of "accuracy" itself is already a big issue. One (partial) solution in mind is for the broker agent to always return a list of service provider agents (instead of only the best one(s)), but ordered. As for the ontology issue: what if the agents share only a limited subset of an ontology, or use different ontologies altogether? This issue is somewhat orthogonal, but not cleanly so. Employment of ontology translation or ontology negotiation might help. One of the ideas behind this work is the law of locality. The approach proposed here is meant to capture both the temporal locality (e.g., the distribution may change over time) and the spatial locality (e.g., a subset of the services may get referenced frequently). We will develop a prototype implementation of a system which is partly based on the LARKS framework. We will incorporate new ideas which are evolving from the semantic web [Berners-Lee, et al. 2001] and the DAML [DAML, 2000] language in particular. Some initial work has been done to explore how DAML can be used to represent and reason about web services and agent services [DAML-S 2001, McIlraith and Zeng 2001].

References
1. [Cohen, et al., 1992] Cohen, W., Borgida, A. and Hirsh, H. Computing Least Common Subsumers in Description Logics. Proceedings of the National Conference on Artificial Intelligence - AAAI 92, pp. 754-760, 1992.
2. [Decker, et al., 1996] Decker, K., Sycara, K. and Williamson, M. Modeling Information Agents: Advertisements, Organizational Roles, and Dynamic Behavior. Working Notes of the AAAI-96 Workshop on Agent Modeling, AAAI Report WS-96-02, 1996.
3. [Dellarocas 2000] Dellarocas, C. Immunizing online reputation reporting systems against unfair ratings and discriminatory behavior. Proceedings of the 2nd ACM Conference on Electronic Commerce, Minneapolis, MN, October 17-20, 2000.
4. [Genesereth & Singh, 1993] Genesereth, M. R. and Singh, N. P. A Knowledge Sharing Approach to Software Interoperation. Stanford Logic Group Report Logic-93-12.
5. [Gruber, 1993] Gruber, T. R. A Translation Approach to Portable Ontologies. Knowledge Acquisition, 5(2):199-220, 1993.
6. [Michalski, et al., ????] Michalski, R. S., Carbonell, J. G., Mitchell, T. M. Machine Learning: An Artificial Intelligence Approach. Tioga Publishing Company.
7. [Mui 2001] Mui, L., Szolovits, P. and Wang, C. Sanctioning: Applications in Restaurant Recommendations based on Reputation. Proceedings of the Fifth International Conference on Autonomous Agents, Montreal, May 2001.
8. [Sycara, et al., 1998] Sycara, K., Lu, J. and Klusch, M. Interoperability among Heterogeneous Software Agents on the Internet. CMU-RI-TR-98-22.
9. [Berners-Lee, et al. 2001] Tim Berners-Lee, James Hendler and Ora Lassila. The Semantic Web. Scientific American, May 2001.
10. [Chen et al., 2001] Harry Chen, Anupam Joshi, Tim Finin. "Dynamic Service Discovery for Mobile Computing: Intelligent Agents Meet Jini in the Aether." The Baltzer Science Journal on Cluster Computing, March 2001 (Volume 3, No. 2).
11. [DAML 2000] DAML specification, http://www.daml.org/, October 2000.
12. [DAML-S, 2001] DAML-S: A DAML for Web Services. White paper, SRI, http://www.ai.sri.com/daml/services/daml-s.pdf
13. [Labrou, et al., 2001] Yannis Labrou, Tim Finin, Benjamin Grosof and Yun Peng. Agent Communication Languages. In Handbook of Agent Technology, Jeff Bradshaw, ed., MIT/AAAI Press, 2001.
14. [McIlraith and Zeng, 2001] McIlraith, S., Son, T.C. and Zeng, H. "Semantic Web Services". IEEE Intelligent Systems, Special Issue on the Semantic Web. To appear, 2001.
15. [WSDL, 2001] Web Services Description Language (WSDL) 1.1, January 23, 2001, Microsoft Corporation, http://msdn.microsoft.com/xml/general/wsdl.asp

Logical Formalizations Built on Game-Theoretic Argument about Commitments

Lamber Royakkers and Vincent Buskens*

Abstract

The formalization of commitment is a topic of continuing interest in Artificial Intelligence (AI)'s understanding of human cooperative activity and organization. Such formalizations are crucial for clarifying rational behavior. AI research on commitments, however, has been focusing on describing systems of agents, neglecting the individual incentives to perform certain actions. We argue in this paper that an understanding of a system of agents needs to incorporate not only a logical system of possible actions, but also an incentive structure related to the actions and the interdependence of the agents involved in interactions between several agents. As an example we discuss the use of commitments in interactions between two agents. By adding game-theoretic reasoning, we will not only be able to describe different commitment systems in various (legal) settings, but we can also determine whether or not such a commitment system is expected to be socially efficient, desirable, and able to influence human behavior.

1 Introduction

Many social interactions between two (or more) agents demand, for various reasons, the use of commitments to reach socially efficient outcomes or to avoid socially inefficient ones. We will start with an example. Assume you want to write an article together with a colleague. You are both convinced that joining forces will produce a better product than writing two articles separately. However, you as well as your colleague cannot be sure that the other will actually invest his fair share in this joint project (cooperate). Still, if both of you work hard, you will both be satisfied. You realize that if the colleague sits back (defects) while you do the job, he is even better off, and you would have preferred to write an article alone.
Clearly, your colleague also fears that you sit back and profit from his effort.

* Supported by a grant from the Niels Stensen Foundation and by a grant from the Netherlands Organization for Scientific Research (NWO). Email: l.m.m.royakkers@tm.tue.nl, v.buskens@fss.uu.nl.

                             Agent 2
                        Defect    Cooperate
    Agent 1  Defect      2,2         4,1
             Cooperate   1,4         3,3

Figure 1: Strategic form of the Prisoner's Dilemma Game

The "game" described above (without commitments) is called a Prisoner's Dilemma Game [3]. In strategic form (for all basic game-theoretic terminology and aspects we refer the reader to [6]), the game is shown in Figure 1. The values in the cells of the matrix indicate the payoffs for each agent related to a combination of actions of the two agents. The expected action in this game is "defect" by both agents, because independently of the action of the other agent, each agent is better off by defecting. Consequently, both agents receive 2 instead of the 3 they could obtain if they both cooperated. Thus, the expected outcome (2,2) is socially inefficient. However, by committing to cooperation, e.g., by mutually informing the responsible professor, who can impose sanctions on the researcher who does not work on the joint paper, cooperation becomes the best option for both agents. Hence, a mutual commitment leads to a better outcome for both agents in this situation.

If we want to represent such a simple interaction in a logical system, only the possible actions are described. Commitment is then introduced as an elementary proposition. This implies that the commitment is a fact that does or does not occur. More sophisticated theories [2, 4] describe a formalization of motivational attitudes such as intentions, goals, and wishes that explain why agents behave the way they do. However, within the logical systems there is nothing that drives the motivational attitudes. It is only stated that if certain attitudes are present, commitments are used, without explicit reasoning about why and when a certain attitude leads to a commitment. For example, in organization theories of Distributed Artificial Intelligence (DAI), negotiation systems, and cooperative software agents, the notion of commitment is used as a mediator of the transformation of the collective activity to the agents, expressing issues such as delegation, adaptation, intention, responsibility, etc., which constitutes the theory of collective activity in a narrower way (cf. [1]). We use the primitive notions of intention, knowledge, and goal to formally define social commitment, inspired by Castelfranchi [1, 2]:

    COMM(i, j, τ) =def INT(i, τ) ∧ K_j(INT(i, τ)) ∧ GOAL(j, ACHIEVE(i, τ)),    (1)

where K_j(φ) stands for the fact that agent j knows φ, and is interpreted in the Kripke-style possible-worlds semantics. The definition means that agent i is committed to agent j to achieve task τ if and only if agent i has the intention to do that, agent j knows this, and agent j is interested in i fulfilling i's intention.
For now, we will give a very informal description of what we mean by a commitment in this paper. Later we will become more precise and we will show that there are various types of commitments. Definition 1 A commitment is an action by an agent before an interaction with other agents that signals to the other agents the intention to perform a particular action later on in the interaction. We restrict ourselves in this paper to commitments that ensure that the agent who commits to a certain action will execute this action (binding commitments). 2 Adding Game Theory Our main criticism of logical systems is that they do not explain but only de­ scribe actions by agents, probably including the use of commitments. Logical systems fail to explain why commitments are used in some situations and not in others. Logical systems cannot distinguish which commitment is or is not credible in a given interaction. The reason is that logical systems generally ne­ glect the incentives related to various combinations of actions and the strategic interdependence between different agents. Besides explaining the use and effectiveness of commitments, game theory can help to distinguish between different types of commitments. As an illustra­ tion, we consider games in which two agents have each two possible actions and preferences over the four possible outcomes are strictly ordered for both agents. Because only the ordering of the payoffs is important for the analyses, they can be labeled as 1, 2, 3, and 4.2 Rapoport, Guyer, and Gordon [5] show that there exist 78 distinct 2x2 games with strictly ordered payoffs.3 Each of the four outcomes represents a possible goal state for the agents. The goal states for the two agents do not need to coincide. For considering commitments, we classify these 78 games in eight groups. Figure 2 presents the matrices for one representative of each group. In these games, agent 1 chooses between T(op) and B(ottom), while agent 2 chooses between L(eft) and R(ight). The example of the introduction is not included in this set of games, because in this example the four possible outcomes are not strictly ordered. Including games for which the outcomes are not strictly ordered complicates the analysis considerably. 3 Two games are considered the same if the one can be constructed from the other by changing rows, columns, or person labels. 95 T B L 4,4 2,2 R 3,3 1,1 T B L 2,4 3,2 R 4,1 1,3 T B L 3,3 4,1 R 1,4 2,2 T B L 2,4 1,2 R 4,1 3,3 (1) L R 2,3 1,2 4,1 3,4 (5) (2) L R (3) L R 3,4 1,2 2,1 4,3 2,4 1,2 3,1 4,3 (6) (7) (4) L R 3,3 4,2 2,4 1,1 (8) Figure 2: Representative examples of 2 x 2 games with strictly ordered outcomes Examples (1) and (2) illustrate two situations in which both agents do not want or need to commit to any of the two actions. Example (1) represents a group of 58 games in which at least one of the two agents has a dominant strategy.4 The other agent optimizes her payoff given the dominant strategy of the first agent, and both agents cannot do better using a commitment for some other strategy.5 Example (2) represents 4 games in which none of the agents has a dominant strategy and there exists only one (mixed) equilibrium in which the agents randomly choose between the two options. Their expected payoffs lie between 2 and 3. If one agent would commit, she would not obtain more than 2.6 For examples (1) and (2) it is impossible to formalize a commitment that affects the behavior of the agents. 
I.e., any commitment the agents want to make leads to the same behavior as they would execute if there were no commitment. Example (3) is the Prisoner's Dilemma game. This is a very special game. In this game, the game-theoretic solution predicts that both agents obtain 2, while they both would prefer to obtain 3. However, this would imply that both agents have to deviate from their dominant strategy. Consequently, the only commitment arrangement that can work in this game is one in which both agents commit to not playing the dominant strategy. Neither agent wants to commit unilaterally to Top or Left, respectively, because then the other agent certainly plays the dominant strategy, leaving the first agent with the worst outcome possible. This can formally be expressed as follows:

    COMM(1, 2, Top) ∧ COMM(2, 1, Left),    (2)

implying that agent 1 commits to playing Top and agent 2 to playing Left, which leads to the goal state (3,3). This bilateral commitment can be seen as a special case of a collective commitment. Example (4) is also a unique game. In this game, agent 1 wants to commit to playing Bottom, which would result in a payoff of 3 for both agents. However, agent 2 prefers to play the game without commitment, which leads to a payoff of 4 for her. This shows that definition (1) is too restrictive to incorporate some kinds of commitments. It requires that the commitment of one agent contributes to a goal of the other agent. This presupposes that both agents have the same goal state. However, example (4) illustrates a situation in which (3,3) is the goal state of agent 1 while (2,4) is the goal state of agent 2. Moreover, without commitment the outcome will be (2,4). Consequently, agent 1 wants to commit to playing Bottom. Because this is not the goal state of agent 2, such a commitment does not follow definition (1). However, an alternative definition:

    COMM'(i, j, τ) =def INT(i, τ) ∧ K_j(INT(i, τ))    (3)

formalizes a unilateral commitment that does not need to lead to the goal state of agent j. This definition drops the requirement that there be an agreement between the agents about whether or not the commitment can be made. (For example, a car driver will stop for somebody who has started crossing the road, although the car driver would have preferred to continue driving while the other person waited at the sidewalk. In this example, starting to cross the road is the commitment signaling the intention of the pedestrian to go first.) Example (5) represents a group of 8 games in which both agents agree that one agent should commit. Without commitment they both obtain less than in the situation where one agent commits. In example (5), agent 1 has to commit to playing Bottom. Example (6) represents 3 games, which could also be called "coordination" games. In these games there are several equilibria, and both agents want to coordinate on one of the equilibria, but without a commitment they do not have a clue about what the other agent will choose. In these games, the agent who commits first is best off, and the other agent is better off than if there were no commitment, although she would have preferred to be the one who committed. Note that in these games a two-sided commitment does not work if, for example, agent 1 commits to Bottom and agent 2 commits to Left. Definition (1) is a suitable formalization for a commitment that leads to a socially efficient outcome in example (5) and example (6).
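The effect of a unilateral commitment in these games can be checked mechanically. The sketch below (Python; illustrative only, not the authors' method) fixes agent 1's committed row, lets agent 2 best-respond, and compares the resulting outcomes for example (4), where payoffs are (agent 1, agent 2).

    def best_reply_col(game, row):
        """Column agent 2 picks once agent 1 is committed to `row`."""
        return max(("L", "R"), key=lambda col: game[(row, col)][1])

    def committed_outcome(game, row):
        """Outcome when agent 1 commits to `row` and agent 2 best-responds."""
        return game[(row, best_reply_col(game, row))]

    # Example (4) from Figure 2: without commitment, agent 1's dominant
    # strategy Top yields (2,4); committing to Bottom yields (3,3).
    game4 = {("T", "L"): (2, 4), ("T", "R"): (4, 1),
             ("B", "L"): (1, 2), ("B", "R"): (3, 3)}

    for row in ("T", "B"):
        print(row, committed_outcome(game4, row))
    # T (2, 4)  -- same as the no-commitment outcome
    # B (3, 3)  -- agent 1 gains by committing; agent 2 would rather she did not

Running the same two lines over the other matrices of Figure 2 reproduces the classification in the text: in examples (1) and (2) no committed row improves on the no-commitment outcome, while in examples (5)-(8) at least one does.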
However, for example (6) there is a complication, because both agents might commit, but they should not commit simultaneously. Therefore, a suitable commitment system should prescribe which agent is allowed to commit. Both agents want to commit, because the committed agent receives 4 while the other agent receives 3. The system can be formalized by the convention:

    (COMM(1, 2, Bottom) ∨ COMM(2, 1, Left)) ∧ ¬(COMM(1, 2, Bottom) ∧ COMM(2, 1, Left)).    (4)

Example (7) looks very much the same as example (6). The only difference is that agent 1 prefers to play the game without a commitment rather than have agent 2 commit to playing Left, while the latter is the best solution for agent 2. On the other hand, both agents prefer playing the game with agent 1 committed to playing Bottom over playing the game without a commitment. There are two games with this structure. This analysis suggests that COMM(1, 2, Bottom) is the preferred formalization of a commitment in this situation. Finally, example (8) is a unique example in which different commitment systems lead to three different solutions. If the agents can commit unilaterally, agent 1 commits to playing Bottom, while agent 2 commits to playing Right. The one who commits first obtains 4, while the other, who has to follow, obtains 2. However, if they can agree on committing to play Top and Left, they both obtain 3, which is still better than playing without a commitment, because the expected outcome for both agents is then somewhere between 2 and 3. The socially efficient outcome (3,3) can only be reached with a bilateral commitment, expressed by formula (2).

What we learn from this classification of simple 2x2 games is that the definition of a social commitment provided in logical systems leaves too many essential dimensions of a commitment unspecified. If the commitment has to be agreed upon by the non-committed agent, the committed agent will commit in other situations than if the committed agent can commit unilaterally, which is neglected in existing logical formalizations. Therefore, we introduced another type of commitment using the operator COMM', which does not require that the intended action of the committed agent contributes to the goal of the other agent. It might be crucial whether one or both agents have an option to commit to a move and in which order the agents obtain the opportunity to commit. In game-theoretic terms, these options can be formalized by adding moves to the game that implement the possibilities for the agents to commit and, eventually, to accept the commitment of the other agent. These moves might be specified simultaneously or sequentially. Using game-theoretic reasoning, solutions of these extended games can be calculated, which provides predictions about whether or not commitments will be used and what the consequences of these commitments are, depending on the chosen commitment system. As a result, insights are obtained about whether a commitment system is socially efficient or favors one of the two agents.

References
[1] Castelfranchi, C., Commitments: From individual intentions to groups and organizations, in: V.
Lesser (ed.), Proceedings of the First International Conference on Multi-Agent Systems, AAAI Press and MIT Press, San Francisco, 41-48, 1995.
[2] Dunin-Keplicz, B., and R. Verbrugge, Collective commitments, in: M. Tokoro (ed.), Proceedings of the Second International Conference on Multi-Agent Systems, AAAI Press, San Francisco, 56-63, 1996.
[3] Luce, R.D. and H. Raiffa, Games and Decisions, Wiley, New York, 1957.
[4] Meyer, J.-J.Ch., W. van der Hoek and B. van Linder, A logical approach to the dynamics of commitments, Artificial Intelligence 113, 1-40, 1999.
[5] Rapoport, A., M.J. Guyer, and D.G. Gordon, The 2x2 Game, University of Michigan Press, Ann Arbor, MI, 1976.
[6] Rasmusen, E., Games and Information: An Introduction to Game Theory (2nd ed.), Blackwell, Oxford, 1994.

ASYNCHRONOUS CONSISTENCY MAINTENANCE

MARIUS-CALIN SILAGHI, DJAMILA SAM-HAROUD, AND BOI FALTINGS
EPFL, CH-1015, Switzerland
{Marius.Silaghi, Djamila.Haroud, Boi.Faltings}@epfl.ch

Maintaining local consistency during backtrack search is one of the most powerful techniques for solving centralized constraint satisfaction problems (CSPs). Yet, no work has been reported on such a combination in asynchronous settings. The difficulty in this case is that, in the usual algorithms, the instantiation and consistency enforcement steps must alternate sequentially. When brought to a distributed setting, a similar approach forces the search algorithm to be synchronous in order to benefit from consistency maintenance. Asynchronism [1,2] is highly desirable since it increases parallelism and makes the solving process robust against timing variations. This paper shows how an asynchronous algorithm for maintaining consistency during distributed search can be designed. The proposed algorithm is complete and has polynomial-space complexity. Experimental evaluations show that it brings substantial gains in computational power compared with existing asynchronous algorithms.

1 Introduction

A constraint satisfaction problem (CSP) is defined as a set of variables taking their values in particular domains and subject to constraints that specify consistent value combinations. Distributed constraint satisfaction problems (DisCSPs) arise when the constraints or variables come from a set of independent but communicating agents. The most successful centralized algorithms for solving CSPs combine search with local consistency. The local consistency algorithms prune from the domains of variables the values that are locally inconsistent with the constraints, hence reducing the search effort. When a DisCSP is solved by search using a distributed network of agents, it is desirable that this search exploit asynchronism as much as possible. Asynchronism gives the agents more freedom in the way they can contribute to search. It also increases both parallelism and robustness. In particular, robustness is improved by the fact that the search can still detect unsatisfiability even in the presence of crashed agents. The existing work on asynchronous algorithms for distributed CSPs has focused on one of the following types of asynchronism: a) deciding instantiations of variables by distinct agents: the agents can propose different instantiations asynchronously; b) enforcing consistency: the distributed process of achieving "local" consistency on the global problem is asynchronous (e.g. Distributed Arc Consistency [3]). We show how these techniques can be combined without losing asynchronism.
[Figure 1. Distributed search trees: simultaneous views of distributed search as seen by A2, A3, and A4, respectively. Each arc corresponds to a proposal from A_{j-1} to A_j.]

2 Preliminaries

Asynchronous search. In this paper we target problems with finite domains. We consider that each agent A_i wants to satisfy a local CSP, CSP(A_i). The agents may keep their constraints private but publish their interest in variables. The technique we propose builds on Asynchronous Aggregation Search (AAS), a general complete protocol for solving distributed CSPs with polynomial space requirements [2]. AAS is an extension of Asynchronous Backtracking (ABT) and allows for asynchronism of type a. AAS uses a strict order on agents. We assume that A_j has the position j, j ≥ 1. If j > k, we say that A_j has a lower priority than A_k. A_j is then a successor of A_k, and A_k a predecessor of A_j.

Asynchronous distributed consistency. The centralized local-consistency algorithms prune from the domains of variables the values that are locally inconsistent with the constraints. Their distributed counterparts (e.g. [3]) work by exchanging messages on value eliminations. The restricted domains resulting from such pruning are called labels. In this paper we will only consider the local consistency algorithms which work on labels for individual variables (e.g. arc-, bound-consistency). Let P be a distributed CSP with the agents A_i, i ∈ {1..n}. We denote by C(P) the CSP defined by ∪_{i∈{1..n}} CSP(A_i). Let A be a centralized local consistency algorithm as just mentioned. We denote by DC(A) a distributed consistency algorithm that computes, by exchanging value eliminations, the same labels for P as A computes for C(P). When DC(A) is run on P, we say that P becomes DC(A)-consistent.

3 Asynchronous consistency maintenance

In distributed search, each agent has its own perception of the distributed search tree. It is determined by the proposals received from its predecessors. Figure 1 shows a simultaneous view of three agents. Only A2 knows the fourth proposal of A1. A3 has not yet received the third proposal of A2 consistent with the third proposal of A1. However, A4 knows that proposal of A2. If A4 has not received anything valid from A3, A4 will assume that A3 agrees with A2. The term level in Figure 1
Definition 2 (Nogood entailed by the view) V' -> not T_i(V') is a nogood entailed for A_i by its view V, denoted NV_i(V'), iff V' is a subset of V and T_i(V') = T_i(V).

Definition 3 (Explicit nogood) An explicit nogood has the form not V, or V -> fail, where V is an aggregate-set.

The information in the received nogoods that is necessary for completeness can be stored compactly in a polynomial-space structure called conflict list nogood.

Definition 4 (Conflict list nogood) A conflict list nogood, denoted by CL, for A_i has the form V -> not T, where V is an aggregate-set and T a set of tuples of CSP(A_i).

A consistency nogood for a variable x and a level k has the form V -> (x in l_x) or V -> not (x in s\l_x). V is an aggregate-set and may contain for x an aggregate (x, s, h), with l_x a subset of s. Any aggregate in V must have been proposed by predecessors of A_{k+1}. l_x is a label, l_x != {}.

Each consistency nogood for a variable x and a level k is tagged with the value of a counter C_x at the sender and is sent via propagate messages to all interested agents A_i, i > k. The agents A_i use, from each agent A_j, j <= k, the last consistency nogood (with highest tag) sent by A_j for level k, denoted cn_x^k(i,j), if it is valid. It has the form V_{j,x}^k -> (x in s_{j,x}^k). Let cn_x^k(i,.) be (the union over j of V_{j,x}^k) -> (x in the intersection over j <= k of s_{j,x}^k). P_i(k) := CSP(A_i) united with (the union over x of cn_x^k(i,.)), NV_i(V_i^k), and CL_i^k. C_x is incremented each time a new cn_x^k(i,i) is stored. On each modification of P_i(k), cn_x^k(i,i) is recomputed by inference (e.g. using local consistency techniques) for each variable x of the problem P_i(k). cn_x^k(i,i) is initialized as an empty constraint set. CL_i^k is the set of all nogoods known by A_i and having the form V -> not T, where V is a subset of V_i^k and T is a set of tuples in CSP(A_i). CL_i^k may contain the CL of A_i. An agent can manage to maintain one CL for each instantiation level, and the space requirements do not change. cn_x^k(i,i) is stored and sent to other agents by propagate messages if and only if some constraint of CSP(A_i) or CL was used for its logical inference from P_i(k) and its label shrinks. We only use DC techniques that terminate (e.g. [3]). By quiescence of a group of agents we mean that none of them will receive or generate any valid nogoods, new valid aggregates, or addlink messages. The proofs are given in [5].

Property 1 For all i, in finite time t_i, either a solution or failure is detected, or all the agents A_j, j <= i, reach quiescence.

4. When there are relations between sibling nodes P_i and P_j (1 <= i != j <= n), P_i, P_j, and P may need to be redefined.

4.1. Unary relation: Repetition. It means that the corresponding task needs to be performed many times. And P_i =redef !P_i.

4.2. Binary relations. There are four kinds of binary relations between sibling nodes: serialization, synchronization, sequence, and parallel.¹

4.2.1. Serialization. It means that the performing order of two tasks is not important, but the two tasks cannot be carried on concurrently. And P_i =redef s_ij.P_i.v̄_ij, P_j =redef s_ij.P_j.v̄_ij, and P =redef (ν s_ij, v_ij)(S_ij | P_i | P_j), where S_ij =def s̄_ij.v_ij.S_ij is like a PV semaphore controller in operating systems.

¹ While defining the plan process, we require that serialization relations must be considered first, and then synchronization and sequence; otherwise, deadlocks may be brought into the plan process. For example, consider three sub-processes, P, Q, R, among which P and Q must be performed serially and R must be carried on before both P and Q. If we do not follow the above convention, we may get a plan process in which, if Q communicates with S_PQ before P has a chance to do so, a deadlock will occur.
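To see the semaphore encoding at work, consider two tasks P and Q under a serialization relation. The following reduction sketch is our own illustration, assuming the reconstructed redefinitions above:

$$
(\nu\, s, v)\,\big(\,\overline{s}.v.S \;\mid\; s.P.\overline{v} \;\mid\; s.Q.\overline{v}\,\big)
$$

The semaphore can hand $\overline{s}$ to only one competitor at a time, say P; Q then stays blocked on $s$ until P finishes and signals $\overline{v}$, after which the semaphore recurses and admits Q. Either order is possible, but P and Q never run concurrently, which is exactly the serialization relation.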
4.2.2. Synchronization. Two tasks with a synchronization relation must be performed at the same time. And² P_i =redef s̄_ij.P_i, P_j =redef s_ij.P_j, and P =redef (ν s_ij)(P_i | P_j).

² Synchronization relations are symmetric, so we need only consider the cases where i < j. Thus deadlocks can be avoided among synchronized nodes.

4.2.3. Sequence. The performing of two tasks should be controlled under a restricted order, i.e., one must precede the other. And P_i =redef P_i.s̄_ij, P_j =redef s_ij.P_j, and P =redef (ν s_ij)(P_i | P_j).

4.2.4. Parallel. The two tasks can be carried on concurrently. In that case, the processes need not be redefined.

5. There are no other kinds of nodes or relations within the plan tree except those defined above.

For example, in an electronic commerce community, a price negotiation procedure can be planned as the repetition of price bargaining between two parties (Figure 1).

Figure 1. The plan tree of a price negotiation procedure. (Dashed arrowhead arcs represent the unary repetition relation; the other arc styles mark sequence, serialization, and synchronization relations. Node labels include "Wait for an asked price", "Strike a price", "Wait for a stroked price", "Accept the price?", and "Agree/Disagree".)

In the plan, the bargaining process, which is divided into two sub-processes of price asking and striking, will repeat any number of times until both sides make a deal. For the price-asking process, it is divided further into two sub-processes, one process asking a price and then the other waiting for a stroked price. For the price-striking process, it is also divided into two sub-processes, one waiting for an asked price and then the other striking a price back. Once someone (for instance, the bargaining initiator) thinks the stroked price is acceptable, it can stop bargaining and make a deal.

The plan shown in Figure 1 can be expressed in the pi-calculus as follows.

PriceNegotiationPlan = P_0 = (ν s_0)(!P_1.s̄_0 | s_0.P_2)
P_11 = (ν s_1)(P_111.s̄_1 | s_1.P_112)
P_12 = (ν s_2)(P_121.s̄_2 | s_2.P_122)

When representing a cooperation plan as pi-calculus processes, we add some new communicating ports to control the execution of sub-processes so that we can represent relationships within a composition process. Generally, when there are relationships such as serialization, synchronization, and sequence in a system, deadlocks may occur. Fortunately, by using the procedure described above, we can get a deadlock-free plan process if there is no deadlock among the plan tree.

Proposition 1. If there is no deadlock among the plan tree, the corresponding composition process of the plan will be deadlock free.

The proof is quite simple. As discussed above, we can first eliminate the possibility of a deadlock lying in serialization and synchronization relations. On the other hand, any two synchronized processes cannot have sequence relations with another process simultaneously, and vice versa. That is to say, sequence relations and synchronization relations cannot bring a cyclic waiting chain into the processes if no cyclic waiting chain occurs in the plan tree. Thus, we can say that the translation described above is deadlock free.

3.2 Agent

In a cooperative environment, an agent must undertake tasks to cooperate with others by complying with a certain cooperation plan. We can define an agent as an entity that includes actions, tasks it undertakes, and behavior specifications consistent with a specific cooperation plan. To represent the behavior specifications of an agent, we define a function of expectation from actions to actions, indicating what kind of response the agent expects after it takes an action. An agent is a 4-ary tuple.
A = (A, T, E, B), where A is an action set, T is a collection of tasks, E is A's expectations, defined as a function E: A -> A, and B is A's beliefs.

The components of agents can be defined formally on the pi-calculus, in which the action set A is a set of pi-calculus actions, the task set T is a collection of pi-calculus processes, and for any process P in T with P = γ.P', we have γ in A.

Suppose that α, β in A; then E(α) = β means that if the agent A takes action α, it will expect action β to happen. In general, we can say that only when an agent is waiting for something does it expect that thing to appear, so we will only define an agent's expectations on its input actions. Then if E(α) = β, α can be either an input or an output, but β must be an input action.

For any process P in T, suppose that P has the following form:

P = ⋯.α.β.⋯

where α is an input/output action, and β is an input action. Then E(α) = β. In addition, suppose that the agent is assigned two tasks within a cooperation plan, P_1 and P_2, with a sequence relation between them, and

P_1 = (⋯.α.⋯).s̄_12,  P_2 = s_12.(⋯.β.⋯)

where α is an input/output action, and β is an input action. Then E(α) = β.

Since each agent has its own actions, tasks, expectations, and beliefs, A, T, E, and B can be viewed as functions with the domain of agents. In the rest of the paper, we use A(A), T(A), E(A), and B(A) to denote the action set, the task set, the expectations, and the beliefs of A, respectively.

In this paper, we will only consider such kinds of beliefs as whether an agent trusts others, whether the agent is trusted by others, and so on. For convenience, we write x in B as A ▷ x. Suppose there is a set of agents, Ag, and A, B in Ag; then A ▷ B means A trusts B, whereas A ▷ (B ▷ A) means A believes that B trusts A as well.

3.3 Bind Agents into the Cooperation Plan

The cooperation plan is only a cooperation blueprint or specification of tasks, which does not provide concrete actions or functions to perform those tasks. After cooperation is planned, tasks should be assigned to cooperative agents. For example, if we allocate the tasks shown in Figure 1 to a seller agent, S, and a buyer agent, B (for instance, P_0, P_1, P_11, P_111, P_112, P_2, and P_21 to S, and P_12, P_121, P_122, and P_22 to B), agents S and B can be defined as follows.

S:  A(S) = {āp, ω(x), ō_1p, o_2(y)}
    T(S) = {P_0, P_1, P_11, P_111, P_112, P_2, P_21}
    E(S) = {(āp, ω(x)), (ō_1p, o_2(y))}
    B(S) = {}
    P_111 = CalculatePrice_S(p).āp    P_112 = ω(x)    P_21 = ō_1p.o_2(y)

B:  A(B) = {ω̄p, a(x), o_1(y), ō_2t}
    T(B) = {P_12, P_121, P_122, P_22}
    E(B) = {(ω̄p, a(x)), (ω̄p, o_1(y))}
    B(B) = {}
    P_121 = a(x).CalculatePrice_B(p)    P_122 = ω̄p    P_22 = o_1(y).ō_2t

Figure 2. Formal definitions of agents S and B.

Here ā and a represent the actions "asking a price" and "waiting for an asked price" respectively, ω̄ and ω represent the actions "striking a price" and "waiting for a stroked price", ō_1 asks "Accept the price or not?", and o_2 waits for the answer. The functions CalculatePrice_S(p) and CalculatePrice_B(p) are used to calculate a new asking price and a new striking price, respectively.

As for agent S's expectations, they mean that the seller hopes to receive a response after each round of bargaining and that the buyer will acknowledge any of its questions. As for agent B's expectations, the buyer expects that the bargaining is initiated by someone else, and after it strikes a price it may hope that the seller asks a new price or makes a deal with it.
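As an illustration only (the names are ours, not the paper's), the 4-tuple (A, T, E, B) can be rendered as a small data structure, with the expectation function E: A -> A as a map from actions to the input actions they anticipate.

```java
import java.util.*;

// Illustrative rendering of the agent 4-tuple <A, T, E, B>.
// Actions are tagged as input or output, mirroring the pi-calculus usage.
record Action(String name, boolean isInput) {}

class Agent {
    final Set<Action> actions = new HashSet<>();        // A: pi-calculus actions
    final Set<String> tasks = new HashSet<>();          // T: names of assigned task processes
    final Map<Action, Action> expectations = new HashMap<>(); // E: A -> A
    final Set<Object> beliefs = new HashSet<>();        // B: e.g. "A trusts B" facts

    // E(alpha) = beta: alpha may be input or output, beta must be an input action.
    void expect(Action alpha, Action beta) {
        if (!beta.isInput())
            throw new IllegalArgumentException("expected action must be an input action");
        expectations.put(alpha, beta);
    }
}
```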
To assemble cooperative agents into the cooperation plan, we should connect the abstract plan specification with the concrete implementations of the agents' functions. In the pi-calculus, we can use the following method to achieve that. First, we view the tasks occurring in the plan process as pointers and then make those pointers point to the functions provided by agents. For example, suppose that P_i is a task in the plan process and has been assigned to agent A, who will undertake that task by taking action T_a; then we can define the following processes:

P_i = z̄_i,  A = z_i.T_a

Then we compose the processes defined above into a composition process, that is,

P_i | A = z̄_i | z_i.T_a

Thus we bind the agent and the plan together. On the other hand, an agent may undertake several tasks, for instance, T_1, T_2, ..., T_k in T(A); then T(A) can be redefined as a composition of processes:

T(A) = z_1.T_1 | z_2.T_2 | ⋯ | z_k.T_k

Thus, a cooperation system with a cooperation plan, Plan, and a collection of cooperative agents, A_1, A_2, ..., A_n, can be defined as follows:

Sys = Plan | T(A_1) | T(A_2) | ⋯ | T(A_n)

4 Reasoning about Mutual Belief

In this section, we will define some inference rules for reasoning about mutual belief among agents. While defining those rules, we build the position-exchange principle into the definitions. We then describe under what conditions agents will believe each other mutually.

4.1 Rules on Beliefs

To define rules on beliefs, we should first know what actions are observable to an agent. To represent that an agent observes an action γ, we use the form A —γ→ A', defined by the following rule:

P in T(A),  P —α_1⋯α_n→ P',  P' —γ→ P'',  γ in A(A)
⟹  A —γ→ A'

Intuitively, if an action is observable in a process, it is also observable to the agent. In general, an agent knows nothing about others. To build beliefs about others, it can only rely on the messages it has sent and received. However, not all messages it receives are something it is waiting for or expecting. So, in our definitions of rules on beliefs, we include the expectations of agents as premises, and then agents will only believe things that they are expecting. Based on the position-exchange principle, an agent can derive beliefs about itself from messages it receives, and then derive beliefs about others from messages it sends.

1. Belief about honesty of the other. If the agent receives a message that it is expecting, it will believe that the sender agent is trustable.

A —β→ A',  ∃α.(α, β) in E(A),  β̄ in A(B)
⟹  A ▷ B    (BR1)

where α can be an input/output action, whereas β must be an input action.

2. Belief about the other's belief. Correspondingly, under the position-exchange principle, A will believe that agent B also trusts it if A responds to a message from B as B requests.

A —β̄→ A',  ∃α.(α, β) in E(B),  β in A(B)
⟹  A ▷ (B ▷ A)    (BR2)

While using the position-exchange principle in the above rule, we do not substitute all occurrences of A. Instead, we just replace the action β with its complement β̄, since A may not know how the receiver, B, is evolving.

4.2 Mutual Belief among Agents

Informally, we say two agents have built mutual belief if both of them trust each other and each of them believes that its counterpart also trusts it. Then mutual belief can be defined formally in several groups of beliefs.

1. Both of the agents believe in their counterpart: A ▷ B, and B ▷ A.

2. Each of the two agents believes its counterpart trusts it as well: A ▷ (B ▷ A), and B ▷ (A ▷ B).
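A hedged sketch of how rules BR1 and BR2 above could be mechanized, building on the illustrative Agent sketch earlier: when an observed action matches a registered expectation, the corresponding belief is recorded. All types and names here are ours.

```java
// Illustrative belief derivation following rules BR1 and BR2.
// Beliefs are recorded as strings such as "A |> B" and "A |> (B |> A)".
class BeliefEngine {
    // BR1: self observes an input action beta it was expecting, and the
    // complement of beta belongs to the sender's action set, so self trusts the sender.
    static void applyBR1(Agent self, String selfName, Agent sender, String senderName,
                         Action beta, Action betaComplement) {
        boolean expected = self.expectations.containsValue(beta);
        if (expected && sender.actions.contains(betaComplement))
            self.beliefs.add(selfName + " |> " + senderName);
    }

    // BR2: self sends the complement of an input action beta that the other
    // agent expects (i.e. self answers as requested), so self believes the
    // other agent trusts it.
    static void applyBR2(Agent self, String selfName, Agent other, String otherName,
                         Action beta) {
        boolean otherExpects = other.expectations.containsValue(beta);
        if (otherExpects && other.actions.contains(beta))
            self.beliefs.add(selfName + " |> (" + otherName + " |> " + selfName + ")");
    }
}
```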
For a cooperation plan whose tasks are allocated to cooperative agents, if those agents cannot build mutual belief during cooperation, we say that the cooperation will not proceed smoothly and that it is irrational. In other words, building mutual belief among agents is the least requirement for cooperation.

Definition: At-Least-Rationality of cooperation. If agents can build mutual belief during cooperation, we say that the cooperation is at least rational.

4.3 Reasoning about Mutual Belief among Agents: an Example

Consider the example shown in Figure 1 again; the complete plan and parts of agents S and B are redefined as follows.

Plan = (ν s_0)( !(((ν s_1)(z̄_1.s̄_1 | s_1.z̄_2)) | ((ν s_2)(z̄_3.s̄_2 | s_2.z̄_4))).s̄_0 | s_0.((ν s_3)(z̄_5.s̄_3 | s_3.z̄_6)) )
T(S) = z_1.P_111 | z_2.P_112 | z_5.P_21, and T(B) = z_3.P_121 | z_4.P_122 | z_6.P_22

Then the procedure for reasoning about mutual belief between S and B can proceed at the same time as the computation between S and B goes on.

1. S calculates an asking price and sends it to B, and then waits for B's response. On the other side, B is waiting for S to ask a new price. If B receives the message from S, i.e., B observes action a(x), then by rule BR1:

B —a(x)→ B', (ω̄p, a(x)) in E(B), āp in A(S),  then  B ▷ S

2. Once B receives an asking price, it will calculate a new price for striking and then send it back to S. In that case, by rule BR2:

B —ω̄p→ B', (āp, ω(x)) in E(S), ω(x) in A(S),  then  B ▷ (S ▷ B)

On the other side, for S, by rule BR1:

S —ω(x)→ S', (āp, ω(x)) in E(S), ω̄p in A(B),  then  S ▷ B

3. By now, B believes that S is trustable and that it itself is also trusted by S. However, S is not certain whether it is trusted by B or not, though it has trusted B. If the cooperation stopped now, the cooperation would be incomplete, since the two agents have not built mutual belief. Nevertheless, according to the cooperation plan, agent S has two choices for its succeeding actions.

3.1. Continue by suggesting another asking price to B. Then by rule BR2:

S —āp→ S', (ω̄p, a(x)) in E(B), a(x) in A(B),  then  S ▷ (B ▷ S)

3.2. Or stop bargaining and make a deal with B. Similarly to 3.1:

S —ō_1p→ S', (ω̄p, o_1(y)) in E(B), o_1(y) in A(B),  then  S ▷ (B ▷ S)

Now, although the computation between S and B has not finished, mutual belief has been built between them. If we reason further, we can only strengthen the mutual belief. Thus we can say the cooperation between S and B is rational.

5 Conclusions

In [1], three criteria for cooperation among multiple agents are given. Briefly, to cooperate, all agents must believe each other mutually. However, cooperation schemes in the current literature take mutual belief for granted [2][3][4][6][8]; they always assume that cooperating agents believe each other mutually, which leaves many chances for malicious agents to harm cooperation. Only when we know that every agent participating in the cooperation believes the others mutually can we say that the cooperation will go through smoothly.

In this paper, to reason about mutual belief among agents, we adopt a technique based on the position-exchange principle. Using the inference rules based on this principle, we can reason about an agent's beliefs about itself and about others. In [7], a different inference rule was used to reason about the knowledge of others. That inference rule can be expressed as follows:

B_A B_B (α → β) → (B_A B_B α → B_A B_B β)

Intuitively, this rule says that if A believes that B believes some implication holds, then once A believes that B believes the premise of the implication is satisfied, A will also believe that B believes the conclusion of the implication. That inference rule has several main differences from ours. First, it requires that A must already have beliefs about B. Second, the rule can only be applied in circumstances where all agents have completely common knowledge.
However, in a distributed environment, agents are incapable of owning knowledge or beliefs about others in advance, and it is impossible for agents to possess all the knowledge dispersed within the environment, which makes the above rule unsuitable for real distributed systems.

Before defining the position-exchange principle in inference rules, we first take a process algebra approach, the pi-calculus, to formalize cooperation plans, and then define an agent as an entity with actions, tasks, expectations, and beliefs. While defining the inference rules for reasoning about mutual belief, we take an agent's expectations into consideration and bind the expectations together with its beliefs, so that the agent will only believe what it is expecting. Thus, once mutual belief is built among agents, we can say that the cooperation will go on rationally.

References

1. M. E. Bratman. Shared cooperative activity. The Philosophical Review, 101(2):327-341, 1992.
2. Barbara Grosz and Sarit Kraus. Collaborative plans for complex group actions. Artificial Intelligence, 86(2):269-357, 1996.
3. V. R. Lesser. A retrospective view of FA/C distributed problem solving. IEEE Transactions on Systems, Man, and Cybernetics, 21(6), December 1991.
4. H. J. Levesque, P. R. Cohen, and J. H. T. Nunes. On acting together. In Proceedings of the Eighth National Conference on Artificial Intelligence (AAAI-90), pp. 94-99, Boston, MA, 1990.
5. R. Milner, J. Parrow, and D. Walker. A calculus of mobile processes, Parts I and II. Information and Computation, 100:1-77, 1992.
6. Sarit Kraus. Negotiation and cooperation in multi-agent environments. Artificial Intelligence, 94(1-2):79-98, 1997.
7. Shi Zhongzhi, Tian Qijia, and Li Yunfeng. RAO logic for multiagent framework. Journal of Computer Science and Technology, 14(4), 1999.
8. Michael Wooldridge and Nicholas R. Jennings. Towards a theory of cooperative problem solving. In Proceedings of Modelling Autonomous Agents in a Multi-Agent World (MAAMAW-94), Odense, Denmark, pp. 15-26, 1994.

PORTABLE RESOURCE CONTROL FOR MOBILE MULTI-AGENT SYSTEMS IN JAVA

WALTER BINDER
CoCo Software Engineering, Margaretenstr. 22/9, A-1040 Vienna, Austria
E-mail: w.binder@coco.co.at

JARLE G. HULAAS, ALEX VILLAZON, AND RORY G. VIDAL
University of Geneva, rue General Dufour 24, CH-1211 Geneva, Switzerland
E-mail: {Jarle.Hulaas, Alex.Villazon}@cui.unige.ch, vidalr5@cuimail.unige.ch

Prevention of denial-of-service attacks is indispensable for distributed multi-agent systems to execute securely. To implement the required defense mechanisms, it is necessary to have support for resource control, i.e., accounting and limiting the consumption of resources like CPU, memory, and threads. Java is the predominant implementation language for mobile agent systems, even though resource control is a missing feature on standard Java platforms. Moreover, prevailing approaches to resource control in Java require substantial support from native code libraries, which is a serious disadvantage with respect to portability, since it prevents the deployment of applications on large-scale heterogeneous networks. This article describes the new resource-aware version of the J-SEAL2 mobile agent kernel. The resource control model is based on a set of requirements, where portability is very significant, as well as a natural integration with the existing programming model.
1 Introduction

Java was designed as a general-purpose programming language, with special emphasis on portability in order to enhance the support of distributed applications. Therefore, it is natural that access to low-level, highly machine-dependent mechanisms was not incorporated from the beginning. New classes of applications are however being conceived, which rely on the facilities offered by Java, and which at the same time push and uncover the limits of the language. These novel applications, based on the possibilities introduced by code mobility, open up traditional environments, move arbitrarily from machine to machine, execute concurrently, and compete for resources on devices where everything from modest to plentiful configurations can be found. We are therefore witnessing increased requirements regarding fairness and security, and it becomes indispensable to acquire a better understanding and grasp of low-level issues such as resource management.

Operating system kernels provide mechanisms to enforce resource limits for processes. The scheduler assigns processes to CPUs reflecting process priorities. Furthermore, only the kernel has access to all memory resources. Processes have to allocate memory regions from the kernel, which verifies that memory limits for the processes are not exceeded. Likewise, a mobile agent kernel must prevent denial-of-service attacks, such as agents allocating all available memory. For this purpose, accounting of resources (e.g., memory, CPU, network, threads, etc.) is crucial.

The great value of resource control is that it is not restricted to serving as a basis for implementing security mechanisms. Application service providers may need to guarantee a certain quality of service, or to create the support for usage-based billing. The basic mechanisms described here will be necessary to schedule the quality of service or to support the higher-level accounting system, which will bill the clients for consumed computing resources.

This article is organized as follows. The next section presents the design goals and the resulting resource control model. Section 3 compares our approach with related work, whereas section 4 concludes the article.

2 Objectives and Resulting Model

The ultimate objective of this work is to enable the creation of execution platforms where anonymous agents may securely coexist without harming their environment. The desire to deploy secure systems translates into the following requirements:

• Accounting of low-level resources, like CPU and memory, as well as of higher-level resources, such as threads.
• Prevention of denial-of-service attacks which are based on CPU, memory, or communication misuse.
• No dependence on particular hardware or operating system features, in order to enable a portable implementation. Portability and transparency are crucial in heterogeneous environments.
• Minimal overhead for trusted agents, which have no resource limits.
• Support for resource sharing between closely collaborating agents, in order to minimize resource fragmentation.

Since some aspects of resource control are to be manageable by the application developer, it is important that the general model integrates well with the existing programming model of the J-SEAL2 mobile agent system [3]. The J-SEAL2 kernel manages a tree hierarchy of nested protection domains.

Figure 1. Illustration of the general resource control model (fully trusted domains need no accounting).
This model of hierarchically organized domains stems from the JavaSeal mobile agent kernel [4]. Protection domains encapsulate agents as well as service components. The J-SEAL2 kernel ensures that protection domains are completely isolated from each other. Furthermore, a parent domain may terminate its children at any time, forcing the children to release all allocated resources immediately.

A general model for hierarchical resource control fits very well with the hierarchical domain model of J-SEAL2. At system startup the root domain owns by default all resources. Moreover, the root domain, along with the other domains loaded at platform startup, is considered completely safe, and, consequently, no resource accounting will be enforced on it. When a nested protection domain is created, the creator donates some part of its own resources to the new domain. Figure 1 illustrates the way resources are either shared or distributed inside a hierarchy. In the formal model of J-SEAL2, the Seal Calculus [6], the parent domain supervises all its subdomains, and inter-domain communication management was the main concern so far. Likewise, in the resource control model proposed here, the parent domain is responsible for the resource allocation to its subdomains.

Within each untrusted protection domain, the J-SEAL2 kernel accounts for the following resources (for details, see [2]):

• CPU_RELATIVE defines the relative share of CPU, and is expressed as a fraction of the parent domain's own relative share. In our current implementation, this resource is controlled by periodic sampling of the amount of executed bytecode instructions.
• MEM_ACTIVE is the highest amount of volatile memory that a protection domain is allowed to use at any given moment.
• THREADS_ACTIVE specifies the maximal number of active threads per protection domain at any moment.
• THREADS_TOTAL limits the number of threads that may be created throughout the lifetime of a protection domain.
• DOMAINS_ACTIVE specifies the maximal number of active subdomains a protection domain is allowed to have at any given moment.
• DOMAINS_TOTAL bounds the number of subdomains that a protection domain may generate throughout its lifetime.

Note that the kernel of J-SEAL2 is not responsible for network control, because network access is provided by different services. These network services, or some mediation layers in the hierarchy, are responsible for network accounting according to application-specific security policies. Let us stress that the network is not a special case, since J-SEAL2 may limit communication with any services, like networking, file I/O, etc.
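J-SEAL2's actual kernel API is not reproduced here; the following hypothetical sketch merely illustrates the model just described, in which a parent can only donate resources it still owns when spawning a nested domain.

```java
// Hypothetical sketch of the hierarchical resource-donation model; the real
// J-SEAL2 kernel API differs. A parent can only donate what it still owns.
class ResourceBudget {
    long memActive, threadsTotal, domainsTotal; // subset of the accounted resources
    double cpuRelative;                         // fraction of the parent's own share

    ResourceBudget(long mem, long threads, long domains, double cpu) {
        memActive = mem; threadsTotal = threads; domainsTotal = domains; cpuRelative = cpu;
    }

    // Split off a child budget, checking that the donation does not exceed
    // what this domain currently owns (kernel-enforced in the real system).
    ResourceBudget donate(long mem, long threads, long domains, double cpuFraction) {
        if (mem > memActive || threads > threadsTotal || domains > domainsTotal
                || cpuFraction > cpuRelative)
            throw new IllegalStateException("donation exceeds owned resources");
        memActive -= mem; threadsTotal -= threads; domainsTotal -= domains;
        cpuRelative -= cpuFraction;
        return new ResourceBudget(mem, threads, domains, cpuFraction);
    }
}
```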
3 Related Work

Our current implementation, which is based on Java bytecode transformations (for details see [2]), has been inspired by JRes [5], a resource control library for Java that takes CPU, memory, and network resource consumption into account. The resource management model of JRes works at the level of individual Java threads; there is no notion of an application as a group of threads, and the implementation of resource control policies is therefore cumbersome. JRes is a pure resource accounting system and does not enforce any separation of domains. For its implementation, JRes relies on native code libraries for network and CPU accounting. Therefore, JRes does not meet our requirement of full portability.

KaffeOS [1] is a Java runtime system allowing applications to be isolated from each other, as if they were run on their own Java Virtual Machine. Thanks to KaffeOS it is possible to achieve resource control with a higher precision than what is possible with bytecode rewriting techniques, where e.g. memory accounting is limited to controlling the respective amounts consumed in the common heap, and where CPU control does not account for time spent by the common garbage collector working for the respective applications. The KaffeOS approach should by design result in better performance, but is inherently non-portable.

4 Conclusion

Whereas other approaches to resource control in Java amount to a long-term, deep re-design of the Java runtime system, our proposal might be grossly characterized as a language-based patch. J-SEAL2 isolates agents from each other, and particularly prevents denial-of-service attacks originating from inside the execution platform. Moreover, the complete compatibility and portability of our approach makes it immediately usable for the benefit of distributed multi-agent systems, especially when mobile code is involved.

References

1. G. Back, W. Hsieh, and J. Lepreau. Processes in KaffeOS: Isolation, resource management, and sharing in Java. In Proceedings of the Fourth Symposium on Operating Systems Design and Implementation (OSDI 2000), San Diego, CA, USA, October 2000.
2. W. Binder, J. Hulaas, and A. Villazon. Resource control in J-SEAL2. Technical Report Cahier du CUI No. 124, University of Geneva, October 2000. ftp://cui.unige.ch/pub/tios/papers/TR-124-2000.pdf.
3. W. Binder. Design and implementation of the J-SEAL2 mobile agent kernel. In The 2001 Symposium on Applications and the Internet (SAINT-2001), San Diego, CA, USA, January 2001.
4. C. Bryce and J. Vitek. The JavaSeal mobile agent kernel. In First International Symposium on Agent Systems and Applications (ASA'99)/Third International Symposium on Mobile Agents (MA'99), Palm Springs, CA, USA, October 1999.
5. G. Czajkowski and T. von Eicken. JRes: A resource accounting interface for Java. In Proceedings of the 13th Conference on Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA-98), volume 33, 10 of ACM SIGPLAN Notices, New York, October 1998.
6. J. Vitek and G. Castagna. Seal: A framework for secure mobile computations. In Internet Programming Languages, 1999.

AN AGENT-BASED MOBILE E-COMMERCE SERVICE PLATFORM FOR FORESTRY AND AGRICULTURE¹

MATTHIAS KLUSCH AND ANDREAS GERBER
German Research Center for Artificial Intelligence, Stuhlsatzenhausweg 3, 66123 Saarbrücken, Germany
E-mail: {klusch, agerber}@dfki.de

The range of applications developed in the domain of agriculture and forestry covers restricted types of market places as well as information systems. However, the innovative integration of Internet, agent technology, and mobile telecommunication for integrated commerce, supporting business processes in these domains, is still at its beginning. We present the first approach of a holonic agent-based information and trading network (CASA ITN) for dynamic production and sales, in which integrated services for logistics and e-commerce are provided. This paper introduces the agent-based architecture and describes the added-value services of the CASA ITN for mobile timber sales.

1 Introduction

Electronic commerce (e-commerce) is a general name for business transactions that are entered into through electronic rather than paper-based means. E-commerce has the capacity to change the way the entire world does business, because it enables people to buy and sell goods and services from anywhere in the world.
Especially in the agriculture and forestry domains there is a great demand to announce offers and information about goods to a large audience and to negotiate quickly before perishable goods lose quality. In the project CASA² at DFKI we are developing agent-mediated services for the CASA ITN to support the main operative business processes users are performing in each of the following application scenarios: (1) customer-oriented, dynamic timber production, (2) mobile trading of timber using different types of auctions and fixed or negotiable prices, and (3) electronic trading of cereals.

The approach taken for providing information and trading services in the CASA ITN focuses on the effective integration of the production, logistics and trading processes of these scenarios. It is motivated by the paradigm of integrated commerce (i-commerce), which can be seen as an operational extension of traditional e-commerce. The basic ideas of i-commerce are (a) to get customers more involved in the activities related to their orders and tasks, and (b) to get related processes in the supply chain more integrated in practice. The agent-based CASA services for i-commerce can easily be accessed from anywhere by using a PC or mobile WAP 1.1-enabled devices such as smart phones or PDAs. Efficient coordination of services is performed by appropriate types of collaborating software agents. The WAP application services are currently implemented using the T-D1 WAP gateway of Deutsche Telekom.

¹ This research is sponsored by the Ministry of Economics of the Saarland, Germany, under grant 032000.
² Abbreviation of: Cooperative Agents and Integrated Services for Logistic and Electronic Trading in Forestry and Agriculture.

2 CASA Agents and Services

2.1 Holonic Agent System of the CASA ITN

We differentiate between the following groups of participants in the CASA ITN: producers offering goods; buyers purchasing goods; retailers acting on their own or as agents of companies; and logistics companies responsible for transportation tasks, storage and resource management. Each member of these groups is represented by a special so-called holonic agent (cf. Figure 1). The concept of holonic agents [1,5] is used for reasons of effectively accomplishing complex, mostly hierarchically decomposed tasks and resource allocations in the selected application scenarios. A holonic agent (or holon) co-ordinates and controls the activities and information flow of its subagents. In a holonic multi-agent system, autonomous agents may join others to form, reconfigure, or leave a holon.

A human user in the ITN is represented by a special holonic agent called a personal assistant. It pro-actively acts on behalf of its user even if (s)he is off-line; the personal assistant is the coordinating head of a set of other specialized agents for individual negotiation, participation in auctions, finding relevant partners and information, and elaboration of optimal trading strategies over time. Each corporation is represented by a special holonic agent system according to its task-oriented subdivision into departments for information management, logistics, and production planning. In this context we presume that (1) information management services provide information either on certain products or on the current market situation and potential competitors, (2) logistics services support the co-ordination of machines for production and transportation, human resources, and storage capacities, and (3) production planning services support short-, middle-, and long-term product planning cycles. A corporation holon is constituted by other holonic agents, each of them representing a special department. Since in the CASA ITN the roles of buyer/retailer and seller/producer may be used interchangeably, both are modeled by similar holonic agent structures. In addition, logistics companies are usually contracted by other corporations for the purpose of time- and cost-saving delivery of goods on demand.

Figure 1. Overview of the holonic CASA agent system.

Finally, we developed agent-based services for a distributed virtual market place to enable different kinds of trading between the participants, such as multiple online auctions and sales at fixed or negotiable prices in simultaneous bilateral negotiations.
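The holon idea can be sketched as a head agent coordinating subagents that may join or leave; the classes below are illustrative only, not CASA's implementation.

```java
import java.util.*;

// Illustrative holonic agent: the head co-ordinates and controls the
// activities and information flow of its subagents; agents may join
// a holon, be reconfigured, or leave it again.
class HolonicAgent {
    final String role;                       // e.g. "personal assistant", "logistics dept."
    private final List<HolonicAgent> subagents = new ArrayList<>();

    HolonicAgent(String role) { this.role = role; }

    void join(HolonicAgent sub)  { subagents.add(sub); }
    void leave(HolonicAgent sub) { subagents.remove(sub); }

    // The head decomposes a task and delegates the parts to its subagents.
    void perform(String task) {
        System.out.println(role + " coordinates: " + task);
        for (HolonicAgent sub : subagents) sub.perform(task + " (delegated part)");
    }
}
```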
Main benefits of the agent-based service support are the concurrent monitoring and computation of optimal transport costs per individual bid or buying offer, and the full mobile service support of the user via WAP-enabled mobile devices.

2.4.1 Services and Interactions

In general, the mobile timber sales services of the CASA ITN enable registered users to initiate or participate in one or multiple timber auctions. But the members can also sell or buy timber at fixed or negotiable prices. In the first case, the CASA ITN offers types of auctions like Dutch, English, Vickrey, and First-Price-Sealed-Bid. The auction server has been built upon a general holonic coordination server [3]. Any user may invoke integrated services for decision support during the participation in auctions. For example, a personal CASA agent may concurrently determine the optimal transportation costs and delivery dates of some auction goods for each individual bid of its user. As a result, the agent may notify its user in real time if the estimated optimal transport costs exceed the limit allowed by the given buying preferences, or if some deadlines are at risk of being exceeded. In addition, each of the information and trading services is available on mobile WAP 1.1-enabled devices and on PCs connected to the Internet. Synchronization is co-ordinated by appropriate CASA agents (cf. Figure 1) [6]. These are holonic agents for users as buyers or sellers/auctioneers, and for shipping companies. Buyers without logistics capabilities have to contract carriers appropriately. Participation in any trading process can be delegated to a personal user agent, which then is in charge of negotiating or bidding at an auction and notifying its user, e.g., via SMS or email.

3 Related Work

There are just a few market places known which resemble the CASA system. Agriflow [8], for example, is putting Europe's arable industry on the fast track to e-business with a series of dynamic products, including Cigrex, an online co-operative independent grain exchange, and Agrivox, an information service. The Virtual Agricultural Market (VAM) [9] system has been built for B2B transactions in agricultural markets. It offers mechanisms for trading and activities for the distribution of products; VAM provides a set of generic functionality in a stakeholder-independent and interoperable way. However, these systems differ significantly from CASA in their architecture and in the added value implied by the dynamic integration of logistics and information in mobile timber sales and production.

References

1. Bürckert, H.-J., Fischer, K., and Vierke, G., Transportation Scheduling with Holonic MAS: The TeleTruck Approach. Proc. 3rd Intl. Conference on Practical Applications of Intelligent Agents and Multiagents (PAAM'98), (1998).
2. Bürckert, H.-J., Fischer, K., and Vierke, G., Holonic Transport Scheduling with TeleTruck. Applied Artificial Intelligence, 14, (2000), pp. 697-725.
3. Gerber, A. and Ruß, C., A Holonic Multi-agent Co-ordination Server. In Proc. 14th Intl. FLAIRS Conference, 2001, pp. 200-204, ISBN 1-57735-133-9.
4. Gerber, A., Klusch, M., Ruß, C., and Zinnikus, I., Holonic Agents for the Coordination of Supply Webs. Proc. Intl. Conf. on Autonomous Agents, (2001).
5. Gerber, C., Siekmann, J., and Vierke, G., Flexible Autonomy in Holonic Agent Systems. Proc. AAAI Spring Symposium on Agents with Adjustable Autonomy, (1999).
6. Gerber, C., Siekmann, J., and Vierke, G., Holonic Multi-Agent Systems. DFKI Research Report RR-99-03, (1999), ISSN 0946-008X.
7. Klusch, M., Information Agent Technology for the Internet: A Survey. Data and Knowledge Engineering, 36(1-2), (2001), pp. 337-372.
8. Agriflow: www.agriflow.com
9. Costopoulou, C.I. and Lambrou, M.A., An architecture of Virtual Agricultural Market systems: Information services and use, Vol. 20(1), (2000), ISSN 0167-5265, pp. 39-48.

An Itinerary Scripting Language for Mobile Agents in Enterprise Applications^a

Seng Wai Loke
School of Computer Science and Information Technology, RMIT University, GPO Box 2476V, Melbourne VIC 3001, Australia
swloke@cs.rmit.edu.au

Arkady Zaslavsky, Brian Yap, Joseph Fonseka
School of Computer Science and Software Engineering, Monash University, Caulfield VIC 3145, Australia
Arkady.Zaslavsky@monash.edu.au, brianll@hotmail.com, ruki@mbox.com.au

We view an agent's itinerary, describing which tasks are to be performed when and at which location (e.g. which host), as a script gluing the tasks of the agent together in a (possibly) complex way. We present the ITAG (ITinerary AGent) scripting language, which is based on the notion of the itinerary. We also discuss the enterprise-wide infrastructure needed for executing ITAG scripts, and illustrate our approach with examples of scripts for voting and distributed authoring.

1 Introduction

This paper introduces a scripting language approach to developing mobile agent applications. In the scripting approach [2], a scripting language is used to glue components together to assemble an application, rather than programming an application from scratch. Our scripting language is based on the concept of the agent itinerary. An agent's itinerary describes which actions (or tasks) are to be performed when and at which location (e.g. which host), i.e. an agent's itinerary glues the actions of the agent together in a (possibly) complex way, while each action at a location might involve complex algorithms and data structures. A scripting language should closely match the nature of the problem in order to minimize the linguistic distance between the specification of the problem and the implementation of the solution, thereby resulting in cost reductions and greater programmer productivity [3]. Our itinerary scripting language provides a higher level of abstraction and economy of expression for mobility behaviour: the programmer expresses behaviour such as "move agent A to place p and perform action a" in a simple, direct, succinct manner without the clutter of the syntax of a full programming language.

^a The work reported in this paper has been funded in part by the Co-operative Research Centre Program through the Department of Industry, Science & Tourism of the Commonwealth Government of Australia.

In the following section, we first present our itinerary scripting language; in §3, we present an example of a distributed authoring application scripted in our language. We conclude in §4.

2 ITAG: The Itinerary Scripting Language

We previously created an itinerary algebra [1]. ITAG is an executable implementation of this algebra in the form of a scripting language. We first outline the algebra below. We assume an object-oriented model of agents (e.g., with Java in mind), where an agent is an instance of a class given roughly by:

mobile agent = state + action + mobility

We assume that agents have the capability of cloning, that is, creating copies of themselves with the same state and code. Also, agents can communicate to synchronize their movements, and the agent's code is runnable in each place it visits.
Let A, O and P be finite sets of agent, action and place symbols, respectively. Itineraries (denoted by I) are formed as follows, representing null activity, atomic activity, and parallel, sequential, nondeterministic, and conditional nondeterministic behaviour:

I ::= 0 | A_p^a | (I ||_Θ I) | (I · I) | (I | I) | (I :_Π I)

where A ∈ A, a ∈ O, p ∈ P; Θ is an operator which, after a parallel operation causing cloning, recombines an agent with its clone to form one agent, and Π is an operator which returns a boolean value to model conditional behaviour. We specify how Θ and Π are used, but we assume that their definitions are application-specific. We assume that all agents in an itinerary have a starting place (which we call the agent's home) denoted by h ∈ P. Given an itinerary I, we shall use agents(I) to refer to the agents mentioned in I. The operators are described in turn below.

Agent Movement (A_p^a). A_p^a means "move agent A to place p and perform action a". This expression is the smallest-granularity mobility abstraction. It involves one agent, one move and one action at the destination.

Parallel Composition ("||"). Two expressions composed by "||" are executed in parallel. For instance, (A_p^a || B_q^b) means that agents A and B are executed concurrently. Parallelism may imply cloning of agents. For instance, to execute the expression (A_p^a || A_q^b), where p ≠ q, cloning is needed, since agent A has to perform actions at both p and q in parallel. When cloning has occurred, decloning is needed, i.e. clones are combined using an associated application-specific operator (denoted by Θ as mentioned earlier).

Sequential Composition ("·"). Two expressions composed by the operator "·" are executed sequentially. For example, (A_p^a · A_q^b) means move agent A to place p to perform action a and then to place q to perform action b.

Independent Nondeterminism ("|"). An itinerary of the form (I | J) is used to express nondeterministic choice: "I don't care which, but perform one of I or J". If agents(I) ∩ agents(J) ≠ ∅, no clones are assumed, i.e. I and J are treated independently. It is an implementation decision whether to perform both alternatives concurrently, terminating when either one succeeds (which might involve cloning, but clones are destroyed once a result is obtained), or to try one at a time (in which case order may matter).

Conditional Nondeterminism (":"). Independent nondeterminism does not specify any dependencies between its alternatives. We introduce conditional nondeterminism, which is similar to short-circuit evaluation of boolean expressions in programming languages such as C. An itinerary of the form I :_Π J means first perform I, and then evaluate Π on the state of the agents. If Π evaluates to true, then the itinerary is completed. If Π evaluates to false, the itinerary J is performed (i.e., in effect, we perform I · J). The semantics of conditional nondeterminism depends on some given Π.
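One natural Java rendering of this algebra (ours, not the authors') is an algebraic data type with one node per operator:

```java
// Illustrative AST for the itinerary algebra: one node type per operator.
sealed interface Itinerary permits Nil, Move, Par, Seq, Choice, Cond {}

record Nil() implements Itinerary {}                        // 0: null activity
record Move(String agent, String place, String action)      // A_p^a
        implements Itinerary {}
record Par(Itinerary left, Itinerary right, String theta)   // (I ||_Theta I)
        implements Itinerary {}
record Seq(Itinerary first, Itinerary second)               // (I . I)
        implements Itinerary {}
record Choice(Itinerary either, Itinerary or)               // (I | I)
        implements Itinerary {}
record Cond(Itinerary first, Itinerary fallback, String pi) // (I :_Pi I)
        implements Itinerary {}
```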
We give an example using agents to vote. An agent V, starting from home, carries a list of candidates from host to host visiting each voting party. Once each party has voted, the agent goes home to tabulate results (assuming that home provides the resources and details about how to tabulate), and then announces the results to all voters in parallel (cloning itself as it does so). Assuming four voters (at places p, q, r, and s), vote is an action accepting a vote (e.g., by displaying a graphical user interface), tabulate is the action of tabulating results, and announce is the action of displaying results, the mobility behaviour is as follows:

(V_p^vote · V_q^vote · V_r^vote · V_s^vote) · V_h^tabulate · (V_p^announce || V_q^announce || V_r^announce || V_s^announce)

Implementation. To allow the programmer to type itinerary expressions into the computer, we provide an ASCII syntax and a Controlled English version. The translations are given in Table 1. When the operators are used without op, we assume a pre-specified system default one, i.e. using op is an optional clause. A_p^a · A_q^b · A_r^c can be described as follows: "(move A to p do a) then (move A to q do b) then (move A to r do c)."

Apart from the above basic elements of the language, we define the following five phrases that map down to more complex expressions:

1. A_h^a is translated as return A do a.
2. A_p^a · A_q^a · A_r^a · A_s^a is translated as tour A to p,q,r,s in series do a.
3. A_p^a || A_q^a || A_r^a || A_s^a is translated as tour A to p,q,r,s in parallel do a.
4. A_p^a | A_q^a | A_r^a | A_s^a is translated as tour A to one of p,q,r,s do a.
5. A_p^a : A_q^a : A_r^a : A_s^a is translated as tour A if needed to p,q,r,s do a. Similarly, we also have A_p^a :_Π A_q^a :_Π A_r^a :_Π A_s^a translated as tour A if needed to p,q,r,s do a using Π.
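As a companion sketch to the AST above, a toy interpreter could walk the tree as follows; real execution would dispatch Grasshopper agents rather than print, and Π is application-specific, so a placeholder stands in for it here.

```java
// Toy walk over the itinerary AST from the earlier sketch; a real
// interpreter would issue move/execute commands to mobile agents.
final class Interpreter {
    static void run(Itinerary i) throws InterruptedException {
        switch (i) {
            case Nil n -> { }                       // 0: null activity
            case Move m -> System.out.println(
                "move " + m.agent() + " to " + m.place() + " do " + m.action());
            case Seq s -> { run(s.first()); run(s.second()); }   // strict order
            case Par p -> {                          // cloning: run both branches
                Thread clone = new Thread(() -> runQuietly(p.left()));
                clone.start();
                run(p.right());
                clone.join();                        // decloning point (Theta would merge state)
            }
            case Choice c -> run(c.either());        // one alternative suffices
            case Cond c -> {
                run(c.first());
                if (!piHolds(c)) run(c.fallback());  // in effect I . J when Pi is false
            }
        }
    }
    // Pi is application-specific; this placeholder always reports success.
    private static boolean piHolds(Cond c) { return true; }
    private static void runQuietly(Itinerary i) {
        try { run(i); } catch (InterruptedException e) { Thread.currentThread().interrupt(); }
    }
}
```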
Assuming agent A is launched by the au­ thor, places abbreviated as editor, author (the place from which the agent is launched), reviewerl, and reviewer2, actions are submit, review, finalize and notify, the following script can be written to enact this collaboration: (move A to editor do submit) then ((move A to reviewerl do review) in parallel with (move A to reviewer2 do review)) then (move A to editor do finalize) then (move A to author do notify) Note that data (including the draft document, the reviews, and editor's com­ ments) are carried with the agent. 4 Conclusions and Future Work We contend that a scripting approach is well-suited for developing mobile agent applications and presented ITAG based on the notion of the agent itinerary. Autonomy and flexibility are important aspects of intelligent agents. ITAG accommodates agents with a degree of autonomy and flexibility in performing tasks via the nondeterminism and conditional nondeterminism operators. References 1. S.W. Loke, H. Schmidt, and A. Zaslavsky. Programming the Mobility Behaviour of Agents by Composing Itineraries. In P.S. Thiagarajan and R. Yap, editors, Proceedings of the 5th Asian Computing Science Con­ ference (ASIAN'99), volume 1742 of Lecture Notes in Computer Science, pages 214-226, Phuket, Thailand, December 1999. Springer-Verlag. 2. J.K. Ousterhout. Scripting: Higher Level Programming for the 21st Century. IEEE Computer, March 1998. Available at . 3. D. Spinellis and V. Guruprasad. Lightweight Languages as Software En­ gineering Tools. In Proceedings of the USENIX Conference on Domain-Specific Languages, California, U.S.A., October 1997. 4. A. Tripathi, T. Ahmed, V. Kakani, and S. Jaman. Distributed Collab­ oration Using Network Mobile Agents. February 2000. Available at . INTELLIGENT AGENTS FOR MOBILE COMMERCE SERVICES MIHHAIL MATSKIN Department of Computer and Information Science, Norwegian University of Science and Technology, N-7491 Trondheim, Norway E-mail: mishaQiidi. ntnu. no We consider application of intelligent agents in mobile commerce services. Basic idea of the approach is providing customers of mobile devices and service providers with personal intelligent agents representing their interests in the Internet and usage of multi-agent system approach for coordination, communication and negotiation between the agents. We demonstrate how such agents and services can be implemented in the Agora environment that we developed earlier. Some properties of developed prototype mobile commerce services are briefly discussed. 1 Introduction Development of mobile communication technology in the last years opens new perspectives for providing services to the users of mobile devices such as cellular phones or PDAs. An essential feature of mobile services is that the user of mobile device can be available for services almost anytime and anywhere. This allows high reactivity of user responses and decisions. At the same time development of such technology as WAP [2,5] allows the users of mobile devices get access to the Internet, which was before a privilege of PC users only. In particular this means that the users of mobile devices get access to web-based technologies and computing network resources outside of telecom networks. However, opening access to the Internet resources mobile communication technology put quite serious restrictions to such communication. 
Basic restrictions are related to low bandwidth, high cost of communication, slow CPUs, small memory, restricted power supply, small screens and complicated input on mobile devices. In order to relax such restrictions, we think that the precision and focus of information delivered to the mobile devices should be very high [4]. In particular, this means that: 1) the amount of delivered information should be as small as possible but sufficient to be interesting to the user; 2) user input should be minimized as much as possible; 3) the connection time of mobile devices to the network while processing the user request should be shortened. In order to achieve such precision and focus, most of the work of information analysis and processing should be done off-line, and the analysis and processing should be personalized as much as possible: they should take into account user preferences and interests as well as the context of communications (geographical position, time, etc.).

We think that the usage of intelligent agents [1] and agent technology is a constructive approach to intelligent and personalized off-line processing. In particular, this assumes providing the participants of the commercial activity (mobile device customers and service providers) with software assistant agents. Some details of this approach are presented in [4]. Here we demonstrate how the approach can be applied to support particular mobile commerce services. As a tool for implementing the approach we use the Agora environment for the support of multi-agent cooperative work [3]. For communication with mobile devices we use WAP technology [2,5] and SMS messages.

The rest of the paper is organized as follows. First we give a brief introduction to the Agora environment and present solutions for mobile services using the Agora-based approach. Then we consider some details of the implemented prototype services. Finally, we present conclusions and future work.

2 The Agora System and Mobile Commerce Services

In order to support agent creation and multi-agent cooperative work we use the Agora system, which we developed earlier [3]. The basic idea behind this system is to consider cooperative work as a set of cooperative acts, including coordination, negotiation and communication, and to provide means for supporting such cooperative acts. To get such support, we propose the concept of a cooperative node (we call it an Agora). The Agora node allows registration of agents and provides means for the support of cooperative activity, such as matchmaking, coordination and negotiation between the registered agents.

If we apply the Agora concept to mobile commerce services, then we first need to identify the participants of the cooperative work and the possible cooperative acts between them. In our case the participants are customers and service providers, and we assume the following basic cooperative acts between participants: 1) buying/selling of products/services by customers and providers; 2) product/service information exchange between different customers; 3) customer coalition formation for co-shopping; 4) provider coalition formation for common policy development; 5) coordination between different agents of the same customer; 6) subscription service management. Our next step is to map participants into agents and cooperative acts into corresponding Agoras. For example, this can be done as shown in Figure 1 (in this figure rectangles denote agents, diamonds denote Agoras and arrows show registration of agents at Agoras).
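An illustrative sketch (hypothetical API, not the Agora system's) of a cooperative node at which agents register and through which matchmaking is requested:

```java
import java.util.*;

// Hypothetical sketch of an Agora cooperative node: agents register at the
// node, which then supports cooperative acts such as matchmaking between them.
class AgoraNode {
    final String purpose;                       // e.g. "subscription service", "buying/selling"
    private final Map<String, Set<String>> registered = new HashMap<>(); // agent -> interests

    AgoraNode(String purpose) { this.purpose = purpose; }

    void register(String agentName, Set<String> interests) {
        registered.put(agentName, interests);
    }

    // Crude matchmaking: report agents whose declared interests overlap.
    List<String> match(String agentName) {
        Set<String> mine = registered.getOrDefault(agentName, Set.of());
        List<String> partners = new ArrayList<>();
        registered.forEach((other, interests) -> {
            if (!other.equals(agentName) && !Collections.disjoint(mine, interests))
                partners.add(other);
        });
        return partners;
    }
}
```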
Each agent in the Agora system has a planner, a knowledge base, a communication block, and a goal analyzer. By default, the knowledge base and planner use a Prolog-like notation for knowledge representation. However, all agent components can be overridden when necessary. An important feature of this implementation is the encapsulation of private data in agents and the ability to get a service without disclosing personal preferences to providers. The planner, the knowledge base, and the goal analyzer's ability to handle events provide a basis for implementing pro-activity.

[Figure 1. Customers, providers and Agoras]

The ability to communicate is based on communication adapters and message wrappers in the Agora system. Both KQML and FIPA are supported. This is done by implementing an intermediate representation level (wrappers) which allows translation of constructions from both languages. The use of wrappers also allows defining one's own syntax and parameter types for communicative acts. In particular, we use this for exchanging plan and action files between agents. Different ontologies can be described, and their combination with performatives uniquely defines the communicative act.

[Figure 2. Subscription service Agora: manager, registrator, matchmaker, negotiator, communication adapter, coordinator, customer and provider notificators, events handler, and history browser, together with information about registered agents]

In the case of subscription services, customers specify the information they are interested in, and the service provider sends the information to the customers at some time interval or upon a specified event. The basic steps of an agent-based subscription service are registration, announcement of the offers, matchmaking, and event generation and handling (both for providers and customers). These steps are supported by a manager of the Subscription service Agora (see Figure 2). Customers present their interests to the corresponding Agora by providing rules, keywords, or a ranked list of interests. The Agora manager tries to match customer interests with providers' proposals and, when the matching is successful, notifies the customers. Both provider and user interests can be presented and updated at any time and asynchronously. The complexity of the matchmaker can differ between applications. In the optimistic case (when the customer discloses detailed preferences) the matchmaker does the whole work of matching customer requests against provider offers and notifies the customer when matching is successful. It is also possible to implement more intelligent manager behavior, with pro-active recommendation of offers that are relevant to the customer's interests but not requested explicitly. In the pessimistic case (when the customer does not disclose his particular interests but rather subscribes to information of a wide scope) the matchmaker does a pre-filtering of the information, and the detailed analysis is performed by the customer agent. After successful matchmaking, the customer agent may directly contact the corresponding provider agent and perform an additional information request or negotiation using the Negotiator component of the Agora. Managers for other types of Agoras (such as Customers, Providers, Buying/Selling, or Coalitions Agoras) may have functionality different from that of the Subscription service Agora. The Agora system allows attaching different manager agents to different Agoras.
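The two matchmaking modes can be contrasted in a small, purely illustrative sketch; the offer fields and topic names are invented for the example, and plain Python predicates stand in for whatever rule language the Agora manager actually uses.

offers = [
    {"topic": "real-estate", "price": 900_000, "city": "Trondheim"},
    {"topic": "real-estate", "price": 2_500_000, "city": "Oslo"},
]

def match_detailed(offers, preferences):
    # Optimistic case: detailed preferences were disclosed, so the Agora
    # manager performs the whole match and notifies on success.
    return [o for o in offers if preferences(o)]

def prefilter(offers, broad_topic):
    # Pessimistic case: only a broad subscription was disclosed, so the
    # manager can merely pre-filter by topic.
    return [o for o in offers if o["topic"] == broad_topic]

# Optimistic customer: the manager sees the full preference.
hits = match_detailed(
    offers, lambda o: o["city"] == "Trondheim" and o["price"] < 1_000_000)

# Pessimistic customer: the customer agent applies its private preference
# locally to the pre-filtered candidates, never disclosing it to providers.
candidates = prefilter(offers, "real-estate")
private_hits = [o for o in candidates if o["price"] < 1_000_000]

The pessimistic path trades extra traffic to the customer agent for privacy, which matches the encapsulation property noted above.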
3 Some applications

Several prototype systems of mobile commerce services have been developed based on the approach described above. They include: 1) a valued customer membership service and product search; 2) financial services (notification of stock quote changes); 3) a real-estate agent (search and notification for real-estate property); 4) advertising over the Internet with agents.

For the valued customer membership service, a user of a mobile device can register for a customer service which provides membership benefits. After registration, a personal assistant agent is created. Basically, the agent operates on the user's host, ensuring the privacy of personal data; however, it may also operate on a service provider host when the user trusts that environment. When the agent finds that some special offer matches the customer's interests, it may send a corresponding message to the user's mobile device (if a quick reaction is required) or may place the offer on a user WML page. In addition to analyzing offers from the customer service, the agent can search for relevant products from other specified sources. In the case of financial services, notification of changes in the quotes of specified stocks is implemented. The Agora system is used for deploying agents and matching required and provided services. Both the specified stocks and the conditions on their changes are kept privately in the agent. The advertising service uses Agoras for the formation of customer and service provider coalitions. The coalitions are used for co-shopping and for co-advertising. The real-estate agent searches for real-estate property which satisfies the user's preferences, notifies the user via cellular phone when it is found and, if it is of interest to the user, starts a bidding process for the property according to the user's instructions.

4 Conclusions

We have presented an approach to the use of intelligent agents in mobile commerce services. The approach is based on providing users of mobile devices with personal software assistant agents and on using the Agora system to support cooperative work between agents. The general conclusions are as follows: 1) using agents as personal assistants for users of mobile devices is a practical and feasible approach; 2) even with simple intelligence and functionality, agents provide a great benefit through their autonomy, communication ability, and pro-activity; 3) the concept of an Agora as a cooperative node is a practical and convenient means for multi-agent system design. Our future plans are directed at increasing the intelligent capabilities of the agents and Agoras in the system. In particular, we would like to use different negotiation protocols, rules for coalition formation, and planning of agent activity in support of mobile services.

This work is partially supported by the Norwegian Research Foundation in the framework of the Distributed Information Technology Systems (DITS) program and the ElComAg project. I would also like to thank Thomas Heiberg and Jøran Pedersen (product search and valued membership services), Terje Wahl (financial services), Lars Killingdalen (advertising with agents), and Bjørn Skogseth (real-estate search and analysis) for their work on implementing the prototypes.

References
1. Bradshaw, J. M. (Ed.). Software Agents. Menlo Park, CA: AAAI Press/The MIT Press, 1997.
2. Mann, S. Programming Applications with the Wireless Application Protocol: The Complete Developer's Guide. John Wiley & Sons, 2000.
3. Matskin, M., O. J. Kirkeluten, S. B. Krossnes and Øystein Saele. Agora: An Infrastructure for Cooperative Work Support in Multi-Agent Systems. In T. Wagner and O. Rana (eds.),
Infrastructure for Scalable Multi-Agent Systems. Springer-Verlag, LNCS Volume 1887, 2000.
4. Matskin, M. and A. Tveit. Mobile Commerce Agents in WAP-Based Services. Journal of Database Management, Vol. 12, No. 3, 2001, pp. 27-35.
5. WAP: http://www.wapforum.org

A NEW CONCEPT OF AGENT ARCHITECTURE IN AGENTSPACE

T. NOWAK AND S. AMBROSZKIEWICZ
Institute of Computer Science, Polish Academy of Sciences, al. Ordona 21, PL-01-237 Warsaw, and Institute of Informatics, University of Podlasie, al. Sienkiewicza 51, PL-08-110 Siedlce, Poland
E-mail: sambrosz, tnowak@ipipan.waw.pl

Agentspace is an emerging environment resulting from process automation in the Internet and Web. It is supposed that autonomous software (mobile) agents provide the automation. The agents realize the goals delegated to them by their human masters. Interoperability is crucial to assure meaningful interaction, communication, and cooperation between heterogeneous agents and services. In order to realize the goals, the agents must create, manage, and reconfigure complex workflows.

1 Introduction

Cyberspace, the emerging world created by the global information infrastructure and facilitated by the Internet and the Web, offers new application scenarios as well as new challenges. One of them is creating new infrastructures to support high-level business-to-business and business-to-consumer activities on the Web; see for example Sun ONE, Microsoft .NET, and UDDI. The second is the Semantic Web [4], the conceptual structuring of the Web in an explicit machine-readable way. These two challenges are strongly related to each other in that semantic interoperability is necessary for the integration of heterogeneous, distributed Web services. It is supposed that the integration will be performed automatically by autonomous software (mobile) agents. An agent is a running program that can migrate from host to host across a heterogeneous network under its own control and interact with other agents and services. Since software agents are supposed to "live" in cyberspace, they must be intelligent, that is, they must efficiently realize the goals delegated to them by their human masters. Hence, along with the development of cyberspace, a new world (called agentspace), inhabited by the software agents, is being created. It seems that process automation in the Internet and Web makes the development of agentspace inevitable. Human users are situated at the border of the agentspace and can influence it only via their agents, by delegating to them complex and time-consuming tasks to perform. Since the Internet and Web are open, distributed, and heterogeneous environments, agents and services can be created by different users according to different architectures. Interoperability is crucial to assure meaningful interaction, communication, and cooperation between heterogeneous agents and services. We can distinguish two kinds of interoperability: interaction interoperability and semantic interoperability. Interaction interoperability provides a common communication infrastructure for message exchange, whereas semantic interoperability provides message understanding. Semantic interoperability concerning the meaning of resources on the Web is a subject of current research; see DAML [5] + OIL [8] as the most prominent example. In order to use services established by different users working in heterogeneous domains, agents must be capable of acquiring knowledge about how to use those services and for what purposes.
There must be a common language for expressing tasks by the users, for delegating these tasks to agents, as well as for describing services and for communication between agents and services. There are several efforts to create such a language; see DAML-Enabled Web Services [7], ATLAS [3], CCL [10], WSDL [9], and FIPA ACL. As to the communication infrastructure, there is no need to force one transportation platform (i.e., one message format and one message delivery mechanism) as the standard. It seems that the message language and its meaning are crucial here, not the message wrapping. It is relatively easy to provide a transformation service between two platforms that translates the message format of one platform into the message format of the other. A mobile agent platform (MAP, for short) also provides a communication infrastructure as well as a "migration service" for the agents. One may ask whether agent mobility is essential for creating agentspace; see for example the JADE [6] framework, where mobility is not provided. In our approach, agent mobility may be seen as a means for learning between heterogeneous environments.

Our project aims at creating the absolute minimum necessary for joining heterogeneous applications as services on the one hand, and for using them by heterogeneous agents (on behalf of their users) on the other. As this minimum we propose the language Entish (a shorthand for e-language) and its intended semantics. We introduce a new form of agent migration. Usually, a MAP provides a weak form of migration that consists in moving an agent's data and code to a new place and executing this code at the new place, while the agent process at the old place is closed. In that agent architecture the data and the code are strictly tied to each other. We propose a new architecture where the data are independent of the code. As a result we get a much weaker form of migration, in which an agent's data can be moved without the code. The data are expressed in Entish and contain all parameters needed to continue the agent process at the new place. This data is called the agent "soul" and is separated from the agent body, which is responsible for reasoning and action execution. The idea of the new migration form is that a running agent process stores all its essential data and control parameters in its soul. The process may be closed at any time and then fully reconstructed at any new place. At the new place, the agent soul is given a new body (possibly different code), and the completed agent can then continue its process. Thus the data (soul) are independent of the code (body). The new migration form is independent of any MAP, and it can be applied to communication platforms that do not support (weak) agent mobility, like JADE or a platform based on HTTP+SOAP transport. The structure of the soul constitutes the core of the language Entish. The main achievement of our project is a generic architecture of agentspace and its implementations. The idea of agentspace consists in constructing middleware that provides transparency between heterogeneous agents and heterogeneous services. We define agentspace as an implementation of the language Entish and its semantics on a communication platform. So far we have implemented Entish on Pegaz, our own MAP, and we are completing an Entish implementation on another communication platform, called Hermes, which is based on HTTP+SOAP transport. It seems that the Hermes platform may serve as middleware for Web Service integration.
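A minimal sketch may help picture the soul/body separation, assuming a Python-style runtime and a plain dictionary as a stand-in for an Entish soul; none of these names come from the actual Pegaz or Hermes implementations.

import json

def make_soul(goals, knowledge, step):
    # The soul: pure, serializable data and control parameters, no code.
    return {"goals": goals, "knowledge": knowledge, "step": step}

def freeze(soul):
    # Close the agent process and ship only its data, not its code.
    return json.dumps(soul)

def revive(wire_data, body):
    # At the new place, the soul is given a (possibly different) body.
    return body(json.loads(wire_data))

def simple_body(soul):
    # A body interprets the soul and continues the agent process.
    print(f"resuming at step {soul['step']} toward goals {soul['goals']}")
    soul["step"] += 1
    return soul

soul = make_soul(goals=["task-done"], knowledge={"loc": "node-1"}, step=3)
wire = freeze(soul)               # may cross Pegaz, Hermes, or plain HTTP
soul = revive(wire, simple_body)  # new place, new body, same process

Migration then reduces to shipping a string, which is why this form of migration needs no mobility support from the underlying platform.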
We are also implementing the transport protocol of Hermes in Pegaz and vice versa, so that we will achieve complete interoperability between these two agentspaces. This means that agents (actually their souls) can migrate from one agentspace to the other as well as communicate with services located in the other agentspace.

2 Agentspace architecture

The idea of agentspace consists in the construction of an open distributed infrastructure that allows heterogeneous applications to be joined as services on the one hand, and to be used by heterogeneous agents on the other. A user delegating a task to an agent need not know the locations of the services and resources necessary for realizing the task. The user expresses the task in our high-level common language, Entish. The agent migrates across the agentspace and communicates with services and other agents, looking for the information, services, and resources needed to realize the delegated task. Since agentspace is an implementation of the language Entish and its intended semantics on a communication platform, a layered architecture seems natural and generic. The architecture consists of three layers: the interaction layer, the agent/service layer, and the language layer. The interaction layer specifies the infrastructure that provides basic functionality for agents and services, such as agent movement from one place to another and communication between agents and services. This layer is implemented by a communication platform; in our case this is done by Pegaz and Hermes. However, it may be any communication platform, like JADE [6], or a new one built, for example, on top of CORBA or RMI-IIOP. The second layer, the agent/service layer, specifies those aspects of agent and service architecture that allow them to evaluate formulas (called situations) expressed in the language Entish, as well as to determine the new situations resulting from performing elementary actions. The agents are equipped with mental attitudes: knowledge, goals, intentions, and commitments, represented as Entish formulas. These attitudes serve as data and control parameters of agent behavior. Agents and services execute actions (migration and message exchange) in the interaction layer, whereas the message contents are expressed in Entish. The agent/service layer implements the intended semantics of Entish. The language layer consists of Entish (a simple version of the language of first-order logic) along with a specification of how to "implement" it for open and distributed use. The implementation follows the idea of "webizing" a language; see T. Berners-Lee [4]. The language describes the "world" (i.e., agentspace) to be created on the basis of the infrastructure provided by the previous layers. However, this description is purely declarative. Actions are not used in Entish; the formulas describe only the results of performing actions, so no causal relations can be expressed. The language is sufficient for expressing desired situations (tasks) by users as well as by agents and services; however, it cannot explicitly express how to achieve them. This may be done by implementing distributed information services (called InfoServices) where an agent may learn how to realize the delegated task, or at least get a hint. Usually, as the reply to its query (also expressed in Entish), the agent gets a sequence of intermediate situations to follow.
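The following hypothetical fragment illustrates this interaction: an agent queries an InfoService-like directory and follows the returned sequence of intermediate situations. The situation strings, route table, and function names are our inventions for the example, not Entish or the actual InfoService protocol.

def info_service(task):
    # A hypothetical reply: intermediate situations leading to the task.
    routes = {
        "document-translated": ["document-located",
                                "translator-found",
                                "document-translated"],
    }
    return routes.get(task, [])

def pursue(task, achieve):
    # Follow each intermediate situation in order; how a situation is
    # brought about (migration, message exchange) is up to the agent.
    for situation in info_service(task):
        achieve(situation)

pursue("document-translated", achieve=lambda s: print("achieved:", s))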
BrokerServices play the role of virtual brokers that facilitate complex task realization. A BrokerService forms, manages, and reconfigures a workflow that realizes a special type of complex task. The workflow can be quite sophisticated and consist of a large number of ordinary services, so it may be seen as a virtual organization in agentspace. The language is implemented in the second layer by DictionaryServices containing the syntax and new concept definitions. There are three additional types of services, namely SecretaryService, MailService, and BodyService. Let us note that none of these services are system services; they can be implemented and developed independently by different users. It is important that only the "operation type" of any of these services is specified in Entish. Roughly, an operation type is a description of the function performed by a particular service. A service implementation must only satisfy the specification of its operation type.

The paper presents our work in progress; space limitations do not allow us to present the details. The first version of the Entish syntax and semantics is completed. A prototype of agentspace based on Pegaz is already implemented. The implementation of Hermes, i.e., the agentspace based on HTTP+SOAP transport, will be completed shortly. We are now developing (by implementing services) and testing our small agentspace within the Pegaz Ring, which consists of several research groups.

Acknowledgments

The work was done partially within the framework of ESPRIT project No. 20288 CRIT-2 and KBN project No. 7 T11C 040 20.

References
1. S. Ambroszkiewicz, W. Penczek, and T. Nowak. Towards Formal Specification and Verification in Cyberspace. Presented at the Goddard Workshop on Formal Approaches to Agent-Based Systems, 5-7 April 2000, NASA Goddard Space Flight Center, Greenbelt, Maryland, USA. To appear in Springer LNCS.
2. S. Ambroszkiewicz, O. Matyja, and W. Penczek. "Team Formation by Self-interested Mobile Agents." In Proc. 4th Australian DAI Workshop, Brisbane, Australia, July 13, 1998. Published in Springer LNAI 1544.
3. ATLAS - Agent Transaction Language for Advertising Services, http://www.cs.cmu.edu/~softagents/atlas/
4. T. Berners-Lee, www.w3.org/DesignIssues/Webize.html and www.w3.org/DesignIssues/Logic.html
5. DAML, www.daml.org/
6. JADE - Java Agent DEvelopment Framework, http://sharon.cselt.it/projects/jade/
7. McIlraith, S., Son, T. and Zeng, H. "Mobilizing the Web with DAML-Enabled Web Services", www.ksl.stanford.edu/projects/DAML/
8. OIL, Ontology Interchange Language, www.ontoknowledge.org/oil/
9. Web Services Description Language (WSDL), www.w3.org/TR/2001/NOTE-wsdl-20010315
10. S. Willmott, M. Calisti, B. Faltings, S. Macho-Gonzalez, O. Belakhdar, M. Torrens. "CCL: Expressions of Choice in Agent Communication", The Fourth International Conference on MultiAgent Systems (ICMAS-2000).

21st CENTURY SYSTEMS, INC.'S AGENT ENABLED DECISION GUIDE ENVIRONMENT (AEDGE™)

PLAMEN V. PETROV
21st Century Systems, Inc., Omaha, Nebraska, USA
E-mail: plamen@21csi.com

ALEXANDER D. STOYEN
University of Nebraska and 21st Century Systems, Inc., Omaha, Nebraska, USA
E-mail: alex@21csi.com

JEFFREY D. HICKS
University of Nebraska and 21st Century Systems, Inc., Omaha, Nebraska, USA
E-mail: jeff@21csi.com

GREGORY J. MYERS
21st Century Systems, Inc., Omaha, Nebraska, USA
E-mail: greg@21csi.com

21st Century Systems, Inc.'s Agent Enabled Decision Guide Environment (AEDGE™) is a standardized Commercial Off the Shelf (COTS), DII COE compliant agent architecture that enables complex DSS to be developed as an expansion of the AEDGE core functionality.
The AEDGE core consists of Master Server, Entity Framework, Agent Infrastructure, and Database Connectivity components. User-service-specific DSS tools, such as agents, servers, or clients, are quickly and efficiently constructed on top of the core functionality through the use of common interfaces and data structures. The extender components (Simulation Server, Live Links, Visualization Client, Agent Client, and Data Bridges) serve as templates for extending the application. To facilitate agent interactions, the AEDGE provides a number of local and remote mechanisms for service registration and invocation. In addition, agents can interact, synchronize, and cooperate via Agent Managers, which in turn provide the aggregate agent functionality to the user. The componentized structure of the AEDGE enables multiple levels of product availability that satisfy the needs of the user through different levels of product involvement.

1 Introduction

In the past decade we have observed a significant increase in the demand for computer-based decision support systems (DSS), due primarily to the overwhelming availability of data from multiple sources with varying degrees of quality, coming from networked sensors, databases, archives, web-based applications, and other sources. Simultaneously, a new branch of distributed computing, based on intelligent, semi-autonomous processes referred to as agents, has been the center of attention because of its flexibility, extensibility, and network-friendliness. 21st Century Systems, Inc. (21CSI), a small company, has pioneered the integration of agent-based computing into DSS applications. We have developed stand-alone and mobile agents and agent architectures to perform individual and team decision support for multiple defense-oriented environments such as AWACS [1], Aerospace Operations Centers, Navy Ship Command Centers [2], etc. The need for a standardized common infrastructure has led us to design an environment where both agents and simulated entities (or representations of real-world assets) are represented as first-class objects capable of interacting with each other. The Agent Enabled Decision Guide Environment (AEDGE™) (see Figure 1) is 21CSI's undertaking to build a common reference framework and a test-bed environment for integrated simulation and agent-based decision support. AEDGE defines Agents, Entities, Avatars, and their interactions with each other and with external sources of information. This standardized architecture allows additional components, such as service-specific DSS tools, to be efficiently built upon the core functionality. Common interfaces and data structures can be exported to interested parties who wish to extend the architecture with new components, agents, servers, or clients. When the core AEDGE components developed by 21CSI are bundled with customer-specific components in an integrated environment, a clean separation of those components, through APIs, is provided.

[Figure 1. 21CSI's AEDGE Product Structure]

2 Agent Enabled Decision Guide Environment (AEDGE™)

21CSI's DSS product [3] is based on an extensible architecture and a number of standard components that enable simulation and decision support capabilities. AEDGE is designed in an open, DII-COE and CORBA compliant manner. The architecture is unified and allows users to use and extend existing components, as well as to build new, compatible, customized add-ons.
The kernel of the architecture consists of four core and five extender components. These define the internal structures, dataflows, and interfaces of the architecture.

• Master Server. Tracks components and matches service providers with service requesters. The Master Server is a network component of AEDGE that facilitates connections and interactions among the rest of the AEDGE components. It provides component registration and tracking services, interface matching services, and component search, identification, and connection services. The Master Server is also responsible for synchronizing simulation time (and real time) among multiple simulation servers and live links.

• Entity Representation Framework. The Entity Representation Framework is an integral part of AEDGE which provides the basic entities and events for a time-event simulation or live-feed connections. The object-oriented hierarchy of entities represents a wide range of structures, vehicles, platforms, weapons, and sensors. The Framework includes interfaces which allow users to add new entities with new behaviors or with combinations of existing behaviors.

• Agent Infrastructure. The Agent Infrastructure provides the basic inter-agent communication and synchronization mechanisms, as well as the interfaces for agents to use other data sources, such as simulation servers, live data links, databases, etc. A base hierarchy of agents is also provided, and it can be extended and customized for a particular user's needs.

• Database Connectivity. AEDGE provides the capability of storing and retrieving data to and from various databases. The Database Connectivity components provide generic and specific bridges to a number of proprietary and public databases. New Database Connectivity modules can be added by extending the provided bridges and implementing the connectivity interfaces.

In addition to these kernel components, extender components define the basic functionality of information clients and servers and define interfaces for adding new functionality. These components are, in essence, templates for extending the platform with new functionality while maintaining tight integration and efficient implementation. The following standard AEDGE extender packages are provided:

• Simulation Servers. Simulation Servers model a particular aspect of physical reality in terms of the AEDGE components. In other words, a simulation server maintains a set of entities, their properties, and those of the environment, and models the interactions among them. For example, the vehicle movement model, based on kinematics, affects the position, speed, direction of motion, and fuel burn rates of the entities; the weapon models affect the outcome of engagements; the communication models determine how orders and subordinate feedback are distributed. A simulation server may potentially interact with all four core components of AEDGE. It registers with the Master Server and posts its exported services (e.g., providing entity position information). The server manipulates a set of entities (object instances) from the Entity Framework that represent the current view of the world according to that simulator. The simulation server may interact bidirectionally with agents from the Agent Infrastructure, both providing information about the state of the world and receiving recommendations and action requests from agents. Finally, a server may require information from various databases, which is provided through the Database Connectivity component.
• Live Links. Live Links are similar to Simulation Servers in that they provide information about the world to the AEDGE components. However, this information is based on sensor data and reflects the state of the physical world in real time. Thus, the information flow is unidirectional, since we do not yet support actuators placed in the physical world. The live links may provide entity or track information, weather information, or any other state or capability changes. The links can interface with all core AEDGE components, much like the simulation servers can, with the limitation of unidirectional communication.

• Visualization Clients. Visualization Clients are responsible for interactions with the human users. They present data from the AEDGE in a clear and intuitive manner, allowing for simple yet powerful presentations of complex interdependencies in the simulated/sensor world. Visualization clients interact with all components through bidirectional information flows. They receive information on the simulated entities, their environment and interactions, as well as on agent evaluations and recommendations. The users' interactions with the Visualization client provide feedback to the AEDGE core components.

• Agent Clients. Agent Clients host one or more intelligent agents, which monitor the simulated world, react to changes in it, and interact with each other and with human users according to their specific agent behaviors. The agent client receives information from the AEDGE core on the state of the world and sends back agent requests and feedback.

• Database Bridges. These are a natural extension of the AEDGE core Database Connectivity. Bridges to characteristics and performance data, weapons performance and effectiveness data, and terrain databases are provided. Interfaces for new database bridges are also provided.

3 Componentization

The AEDGE architecture enables commercialization through four componentized availability levels that cover the needs of customers with different levels of involvement. The Demo availability level provides execution and evaluation rights for a binary (compiled) distribution of the product. This availability level is aimed at early users or prospective customers. The Enterprise availability level is designed to fulfill the needs of advanced customers who have a specific use for the platform; often it is accompanied by customized extender components. The Research availability level delivers the best configuration for scientists who may use and/or extend the platform for their particular experimental needs. More interfaces to the system are provided to enable the researcher to tap into the rich data sources of the platform. The Development availability level enables advanced users to build components on top of the base platform. It provides all interfaces and some source code examples for key components. Under this level, customers are able to extend the core functionality with their own agents or graphical displays. While none of the availability levels enable redistribution of the platform, the Development availability level permits the distribution of binaries only, with proper disclosure.
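The register-and-match pattern that the Master Server provides (Section 2) can be sketched as follows; the class and method names here are illustrative assumptions, not the actual AEDGE interfaces.

class MasterServer:
    """Tracks components and matches service providers with requesters."""
    def __init__(self):
        self.services = {}                     # service name -> provider

    def register(self, component, provides):
        # A component (e.g., a simulation server) posts its exported services.
        for service in provides:
            self.services[service] = component

    def lookup(self, service):
        # A requester (e.g., a visualization client) finds a provider.
        return self.services.get(service)

master = MasterServer()
master.register("simulation-server-1", provides=["entity-positions"])
provider = master.lookup("entity-positions")   # -> "simulation-server-1"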
4 Conclusion

21st Century Systems, Inc. has developed the Agent Enabled Decision Guide Environment (AEDGE™), an open DII COE and CORBA compliant agent-based environment that enables the development of componentized decision support systems. AEDGE's core functionality can be easily extended with new capabilities by using extender components and bridges to third-party products. A number of commercial and military customers already benefit from this decision support environment in a variety of applications (AWACS Command and Control, Griffin Special Forces route planner, IDAS Aerospace Operations Center, the Navy's Advanced Battle Station, etc.). Customers use AEDGE at multiple levels of component availability to satisfy their specific needs for an intelligent agent DSS architecture.

5 Bibliography
1. Petrov, P. V., Stoyen, A. D. An Intelligent-Agent Based Decision Support System for a Complex Command and Control Application. Proceedings of the Sixth IEEE International Conference on Engineering of Complex Computer Systems, ICECCS 2000, Tokyo, Japan, September 2000.
2. Hicks, J. D., Stoyen, A. D., Zhu, Q. Intelligent Agent-Based Software Architecture for Combat Performance under Overwhelming Information Inflow and Uncertainty. Proceedings of the Seventh IEEE International Conference on Engineering of Complex Computer Systems, ICECCS 2001, Skövde, Sweden, June 2001.
3. 21st Century Systems, Inc. Extensible Multi-Component DSS Architecture - a Multi-agent Decision Support Environment. Technical report, 21CSI, Omaha, NE, January 2001.

PROACTIVENESS AND EFFECTIVE OBSERVER MECHANISMS IN INTELLIGENT AGENTS

JON PLUMLEY, KUO-MING CHAO, RACHID ANANE AND NICK GODWIN
School of Mathematical and Information Sciences, Coventry University, Coventry CV1 5FB, UK
E-mail: {j.plumley, k.chao, r.anane, a.n.godwin}@coventry.ac.uk

Proactiveness is a necessary property of an autonomous intelligent agent. We believe that to exhibit this property, agents require an effective and efficient observing mechanism. In this paper, we propose a novel method that enables agents to observe dynamic change in other agents. The method incorporates two components: an ORB-based observing mechanism and a mobile element. This new approach goes beyond the observing mechanism in Java. The scope for interoperability is improved, and the dynamic generation and transfer of observable conditions between agents is fully supported by the flexibility of the mechanism. Under this new scheme, the ORB-based observing mechanism enables agents to monitor any changes in the component objects of other agents. The mobile agent is used to transfer the filtered knowledge between agents in order to set the monitoring conditions in a dynamic manner.

1. Introduction

Intelligent agent technology has attracted a number of researchers and industrialists in the field of distributed systems [8,9]. We argue that agent technology can be useful in the integration of diverse systems in a distributed environment. The realisation of an agent's proactiveness through the use of a two-level ORB-based observer mechanism could reduce the tendency toward redundant computation. This paper proposes a Dynamic Mobile Agent (DMA) with an Observer/Observed Mechanism (O/OM) operating at two levels: the global level and the object level. The observation of change allows the proposed agent to change the rule set of its mobile elements. It is this dynamic behaviour, which is described in detail in the next section, that makes the agent more proactive and more autonomous, and therefore better suited to dynamic distributed systems.

2. An overview of the proposed Dynamic Mobile Agent functionality

Two essential elements of the DMA are the observation strategy and the reasoning process.
2.1 The Observation Strategy

The DMA maintains a table of active objects with its observer mechanism at the global level by observing any object creation or deletion. The logical integrity of any decision-taking process would be flawed if the client held objects of which the DMA was not aware. Likewise, a lack of knowledge of deleted objects would lead to a run-time error if the DMA were to attempt to reference such a deleted object. A set of meta rules is held in the Belief Desire Intention (BDI) [7] module, and with the built-in mobile element rule generator, rules can be generated for each mobile element. Specific mobile elements can then be dispatched to observe particular objects. Fig. 1 illustrates the separation of the static and mobile elements of a DMA. The observer mechanism and rule set of the mobile element allow it to monitor any changes in the object states, and the significance of such a change. With the knowledge of the observed change, a decision is made (by human intervention) as to whether any changes in the rule sets are needed. If so, mobile elements with revised rule sets can be dispatched to continue observation.

[Figure 1. A conceptual view of the elements of a Dynamic Mobile Agent]

2.2 The reasoning process

Once dispatched with its own rule set, the mobile element observes its designated object. When a change is observed in the object, the mobile element is able to use its rule set to determine the significance of the change. If appropriate, it will pass a message to the DMA notifying it of the observed change. The DMA will then be able to use its global rule set to determine the significance to the whole system. This may result in the need to change the 'observation rules' of one or more of the mobile elements. When a change of rule set is appropriate, the specific mobile element is retracted, a new rule set is generated, and then a new mobile element is dispatched to continue observation. This dynamic behaviour ensures that each of the agents involved in a multiple-agent system responds to the dynamics of the system as a whole, and that they are able to cooperate efficiently.
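The two-level observation just described can be pictured with a minimal sketch; the plain Python predicates standing in for rule sets, and all class names, are our own illustrative assumptions, not the DMA implementation.

class GlobalObserver:
    """Global level: track object creation and deletion."""
    def __init__(self):
        self.active = {}

    def created(self, oid, obj):
        self.active[oid] = obj

    def deleted(self, oid):
        self.active.pop(oid, None)   # prevents dangling references

class MobileElement:
    """Object level: observe one object with a replaceable rule set."""
    def __init__(self, oid, significant, notify):
        self.oid = oid
        self.significant = significant   # the element's current rule set
        self.notify = notify             # callback to the DMA

    def observe(self, old_state, new_state):
        if self.significant(old_state, new_state):
            self.notify(self.oid, new_state)   # filtered information only

dma_log = []
world = GlobalObserver()
world.created("sensor-42", object())
element = MobileElement(
    oid="sensor-42",
    significant=lambda old, new: abs(new - old) > 5,
    notify=lambda oid, state: dma_log.append((oid, state)),
)
element.observe(20, 22)   # insignificant: nothing is sent to the DMA
element.observe(22, 30)   # significant: the DMA is notified

Replacing the significant predicate and redispatching the element corresponds to retracting a mobile element and sending out one with a revised rule set.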
3. Architecture of a Dynamic Mobile Agent

The proposed mechanism is supported by a three-level architecture. The three levels and their respective mechanisms are described below.

3.1 The Three-level Architecture

The three levels of the system architecture are the communication mechanism, the mental model, and the interaction with the observer mechanism. The communication mechanism involves message passing built upon Object Request Broker (ORB) principles. It transports the agent's message, using the syntax of an Agent Communication Language (ACL) [4], to the recipient, which subsequently parses the message. The mental model interprets the content of the message, reasons with it, and asks the underlying application to perform the task. The underlying application returns the result to the mental model. The mental model generates the appropriate reply and forwards it to the requesting agent. Remote method invocation is used to invoke the functions in the application at the lowest level. The interface between the application and the mental model uses the ORB in order to support applications that are implemented in different programming languages.

3.2 The Mental Model

The Belief Desire Intention (BDI) module parses the incoming message from the ACL module and reasons with its content. The BDI then invokes appropriate methods. The BDI is a reasoning mechanism that interprets the information, motivational, and deliberative states of the agents. "These mental attributes determine the system's behaviour and are critical for achieving adequate or optimal performance when deliberation is subject to resource bounds" [7].

4 Conclusions and future work

4.1 Discussion

Wooldridge and Jennings [9] identify proactiveness as a key property of an intelligent agent. A proactive agent is able to exhibit goal-directed behaviour by taking the initiative through its ability to observe the internal and external environment. An effective and efficient observation mechanism is required for agents to be proactive. In this respect the A-Design system [3] is a proactive system requiring a constant flow of information, and a failure to note that an object being observed has been deleted could cause system errors. Mobile agents have been widely used in the area of information retrieval over the Internet [2,6]. We exploit this feature in conjunction with our global observation mechanism in order to ensure that the system maintains a consistent state. The JAM agent [5] supports agent mobility with BDI representation. It provides a flexible representation for mobile agents. We use this feature and apply it to agent observations. Ahmad and Mori [1] proposed using mobile agents to push and pull data to cope with ever-changing situations in information services and to reduce access time for the users. Our proposed method provides a more flexible approach that allows the intelligent agent to generate new monitoring rules as required, and introduces the ORB observing mechanism to cater for changes to the objects in the environment.

4.2 Conclusion

The main contribution of this work is a method that supports an intelligent agent's proactiveness with an observing mechanism operating at two levels: global and local (object level). The global observation allows the agent to be aware of changes such as the creation and deletion of objects, thus enhancing the robustness of the system. The local observer, associated with the BDI and the mobile element generator, enables the observer agent to generate and dispatch an autonomous mobile element to observe the state of a particular object. Changes to the monitoring rules in the mobile element can be made when the need arises, without recompiling the code. The architecture of the system enables the intelligent agents to be autonomous and to reflect the dynamic environment. The volume of communication between agents can be reduced, because the mechanism in the mobile element sends only filtered information to the agent rather than the raw data. The ORB observer mechanism also contributes to the reduction of communication traffic, because it is the server side (the observable agent) that pushes the data out to the client side (the observer agent). Thus, the observer agent does not need to constantly monitor the status of the objects in the observable agent. This, then, is an effective method of maintaining system consistency in a dynamic environment where the objects and monitoring rules may change frequently. The agent framework has been partially implemented. A simple example was used to test the ORB observing mechanism and the mobile element in order to evaluate their feasibility. A further implementation of these components is needed in order to carry out a demonstrable case study.

References
1. Ahmad H.
F., Mori K., Push and pull of information in autonomous information service system, Proceedings 2000 International Workshop on Autonomous Decentralized Systems, IEEE Comput. (2000), pp. 12-18.
2. Cabri G., Leonardi L., Zambonelli F., Agents for information retrieval: issues of mobility and coordination, Journal of Systems Architecture, 46(15), (2000), pp. 1419-33.
3. Campbell, M. I., Cagan, J., Kotovsky, K., A-Design: An Agent-Based Approach to Conceptual Design in a Dynamic Environment, Journal of Research in Engineering Design, 11(3), (1999), pp. 172-192.
4. FIPA, Agent Communication Language Specifications 97, http://www.fipa.org, (1997).
5. Huber M. J., JAM: a BDI-theoretic mobile agent architecture, Proceedings of the Third International Conference on Autonomous Agents, ACM, (1999), pp. 236-43.
6. Lieberman H., Selker T., Out of context: computer systems that adapt to, and learn from, context, IBM Systems Journal, 39(3-4), (2000), pp. 617-32.
7. Rao, A. S. and Georgeff, M. P., BDI Agents: From Theory to Practice, Proceedings of the 1st International Conference on Multi-Agent Systems, (1995), pp. 312-319.
8. Shen, W. M., Norrie, D. H., Agent-based Systems for Intelligent Manufacturing: A State-of-the-Art Survey, International Journal of Knowledge and Information Systems, 1(2), (1999), pp. 129-156.
9. Wooldridge, M. and Jennings, N. R., Agent Theories, Architectures, and Languages: a Survey, in Intelligent Agents, ed. by Wooldridge, M., Jennings, N. R., (1995), pp. 1-22.

CHAPTER 3
LEARNING AND ADAPTATION

PARRONDO STRATEGIES FOR ARTIFICIAL TRADERS

MAGNUS BOMAN
Swedish Institute of Computer Science, Box 1263, SE-164 29 Kista, Sweden
E-mail: mab@sics.se

STEFAN J. JOHANSSON
Department of Software Engineering and Computer Science, Blekinge Institute of Technology, Box 520, SE-372 25 Ronneby, Sweden
E-mail: sja@bth.se

DAVID LYBACK
Financial Market Systems, OM AB, SE-105 78 Stockholm, Sweden
E-mail: david.lyback@omgroup.com

On markets with receding prices, artificial noise traders may consider alternatives to buy-and-hold. By simulating variations of the Parrondo strategy, using real data from the Swedish stock market, we produce first indications of a buy-low-sell-random Parrondo variation outperforming buy-and-hold. Subject to our assumptions, buy-low-sell-random also outperforms the traditional value and trend investor strategies. We measure the success of the Parrondo variations not only through their performance compared to other kinds of strategies, but also relative to varying levels of perfect information, received through messages within a multi-agent system of artificial traders.

Keywords: Artificial trader, Parrondo strategy, on-off intermittency, multi-agent system, artificial stock market

1 Introduction

Stock markets to an ever-increasing extent allow for trading controlled by artificial agents, or more generally, program trading. For instance, the Swedish Securities Dealers Association finds that it has no objections to program trading, and already in 1992 declared that only the means of exploiting unlawful quote manipulation resulting from program trading should be controlled [19]. Nasdaq, in a communication to their members, writes [17]:

Recent events show that the way some stocks are traded is changing dramatically, and the change in trading methods may affect price volatility and cause increased trading volume. This price volatility and increased volume present new hazards to investors, regardless of whether trading occurs on-line or otherwise.
In general, stock markets do not apply restrictive policies to program trading. A primary objective of the market place operator is to promote high liquidity in the traded instruments. This can be done by reducing transaction costs: one typical implicit cost is a lack of orders, leading to wide spreads or non-existent quotes. The operators thus have reason to encourage inbound orders. As long as these are authenticated, and the network can keep up with disseminating the market information properly, so that the situation stays in line with the overall aim of upholding a fair and orderly market, the operator should have nothing against a large number of valid orders per second being placed by artificial agents. Hence, we feel motivated to relate certain theoretical results from physics to the artificial traders of the future. We do not, however, assume markets populated solely by artificial traders. If we did, we could move on to claim that the Efficient Market Hypothesis and rational choice theory yield efficient equilibria [14], since the vast empirical evidence against such assumptions is directed almost exclusively at human traders [13]. We instead believe that artificial traders have gradually and almost unnoticeably slipped onto the same markets as human traders, and we will treat them as speculating noise traders (traders with non-rational expectations and potentially zero intelligence) [6]. Artificial stock markets possibly exhibit volatility (i.e., standard deviation) of a different kind than ordinary excess-volatility markets [2], as argued, e.g., in the ban of crawlers from the Internet auction site eBay [20]. In practice, Internet marketplaces supply information on their acceptance of artificial traders and other softbots in a file named robots.txt, and on Internet markets that do allow softbots, their behavior is usually monitored in some way, in order to mitigate the effects of speculation through unconventional methods such as denial-of-service attacks. Program trading has also in general reached a level where flocking behavior worries policy makers [7]. On an artificial stock market, in contrast to an ordinary market [16], active portfolio management should also incorporate the price dynamics, because of the intense trading. This factor has also led to radical transaction fee policies on some artificial trader markets. Since significant transaction fees can render the Parrondo strategies described in sections 2 and 3 below useless, the existence of markets with low or no transaction fees is important to our object of study. We will consider portfolios on markets with receding prices. We will represent artificial traders as agents in a multi-agent system (MAS), in which agents affect each other's behavior through trusted message passing, as explained in section 3. In the MAS setting, variations of Parrondo strategies are then subjected to experiments on a simulation testbed, the results of which are reported in section 4. In the last section, we present directions for future research.

2 The Parrondo Strategy in Games

The flashing ratchet (or Brownian motor) [1] is a molecular motor system consisting of Brownian particles moving in asymmetric potentials, subject to a source of non-equilibrium [18]. In its game-theoretical formulation [9], the flashing ratchet can be described in terms of two games (A and B) in which biased coins are tossed.
• Game A is a single-coin game in which the coin comes up heads (= win) 50 - ε per cent of the time (for some small ε > 0) and tails the rest of the time. (Parrondo himself [18] used ε = 0.005, and the constraints are described, e.g., at seneca.fis.ucm.es/parr/GAMES/discussion.html.)

• Game B involves two coins. The first coin comes up heads 10 - ε per cent of the time, and the second coin 75 - ε per cent of the time. Which coin to flip is decided by looking at the capital of the player: if it is divisible by 3, the first coin is flipped; the second coin is used in all other cases.

Clearly, game A is a losing game, but the same holds for game B. This is because the player is only allowed to flip the second coin if her capital is not a multiple of 3, and the latter situation comes up more often than every third time: the player will start with the unfavorable coin, which will very likely place her at a loss of -1. She will then typically revert to 0, and then back again to -1, and so on. Whenever the unfavorable coin lands tails twice in succession, however, she will end up with capital -3, and then the pattern will repeat, leading to -6, etc. Hence, game B is a losing game, just like game A. The Parrondo strategy for playing games A and B repeatedly is to choose randomly which game to play next. Somewhat counter-intuitively, this discrete representation of a ratchet yields a winning game.
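The two games and their random mixture are easy to simulate. The following sketch follows the description above, with the standard assumption of a payoff of plus or minus one unit per round:

import random

EPS = 0.005

def play_A(capital):
    return capital + (1 if random.random() < 0.5 - EPS else -1)

def play_B(capital):
    # First (unfavorable) coin when capital is divisible by 3.
    p = 0.10 - EPS if capital % 3 == 0 else 0.75 - EPS
    return capital + (1 if random.random() < p else -1)

def run(strategy, rounds=100_000):
    capital = 0
    for _ in range(rounds):
        capital = strategy(capital)
    return capital

random_mix = lambda c: play_A(c) if random.random() < 0.5 else play_B(c)
print(run(play_A), run(play_B), run(random_mix))
# Typically: A loses, B loses, yet the random mixture of the two wins.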
We considered a portfolio often stocks with receding prices, assumed to be unaffected by agent trading. The data used is real daily data from the Swedish stock market, from the one-year period starting March 1, 2000. The stocks are listed in Table 2, and in Figure 1 their development is shown. Values have been normalized to 100 for the start of the period. The strategies initially held $10000 value of each stock. One trade was done per day, in which the strategy decided what to sell and what to reinvest in. Three different levels of hint probabilities were used: 1%, 5%, and 10% chance of receiving a hint. A 1% level means that the strategy will on average receive a hint for one of the ten stocks every tenth day of trading. When choosing randomly what to buy and what to sell, 10 integers were randomized and taken modulo 10 in 154 Stock ABB Allgon Boliden Enea Data Hennes&Mauritz Ericsson OM Scania Securitas Skandia Business area Industrial Telecom Mining IT Clothes Telecom Financial Industrial Security Insurance Value 83.33 24.55 37.19 20.09 60.40 36.36 48.67 77.80 80.35 53.22 Table 2. The ten stocks used in the experiment, and their normalized values on March 1, 2001. order to get (at most 10) stocks that were then traded. For each of the stocks sold, a percentage of the possession p e [0.2,0.8] was sold. The values of all sales were then reinvested according to their relative part in a similar selection process. If the strategy did not get at least one stock to buy and one to sell, it held its possessions until the next day. Each strategy was evaluated towards the same set of stocks and the same set of hints (if used). In order to even out differences due to the randomness of the trading, the simulations were repeated 1000 times. Alignment and docking experiments are encouraged, and specifics are available upon request. 4 Experiment Results As can be seen in Figure 2, most of the strategies over the 252 trading days followed the major trends of the market and none of them managed to main­ tain the initial portfolio value. There was considerable movement, as shown in the blowup of the last days of trading in Figure 3, but also significant differences between outcomes (Table 3). Buy-low-sell-random was the only strategy that outperformed Random. Strategies also differed with respect to volatility. For instance, BLSH was inferior to all strategies for most of the year. However, around day 100 through day 120, it outperformed all other strategies. As expected, BHSL amplified the receding trend. In spite of its poor performance, there are still many reasons for pol­ icy makers and speculators to use buy-and-hold even on supposedly receding markets. One reason is to declare and uphold a clear company investment 155 > 60 r^pg»p| ABB Allgon Boliden Enea H&M Ericsson OMG Scania - Securitas Skandia 1 50 100 150 Time Figure 1. The development of the values of the stocks used in the experiment. Strategy BLSR Random BaH BLSH BHSL BRSH Value 6110.88 5524.60 5383.40 5338.15 5202.71 5140.29 Table 3. Strategy results without hint probabilities (strategies are explained in Table 1). policy, another is that frequent re-investments could be undesirable (e.g., due to transaction fees). Nevertheless, we feel that BLSR produced good enough results to merit further study. For now, we will be content with comparing it to various levels of hint probabilities, however. From those results, shown in Figure 4, we see that BLSR is comparable to the insider strategy with a hint probability of approximately 4%. 
156 11000 10000 9000 I 8000 •> 7000 6000 5000 i '•..» :;i i v ^'T'r in I/ Tr . ki . ! A &r y w ' ^4 A A ^%;:F ^.•-••iC^!»K:..,. i BaH Random BLSH BHSL BRSH BLSR iift&J K - - 1? ? 100 150 Time Figure 2. The development of the values of the experiment portfolios. 5 Conclusions and Directions for Future Research We have shown that the use of certain Parrondo-based strategies may im­ prove the performance of artificial traders. Our model is simplistic, in the following respects. The messages sent must be allowed to have richer content, and may be indicators or signals, rather than simple instructions. Instead of interpreting received messages as normative advice, trust could somehow be represented. For instance, a probability distribution may be associated with messages, and trust assignments can then be represented as second- order probabilities. Market norms should be modeled and adhered to by the traders3. Message content can then depend on market dynamics. Artificial traders have two ways of communicating such dynamics. Firstly, they may observe and recognize other traders and try to model them with the intent of communication and possibly co-operation5. Secondly, they may monitor prices, as in the Trading Agent Competition4 (see tac.eecs.umich.edu/) or artificial stock market approaches11. Naturally, each trader itself also observes the market dynamics. We have placed no reasoning facilities in the trader at this stage, and so the trader cannot act on sense data. Another simplifica- 157 6500 6000 5500 4500 BaH Random BLSH BHSL BRSH BLSR 244 245 246 248 Time 249 250 Figure 3. Magnification of the last days of trading. tion is that our models should incorporate transient phenomena, including not only crashes and bubbles, but also transient diversity, i.e. we must find the homogeneity and heterogeneity drivers in our MAS15. A related point in need of further investigation is learning in artificial traders12. For purposes of prediction, our results are almost useless, since we cannot in general design in advance a portfolio of stocks, the prices of which are all re­ ceding. In rare circumstances, such as during the period of almost universally receding prices of IT stocks in the autumn of 2000, ex ante portfolios could relatively easily be assembled, and then Parrondo variations would indeed be an interesting alternative to buy-and-hold. For our experiment, the real data was chosen ex post from a large sample space with the criterion that each stock should have a saw-tooth receding price curve. While the above shortcomings together render our results useless for prac­ tical purposes, they should be seen as directions for future research. We in­ tend to pursue the important question of strategy programming for artificial traders, as we feel that such programming will be of increasing importance in the future. By replacing our unrealistic assumptions one by one, we hope to achieve our ultimate goal of reasonably efficient strategies on real-time markets with non-linear dynamics. 158 -T 1 BaH Insider 1% Insider 5% Insider 10% Figure 4. The development of the values with three different levels of hint probabilities. Acknowledgements Magnus Boman was in part financed by a NUTEK (VINNOVA) grant within the PROMODIS (Programming modular and mobile distributed systems) pro­ gramme. Stefan J. Johansson was financed by the KK-foundation. David Lyback was supported by a research assignment in the OM corporation. 
The authors wish to thank Fredrik Liljeros, as well as their respective colleagues, for comments on drafts.

References

1. A. Ajdari and J. Prost, Mouvement Induit par un Potentiel Periodique de Basse Symetrie: Dielectrophorese Pulsee, C. R. Acad. Sci. Paris 315, 1635 (1992).
2. W. B. Arthur, J. Holland, B. LeBaron, R. Palmer, and P. Tayler, Asset Pricing under Endogenous Expectations in an Artificial Stock Market, in The Economy as an Evolving Complex System II, eds. W. B. Arthur, S. Durlauf, and D. Lane, pp. 15-44, Addison-Wesley, Reading, MA, 1997.
3. M. Boman, Norms in Artificial Decision Making, Artificial Intelligence and Law 7, 17 (1999).
4. M. Boman, Trading Agent Competition, AgentLink News 6, 15 (2001).
5. M. Boman, L. Brouwers, K. Hansson, C-G. Jansson, J. Kummeneje, and H. Verhagen, Artificial Agent Action in Markets, Electronic Commerce Research 1, 159 (2001).
6. J. B. De Long, A. Shleifer, L. H. Summers, and R. J. Waldmann, The Survival of Noise Traders in Financial Markets, J. of Business 64, 1 (1991).
7. V. M. Eguiluz and M. G. Zimmermann, Transmission of Information and Herd Behaviour: An Application to Financial Markets, Phys. Rev. Lett. 85, 5659 (2000).
8. J. M. Epstein and R. Axtell, Growing Artificial Societies (Brookings, Washington D.C., 1996).
9. G. P. Harmer and D. Abbott, Losing Strategies can Win by Parrondo's Paradox, Nature 402(6764), 864 (1999).
10. P. Jefferies, M. Hart, P. M. Hui, and N. F. Johnson, From Market Games to Real-World Markets, cond-mat/0008387 (2000).
11. B. LeBaron, Agent Based Computational Finance: Suggested Readings and Early Research, J. of Economic Dynamics and Control 24, 679 (2000).
12. M. Lettau, Explaining the Facts with Adaptive Agents: The Case of Mutual Fund Flows, J. of Economic Dynamics and Control 21, 1117 (1997).
13. T. Lux, Herd Behaviour, Bubbles and Crashes, The Economic Journal 105, 881 (1995).
14. T. Lux and M. Ausloos, Market Fluctuations I: Scaling, Multi-Scaling and Their Possible Origins, in Theories of Disasters: Scaling Laws Governing Weather, Body and Stock Market Dynamics, eds. A. Bunde and H-J. Schellnhuber, Springer-Verlag, Berlin, in press.
15. D. Lyback, Transient Diversity in Multi-Agent Systems, DSV Report 99-X-097, Royal Institute of Technology, Stockholm, 1999.
16. S. Maslov and Y-C. Zhang, Optimal Investment Strategy for Risky Assets, Int. J. of Theoretical and Applied Finance 1(3), 377 (1998).
17. NASD Regulation Issues Guidance Regarding Stock Volatility, NASD Notice to Members 99-11, 1999.
18. J. M. R. Parrondo, J. M. Blanco, F. J. Cao, and R. Brito, Efficiency of Brownian Motors, Europhys. Lett. 43(3), 248 (1998).
19. Swedish Securities Dealers Association, Recommendations on Program Trading and Related Topics, May 19, 1992 (in Swedish).
20. T. Wolverton, Judge Bars eBay Crawler, CNET News.com, May 25, 2000.

BDI MULTIAGENT LEARNING BASED ON FIRST-ORDER INDUCTION OF LOGICAL DECISION TREES

ALEJANDRO GUERRA HERNANDEZ, AMAL EL-FALLAH SEGHROUCHNI AND HENRY SOLDANO
Universite Paris 13, Laboratoire d'Informatique de Paris Nord, U.P.R.E.S.-A. CNRS 7030, Institut Galilee, Avenue Jean-Baptiste Clement, Villetaneuse, 93430, France.
Email: {agh,elfallah,soldano}@lipn.univ-paris13.fr

This paper is about learning in the context of Multiagent Systems (MAS) composed of intentional agents, i.e., agents that behave based on their beliefs, desires, and intentions (BDI). We assume that MAS learning differs in subtle ways from the general problem of learning, as defined traditionally in Machine Learning (ML).
We explain how BDI agents can deal with these differences and introduce the application of first-order induction of logical decision trees to learning in the BDI framework. We exemplify our approach by learning the conditions under which plans can be executed by an agent.

Key words: MAS learning, BDI systems, Logical Decision Trees.

1 Introduction

We are interested in learning in the context of Multiagent Systems (MAS) composed of intentional agents, i.e., BDI agents. In this paper, we deal with the issue of adding learning competences to a BDI architecture, which leads us to consider learning methods applied to systems whose behavior is explained in terms of beliefs, desires, intentions (the BDI propositional attitudes), and partial hierarchical plans, as proposed in theories of practical rationality [1], and that can be characterized as autonomous, reactive, pro-active, and social [15].

Usually, MAS learning [10,14] is characterized as the intersection of Machine Learning (ML) and Distributed Artificial Intelligence (DAI). The motivations for this are reciprocal: i) the MAS community is interested in learning because it seems to be central to different properties defining agents; and ii) an extended view of ML dealing with agency and MAS can improve the understanding of general principles underlying learning in natural and artificial systems.

A learning agent [9] can be conceptually divided into four components: i) a learning element responsible for making improvements by executing a learning process; ii) a performance element responsible for taking actions, i.e., the agent without learning competences; iii) a critic responsible for providing feedback; and iv) a problem generator responsible for suggesting actions that will lead to informative experiences. The design of the learning element, and consequently the choice of a particular learning method, is then affected by five major issues: i) which components of the performance element are to be improved? ii) what representation is used for these components? iii) what feedback is available? iv) what prior information is available? v) is it a centralized or decentralized learning case?

In this paper we expose the way BDI agency can be used to conceive learning agents able to operate in MAS, using induction of logical decision trees. To do so, the paper is organized as follows: Section 2 briefly recalls BDI architectures, introducing an example used in the rest of the paper. Section 3 presents our approach to MAS learning; it considers the design of a BDI learning agent, the learning method used (first-order induction of logical decision trees), and examples. Section 5 focuses on discussion, related work, and future work.

2 BDI Agency

BDI theories of agency are well known. Different aspects of intentionality and practical reasoning have been studied formally using extensions of modal and temporal logics [5,11,15]. The goal of this section is just to recall the way BDI architectures work, to complement the discussion on learning.

The examples in this paper come from a very simple scenario proposed originally by Charniak and McDermott [2] (see Figure 1). This scenario is composed of a robot with two hands, situated in an environment where there are: i) a board; ii) a sander; iii) a paint sprayer; iv) a vise. Different goals can be proposed to the robot, for example to sand the board, or even to get itself painted, which introduces the case of incompatible goals, since once painted, the robot stops being operational for a while.
The robot has different options to achieve its goals: it can use both of its hands to sand the board, for example, or else use the vise and one hand. Eventually, another robot will be introduced into the environment to deal with examples of different interactions.

In general, a BDI architecture contains four key data structures: beliefs, desires or goals, intentions, and a plan library. Beliefs represent information about the world. Each belief is represented symbolically as a ground literal of first-order logic. Two activities of the agent update its beliefs: i) the perception of the environment; and ii) the execution of intentions. The scenario shown in Fig. 1 can be represented by the following beliefs of robot r1: somewhere(sander), somewhere(board), somewhere(sprayer), free-hand(left), free-hand(right), operational(r1). Desires, or goals, correspond to the tasks allocated to the agent and are usually considered logically consistent. Two kinds of desires are considered: i) to achieve a desire expressed by a belief formula, i.e., !sanded(board); and ii) to test a desire expressed by a belief formula, i.e., ?sanded(board).

Figure 1. The robot scenario environment (board, vise, robots r1 and r2).

Figure 4: Plots of Population Averaged Fitness in each generation. (a) For α = 0.5, β = 0.25, γ = 0.25. (b) For α = 0.1, β = 0.05, γ = 0.05. (c) For α = 0.04, β = 0.02, γ = 0.02. (d) Means of 10 runs of (a), (b) and (c).

In this paper, we provide two experiments on our model. In each one, a population of 100 trial strategies evolves for a certain number of generations. In the first experiment, in each generation, each individual plays an iterated game against the other members of the same population (round-robin). The number of iterations in each game is decided randomly. We ran the test with different probability parameters for the genetic operations. Ten runs for each of three parameter sets, (α = 0.5, β = 0.25, γ = 0.25), (α = 0.1, β = 0.05, γ = 0.05) and (α = 0.04, β = 0.02, γ = 0.02), were made, and three of them are plotted in Figures 4(a), 4(b) and 4(c), respectively. The mean values of the 10 runs for each of the three parameter sets are plotted in Figure 4(d). In the test with parameter set (α = 0.5, β = 0.25, γ = 0.25), 3 out of 10 runs did not reach the optimal score, for example run 9 in Figure 4(a). In the other two tests, all runs reached a nearly optimal score (the optimal score is 3.5) after 400 generations. In all three tests, we can see the tremendous change of population-averaged fitness in the earlier generations. As there is only one state in each FSA in the initial generation, agents behave blindly. An individual has no incentives other than the output value of the initial state. If the output of the initial state is "G", the agent gives up the way whenever it meets a dilemma, regardless of what its opponent's action would be. On the other hand, if the output of the initial state is "T", it advances without considering whether that would risk its life, and so on. As evolution proceeds, population members become better judges of each other. Genetic mutations create more states in the FSAs, so an individual can alter its transition paths to get a higher score. From the mean plots in Figure 4(d), we find that the less individuals are genetically mutated, the longer it takes for evolution to reach saturation, but the higher the saturated (nearly optimal) score they can reach.
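The excerpt does not spell out which genetic operations the probabilities α, β, and γ govern. The Python sketch below therefore assumes three FSA mutation operators (adding a state, flipping a state's output between "G" and "T", and rewiring a transition), together with the round-robin evaluation; the FSA encoding and the play_iterated_game callable are hypothetical.

    import random

    def mutate(fsa, alpha, beta, gamma):
        # fsa: {"outputs": ["G" or "T", ...], "next": [[s_G, s_T], ...]}
        if random.random() < alpha:        # assumed: add a new state
            fsa["outputs"].append(random.choice("GT"))
            fsa["next"].append([0, 0])
        if random.random() < beta:         # assumed: flip one state's output
            i = random.randrange(len(fsa["outputs"]))
            fsa["outputs"][i] = random.choice("GT")
        if random.random() < gamma:        # assumed: rewire one transition
            i = random.randrange(len(fsa["next"]))
            j = random.randrange(2)        # opponent's last action: G or T
            fsa["next"][i][j] = random.randrange(len(fsa["outputs"]))
        return fsa

    def round_robin_fitness(population, play_iterated_game):
        # Average payoff of each strategy against every other member.
        scores = []
        for i, a in enumerate(population):
            total = sum(play_iterated_game(a, b)
                        for j, b in enumerate(population) if i != j)
            scores.append(total / (len(population) - 1))
        return scores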
Figure 5: Results for the second experiment. (a) 3 out of 10 runs. (b) Mean of 10 runs.

On the other hand, in a real-world problem such as the multi-agent burden carriage problem, agents cannot predict when and where they will meet which opponent. Therefore, in the second experiment, we let individuals play iterated games against only part of the members of the population. Each agent plays a random number of iterations with randomly selected opponents. We fixed the probability parameter set at (α = 0.04, β = 0.02, γ = 0.02). In this setting, individuals cannot reach their optimal score within 500 generations. Ten runs were made; three of them are plotted in Figure 5(a), and the mean of the 10 runs is plotted in Figure 5(b). We found that agents can evolve to reach their optimum after 1000 generations in almost all runs. In all tests, agents behave blindly in the earliest generations. As evolution proceeds, they improve at playing the game by taking more and more complex actions, and cooperative interactions emerge. Here, cooperative interaction in CD means that agents take their actions alternately to avoid damage or loss. In later generations, agents maintain their cooperative interactions while generating optimal scores and keeping their community peaceful.

5 Conclusions

In this paper, we proposed a game model for Compromise Dilemma problems and observed the evolutionary behaviors of simulated agents within this model. According to the experimental results, the evolutionary approach enables agents to evolve their own strategies for dealing with an uncertain environment in an intelligent manner. The reason is that autonomous agents are able to share the maximum average score by avoiding two kinds of extreme cases: damage by collision and undesired loss of space-time (resource). In other words, agents can achieve an optimal strategy that enables them to utilize the resources of the environment as much as possible. The Compromise Dilemma is naturally a competitive problem in which all individuals try to maximize their own benefit. However, since their opponents also concurrently evolve in the same way to upgrade their fitness, cooperative interactions between agents are established. Accordingly, population fitness escalates as the evolution goes on. Following the experimental results, we speculate that rational agents form a cooperative community under suitable circumstances. Our future work is to use this model to simulate real-world problems.

Acknowledgments

This work is partially supported by the Hori Information Science Promotion Foundation, Japan.

References

1. Bengt Carlsson and Stefan Johansson: "An Iterated Hawk-and-Dove Game", Proceedings of the Third Australian Workshop on Distributed AI, Lecture Notes in Artificial Intelligence 1441, 1997.
2. R. Boyd and J. P. Lorberbaum: "No pure strategy is evolutionarily stable in the repeated prisoner's dilemma game", Nature 327, pp. 58-59, 1987.
3. D. B. Fogel: "Evolving Behaviors in the Iterated Prisoner's Dilemma", Evolutionary Computation 1(1), pp. 77-97, 1993.
4. Akira Ito and Hiroyuki Yano: "The Emergence of Cooperation in a Society of Autonomous Agents: The Prisoner's Dilemma Game Under the Disclosure of Contract Histories", ICMAS'95.
5. K. Lindgren: "Evolutionary Phenomena in Simple Dynamics", Artificial Life II, pp. 295-311, 1991.
6.
Peter J. Angeline: "An Alternative Interpretation of the Iterated Prisoner's Dilemma and the Evolution of Non-Mutual Cooperation", Artificial Life IV: Proceedings of the Fourth International Workshop on the Synthesis and Simulation of Living Systems, pp. 353-358.
7. Reiji Suzuki and Takaya Arita: "Meta-Pavlov: Strategies that Self-Adjust Evolution and Learning Dynamically in the Prisoner's Dilemma Game", Game Informatics, 1999.
8. R. M. Axelrod: "The Evolution of Cooperation", Basic Books, New York, 1984.
9. "Multiagent Systems: A Modern Approach to Distributed Artificial Intelligence", The MIT Press, 1999.
10. Paul G. Harrald and David B. Fogel: "Evolving continuous behaviors in the Iterated Prisoner's Dilemma", Biosystems, 1996.
11. X. Yao and P. Darwen: "How Important Is Your Reputation in a Multi-Agent Environment", Proceedings of the 1999 IEEE International Conference on Systems, Man, and Cybernetics (SMC'99).

A STRATEGY FOR CREATING INITIAL DATA ON ACTIVE LEARNING OF MULTI-LAYER PERCEPTRON

KAZUNORI IWATA AND NAOHIRO ISHII
Dept. of Intelligence and Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya, 466-8555, Japan
E-mail: {kiwata,ishii}@egg.ics.nitech.ac.jp

Keywords: active learning, multi-layer perceptron, network inversion, pseudo-random number, low-discrepancy sequence

Many active learning methods for the training of a partially trained Multi-Layer Perceptron (MLP) have been proposed. We note that the performance of any active learning method depends on the initial training data. The initial training data plays an important role for active learning performance, because any active learning algorithm generates additional training data, useful for improving the classification accuracy, based on the initial training data. Most conventional methods have generated initial data at random using a pseudo-random number generator. In practice, however, we cannot prepare enough data, owing to the limits of time and cost. Therefore, the bias of the initial training data becomes critical, especially when the dimension of the input space is large. In this paper, we propose a strategy that uses a low-discrepancy sequence to create more uniform initial data than pseudo-random numbers. For the MLP classification problem, we analyze the experimental performance of the network inversion algorithm using a pseudo-random number generator and a low-discrepancy sequence for the initial training data. The experimental results show that low-discrepancy sequences give a good strategy for creating initial training data. Finally, we also discuss some advantages and disadvantages of low-discrepancy sequences as initial training data.

1 Introduction

Learning by the use of queries through a training data generation mechanism is well known as active learning [1,2]. Active learning includes interaction with an oracle, which always responds with a correct answer when queried with an example. In other words, the oracle gives a correct classification for a given data point. Examples of oracles include a human expert, costly experimentation, computer simulators, and so on. The classifier adds the point properly classified by the oracle as training data. Such learning with additional training data can significantly increase the resulting classification accuracy with little computation [1,2], and has recently attracted considerable attention. In this paper, we consider only the case of active learning on the Multi-Layer Perceptron (MLP) [3]. Many active learning methods for the training of a partially trained MLP have been proposed [1,2,4,5,6,7].
We note that the performance of any active learning method depends on the initial training data. The initial training data plays an important role for active learning performance, because any active learning algorithm generates additional training data, useful for improving the classification accuracy, based on the initial training data. In practice, it is desirable to prepare varied initial data, that is, uniformly distributed data for a given space. There are several reasons why uniformly distributed data is required. One is that each class should have at least a few initial data points, because if no training data initially exist within a class region, most active learning algorithms cannot refine that part of the classification boundary. However, in many cases we cannot recognize the class regions in advance. A good strategy is to prepare data as uniform as possible over the given space, avoiding repetition of the same data. Another reason is that the whole boundary must be detectable by the active learning algorithm: a bias in the initial data may cause a classification bias over the given space.

Most conventional methods have generated initial data at random using a pseudo-random number generator. By the law of large numbers and the central limit theorem, pseudo-random numbers distribute uniformly over a given space as the number of data points approaches infinity. In practice, however, we cannot prepare enough data, owing to the limits of time and cost. Therefore, the bias of the initial training data becomes critical, especially when the dimension of the input space is large. In this paper, we propose a strategy that uses low-discrepancy sequences to create more uniform initial data than pseudo-random numbers. For the MLP classification problem, we analyze the experimental performance of the network inversion algorithm using a pseudo-random number generator and a low-discrepancy sequence for the initial training data. The network inversion algorithm is an effective active learning method for creating additional training data, in terms of independence from the input distribution, computational cost, and complexity of implementation.

The organization of this paper is as follows. In Section 2, we briefly explain the back-propagation and network inversion algorithms. Low-discrepancy sequences are discussed in Section 3. In Section 4, for the two-class classification problem, we compare the experimental performance obtained with a pseudo-random number generator and with a low-discrepancy sequence for the initial training data, and discuss some advantages and disadvantages of low-discrepancy sequences. Finally, we summarize and give some conclusions in Section 5.

2 Dynamics of the Multi-Layer Perceptron

It is helpful to review the dynamics of the MLP before moving to the main task. We start with the forward and learning (backward) phases of the MLP, and then proceed to the Network Inversion (NI) algorithm.

2.1 Forward and Learning Dynamics

Let the number of layers be L, and let the lth layer have N_l neurons. The 1st layer, the Lth layer, and the other (2nd to (L-1)th) layers are called the input layer, the output layer, and the middle layers, respectively. The output at each layer is expressed by the following equations:

$$u_i(l) = \sum_{j=1}^{N_{l-1}} w_{ij}(l)\, a_j(l-1) + \theta_i(l) \quad (1)$$

$$a_i(l) = f(u_i(l)) \quad (2)$$

where u_i(l) and a_i(l) denote the net value and the activation value of the ith neuron at the lth layer, respectively, θ_i(l) is the bias of the ith neuron at the lth layer, and w_{ij}(l) denotes the weight connecting the jth neuron at the (l-1)th layer to the ith neuron at the lth layer. f(·) is an activation function (e.g., the sigmoid function).
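A minimal NumPy transcription of the forward dynamics (1)-(2) may help; the sigmoid activation and all variable names are illustrative assumptions, not the authors' code.

    import numpy as np

    def sigmoid(u):
        return 1.0 / (1.0 + np.exp(-u))

    def forward(x, weights, biases):
        # Eqs. (1)-(2): u(l) = W(l) a(l-1) + theta(l), a(l) = f(u(l)).
        # Returns the activations of every layer, from a(0) = x to a(L).
        acts = [np.asarray(x, dtype=float)]
        for W, theta in zip(weights, biases):  # W(l) has shape (N_l, N_{l-1})
            acts.append(sigmoid(W @ acts[-1] + theta))
        return acts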
The back-propagation method is the most popular method for training the MLP. Using an iterative gradient descent algorithm, the mean squared error E between the teaching vector t = (t_1, ..., t_{N_L}) and the actual output vector a(L) = (a_1(L), ..., a_{N_L}(L)) is minimized according to the rules

$$w_{ij}(l) \leftarrow w_{ij}(l) - \eta \frac{\partial E}{\partial w_{ij}(l)} \quad (3)$$

$$\theta_i(l) \leftarrow \theta_i(l) - \eta \frac{\partial E}{\partial \theta_i(l)} \quad (4)$$

where η is the learning rate, and the mean squared error E and the error signal δ_i(l) are calculated recursively:

$$E = \frac{1}{2} \sum_{i=1}^{N_L} \bigl(t_i - a_i(L)\bigr)^2 \quad (5)$$

$$\delta_i(l) = \frac{\partial E}{\partial a_i(l)} = \begin{cases} -(t_i - a_i(L)) & (l = L) \\ \sum_{j=1}^{N_{l+1}} \delta_j(l+1) \dfrac{\partial a_j(l+1)}{\partial a_i(l)} & (\text{otherwise}) \end{cases} \quad (6)$$

2.2 Network Inversion Algorithm

The NI algorithm [7,8] is designed to move each existing data point to one specific boundary point. The idea is similar to that of the back-propagation algorithm. In the NI algorithm, using a gradient descent algorithm, the error signal ε is propagated from the output layer to the input layer in order to update the input vector, so that the desired output vector τ = (τ_1, ..., τ_{N_L}) will be produced by the network:

$$E = \frac{1}{2} \sum_{i=1}^{N_L} \bigl(\tau_i - a_i(L)\bigr)^2 \quad (7)$$

$$\varepsilon_i(l) = \frac{\partial E}{\partial a_i(l)} = \begin{cases} -(\tau_i - a_i(L)) & (l = L) \\ \sum_{j=1}^{N_{l+1}} \varepsilon_j(l+1) \dfrac{\partial a_j(l+1)}{\partial a_i(l)} & (\text{otherwise}) \end{cases} \quad (8)$$

The NI algorithm works concurrently with the back-propagation algorithm.

2.3 Additional Data by Network Inversion

In order to represent the classification problem concisely without loss of generality, we consider an MLP whose outputs are designed to produce a two-class classification value for an input vector. That is, each output neuron a_i(L) (1 ≤ i ≤ N_L) is trained to output either 1 for one class or 0 for the other. The input vector which produces the desired output vector τ, typically τ = 0.5, can be considered to lie on the classification boundary of the MLP. In other words, from an ambiguity point of view, the input vector corresponding to the output vector τ lies in the region of maximum classification ambiguity (see Figure 1). We employ the NI algorithm to invert initial training data toward the region of maximum classification ambiguity, and use the inverted data as additional training data, so that the MLP effectively improves the boundary. Such additional training data can significantly increase the resulting classification accuracy. Note that any additional training data is created based on the initial training data.

Figure 1. A concept of the region of maximum classification ambiguity.
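The inversion step of Eqs. (7)-(8) can be sketched as gradient descent on the input itself: the data point is moved toward inputs whose network output equals τ. This reuses forward() and sigmoid() from the previous sketch; the step size, iteration count, and the clipping to the unit hypercube are our assumptions.

    import numpy as np

    def invert_input(x, tau, weights, biases, mu=0.5, steps=100):
        # Propagate the error signal down to the input layer (Eq. 8) and
        # move x toward the region of maximum classification ambiguity.
        x = np.asarray(x, dtype=float).copy()
        for _ in range(steps):
            acts = forward(x, weights, biases)
            dE_da = -(tau - acts[-1])            # Eq. (8), case l = L
            for l in reversed(range(len(weights))):
                a = acts[l + 1]
                d = dE_da * a * (1.0 - a)        # chain through the sigmoid
                dE_da = weights[l].T @ d         # Eq. (8), recursive case
            x -= mu * dE_da                      # update the input vector
            np.clip(x, 0.0, 1.0, out=x)          # stay inside [0,1]^K
        return x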
3 Low-Discrepancy Sequences

In general, any active learning algorithm can generate effective additional training data to improve the classification accuracy. The NI algorithm, as one such active learning algorithm, also creates additional data based on the initial training data, as discussed above. This means that the initial training data plays an important role in its performance. Suppose that we can generate any point inside a given input space, and are allowed to obtain the proper classification (teaching signal) by means of interaction with the oracle, but that the class regions are not known to us in advance. A good strategy is to generate initial data as uniformly as possible inside the input space, without repetition of the same data, so that at least a few initial data points exist inside each class and the whole boundary can be detected. Most traditional approaches use a pseudo-random number generator to generate data uniformly.

The low-discrepancy sequence (LDS) [9] is well known in the field of the quasi-Monte Carlo method. One of the notable features of an LDS is, as the name says, its low discrepancy, where discrepancy is a measure of the uniformity of the distribution of finite point sets. In short, LDSs create more uniformly distributed data than pseudo-random numbers for a given space. In addition, LDSs never generate the same data point twice. For many cases of multidimensional integration, the quasi-Monte Carlo method using an LDS is more effective than the conventional Monte Carlo method using pseudo-random numbers. We employ the LDS as a strategy for creating initial training data for multidimensional classification. In this section, we briefly review the basics of LDSs and then explain the Faure sequence, which is one kind of LDS (for further details of LDSs, see [9,10]).

3.1 Discrepancy

To carry the discussion of the properties of LDSs further, let us define the term discrepancy in detail. Let x(n) = (x_1(n), ..., x_K(n)) be the nth training data point of K dimensions, and let E(x) be the subset [0, x_1) × ... × [0, x_K) of the K-dimensional hypercube [0,1]^K. The L_2-discrepancy T_K(n) of the training data set P = {x(n) | n = 1, ..., N}, measured in the L_2 norm on the space of Lebesgue-integrable functions, is defined as

$$T_K(n) \stackrel{\mathrm{def}}{=} \left\{ \int_{[0,1]^K} \left( \frac{\#(E(x \mid N))}{N} - \prod_{k=1}^{K} x_k \right)^{2} dx \right\}^{1/2} \quad (9)$$

where #(E(x | N)) denotes the number of data points inside E(x). In the same way, the L_max-discrepancy D_K(n), measured in the maximum norm, is defined by

$$D_K(n) \stackrel{\mathrm{def}}{=} \sup_{x \in [0,1]^K} \left| \frac{\#(E(x \mid N))}{N} - \prod_{k=1}^{K} x_k \right| \quad (10)$$

Equations (9) and (10) quantify the uniformity of the distribution of N data points under the respective norms. Only the L_2-discrepancy is known to admit a closed-form evaluation:

$$T_K(n)^2 = \frac{1}{N^2} \sum_{n=1}^{N} \sum_{m=1}^{N} \prod_{k=1}^{K} \bigl(1 - \max\{x_k(n), x_k(m)\}\bigr) - \frac{2^{1-K}}{N} \sum_{n=1}^{N} \prod_{k=1}^{K} \bigl(1 - x_k(n)^2\bigr) + 3^{-K} \quad (11)$$

For N ≥ 1, the L_2-discrepancy and the L_max-discrepancy satisfy

$$T_K(n) \le D_K(n) \quad (12)$$

With a large number of training data points distributed as uniformly as possible, we may assume asymptotically that

$$D_K(n) \to 0 \quad (N \to \infty) \quad (13)$$

Equations (12) and (13) lead to

$$T_K(n) \to 0 \quad (N \to \infty) \quad (14)$$

An LDS keeps the following bound on the discrepancy for N > 1:

$$D_K(n) \le c_K \frac{(\log N)^K}{N} \quad (15)$$

where c_K is a constant that depends on the dimension K. Multidimensional LDSs include the Halton, Sobol', Faure, and other sequences [9,10]. We concentrate on the Faure sequence in the next section and leave the details of the other sequences to the references [9,10].

3.2 Faure Sequence

The Faure sequence is generated from a prime number p ≥ K, used as the radical base for the K-dimensional problem. The first step in the calculation of the nth data point is to compute the first element x_1(n) as

$$x_1(n) = \sum_{m=0}^{\infty} a_{1,m}(n)\, p^{-m-1} \quad (16)$$

where a_{1,m}(n) are the digits of the base-p expansion

$$n = \sum_{m=0}^{\infty} a_{1,m}(n)\, p^{m} \quad (17)$$

In the next step, the other elements x_k(n) (2 ≤ k ≤ K) are computed as

$$x_k(n) = \sum_{m=0}^{\infty} a_{k,m}(n)\, p^{-m-1} \quad (18)$$

where the digits a_{k,m}(n) satisfy

$$\begin{pmatrix} a_{k,0}(n) \\ a_{k,1}(n) \\ a_{k,2}(n) \\ \vdots \end{pmatrix} = \begin{pmatrix} \binom{0}{0} & \binom{1}{0} & \binom{2}{0} & \cdots \\ & \binom{1}{1} & \binom{2}{1} & \cdots \\ & & \binom{2}{2} & \cdots \\ & & & \ddots \end{pmatrix}^{k-1} \begin{pmatrix} a_{1,0}(n) \\ a_{1,1}(n) \\ a_{1,2}(n) \\ \vdots \end{pmatrix} \pmod{p} \quad (19)$$

where the matrix entries are the binomial coefficients ("j choose i"). We use the Faure sequence as the typical LDS in the experiment in the next section.

4 Experimental Results and Discussion

To keep matters simple without loss of generality, we take up the two-class hyper-sphere classification problem with a K-dimensional input vector, where the classification target is 1 inside the hyper-sphere and 0 otherwise within the hypercube [0,1]^K, as shown in Figure 2. That is, when queried with the nth data point, the correct classification is

$$h(n) = \begin{cases} 1 & \text{if } \sum_{k=1}^{K} (x_k(n) - 0.5)^2 \le r^2 \\ 0 & \text{otherwise} \end{cases}$$

where r denotes the radius of the hyper-sphere.
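For illustration, the hyper-sphere oracle can be sketched as follows; the radius r is not recoverable from this excerpt, so the default value here is a hypothetical parameter.

    import numpy as np

    def oracle(x, r=0.3):
        # h(n) = 1 if sum_k (x_k - 0.5)^2 <= r^2, else 0; r is assumed.
        x = np.asarray(x, dtype=float)
        return int(np.sum((x - 0.5) ** 2) <= r ** 2)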
For values of δ close to 1, finite-horizon alternating-offers bargaining games give a great advantage to the player making the last offer, since the game becomes similar to an ultimatum game.

3 Evolutionary Computation and Economic Agents

The dissertation by Oliver [5] was the first work to show that evolutionary algorithms can be used in the design of strategies for multi-issue negotiations. Oliver's motivation originated from the observation that negotiation problems are rather inefficiently resolved by humans, who often settle on suboptimal agreements. In his framework, a strategy consists of a vector of numbers that represent offer and threshold values. Offer values indicate the portion that the proposer is willing to share with an opponent; threshold values correspond to the minimum value a received offer should have in order to be accepted. The work by van Bragt, Gerding and La Poutre [6] has a different spirit from the previous one: an interesting and detailed game-theoretic analysis of the evolved trading strategies is performed. Using Rubinstein's alternating-offers bargaining game with a finite horizon as a base model, they perform several numerical experiments with traders that evolve strategies in the same format devised by Oliver [5]. The results show that despite the bounded rationality of the traders (who are only aware of their own payoffs and discount factor), the evolved behaviors are aligned with what is predicted by game theory.

4 Sequential Multi-issue Bargaining Model

In the classic multi-issue alternating-offers model, the issues are negotiated simultaneously, as a bundle. If there are N issues in the setting, trader A_k makes an offer to its opponent by passing a vector O_k = (o_1, o_2, o_3, ..., o_N) with one offer relative to each of the issues. Usually in multi-issue situations, a trader describes its preferences over the several issues through a vector of weights, W_k = (w_1, w_2, w_3, ..., w_N), indicating the relative importance of the issues. If an offer O_k is accepted, the offering trader receives a utility of (I - O_k) · W_k, where I is the unit vector, assuming that 0 ≤ o_i ≤ 1 for i ∈ {1, 2, ..., N}. Accordingly, the agent receiving the offer gains a utility of O_k · W_q, where W_q denotes its own weight vector.

However, often the issues are inter-related and complementary: the utility of an issue is a function of the values obtained from other issues. One could devise situations where the weights attributed to the issues change according to the value obtained on other issues, or vary as a function of some other external parameter, such as time. If there are single issues or subgroups of issues within the whole set that are substitutable, it may be the case that the utility obtained with one issue or subgroup of issues affects the weight assigned to other issues. Building on an example presented by Boutilier et al. [7], if a producer is negotiating with a transportation company the most suitable way to carry its goods to the consumers, the agenda of issues may contain options such as trucks, ships, and airplanes. However, if the producer succeeds in obtaining a reasonable deal on the trucks, the utility of the ships and airplanes is diminished. Negotiating all the issues at once in such a scenario demands that the agent consider all the inter-dependencies between the issues before computing the utility of a contract or making an offer to its opponent. The calculation of all the possible combinations and trade-offs can be computationally expensive, especially if the number of related issues is large. By negotiating the issues sequentially, this cost can be naturally avoided.

4.1 Model Description and Experiments

Experiments were performed with a model inspired by van Bragt et al.'s framework [6]. (The system was implemented using the Swarm Simulation System, developed by the Swarm Development Group; detailed information about the software can be found at www.swarm.org. The source code used in the experiments described in this paper is available upon request.) Two bargaining agents, A_1 and A_2, each equipped with its own evolutionary algorithm, optimize the numerical parameters of the negotiation strategies.
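The bundle-offer utilities of the simultaneous model above reduce to two dot products. A minimal sketch, with illustrative names:

    import numpy as np

    def proposer_utility(offer, weights):
        # U = (I - O_k) . W_k, with 0 <= o_i <= 1 for every issue.
        offer, weights = np.asarray(offer), np.asarray(weights)
        return float((1.0 - offer) @ weights)

    def receiver_utility(offer, weights_q):
        # U = O_k . W_q, using the receiver's own weight vector W_q.
        return float(np.asarray(offer) @ np.asarray(weights_q))

For example, proposer_utility([0.3, 0.6], [0.7, 0.3]) gives the proposer's utility when it concedes 0.3 and 0.6 of two issues weighted 0.7 and 0.3.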
The calculation of all the possible combinations and trade-offs can be computa­ tionally expensive, especially if the the number of related issues is large. By negotiating the issues sequentially, this cost could be naturally avoided. 4-1 Model Description and Experiments Experiments were performed" with a model inspired by van Bragt et al.'s framework.6 Two bargaining agents, Ai and A2, each one equipped with its own evolutionary algorithm, optimize the numerical parameters of the negoti­ ation strategies. The strategies consist of vectors of floating point numbers in "The system was implemented using the Swarm Simulation System, developed by the Swarm Development Group. Detailed information about the software can be found at www.swarm.org. The source code used in the experiments described in this paper is avail­ able upon request. 194 the interval [0,1], encoding offers and thresholds, as the strategies employed by Oliver.5 Being a finite-horizon model, the total number of offers that can be exchanged between the traders has a maximum value of n. If n is even, as Ai always makes the first offer, the last offer is granted to A2. If n is odd, Ai has the first and the last offers. Traders should reach an agreement before the maximum number is exceeded, otherwise they receive a null payoff. As the issues are negotiated in sequence, each strategy corresponds to a set of N sub-strategies, each one relative to one issue. Each agent uses a conventional (/z + A) evolution strategies (ES).2 In one complete iteration, all the strategies are evaluated and ordered according to their fitness values. In an (/x + A)-ES, the best /i strategies (parents) remain in the set from one iteration to the next; in addition, A new strategies (offspring) are produced at each iteration. Offspring is generated by applying operators such as mutation and recombination in the set of parents. In the experiments, only the mutation operator was employed when gener­ ating offspring. In an ES, mutation consists of adding or subtracting samples from a Gaussian distribution with standard deviation s to the parameters of a certain parent strategy. The parameter s is self-regulated and determines the strength of the mutation. Each strategy keeps the s value of the Gaussian distribution from which it was generated; at each iteration, the average of the parents' standard deviations is used to produce the Gaussian distribution that generates the next set of offspring. Threshold and offer values were only accepted in the range [—1,1] (neg­ ative values were used in their absolute form); any strategy that contained a value out of that range received a penalty, if the respective parameter was demanded by the negotiation process. The parameters /i and A were both set to 25. Each simulation instance was run for at least 750 generations. At every generation, each one of the strategies owned by A\ had to confront a randomly chosen subset of size 25 of A2 strategies, and vice versa. The fitness value of a strategy was calculated as the mean value of the all the payoffs and penalties obtained in the confrontations in one generation. Ak's payoff, t/jt, was calculated as follows. Assume a deal on the first issue I is reached at t = 77, yielding Ak a share of a, and a deal on the second issue is reached at t = 777, yielding a share of (3, then Uk is: _ ST' -a-wfk + oo (full line) and n = 10 (dashed line), for agents A\ and A2, respectively. 
In Figures 1 and 2, the game-theoretic partitions of the pie are shown for n → ∞ (full line) and n = 10 (dashed line), for agents A_1 and A_2, respectively. These partitions were calculated by regarding the negotiation process of each one of the issues as a single game. After calculating the values of each agent's shares in equilibrium for each one of the games, the utilities were calculated by (1) discounting δ from the share obtained on issue II, as it is negotiated one stage after issue I, and (2) weighting the equilibrium shares with the respective sets of individual weights. The dotted lines are the payoffs obtained by the evolutionary traders (mean value over the whole set of strategies in the last 100 generations, from a total of 750); 20 runs were performed for each of the δ values. The vertical bars at each of the tested points show the standard deviation of the results.

Figure 1. Relation between the discount factor and agent A_1's utility, in the multi-issue sequential model of sizes n = 10 (dashed) and n → ∞ (full). The dotted line shows the utility actually obtained by the evolutionary agent in the experiments, when (w_I^{A_1}, w_II^{A_1}) = (0.3, 0.7).

Figure 2. Same as in Figure 1, for agent A_2's utility, when (w_I^{A_2}, w_II^{A_2}) = (0.7, 0.3).

As noted by van Bragt et al. [6], despite the bounded rationality of the bargainers, who have no explicit representation of the size of the game or any knowledge about the opponent's discount factor, the traders achieve outcomes which are qualitatively close to what is predicted by game-theoretic models. In these results, a phenomenon that was previously observed [6] is also detected: A_1 does a little better than the game-theoretic predictions, whereas A_2 performs considerably worse. It has been suggested [6] that the poor performance of A_2 is due to the fact that, especially for small values of δ, it is too costly for A_2 not to accept A_1's first offer; the latter, taking advantage of this fact, then offers very small shares of the pie. Indeed, it is also observed in the results of this first session that the smaller the δ, the higher the frequency of deals that are closed right away in the first stage (Table 1). It is also interesting to notice that there is a great leap in the average length of the negotiation process between δ = 0.9 and δ = 1.0. Intuitively, one would expect the negotiation process lengths to grow smoothly, following the decrease in time pressure. However, the parameter δ is strongly perceived by the evolutionary processes, which leads the agents to play the game as if there were only one offer to be exchanged, resulting in a great advantage to the agent making the first offer. Table 2 shows the average value of the first offer for issue I by A_1 over all the strategies at the 750th generation, and the corresponding average threshold of A_2, against which the first offer is checked. It is interesting to notice that despite the spread of A_2's threshold values, the offers by A_1 are equally spread and just high enough to be accepted right away by A_2 (on average). In the second session of experiments, the vector of weights (w_I^{A_1}, w_II^{A_1}) was set to different values. 20 runs were performed for each case, with δ = 0.9. The data shown in Table 3 is an average of the payoffs obtained in all confrontations in the last 100 generations (of a total of 750).

Table 1. Average number of stages until an agreement is reached as a function of the discount factor (average for both issues over 20 runs, with n = 10).
  δ     # stages (σ)
  0.1   1.06 (0.00)
  0.2   1.04 (0.00)
  0.3   1.06 (0.00)
  0.4   1.05 (0.01)
  0.5   1.11 (0.01)
  0.6   1.13 (0.01)
  0.7   1.22 (0.02)
  0.8   1.24 (0.02)
  0.9   1.39 (0.04)
  1.0   8.19 (0.12)

Table 2. A_1's average first offer and A_2's average first threshold for issue I, across all the strategies at the 750th generation, for each one of the runs (δ = 0.9, n = 10).

  Run   A_1 off.   A_2 thr.     Run   A_1 off.   A_2 thr.
  1     0.28       0.01         11    0.63       0.55
  2     0.79       0.68         12    0.78       0.73
  3     0.55       0.45         13    0.73       0.62
  4     0.35       0.31         14    0.65       0.54
  5     0.55       0.37         15    0.14       0.05
  6     0.17       0.02         16    0.65       0.57
  7     0.85       0.80         17    0.38       0.30
  8     0.37       0.27         18    0.91       0.83
  9     0.79       0.68         19    0.48       0.38
  10    0.82       0.72         20    0.35       0.28

Table 3. Results using different weights for A_1, for 20 runs each. (w_I^{A_2}, w_II^{A_2}) = (0.7, 0.3), δ = 0.9, n = 10 (ut. = utility; * marks the values used in the previous session).

  (w_I^{A_1}, w_II^{A_1})   A_1 ut.   A_1 ut. std.   A_2 ut.   A_2 ut. std.
  (0, 1)                    0.42      0.01           0.64      0.01
  (0.1, 0.9)                0.47      0.00           0.53      0.01
  (0.3, 0.7)*               0.43      0.01           0.50      0.00
  (0.5, 0.5)                0.48      0.00           0.42      0.00
  (0.7, 0.3)                0.48      0.00           0.43      0.00
  (0.9, 0.1)                0.54      0.00           0.39      0.00
  (1, 0)                    0.53      0.01           0.46      0.00

Noticeably, there is a tendency for A_1 to receive higher utility as w_I^{A_1} increases. This can be explained as a combination of two factors. First, the influence of the discount factor applied to II's share decreases, as II's relative importance gradually diminishes in the total utility received by A_1. Second, as the importance of issue I increases, A_1 exploits A_2's inability to use its advantage of offering last, in the presence of a relatively slight discount factor.

In the last session of experiments, a simple case of a negotiation over inter-substitutable issues was simulated. The values of w_II^k were made dependent on the size of the share obtained from issue I: if an agent obtained more than 0.9 of issue I, then w_II was set to 0.1; otherwise, to 0.9. The weight assigned to I by both agents was fixed at 0.3. As n = 10 and δ = 1.0, A_2 has the last-offer advantage in both issues. We were particularly interested to see whether it would be possible for the players to engage in an agreement which would be socially fair, i.e., once A_2 obtained a large portion of issue I, and therefore became "less interested" in disputing issue II, would that allow A_1 to obtain a more satisfactory share of issue II? In fact, the weight ratio between the most and least valued issues is kept constant (3:1) for both agents; however, whether issue I or II is the most valued depends on how the negotiation of I develops. As the relative importance of the issues may shift between them, they are considered to be substitutable. The question addressed is whether learning agents equipped with evolutionary algorithms are capable of achieving a mutually satisfactory solution in such a setting. Figure 3 shows the histograms of the utilities obtained by A_1 in the setting with variable valuations (right) and in a test case (left), where both issues are equally valued with fixed weights by the players throughout the bargaining game (w_II = 0.5). This latter case can be interpreted as a situation where the agents regard the issues as being perfectly inter-substitutable; as the agents value them with the same importance, both issues are disputed with the same strength. From the results it is possible to observe that, as expected, when the weights are fixed, A_2 makes use of its last-offer advantage very frequently, yielding A_1 very low payoffs (leftmost bar at U_1 = 0).
When the valuation is variable, though the frequency of low payoffs is still relatively high (i.e., A_1 loses I and II entirely to A_2), with almost the same frequency A_1 is able to obtain everything of issue II. The bar at 0.7-0.8 in the right figure covers the cases where A_1 obtains practically nothing of I and almost everything of II, representing a situation of high social welfare, which suggests that there are situations where sequential negotiation can be beneficial.

Figure 3. Histograms for the average utility obtained by A_1 over 50 runs, in the last 100 generations (total of 1000), with fixed (left) and variable (right) weights.

5 Conclusions

This paper presented a sequential multi-issue alternating-offers bargaining model, in which the agents have their strategies devised by an evolutionary algorithm. Differently from the usual bargaining model, where several issues are negotiated simultaneously, in this setting the issues are disputed one by one, in sequence. Numerical experiments were performed; the results are qualitatively aligned with game-theoretic predictions, as previously shown in a simultaneous multi-issue model [6], despite the fact that the evolving agents have no restrictions concerning rational behaviors. A simple case with inter-substitutable issues was also presented, illustrating a possible scenario where sequential negotiation may actually be beneficial for both parties in achieving a satisfactory agreement.

Acknowledgments

Thanks to four anonymous reviewers for their helpful comments. NEN receives partial financial support from CNPq under grant #200050/99-0.

References

1. C. Boutilier, Y. Shoham, and M. P. Wellman, editors. Artificial Intelligence, vol. 94 (1-2), July 1997.
2. T. Bäck, G. Rudolph, and H.-P. Schwefel. Evolutionary programming and evolution strategies: Similarities and differences. In Proc. of the 2nd Annual Evolutionary Programming Conference, pp. 11-22, February 1992.
3. A. Muthoo. A non-technical introduction to bargaining theory. World Economics, pp. 145-166, 2000.
4. A. Rubinstein. Perfect equilibrium in a bargaining model. Econometrica, 50(1):97-109, January 1982.
5. J. R. Oliver. On Artificial Agents for Negotiation in Electronic Commerce. PhD thesis, University of Pennsylvania, 1996.
6. D. D. B. van Bragt, E. H. Gerding, and J. A. La Poutre. Equilibrium selection in alternating-offers bargaining models: The evolutionary computing approach. In 6th Int. Conf. of the Society for Computational Economics on Computing in Economics and Finance (CEF'2000), July 2000.
7. C. Boutilier, M. Goldszmidt, and B. Sabata. Sequential auctions for the allocation of resources with complementarities. In Proc. of the Int. Joint Conf. on Artificial Intelligence (IJCAI-99), pp. 527-534, 1999.

AFFECT AND AGENT CONTROL: EXPERIMENTS WITH SIMPLE AFFECTIVE STATES

MATTHIAS SCHEUTZ
Department of Computer Science and Engineering, University of Notre Dame, Notre Dame, IN 46556, USA
E-mail: mscheutz@cse.nd.edu

AARON SLOMAN
School of Computer Science, The University of Birmingham, Birmingham, B15 2TT, UK
E-mail: axs@cs.bham.ac.uk

We analyse control functions of affective states in relatively simple agents in a variety of environments, and test the analysis in various simulation experiments in competitive multi-agent environments. The results show that simple affective states (like "hunger") can be effective in agent control and are likely to evolve in certain competitive environments.
This illustrates the methodology of exploring neighbourhoods in "design space" in order to understand tradeoffs in the development of different kinds of agent architectures, whether natural or artificial.

1 Introduction

Affective states (such as emotions, motivations, desires, pleasures, pains, attitudes, preferences, moods, values, etc.) and their relations to agent architectures have been receiving increasing attention in AI and Cognitive Science [1,2,3]. Detailed analyses of these subspecies of affect should include descriptions of their functional roles in contributing to useful capabilities within agent architectures [4], complemented by empirical research on affect in biological organisms and concrete experiments with synthetic agent architectures, to confirm that the proposed architectures have the claimed properties.

Our approach contrasts with most evolutionary AI research, which attempts to discover what can evolve from given initial states. Instead, we explore "neighbourhoods" and "mini-trajectories" in design space: we start with examples of agent architectures, explicitly provide possible extensions together with evolutionary operators that can select them, and run simulations to investigate which of the extensions have evolutionary advantages in various environments. This can show how slight changes in environments alter the tradeoffs between design options. To illustrate this methodology, we next analyse the functional roles of affective states and then describe our simulation experiments, which show how certain simple affective control mechanisms can be useful in a range of environments and are therefore likely to evolve in those environments.

2 What Affective States Are and Aren't

If we attempt to define "affective" simply in terms of familiar examples, such as "desiring", "having emotions", "enjoying", etc., we risk implicitly restricting the notion to organisms with architectures sufficiently like ours. That could rule out varieties of fear, hunger, or aggression found in insects, for example. We need an architecture-neutral characterisation, which is hard to define if it is to be applicable across a wide range of architectures (such as insect-like reactive architectures, or deliberative architectures with mechanisms able to represent and reason about non-existent and possible future states). Our best hope is to define "affective" in terms of a functional role which can be specified independently of the specific features of an architecture.

The intuitive notion of "affect" already has two aspects that are relevant to a variety of architectures, namely direction and evaluation. On the one hand there is direction of internal or external behaviour, for instance wanting something or trying to avoid something. On the other hand there is positive or negative evaluation of what is happening, for instance enjoying something or finding it unpleasant. However, even evaluation is linked to direction, insofar as enjoying involves being disposed to preserve or repeat, and finding painful involves being disposed to terminate or avoid. Either way, affective states are examples of control states [5]. Yet not all states in control systems are affective states, even if they have some effect on internal or external behaviour. For instance, perceiving, knowing, reasoning, and self-monitoring can influence behaviour but are not regarded as affective. Suppose an agent can use structures as representations of states of affairs (never mind how).
Anything that represents must be capable of failing to represent. There are various kinds of mismatch, and in some cases the mismatch can be detected, for instance perceiving that some desired state has or has not been achieved, or that a goal is being approached but very slowly. If detection of a mismatch has a disposition to cause some behaviour to reduce the mismatch, there are (to a first approximation) two main cases: (1) the behaviour changes the representation to fit the reality, or (2) the behaviour changes reality to fit the representation. In (1) the system has a "belief-like" state, and in (2) a "desire-like" state. In other words, belief-like states tend to be changed to make them fit reality, whereas attempts are made to change reality to make it fit desire-like states. It is this distinction between belief-like and desire-like control states that can give us a handle on how to construe affective states, namely as "desire-like" control states whose role is initiating, evaluating and regulating internal or external behaviour, as opposed to merely acquiring, interpreting, manipulating, or storing information (that might or might not be used in connection with affective states to initiate or control behaviour). A state representing the current position of an effector, or the location of food
The remainder of this paper describes simulation experiments where agents with slightly different architectures compete for resources in order to survive in a carefully controlled simulated environment. Proportions surviving in different conditions help to show the usefulness of different architectural features in different contexts. It turns out that simple affective states can be surprisingly effective. 3 The Simulation Environment The simulated environment consists of a rectangular surface of fixed size (usually around 800 by 800 units) populated with various kinds of agents and other objects such as "lethal" entities of various sizes, some static and some moving at different speeds in different directions, and "food items" (i.e., energy sources which pop up at random locations and disappear after a pre-determined period of time unless con­ sumed by agents). Agents use up energy at a fixed rate, when stationary, and require additional energy proportional to their speed, when moving. Hence, they are in per- 203 manent need of food, which they can consume sitting on top of a food source in a time proportional to the energy stored in the food source depending on the maximum amount of energy an agent can take in at any given time. Agents die and are removed from the simulation if they run out of energy, or if they come into contact with lethal entities or other agents. All agents are equipped with a "sonar" sensor to detect lethal entities, a "smell" sensor to detect food, a "touch" sensor to detect impending collisions and an internal sensor to measure their energy-level. For both sonar and smell sensors, gradient vectors are computed and mapped onto the effector space (see below), yielding the direction in which the agent will move. The touch sensor is connected to a global alarm system, which triggers a reflex to move away from anything touched, unless it is food. These movements are initiated automatically and cannot be controlled by the agent. They are somewhat erratic and will slightly reorient the agent (thus helping it to get out of "local minima"). On the effector side, agents have motors for locomotion (forward and backward), motors for turning (left and right in degrees) and a mechanism for consuming food. After a certain number of simulation cycles, agents reach maturity and can pro­ create asexually, in which case depending on their current energy level they will have a variable number of offspring which pop up in the vicinity of the agent one at a time (the energy for creating a new agent is subtracted from the parent, occasionally caus­ ing the parent to starve). While different agents may have different short term goals at any given time (e.g., getting around lethal entities or consuming food), common to all of them are the two implicit goals of survival (i.e., to get enough food and avoid running into/getting run over by lethal entities or other agents) and procreation (i.e., to live long enough to have offspring). For evolutionary studies, a simple mutation mechanism modifies with a certain probability some of the agent's architectural parameters (e.g., the parameters respon­ sible for integrating smell and sonar information). Some offspring will then start out with the modified parameters instead of being exact copies of the parent. This mu­ tation rate as well as various other parameters need to be fixed before each run of the simulation (a more detailed description of the simulation and its various control parameters is provided elsewhere)7. 
In is worth pointing out that our setup differs in at least two ways from other sim­ ulated environments that have been used to study affective states.8'9'10'11'12 First, by allowing agents to procreate (i.e., have exact copies of themselves as offspring) we can study trajectories of agent populations and can thus identify properties of archi­ tectures that are related to and possibly influence the interaction of agent populations. And second, by adding mutation, we can examine the potential of architectures to be modified and extended over generations of agents. In particular, by controlling which components of an architecture can change while allowing for randomness in 204 the way they can change, we are able to study evolutionary tradeoffs of such exten­ sions/modifications. From these explorations of "design space" and "niche space"13 we cannot only derive advantages and disadvantages of architectural components, but also the likelihood that such components would have evolved in natural systems using natural selection. 4 The Agents and their Architectures In the following we consider two kinds of agents: reactive agents (R-agents) and simple affective agents (A-agents) (other studies have compared different kinds7). R-agents process sensor information and produce behavioural responses using a schema-based approach, which obviates the need for a special action selection mech­ anism: both smell and sonar sensors provide the agent with directional and inten­sity information of the objects surrounding the agent within sensor reach, where intensity = 1/'distance2 (i.e., the distance of the object from the current position of the agent). The sum of these vectors (call them S and F for sonar and food, re­ spectively) is then computed as a measure of the distribution of the respective objects in the environment and passed on to the motor schema, which maps perceptual space into motor space yielding the direction, in which to go: SS + 7F (where 5 and 7 are the respective gain values).0 A-agents are extensions of R-agents. They have an additional component, which can influence the way sensory vector fields are combined by altering the gain value 7 based on the level of energy. In accordance with our earlier analysis of affective states as modulators of behaviours and/or processes, this component implements an affective state, which we call "hunger". The difference in the architecture gives rise to different behaviour: R-agents are always "interested" in food and go for whichever food source they can get to, while A-agents are only interested in food when their energy levels are low. Otherwise they tend to avoid food and thus competition for it, reducing the likelihood of getting killed because of colliding with other competing agents or lethal entities. 5 The Behavioural Potential of a Simple Affective State We start our series of experiments by checking whether each agent kind can survive in various kinds of environments on its own. Five agents of the same kind are placed in various environments (from environments with no lethal entities to very "danger­ ous" environments with both static and moving lethal entities) at random locations to "average out" possible advantages due to their initial location over a large number 0 Note that this formula leaves out the details for the touch sensor for ease of presentation. 205 Table 1. Surviving agents in an n-environment when started with 5 agents of only one kind. 
Table 1. Surviving agents in an n-environment when started with 5 agents of only one kind.

Env          0      5     10     20     30     40     50
R-agents
  μ        14.60  13.20  11.90  11.60   7.50   2.90   0.20
  σ         2.80   4.78   3.81   3.47   4.43   3.57   0.63
  Conf      1.73   2.96   2.36   2.15   2.75   2.21   0.39
A-agents
  μ        19.20  17.20  17.20  15.40  13.00  10.40   8.00
  σ         2.74   3.05   3.77   3.95   3.56   3.57   3.56
  Conf      1.70   1.89   2.33   2.45   2.21   2.21   2.21

Table 2. Surviving agents in an n-environment when started with 5 agents each of both kinds.

Env          0      5     10     20     30     40     50
R-agents
  μ         0.00   0.00   1.60   0.10   0.00   0.00   0.00
  σ         0.00   0.00   5.06   0.32   0.00   0.00   0.00
  Conf      0.00   0.00   3.14   0.20   0.00   0.00   0.00
A-agents
  μ        17.20  16.30  14.50  14.50  15.10  12.80  10.00
  σ         3.61   2.91   6.54   4.22   3.35   2.49   3.16
  Conf      2.24   1.80   4.05   2.62   2.08   1.54   1.96

The "food rate" is fixed at 0.25 and the procreation age at 250 update cycles. Table 1 shows for each agent kind the average number (μ) of surviving agents as well as the standard deviation (σ)

assuming action sequences Plant_I, Harvest_I, Plant_II, and so on. Action Plant_I in state U = 1 has no effect with regard to the state of the environment, and similarly action Harvest_I in state U = 0. Furthermore, a Planter might successfully apply in state U = 2 only action Plant_II, not actions Plant_I or Plant_III. At the beginning of a game two agents are randomly selected from the population; one of them is the start agent. This agent begins by sending a message M_0. The other agent receives this message, performs an action a_1, and sends another message M_1 to the start agent. Then the first agent performs an action a_2 and sends a message M_2 to the second agent, and so on. A round is defined as a successive sequence in which each of the two agents performs one action and generates one message. Both types of agents have the same repertoire of actions regardless of efficiency: apart from plant and harvest actions they have a Null action without any effect, a Sanction action, an action Exit, and an action Replace. The latter action affects the opponent agent in such a way that it gets replaced by another agent, randomly selected from the population. This may increase the general possibility of a successful coordination. A game may end with three different outcomes: an agent performed the Exit action, the environment reached the final state U_e, or the number of rounds in the game exceeded the defined threshold rounds. There is a predefined set of symbols S = {0, 1, 2, ..., S_max} for message generation. A message consists of exactly one of these symbols. A symbol itself has no meaning to an agent; there is no predefined semantics at all. A game ends successfully if the environment was transformed into the final state U_e. In this case, the last two agents participating in the game get a certain amount E* of "energy". In other cases there is no energy payoff. Every action that an agent performs consumes a specified amount of the agent's energy. There are low-cost actions (Null, Exit, and Replace) and high-cost actions (Plant_x, Harvest_x). For a low-cost action the agent consumes energy E_l > 0; for a high-cost action, E_l + E_h, E_h > 0. The cost of the action Sanction is E_l + E_b, E_b > 0. This action affects the other agent in such a way that the sanctioned agent loses pain energy E_p > 0. At the beginning of an agent's lifetime its energy is set to E = E_s > 0, its start energy. If E ever falls below 0, the agent dies, that is, the agent is removed from the population. An agent does not know its own type, nor does it perceive the type of another agent. They are black boxes to each other.
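The game protocol and its energy accounting can be summarized in code. The sketch below is a hedged reconstruction: the agent interface (start_message, respond) and the env object are hypothetical stand-ins for the frame mechanism described below, while the cost constants are the values reported later in the caption of Figure 1:

# Energy parameters from the experiment setup (see the caption of Figure 1):
E_L, E_H, E_B, E_P, E_STAR = 0.5, 2.5, 0.1, 0.1, 10.0

def play_game(first, second, env, rounds=10):
    """One game between two black-box agents; returns True on success."""
    msg = first.start_message()
    sender, receiver = first, second
    for _ in range(rounds):
        action, msg = receiver.respond(msg, env.state, receiver.sanctioned)
        if action in ("Null", "Exit", "Replace"):
            receiver.energy -= E_L                  # low-cost actions
        elif action == "Sanction":
            receiver.energy -= E_L + E_B            # sanctioning costs extra
            sender.energy -= E_P                    # ...and hurts the opponent
            sender.sanctioned = True
        else:                                       # Plant_x / Harvest_x
            receiver.energy -= E_L + E_H
        env.apply(action)                           # no effect if inapplicable
        if action == "Exit":
            return False
        if env.state == env.final_state:            # successful coordination
            first.energy += E_STAR
            second.energy += E_STAR
            return True
        sender, receiver = receiver, sender         # turn passes over
    return False                                    # round limit exceeded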
An agent perceives the message of another agent, the state of the environment, and the fact of being sanctioned. In any case, not all relevant aspects of the environment are known in the same way to all participants, for instance the direct result of an action. Agents must test different actions at different times, and the only hint as to whether an action or message was appropriate is given by a reward signal. This signal is always generated by the agent itself, based on the energy difference between two consecutive actions. A sigmoid function generates the reward signal r from this energy difference: a positive energy difference results in a positive reward, a negative difference in a negative reward. Thus, individual agents employ reinforcement learning. This definition of a reward signal is a weak one, since it does not assume any intelligent observer (outside the agent) who generates a reward signal based on its knowledge about correct actions.

Besides an energy value, agents have an age A, which at the beginning of an agent's lifetime is set to 0. Any time an agent gets selected to play the game, its age is incremented by 1. If the age reaches an individual maximum, A_max, the agent is removed immediately from the population. At the start of the simulation, the population P consists of a certain number of agents P_s. The number of agents during the simulation may shrink or grow, depending on the fitness of the agents. An agent may enter the population if there are at least two agents whose age is above the value A_sex and whose energy value is above a value E_sex. The two "parents" are selected by a "roulette wheel"11 from all possible parent agents based on their energy value. Once a successful breeding has occurred, the two parent agents are prevented from reproduction for a certain period of time t_pause. Whenever the number of agents in the population P_t falls below P_s, agents are randomly added to the population until P_t = P_s.

We focused explicitly on one particular aspect of media, namely the relevance of expectations in choosing an appropriate answer to a received message. Thus, we combine an internal state with the expectation of a received message. This results in a frame-like structure which is executed on two levels. In a first step a set F_i of frame structures is chosen based on the state of the environment. This step is performed without any learning by the agent and is totally determined by the environment. In a second step the agent chooses one frame structure from the previously chosen set F_i. The selected frame is executed, resulting in an action a_t+1 and a new message M_t+1. A frame F is defined with respect to a received message M_r = M_t in the following way:

if M_r = M_e1 then a := act_1 and M := mes_1
elseif M_r = M_e2 then a := act_2 and M := mes_2
else execute a trouble frame in F_T,

where a_t+1 = a and M_t+1 = M. A "trouble frame" from F_T will be executed in the case that the received message was neither M_e1 nor M_e2. This frame has a special structure, because it does not check the occurrence of a certain message; rather, it checks whether the agent has been sanctioned or not in order to determine the new action and message:

if sanctioned = true then a := act_T1 and M := mes_T1
else a := act_T2 and M := mes_T2.

For every state of the environment the agent has two frames.
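The frame definition above translates almost line for line into code. A minimal Python rendering, with field names mirroring the paper's variables (M_e1, act_1, mes_1, etc.):

from dataclasses import dataclass

@dataclass
class TroubleFrame:
    """Fallback frame: branches on the sanction flag, not on a message."""
    act_t1: str
    mes_t1: int
    act_t2: str
    mes_t2: int

    def execute(self, sanctioned):
        if sanctioned:
            return self.act_t1, self.mes_t1
        return self.act_t2, self.mes_t2

@dataclass
class Frame:
    """One behavioural frame: two expected messages, each mapped to an
    (action, reply-message) pair; anything else falls to a trouble frame."""
    m_e1: int
    act_1: str
    mes_1: int
    m_e2: int
    act_2: str
    mes_2: int

    def execute(self, received, trouble, sanctioned):
        if received == self.m_e1:
            return self.act_1, self.mes_1
        if received == self.m_e2:
            return self.act_2, self.mes_2
        return trouble.execute(sanctioned)   # unexpected message

# Example: a (randomly initialized) frame answering message 2 with Plant_I.
f = Frame(m_e1=2, act_1="Plant_I", mes_1=0, m_e2=3, act_2="Null", mes_2=1)
t = TroubleFrame("Exit", 0, "Null", 0)
print(f.execute(2, t, sanctioned=False))   # -> ('Plant_I', 0)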
The selection of a frame at time t is guided by a Q-value Q_F; that is, reinforcement learning12 takes place in order to choose an appropriate frame in a given (environmental) situation. The entire collection of frames for an agent with a given final state U_e of the environment is F_U = {F_(k,0), F_(k,1)}, for k = 0, ..., U_e. An additional frame set is employed by an agent when the agent starts the communication by generating the start message M_0. For the trouble state U_T the agent can also choose between two (trouble) frames F_T = {F_T1, F_T2}.

Evolution is based on frames; agents do not change frames during their lifetime, they are only able to change the Q-value of a frame with respect to the other frame inside the same frame set. At the beginning of the simulation, all frames of all agents are initialized randomly. In particular, variables M_e1, M_e2, mes_1, mes_2, mes_T1, and mes_T2 get randomly chosen values from S = {0, 1, 2, ..., S_max}, and variables act_1, act_2, act_T1, and act_T2 get randomly chosen values from A = {Null, Sanction, Exit, Replace, Plant_I, Harvest_I, Plant_II, ...}. Inheritance happens on the frame level, that is, cross-over takes place between frames, not inside a frame (but inside a frame set). Individual parts of a frame are subjected to mutation; therefore, e.g., part M_e1 or act_2 may get a new random value during the mutation process. Q-values are not passed on to offspring, and are set to a small random value at the beginning of an agent's lifetime.

Figure 1. Simulation of 1000000 games (results averaged over 1000 games). Result of the simulation: a) maximum possible success (counting the occurrence of a "correct" pairing of the agents); b) the actually achieved success; c) correctly performed Exit; d) Exit in a wrong situation; e) stopped, because maximum rounds were exceeded. For example: after around 500000 games, the average result of 1000 games was 60% successful games, out of a maximum of 75% possible successful games; 25% were correctly and 10% incorrectly exited by an agent, and 5% were stopped by the system (values approximated). U_e = 4, S_max = 3, rounds = 10, E* = 10.0, E_l = 0.5, E_h = 2.5, E_b = E_p = 0.1, E_s = 50.0, A_max ∈ {550, ..., 800}, A_sex = 20, t_pause = 20, a = 5.0, b = 1.0.

4 Simulation results

Figure 1 shows the general outcome of a simulation, and Figure 2 shows statistics of the number of sanctions in 1000 games, the number of living agents, and the average energy of the agents. The maximum number of agents was set to 1024. The simulation started with 3 agents, and as long as the number of agents was below 15, a higher energy payoff E* was given for success than indicated in the caption of Figure 1 (to support an onset of evolution). The number of agents grew rapidly until the limit was reached. Later, evolution still took place, optimizing the frame structures. This may result, for example, in changing cooperation sequences, or in a "competition" of different sequences, as indicated in Figure 3. A sequence is defined by M_0, M_1 a_1 M_2 a_2 ..., that is, M_0 is the start message of the first agent, M_1 the answer message and a_1 the action of the other agent, and so on. The coding of actions is: 0 – Null, 1 – Sanction, 2 – Exit, 3 – Replace, 4 – Plant_I, 5 – Harvest_I, 6 – Plant_II, 7 – Harvest_II. Because we analyzed only sequences which did not contain a Replace action and which were successful, all these sequences end with action 7 (Harvest_II). Figure 3 shows the eight most frequent sequences of the entire simulation.
Sequence 1 occurred 160877 times out of 346727 successful sequences without a Replace action. The sequences are shown in detail in Figure 4.

Figure 2. From top to bottom: number of sanctions ("Bites", not averaged), number of living agents, and average energy of the agents. The number of agents was restricted to 1024. When this number was reached, agents increased their amount of energy on average.

Figure 3. The eight main sequences of the frame-based evolution. Left: absolute occurrence of the sequences (average of 1000 games); right: relative occurrence of the sequences (in relation to 346727 successful sequences). The eight sequences occurred 329895 times.

The communicative behavior of the agents became more and more regular. Because there were two frames for each environmental situation, it is obvious that a frame set is assumed to contain exactly one appropriate frame for Planters and one for Harvesters. An individual only has to explore which one is better suited. A detailed analysis of the communicative behavior reveals that communication indeed controls the behavior of agents. As the results indicate, the agents were able to set up a population-wide semantics for the exchanged symbols.

No. (see Fig. 3)   Occurrences   Sequence M_0 M_1 a_1 M_2 a_2 ...
1                    160877      1 0 4 0 5 1 6 2 7
2                     66551      2 0 4 0 4 0 5 1 6 2 7
3                     37402      0 0 4 0 5 1 6 2 7
4                     26721      0 1 5 0 4 0 5 1 6 2 7
5                     19039      2 1 5 0 4 0 5 1 6 2 7
6                      7118      0 0 4 0 4 0 5 1 6 2 7
7                      6453      2 0 5 0 4 0 5 1 6 2 7
8                      5734      2 1 7 0 4 0 5 1 6 2 7

Figure 4. The eight most frequent sequences in detail.

The meaning of a symbol depends, of course, on the environmental state; however, symbols became functional for the agent's choice of the next message or action. Sanctions became less important as the behavior became more normative. Although not shown here, the simulations are easily adapted to cases where several thousand agents may evolve, still acting in a coordinated manner.

5 Conclusion

We have shown that a growing population of agents may act in a coordinated manner even when the cognitive capabilities of the agents are limited and, moreover, when agents do not know anything about each other (apart from received messages). From an observer's point of view the agents exhibit normative behavior, although we did not predefine any norms. We started by asking what kind of mechanisms human society evolved in order to cope with a growing number of individuals. We found an interesting answer in the work of sociologists, especially the SGCMs proposed by Luhmann. We have modeled one SGCM (power) in a first approach. However, our simulation is still too simple to establish all aspects of a symbolic medium. Nevertheless, Luhmann's suggestions regarding SGCMs, especially the aspect of structuring a situation by expectations, turned out to be useful. We modeled some aspects of his theory, mainly aspects of a closed communication system, and found an interesting approach to answering well-known problems in multi-agent research, namely problems of scalability and the definition of norms. In subsequent work, we will deal with a more elaborate model of a symbolic medium. Further, the impact of more than one medium has to be analyzed, especially their potential for a more heterogeneous agent society and more complex problems to be solved by the agents.

Acknowledgement

We are grateful to three anonymous reviewers for their comments. This work is supported by the Deutsche Forschungsgemeinschaft under grant number DI 452/10-1 and is part of a research project headed by Werner Dilger and Bernhard Giesen.

References
1. A. S. Rao and M. P. Georgeff. Modeling Rational Agents within a BDI-Architecture. In Proceedings of the Second International Conference on Principles of Knowledge Representation and Reasoning, pages 473–484, Cambridge, Mass., 1991.
2. R. Dunbar. Grooming, Gossip, and the Evolution of Language. Harvard University Press, Cambridge, Mass., 1996.
3. T. Parsons. The Structure of Social Action. Free Press, New York, 1968.
4. N. Luhmann. Social Systems. Stanford University Press, Stanford, Ca., 1995.
5. M. Paolucci and R. Conte. Reproduction of Normative Agents: A Simulation Study. Adaptive Behavior, 7(3/4):307–322, 1999.
6. K. Binmore. Game Theory and the Social Contract, Volume 1: Playing Fair. MIT Press, Cambridge, Mass., 1994.
7. Y. Shoham and M. Tennenholtz. Social Laws for Artificial Agent Societies: Off-line Design. Artificial Intelligence, 73, 1995.
8. M. P. Wellman. A Market-Oriented Programming Environment and its Application to Distributed Multicommodity Flow Problems. Journal of Artificial Intelligence Research, 1:1–23, 1993.
9. S. Park, E. H. Durfee, and W. P. Birmingham. Emergent Properties of a Market-based Digital Library with Strategic Agents. In Y. Demazeau, editor, Third Int. Conf. on Multi-Agent Systems (ICMAS98), pages 230–237, Los Alamitos, Cal., 1998. IEEE Computer Society.
10. G. Ballot and E. Taymaz. Technological Change, Learning and Macro-economic Coordination: An Evolutionary Model. Journal of Artificial Societies and Social Simulation, 2(2), 1999.
11. D. E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley, Reading, Mass., 1989.
12. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. The MIT Press, Cambridge, Mass., 1998.

THINKING-LEARNING BY ARGUMENT

ALADDIN AYESH
De Montfort University, The Gateway, Leicester LE1 9BH
Email: aayesh@dmu.ac.uk

Humans argue all the time. We may argue with ourselves, with a partner, or even with someone we have just met. The argument can take a decision-making form, a discussion form, a thinking form, or in some cases it could be for argument's sake. In this paper we describe a system that uses three object-oriented components, referred to as cells, to exploit the argument concept and enable a thinking-learning process to take place.

1 Introduction

Our ability to argue allows us to express our concerns and possibilities and to make collective decisions. We may argue with ourselves, with a partner, or even with a complete stranger. The argument may take the form of decision-making, discussion, thinking, or argument for argument's sake. Argument with one's self for learning, thinking, and decision-making purposes is the concern of this paper. This paper describes a system that uses three object-oriented components to turn the argument concept into a thinking-learning process. These components are developed using agent theory and techniques. However, these components form one entity and are not individual agents. Therefore, and for clarity's sake, these components will be referred to as cells throughout the paper. The paper discusses the argument concept and outlines the system proposed to exploit this concept.

2 Preliminaries

There are two relevant subjects to be discussed before proceeding further: arguing as a human mental process, and argumentative agents. Arguing is a powerful tool we use individually and socially [1]. We use this tool to reach agreements or understanding with our social partners.
We use it individually to form understanding about ourselves and about matters of individual concern, as part of our thinking process. And finally, we use it as a way of learning new facts from perceived knowledge. The relation between arguing and the three processes of understanding, thinking, and learning can be seen in the early work of Plato and of the philosophers who followed his technique. This relationship is also evident in our social life. Consider the statement 'the more we discuss (argue about) issue X, the more I learn about your personality'. This could concern your attitude towards or beliefs about the subject of discussion, and so on. Finally, arguing is greatly affected by our perception and by our initial and developed set of beliefs [2].

Arguing as a communication protocol in multi-agent systems has been studied intensively. An example is the work done by Mora et al. on distributed extended logic programs [3]. Another example is the work done by Jennings et al. on negotiation [4]. Nonetheless, there are differences. In multi-agent systems there is usually a problem to be solved by negotiation, and each agent participates in the argument autonomously. In contrast, the agent-like components in our system are limited to three components that collectively form one entity. These components are chosen to bring together the argumentative nature of the problem and agent technology. Each agent has a pre-determined function.

3 Learning by argument system – basics

The proposed system comprises three cells, which are represented as object-agents. These cells are named the Observer cell (O cell), the Questioner cell (Q cell), and the Memory cell (M cell). Each of the three cells is explained here.

3.1 Observer cell (O cell)

The O cell represents the perception system. It observes the environment and feeds back to the Questioner cell (Q cell), which is described next. From the observations provided, the Q cell forms some knowledge about the observed objects. The cycle continues, perceiving as many observations as needed to form an opinion about the object or a set of facts describing it. To demonstrate the working mechanism of the O cell, let us take as an example our eyes and the argument we have with our perception system. Assume that I want to buy a car. I go to a car dealership showroom and look at cars. I see a nice car, so in my brain I say 'it is a nice car'; a reply comes back with 'but it is a blue car and I want a red car'. I see another car, which is red, but it is not as nice. Now I have one of two choices: either to decide in favor of a nice car or a red car, or to ask whether they make the nice car in red. The O cell deals with qualitative and quantitative information. Therefore, a representational language is being devised using hybrid logic [5] and adaptive neural nets [6, 7] to represent both kinds of information.

3.2 Questioner cell (Q cell)

The Q cell is the voice that replies to our observations and stipulates further information. It is the part of the brain that says 'yes, it is a nice car, but it is not red'. The main task of the Q cell is to interrogate the information provided by the O cell and feed back. This provides the stimulus to trigger the O cell to provide further observations. The Q cell can be viewed as a knowledge management component, which reviews the M cell to determine ignorance points. Once this is done, the questions are formulated and passed to the O cell. The Q cell uses the same representational framework as the O cell.
However, determining ignorance points, which is the central issue for this component, is done using a three-valued logic in which a predicate U means 'do not know' [8]. Work on the representational language is still ongoing.

3.3 Memory cell (M cell)

There are two types of fact perceived by the system: asserted facts and observations. The following definitions state these two types.

Definition 1. An observation may be defined as a feature α of an observed object Γ in relation to a subject K with relevance 'low', annotated: O(α, Γ) → Relevance(Γ, K, low) ∨ Relevance(α, Γ, low).

Definition 2. An asserted fact may be defined as a feature α of an observed object Γ in relation to a subject K with relevance 'strong' or 'definitive', annotated: O(α, Γ) → Relevance(Γ, K, strong) ∨ Relevance(α, Γ, definitive).

The Memory cell imitates the memory concept as defined in psychology: working memory and persistent memory, which may also be identified as short-term and long-term memory respectively [9, 10]. This encourages the investigation of two types of neural nets (NN): self-organizing NN [7] and adaptive-architecture NN [6]. Self-organizing NN are well known in machine learning [7]. However, the size and type of information which the M cell deals with vary greatly, depending on the argument process between the O cell and the Q cell.

3.4 System architecture

Figure 1 shows the communication between the proposed system's main segments.

Figure 1: Overview of the OMQ system (the O, M, and Q cells exchange arguments, feed back, and read from one another).

Definition 3. An OMQ system may be defined as a tuple of components <O, M, Q>, where O is an Observer cell, M is a Memory cell, and Q is a Questioner cell, under a communication mechanism E in which packets are quadruples, defined as follows: Observation packet (O); Question packet; and Hold packet (Π); with two main template operations defined, Receive (ℜ) and Feed (δ).

3.5 Illustrative example

Here is an example of learning through interacting with the environment and stipulating information. Table 1 shows the example, in which a car is experienced.

Table 1. Learning example

Cell activity: O cell: I see a red car.
Operations: Write to M cell short memory: car and color: red, with an inter-relation between them. Inform Q cell that two facts have been observed. (Let us assume this observation is of interest to the system.)

Cell activity: Q cell: What is a car?
Operations: Inform O cell with a request for further observations.

Cell activity: O cell: A car is an object with four wheels.
Operations: Write to M cell short memory: car: car-features and inform Q cell. Assuming a four-wheel object abstract exists, Q cell will classify car and update M cell with: car: four-wheel object. M cell will re-organize its long memory.

It is noticeable that the system requires a constant feed of observations, and depending on the interest in a given observation, a scrutiny may be conducted.
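To make the O–Q–M cycle of the example concrete, here is a toy Python sketch of one argument round. Everything here (the class and method names, the dictionary standing in for the world) is our own illustrative invention; the actual system uses hybrid logic and neural nets rather than dictionaries:

class MCell:
    """Memory cell: short-term working memory plus long-term store."""
    def __init__(self):
        self.short, self.long = {}, {}

    def write_short(self, key, value):
        self.short[key] = value

    def reorganize(self):
        # Promote everything from working memory into persistent memory.
        # (When to promote is one of the open issues discussed in Section 4.)
        self.long.update(self.short)

class QCell:
    """Questioner cell: inspects memory for ignorance points."""
    def question(self, m_cell):
        # An 'ignorance point' here is simply a fact with an unknown value;
        # the paper uses a three-valued logic with a 'do not know' predicate.
        return [k for k, v in m_cell.short.items() if v is None]

class OCell:
    """Observer cell: answers questions with (simulated) observations."""
    def __init__(self, world):
        self.world = world
    def observe(self, query):
        return self.world.get(query)   # None if nothing is perceived

# One argument cycle of the OMQ system:
world = {"car": "object with four wheels", "color": "red"}
o, m, q = OCell(world), MCell(), QCell()
m.write_short("car", None)                 # observed but not yet understood
for unknown in q.question(m):              # Q interrogates M, triggers O
    m.write_short(unknown, o.observe(unknown))
m.reorganize()
print(m.long)                              # {'car': 'object with four wheels'}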
4 Practical and implementation issues

There are several practical difficulties to be resolved. First, two major processes need to run simultaneously: the argument process between the O and Q cells, and the M cell re-organization process. This leads to the difficulty of deciding when a piece of information X should be maintained in long memory or kept in short memory. Additionally, if robots are considered for a physical implementation, real-time processing would be desired. The second problem is the representation of information. While neural nets may be useful for long memory, they may not be as suitable for short memory, which may hold contradictory information. Furthermore, different types of neural nets (NN) may be used, such as Specht's self-organizing NN [6], whereby pieces of information can be added or deleted as neurons. Trials are being carried out on different versions of self-organizing NN and logical models for the development of the knowledge and communication language.

5 Conclusion and future work

In this paper a system that deploys the concept of argumentation to enable a learning-thinking process was presented. The system consists of three agent-like components, which are referred to as cells and identified as the Observer cell (O cell), the Questioner cell (Q cell), and the Memory cell (M cell). Collectively they form one entity, namely the OMQ system. A definition of the three cells and their functionality was provided, with an explanation of the OMQ system's working mechanism by example. A full implementation is still under development. Server-client concepts will be examined to facilitate the anticipated expansion, in which case each cell will consist of several components. Parallel computation will be considered to speed up processing. The current system is developed as a single entity and enables the agent to argue with itself. The next stage is to permit team argumentation within a multi-OMQ-agent system.

6 References

1. A. L. Wilkes, Knowledge in Minds: Individual and Collective Processes in Cognition. UK: Psychology Press (of Erlbaum (UK) Taylor & Francis), 1997.
2. N. H. Frijda, A. S. R. Manstead, and S. Bem, "Emotions and Beliefs". Paris: Cambridge University Press, 2000.
3. I. Mora, J. J. Alferes, and M. Schroder, "Argumentation and Cooperation for Distributed Extended Logic Programs," presented at the Working Notes of the Workshop on Non-monotonic Reasoning, Trento, Italy, 1998.
4. N. R. Jennings, P. Faratin, A. R. Lomuscio, S. Parsons, C. Sierra, and M. Wooldridge, "Automated Negotiation: Prospects, Methods and Challenges," Journal of Group Decision and Negotiation, vol. 10, 2001.
5. C. Areces, P. Blackburn, and M. Marx, "Hybrid Logics: Characterization, Interpolation and Complexity," The Journal of Symbolic Logic, 1998.
6. D. F. Specht, "Probabilistic Neural Networks," Neural Networks, vol. 3, pp. 109–118, 1990.
7. K. H. Pribram and J. King, "Learning as Self-organization". Mahwah, New Jersey: Lawrence Erlbaum Associates, 1996.
8. A. Ayesh, "Self Reference in AI," Computer Science Dept., Colchester: University of Essex, 1995.
9. L. R. Squire, B. Knowlton, and G. Musen, "The Structure and Organization of Memory," in Human Memory: A Reader, D. R. Shanks, Ed. London: Arnold, 1997.
10. A. Ayesh, "Towards Memorizing by Adjectives," submitted to the AAAI Fall Symposium on Anchoring Symbols to Sensor Data in Single and Multiple Robot Systems, 2001.

Evolution of a Foraging Model with Many Individuals by Kin-selection

Kazue Kinoshita, Atsuko Mutoh, Tsuyoshi Nakamura, Hidenori Itoh
Department of Intelligence and Computer Science, Nagoya Institute of Technology, Gokiso-cho, Showa-ku, Nagoya-shi, Japan, 466-8555
e-mail: kaz@juno.ics.nitech.ac.jp

The Genetic Algorithm (GA) has the property of promoting the evolution of superior individuals by weeding out others under a standard of estimation, which makes the GA weak at evolving altruistic behavior. We used a selection algorithm based on the theory called "kin selection",1 which is a popular rationale for altruistic behavior.
The efficacy of this algorithm was confirmed by simulating a model in which sending signals for communication within a group is regarded as altruistic behavior. As a result, the population came to consist of subgroups of individuals sharing the same properties and a common signal pattern for communication within each subgroup.

1 Introduction

Some animal species coexist in the same space. They use species-specific signals to communicate with members of their species and avoid confusion. This paper proposes a new evolution model in which many individuals are classified into groups of individuals with similar characteristics, and each group acquires a group-specific signal to cooperate with its companions. This model is named the "foraging model". Groups are characterized by the types of food they search for and eat. "Signal" means a "food call", which gathers members to discovered feeders. The evolution algorithm for such a model needs to be able to evolve "altruistic behavior", i.e., behavior by which individuals act to increase the benefit of many members of the same group at the expense of the actor's own benefit; a "food call" by a sender is an altruistic behavior benefiting many signal receivers. It is difficult to evolve such altruistic behavior with the usual selection in a GA, which weeds out individuals with low fitness (gained benefit). In biology, altruistic behavior is explained by "kin selection": the theory that altruistic behavior is a tactic for indirectly gaining offspring whose genes are partially similar to the actor's, by increasing the chance that many similar parents bear children. This probability of indirectly bearing similar offspring is called "inclusive fitness". This paper uses it in place of the general fitness (the benefit gained by one individual). Other literature3 has proposed such a replacement too; however, this paper uses a new definition suited to evolving several coexisting groups with altruists in each group. In the following, the new inclusive fitness and the model are defined, and the efficiency of inclusive fitness is confirmed.

Table 1: Symbolic expressions

X = (x_1, ..., x_n), Y = (y_1, ..., y_n), x_i, y_i ∈ {0, 1}
h(X, Y): Hamming distance between X and Y
w(X): weight of X (number of 1s in X)
X ⊕ Y = (x_1 ⊕ y_1, ..., x_n ⊕ y_n)  (⊕: exclusive OR)
gene g ∈ {0, 1}; individual a_i = (g_1, ..., g_t)
group A = {a_1, ..., a_M}; species P_i ⊆ A
distance d_ij = h(a_i, a_j)
benefit of a_i = b_i; benefit of A: B_A = Σ_{i=1}^{M} b_i
d̄_i = (1/M) Σ_{j=1}^{M} d_ij

2 Definition of Inclusive Fitness

Table 1 shows the definitions of the symbolic expressions used. A "species" P_i is defined as a subset consisting of members with a partially common gene. e_i is the general inclusive fitness3 of individual a_i:

e_i = φ(b_1, ..., b_M, r_i1, ..., r_iM) = Σ_j r_ij · b_j,  where r_ij = κ'(d_ij) = 1 − d_ij / d̄_i.   (1)

In formula (1), the similarity degree r_ij is larger as d_ij is smaller. This means that if many very similar individuals get more benefit, then e_i is raised. Conversely, for individuals whose distance from a_i is greater than the average d̄_i, r_ij takes a negative value; if these negatively similar individuals get some benefit, then e_i is lowered. With this formula, however, plural species cannot coexist, even if their benefit-acquiring behavior does not conflict with that of others. So this paper defines a new similarity degree r_ij by a function κ: e_i is influenced by the benefits of individuals at a low Hamming distance, who are usually in the same species, and is not influenced by the benefits of individuals at a high Hamming distance, who are usually in other species.
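Formula (1) is simple to compute. The sketch below follows our reading of the (partly garbled) formula, with r_ij = 1 − d_ij/d̄_i and d̄_i the mean Hamming distance of a_i to the group; the genomes and benefits are made-up toy data:

def hamming(x, y):
    return sum(a != b for a, b in zip(x, y))

def inclusive_fitness(i, genomes, benefits):
    """General inclusive fitness e_i = sum_j r_ij * b_j, following formula (1)."""
    d = [hamming(genomes[i], g) for g in genomes]
    dbar = sum(d) / len(d)
    if dbar == 0:                    # clone population: everyone fully similar
        return sum(benefits)
    return sum((1 - dij / dbar) * bj for dij, bj in zip(d, benefits))

genomes = [(0, 0, 0, 0), (0, 0, 0, 1), (1, 1, 1, 1)]
benefits = [1.0, 2.0, 0.0]
# Benefit to a close relative counts positively, to a distant individual
# negatively; here e_0 = 1*1 + 0.4*2 + (-1.4)*0 = 1.8 (approximately).
print(inclusive_fitness(0, genomes, benefits))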
Definition 2.1. r_ij := κ(d_ij) = ...

... > 0, a signal S_s = I_s ⊕ s_s is produced and sent to all individuals.b

5. When a_r receives the signal S_s from a_s, the inside-code I_r = S_s ⊕ s_r is produced.

6. Each individual has a "memory", which is a list of items (c, L_c, s). Here c is the index of the cell where the signal sender is, L_c is the rank of the food in cell c (the expected number of food items the receiver can take within one turna), L_c = w(I_r), and s is the similarity between sender and receiver.c Items in memory are stacked in L_c order (the highest L_c is at the top of the memory).

7. If the memory is not empty, then when individuals change cell, they move to the cell whose index c is at the top of the memory, and that item is deleted from the memory. If the signal sender a_s and receiver a_r have the same signal gene (s_s = s_r), then I_r = I_s.d

In this model, a "species" is regarded as a set of individuals who have some common taste-gene and common food ranks. If the members of one species have a common signal-gene, correct information is transmitted among the members, and the efficiency of foraging can be raised by signals. If individuals have the same signal-gene as members of another species, incorrect ranks are added to memory, and their foraging efficiency cannot be raised by signal information.

a A "turn" is a period in which all individuals act once.
b Individuals who are not similar to the sender have difficulty receiving: the probability that a_r receives a signal from a_s is rec(a_r, S_s) = exp(...).
c If duplicate information about the same cell is added, one item is deleted, namely the one whose s indicates the more dissimilar individual with respect to the gene.
d I_r = S_s ⊕ s_r = (I_s ⊕ s_s) ⊕ s_r = I_s, since s_s ⊕ s_r = 0 when s_s = s_r.

Figure 1: Left: evolution of foraging efficiency under inclusive fitness and under general fitness (average of 20 runs). Center: transition of the signal pattern. Right: final signal patterns (s = (0101), s = (1010), s = (0100)) and the donors that appeared in species P1, P2, P3.

In the following experiments, the number of individuals is M = 36, λ = 1/20, e = 200, (p, q, r) = (1, 4, 4), C = 2, and the food set is {F_1, F_2, F_3} = {(1111), (1110), (0000)} (see Table 2). If the action-gene is x_i = (0), a_i is called a donor; the others are called recipients. Donors are extreme altruists who specialize in searching and sending signals. Recipients search for and eat found food; the use of signals from donors raises the foraging efficiency of many recipients. Recipients move using their memories (3.1). Donors, however, move to other unsearched cells, and if the memory is empty they move at random.

4 Experiment

This section describes some experiments demonstrating the effect of individual selection by inclusive fitness. Several experiments compare general fitness (b_i: the number of food items taken by an individual within 1000 turns) and inclusive fitness (Def. 2.1).e

As a result of the experiments, the group finally consists of only three types of individuals, whose taste-genes f_i coincide with F_1, F_2, or F_3. These three classes of individuals are called species P1 (f_i = F_1), species P2 (f_i = F_2), and species P3 (f_i = F_3). At first, the numbers of members in P1, P2, and P3 increased. Next, signals were unified within each species. Finally, individuals became able to send and receive information correctly within the same species. The experiments confirm that with inclusive fitness a group obtains higher benefits than with general fitness (Fig. 1, left). The cause is apparent in Fig. 1 (right): it was difficult for general fitness to produce donors.

e Note that the algorithm used in these experiments is a GA with plural groups, using group selection.
Inclusive fitness, however, makes it possible. In the evolutionary process of species-specific signals, some small subgroups P_i' ∈ P_i appeared. These subgroups can be classified by their signal-genes. Fig. 1 (center) shows the subgroups P_1' ∈ P_1, and Fig. 1 (right) shows the finally acquired specialized signal-genes. After several generations, these subgroups were weeded out, except for one optimum subgroup P_1* which has Property 4.1.f

Property 4.1. For some i, P_i's (f_i, s_i) is fixed, and for all j, P_j's s_j = f_i ⊕ f_j ⊕ s_i (j = 1, ..., K).

Only when almost all individuals have Property 4.1 is information about the correct rank of foods transmitted even between the other two species.

f I_r = S_s ⊕ s_r = (F_v ⊕ f_s ⊕ s_s) ⊕ s_r = F_v ⊕ f_s ⊕ s_s ⊕ (f_s ⊕ f_r ⊕ s_s) = F_v ⊕ f_r (when a_s and a_r have Property 4.1), so C − w(I_r) = C − h(F_v, f_r) = rank of F_v for a_r.

5 Conclusion

This paper used Def. 2.1 as the similarity degree function κ, which makes possible the evolution of several species in one shared field. It realized the evolution of communication within each species and the appearance of altruists. Inclusive fitness, however, has restrictions. First, there is the premise of the existence of an "altruistic behavior gene", and the premise that altruistic behavior is always directed at individuals with similar genes; so inclusive fitness cannot be applied to arbitrary models. Second, it needs as long a gene string as possible, because inclusive fitness only takes effect when there are sufficient gaps in the similarity degree. We will inspect other functions satisfying the required properties and will clarify the effects and limits of inclusive fitness. This paper also confirmed a phenomenon in the evolution of signal-genes which makes it possible for species to use signals produced by other species. In the real ecosystem, for example, a common alarm call for common enemies is used across several species, and each individual can distinguish it from the private signals within its species. This signal-distinction problem is one item of future work.

References

1. W. D. Hamilton, The Genetical Evolution of Social Behaviour I, II, J. Theor. Biol., 1964.
2. Eiichi Kasuya, Primer of Behavior Ecology, Publisher of Tokai Univ., 1990.
3. Ezequiel A. Di Paolo, A Little More than Kind and Less than Kin: The Unwarranted Use of Kin Selection in Spatial Models of Communication, in "Advances in Artificial Life", Proc. ECAL'99, Springer-Verlag, 1999.
4. Kazue Kinoshita, Toshikazu Suzuki, Nobuhiro Inuzuka, Hidenori Itoh, An Evolutionary Acquisition of a Cooperative Population by Selection Methods, MACC99, 1999.

THE USE OF EMERGENT BEHAVIOUR IN A MULTI-AGENT SYSTEM TO DRIVE SELF-ADAPTATION AT THE INTERFACE

PETER MARSHALL AND SUE GREENWOOD
Intelligent Systems Research Group, School of Computing and Mathematical Sciences, Oxford Brookes University, Oxford, England, UK.
e-mail: pmarshall@brookes.ac.uk, sgreenwood@brookes.ac.uk

Multiple simple agents have been used to drive adaptive behavior in a system that presents data in various graphical and tabular forms. Agents observe the users' actions and review the data that is input into the system. Based on their observations, the community of agents makes decisions about which display formats to recommend when new data is loaded. Rather than carrying out high-level decision-making, the agents work as an emergent system, where the result of their interactions provides the set of recommended displays. This approach has been deployed in the real-world domain of medicine.
1 Introduction

Previous work in the Intelligent Systems Research Group employed a system that enabled data to be displayed according to the needs of a particular user. Due to the time-critical nature of the problem, spending time searching through the data was not feasible. This prohibited the system from being usefully employed in the clinical setting for which it was designed; thus a more automated approach was required. Current work involves redeveloping the earlier system from the ground up. A multi-agent system has been used to drive the adaptivity. A set of simple agents, each concerned with a single aspect of the system, communicate with each other, and the suggested summary is a result of the emergent behavior of the whole system. While emergent behavior is used in other areas where agents have been applied, notably robotics, it is novel to use this approach in adaptive interfaces. This paper first considers the use of reactive agents to provide a context for the application of emergence in the area of self-adaptive interfaces. The field of adaptive interfaces is also reviewed to identify approaches that have been used in the past. An emergent multi-agent system using a two-layer model is then described. This approach has been applied to the problem of providing self-adaptivity at the interface.

2 Intelligent Agents

Jennings et al. [4] provide an argument to show how reactive agents that utilize planning from first principles will not be viable. They consider that, by not employing a world model, the agents must have enough information in their local environment to allow them to decide on a reasonable action. Reactive agents are therefore restricted to relying on this local information and must take a 'short-term view'. Hence they do not see how these agents could learn from experience to improve performance over time. Their analysis suggests that reactive agents might not be a good idea except in specialist areas such as Brooks' work on robot control [1]. However, the work of Wavish and Graham [6] shows that reactive agents can produce interesting results in interface work. They have created systems with agents as actors, where the behavior of the system emerges from the interactions of the 'actors'. This indicates that the application of reactive agents to the user interface can be valid. In the proposed system, simple agents make decisions based on whether values exceed numeric thresholds and by comparing values directly. The agents can modify these thresholds when patterns of behavior repeat. The agents' internal state is stored between sessions, thus allowing the overall system behavior to adapt over time.

3 Adaptivity at the User Interface

Providing adaptivity requires obtaining user data. In static adaptation, the user is initially classified, and the system configures itself to match this classification at first use. In dynamic adaptation, the system takes account of the user's behavior while they are using the system. In this approach, the system can take time to learn the user's habits; Korvemaker & Greiner [5] discuss this problem. The use of stereotypes can address it: initial stereotypes can be modified over time, as in Bushey et al.'s [2] CDM method. To allow adaptation over time, the user must be monitored. There are two ways to attempt this. The first is to build up a discourse model over time: user choices can be tracked and used to reveal patterns in behavior, as in Goecks [3] for example. The second is to ask the user for ratings.
This can cover everything from simply asking the user whether or not to carry out an action, up to and including modifying the content so that it can be rated. This second, more direct approach gives more concrete feedback about the users' opinions of the content, but it could be considered intrusive. In the problem considered here, where data needs to be interpreted by the user in a time-critical situation, it is probably more important not to disturb the user than to gain direct feedback.

4 System Development

The aim of this system is to produce a form of adaptivity where the user can be offered what the system considers the most relevant data views, while not taking control of the system away from the user. When requested, agents decide on a summary that consists of a list of possible data views, which appears in a new window to the side of the main window, thus not interfering with normal activity. The user can view or ignore all or part of the recommendations as they see fit. Figure 1 below shows the architecture of the system. To drive the adaptivity, a community of agents is used. Each agent is relatively simple in itself, but the power of the system comes from the interactions of the various agents. The agents are divided into two layers: the interaction layer, comprising interface, data, and reasoning agents, and the control layer, comprising overseer and scheduler agents. Interaction-layer agents monitor the actions of the system and propose changes to the summary, while control-layer agents coordinate the actions of the interaction-layer agents. The various types of agent are considered below.

4.1 Interface Agents

Each interface action has an interface agent monitoring it. When an agent's action occurs, it notifies its reasoning agent and then the discourse model. By observing which actions occur near the time it fired, it can decide which other agents might be linked to its firing and update its internal belief model accordingly.

4.2 Data Agents

Unlike interface agents, which respond after every action, the data agents act as a group when a new dataset is loaded. Each data agent watches a particular facet of the data and, when new data is loaded, decides whether it differs from the norm enough to warrant a high or low rating. It then tells the domain model whether its value was high, low, or normal. Once all data agents have reported, any with high or low values can, by observing others with these results, update their internal beliefs about direct or inverse relationships.

4.3 Reasoning Agents

Reasoning agents are each associated with a single interface agent but can interact with all data agents. When a summary is requested, each reasoning agent derives a final rating for its interface agent based on the raw rating from the number of times the interface agent has fired, plus modifiers from patterns discerned by its interface agent or by any of the data agents (a sketch of this rating-and-ranking step is given after Section 4.4).

Figure 1: The agent architecture (user interface; control layer with overseer agent; interaction layer with interface agents, discourse model, reasoning agents, data agents, and blackboard; dataset).

4.4 Overseer Agents

Once all reasoning agents have determined their ratings, the overseer ranks them to produce the final summary. As the agents rely on emergence, one must be wary of the behavior moving outside accepted bounds. The overseer looks at both the raw and total ratings and so can decide whether the modifiers applied are out of range.
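As a rough illustration of how the reasoning and overseer agents could combine raw usage counts with pattern-based modifiers, consider the following sketch. All names here (fire_count, pattern_bonus, the bound MAX_MODIFIER) are invented stand-ins; the paper does not publish its implementation:

class ReasoningAgent:
    """Derives a final rating for its interface agent (hypothetical sketch)."""
    def __init__(self, view_name, interface_agent, data_agents):
        self.view_name = view_name
        self.interface_agent = interface_agent
        self.data_agents = data_agents

    def rating(self):
        raw = self.interface_agent.fire_count          # how often the view was used
        # Modifiers come from patterns the interface and data agents discerned.
        modifiers = self.interface_agent.pattern_bonus()
        modifiers += sum(d.pattern_bonus(self.view_name) for d in self.data_agents)
        return raw, raw + modifiers

class Overseer:
    """Ranks the reasoning agents' ratings and bounds the emergent behaviour."""
    MAX_MODIFIER = 5.0    # assumed bound on how far modifiers may push a rating

    def summary(self, reasoning_agents, top_n=5):
        ranked = []
        for agent in reasoning_agents:
            raw, total = agent.rating()
            # Clamp out-of-range modifiers so emergence stays within bounds.
            total = min(total, raw + self.MAX_MODIFIER)
            ranked.append((total, agent.view_name))
        ranked.sort(reverse=True)
        return [name for _, name in ranked[:top_n]]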
4.5 Scheduler Agents

These agents are concerned with the smooth running of the system. They notify the relevant interaction-layer agents when events of interest to specific agents occur. Because of the way the agents are implemented, where each agent action is the direct result of a particular event, there have not been coordination issues, but the role of the scheduler could be expanded to deal with this if required.

5 Analysis

The agents described above are able to make decisions that allow the summary the system offers to adapt over time and usage. As noted above, there is the problem of lead time before a system such as this can hope to perform adequately. This is addressed by giving each user a stereotype that is modified over time. To check how effective the offered summary is, the agents watch to see whether the user selects items from it. If they do, this is taken as positive reinforcement of the item's inclusion. In this way, there is a feedback loop. If one has a strong model of what a user is trying to achieve, then one can simply map this to the actions they are taking; without such a model, one runs into difficulties. In this system, the agents directly observe the data that the user uses and, by linking patterns in the data to patterns in user behavior, attempt to overcome the lack of an explicit model of the user's goals. Using a community of simple agents that communicate with each other, it is possible to consider the actions at the interface and the patterns in the data separately, while still having a mechanism in place that allows these two analyses to be combined into final decisions.

6 Conclusions

This work demonstrates the use of emergent behavior in a community of agents to provide a self-adaptive system. To achieve this using a conventional approach would have required the construction of a far more complex set of high-level models. We have produced a model for a group of agents that combine through their interactions to produce a working system. With the relationships between the various patterns in the data and user actions implicitly modeled, one can, at least in some cases, sidestep the need for complex high-level models.

7 References

1. Brooks R A, Intelligence without Representation, Artificial Intelligence 47 (1991), pp. 139–159.
2. Bushey R, Mitchell Mauney J and Deelman T, The Development of Behavior-Based User Models for a Computer System, UM99 The 7th Int. Conf. on User Modeling (1999), pp. 109–118.
3. Goecks J and Shavlik J, Learning Users' Interests by Unobtrusively Observing their Normal Behavior, IUI 2000 Int. Conf. on Intelligent User Interfaces (2000), pp. 129–133.
4. Jennings N, Sycara K and Wooldridge M, A Roadmap of Agent Research and Development, Autonomous Agents and Multi-Agent Systems, 1 (1998), pp. 275–306.
5. Korvemaker B and Greiner R, Predicting Unix Command Lines: Adjusting to User Patterns, Proc. 17th Nat. Conf. on AI (2000), pp. 230–235.
6. Wavish P and Graham M, Roles, Skills and Behaviour: A Situated Action Approach to Organizing Systems of Interacting Agents, in Wooldridge M and Jennings N R (Eds), Intelligent Agents – Theories, Architectures and Languages, Lecture Notes in AI 890, (1994) Springer Verlag, pp. 371–385.

A BIOLOGICALLY INSPIRED FOUR LEGGED ROBOT THAT EXHIBITS SOME NATURAL WALKING BEHAVIOURS

S. Peng, G. R. Cole, C. P. Lam
School of Engineering, Murdoch University, Perth, WA, Australia 6150
email: shiqi@eng.murdoch.edu.au
This paper presents the design and implementation of a four-legged walking robot that can exhibit some natural walking behaviours. The robot is inspired by biological systems and aims to address the complex behaviours of legged locomotion from a biological perspective. The robot displays its natural walking manner via overall emergent walking behaviours generated from real-time interactions with its environment. A parallel Subsumption Architecture is applied for the physical implementation of the design.

1 Introduction

Legged robots have been studied for a long time, and a number of them have been built in recent years for laboratory investigation and practical application. Based on the number of legs, they can be classified into three main types:

• Biped robots, which have two legs (e.g., the Honda humanoid robot [1], Eyebot [2]).
• Quadruped robots, which have four legs (e.g., BISAM [3], TITAN [4]).
• Insectoid robots, which have more than four legs (e.g., [5, 6]).

Although most of these walking robots are also inspired by biological systems, the approaches used to generate their walking behaviours have been very much from an engineering perspective. The smooth and natural walking behaviours exhibited by animals are seldom shown in existing walking robots. Many biological studies on animal locomotion (e.g., [7–10]) have shown that a natural rhythmic cycle of animal locomotion is composed of several different phases (also known as the duty factor [11] in most biophysics documents). Different gaits have different numbers of phases: a walking gait has four phases, while trotting, pacing, and bounding have two. The reason why there are different gaits in animal locomotion is that certain gaits result in the most efficient energy consumption at certain speeds [12]. For example, the walking gait is suitable for low-speed locomotion, while pacing and trotting are suitable for high-speed locomotion.

The walking gait has four different phases (phases 0, 1, 2, 3, as shown in Figure 2). Each leg passes through these four phases during walking; however, at any one time the four legs are at different phases. At phase 0, a leg (referred to as the leading leg [13]) is lifted and swung forward. At phases 1, 2, and 3, a leg moves backward. All four legs cooperate to generate the force that moves the body forward: while one leg is moved forward, the other three legs on the ground push in the backward direction simultaneously. The difference between phases 1, 2, and 3 is the position of the leg relative to the body. For example, at phase 3 a leg is at the fully extended position (e.g., the left back leg in Figure 1-i), and at phase 1 it is at a less extended position (e.g., the right back leg in Figure 1-i).

Another essential issue for natural walking is balance. Raibert addressed this issue in much of his research [14]. There are two types of balance strategy in animal locomotion: dynamic balance and static balance. In static balance, the center of gravity of the animal is always kept within the supporting area formed by its legs on the ground; the animal can statically keep its posture and not fall down. In dynamic balance, the animal's center of gravity is sometimes outside its supporting area. The animal must then use its movements, which generate momentum, to compensate for its temporary instability.
For instance, when a leg is off the ground and swung forward, the center of gravity of the body may be outside its supporting area, which would result in falling. However, as long as the leg can complete its forward motion before the body falls beyond a tolerable limit, the falling is acceptable and usable to the animal. Animals employ both balance strategies during locomotion; the faster an animal moves, the more the dynamic strategy is employed.

The Subsumption Architecture (SA) [15] is a robotic architecture inspired by biological systems. It is a bottom-up, reactive AI approach without a model and/or representation of its environment. A Subsumption Architecture is made up of a hierarchical set of pre-defined behaviours, which all operate in parallel. A behaviour is defined as a set of actions triggered by certain (physical or virtual) sensor conditions for achieving a certain goal that will eventually facilitate the achievement of the final system goal. According to preset suppression rules, higher-level behaviours, if triggered, can suppress lower-level ones.

This paper presents the design and implementation of a four-legged walking robot, inspired by four-legged animals (e.g., a dog), and aims to investigate the problem of natural walking, an issue yet to be addressed sufficiently in the robotics community. This is an attempt at a robot with reasonable complexity and similarity to its biological counterpart, since without these, some animal walking issues may be overlooked. The walking behaviours of the robot are implemented using one SA for each leg (i.e., parallel SA).

2 Design and Implementation

As addressed previously, the cycle of the walking phases is set as 0-3-2-1-0... . Given that the full movement distance of a leg (relative to the body) is defined as 100%, the four leg positions (d, c, b, a) represent positions of 100%, 66%, 33%, and 0%. The movements in these four phases are as follows:

• Phase 0: move from position a to position d (in the forward direction).
• Phase 3: move from position d to position c (in the backward direction).
• Phase 2: move from position c to position b (in the backward direction).
• Phase 1: move from position b to position a (in the backward direction).

A detailed illustration of the walking gait is shown in Figure 1. By implementing these cycles of leg motion, a walking behaviour for the robot can be achieved. The legs enter phase 0 in the order left front, right hind, right front, left hind (the normal walking gait for four-legged animals). A picture of the robot, built as an experimental platform, is shown in Figure 2. Pneumatic cylinders attached to the limbs act as "muscles", providing the actuation through the use of solenoid valves. An independent Subsumption Architecture with its own action execution unit has been developed for each leg of the robot, resulting in four SA functioning in parallel in the system. There are no direct communications among the four architectures; the only connections between them are the physical body of the robot and a simple Central Pattern Generator (CPG) that coordinates the leg movement phases (a minimal sketch of this coordination follows).
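The phase cycle and its coordination across the four legs can be captured in a few lines. The sketch below encodes the phase order 0-3-2-1 and the initial per-leg phases reported in Section 3 (0, 2, 3, 1 for left front, right front, left hind, right hind); the class itself is our own minimal stand-in for the robot's CPG:

# Leg positions relative to the body: d=100%, c=66%, b=33%, a=0% of the stroke.
POSITIONS = {"a": 0, "b": 33, "c": 66, "d": 100}

# Phase transitions within one leg: 0 -> 3 -> 2 -> 1 -> 0 -> ...
NEXT_PHASE = {0: 3, 3: 2, 2: 1, 1: 0}

# Movement carried out in each phase (from-position, to-position, direction).
PHASE_MOVES = {
    0: ("a", "d", "forward"),    # leading leg: lift and swing forward
    3: ("d", "c", "backward"),   # on the ground, pushing the body forward
    2: ("c", "b", "backward"),
    1: ("b", "a", "backward"),
}

class SimpleCPG:
    """Minimal central pattern generator coordinating the four legs' phases."""
    def __init__(self):
        # Initial phases: left front, right front, left hind, right hind.
        self.phases = {"LF": 0, "RF": 2, "LH": 3, "RH": 1}

    def step(self):
        moves = {leg: PHASE_MOVES[p] for leg, p in self.phases.items()}
        self.phases = {leg: NEXT_PHASE[p] for leg, p in self.phases.items()}
        return moves

cpg = SimpleCPG()
for _ in range(4):               # one full walking cycle
    print(cpg.step())            # first step: LF swings forward, others push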
Figure 1: Walking gait of the design. (i) Step 1: left front leg off-ground moving forward, the others on-ground pushing backward; (ii) Step 2: right hind leg forward; (iii) Step 3: right front leg forward; (iv) Step 4: left hind leg forward.

Six types of behaviours, Stand, Forward, Backward, Balance, LegDown, and Protect, listed in Table 1, are defined for the four SA.

Table 1: List of behaviours

Behaviour   Sub-goal                                                        Priority
Stand       Lock a leg at the current position                              0
Forward     Carry out the phase-0 actions of a leg (swing the leg forward)  1
Backward    Carry out the other phases (1, 2, 3) to push the body forward   1
Balance     Balance the body to avoid falling over                          2
LegDown     Put a leg down to make contact with the ground                  3
Protect     Set the legs and body of the robot to "safe" positions          4

In total, 24 behaviours run in parallel in the system, cooperating to generate an overall emergent walking behaviour during locomotion. The Forward and Backward behaviours are at the same priority level and thus do not suppress each other. All the behaviours in the four SA operate concurrently. The Forward behaviour swings a leg forward from position a (e.g., the start position of the left front leg in Figure 1-i) to position d (e.g., its stop position in Figure 1-i). The Backward behaviour moves a leg backward one unit distance at a time (e.g., from position d to position c, c to b, etc.) to generate the force for pushing the robot's body forward. At any moment, if the Forward behaviour of a leg is triggered, the Backward behaviours of the remaining three legs are also triggered at the same time, with different phases. These two types of behaviours automatically record their phase status and move to the next status in the order of phases 0-3-2-1.

The Balance behaviour is designed to supplement the Backward and Forward behaviours in implementing the robot's balance strategies (both dynamic and static, depending on the real-time situation). The Balance behaviours are activated when the body of the robot tilts at an angle (e.g., 10 degrees) to the horizontal surface. If the robot tilts beyond a critical degree (e.g., 20 degrees), the LegDown behaviour is activated, lowering leg(s) to the ground to support the weight of the robot and prevent it from tipping over. To protect the robot in situations where it may fall over, a Protect behaviour (one per SA) can be activated; it resets the robot to a pre-defined "safe" state.

In terms of the physical implementation of these architectures, the methodology proposed in [16] has been used. Behaviours are implemented as behaviour objects that are instantiated from the Behaviour class and composed of reusable components (e.g., Action, Trigger, and Executor components). A behaviour encapsulates all its functionality and characteristics (e.g., its trigger condition, suppressible behaviour list, actions, and operating knowledge) so that it can operate independently, and no extra behaviour arbitrator is required. The development language employed is SwiftX 2.5 [17], which provides a simple multi-thread (task) programming and operating environment. Each behaviour, as well as the Action Execution Unit of an architecture, is implemented as an instance running in an independent thread.
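A behaviour object in this style can be sketched as follows. The suppression lists and the self-contained activation test mirror the "no extra arbitrator" idea; the sensor keys and the concrete suppression sets are our own assumptions (the actual system is written in SwiftX, not Python):

class Behaviour:
    """A behaviour object in the spirit of [16]: it carries its own trigger
    condition and its own list of behaviours it may suppress, so no extra
    central arbitrator is needed."""
    def __init__(self, name, priority, trigger, suppresses=()):
        self.name = name
        self.priority = priority          # as in Table 1
        self.trigger = trigger            # callable: sensor dict -> bool
        self.suppresses = set(suppresses) # lower-level behaviours it overrides

    def active(self, sensors, all_behaviours):
        """True if triggered and not suppressed by any triggered behaviour."""
        if not self.trigger(sensors):
            return False
        return not any(other.trigger(sensors) and self.name in other.suppresses
                       for other in all_behaviours)

# One leg's architecture; the tilt thresholds (10 and 20 degrees) are from the
# paper, while the sensor dictionary keys are our own invented stand-ins.
leg = [
    Behaviour("Stand",   0, lambda s: True),   # no trigger condition of its own
    Behaviour("Forward", 1, lambda s: s["my_turn"],   {"Stand"}),
    Behaviour("Balance", 2, lambda s: s["tilt"] > 10, {"Stand", "Forward"}),
    Behaviour("LegDown", 3, lambda s: s["tilt"] > 20, {"Stand", "Forward", "Balance"}),
    Behaviour("Protect", 4, lambda s: s["unstable"],  {"Stand", "Forward", "Balance", "LegDown"}),
]
sensors = {"my_turn": True, "tilt": 25, "unstable": False}
print([b.name for b in leg if b.active(sensors, leg)])   # -> ['LegDown']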
3 Experiment Results

A laboratory floor is used as the testing terrain. The robot successfully walks from one end of the floor to the other at a speed of about 2.5 meters/minute, exhibiting distinct walking behaviours. The initialisation stage involves setting conditions whereby each leg is positioned at a preset phase (the legs are preset at phases 0, 2, 3 and 1 for the left front, right front, left hind and right hind legs respectively). Since the Stand behaviour has no "trigger" conditions, it activates automatically provided no other behaviour is active.

The sequence of behaviours is not deterministic, but a typical scenario is as follows. When the CPG is first started, the Forward behaviour of the left front leg and the Backward behaviours of the remaining three legs are triggered. The Forward behaviour suppresses the Stand behaviour, becomes active and moves the leg forward: the leg extends downward onto the ground, lifts off, fully swings forward and is placed down on the ground. At the same time, the Backward behaviours of the other three legs push backward on the ground to move the body forward; they cooperate to generate the force necessary for the robot to move forward. A smooth transition of leg position phases is shown during movement; visually, the robot is seen to be walking forward.

During these activities, the LegDown behaviour may be activated if the body of the robot tilts beyond a tolerable degree. If triggered, this behaviour suppresses any lower-level behaviour (e.g., a Forward or Backward behaviour) to become the active behaviour; its actions put the leg onto the ground in an attempt to prevent the robot from tipping over. Once this behaviour completes, the Balance behaviour is triggered to further stabilize the robot's posture. In the worst case, when the robot loses its balance and reaches an abnormal, unstable posture, the Protect behaviour is triggered to reset the posture of the robot to a predefined "safe" position. When the robot regains its balance, the Forward or Backward behaviours are activated again. This alternation of behaviours may occur repeatedly until all the stepping actions have been completed. After the first "phase of walking", the phases of the legs change to 3 (left front), 1 (right front), 2 (left hind) and 0 (right hind), so that the robot is ready for the right hind leg to move forward. Overall, the interactions inside the system generate an emergent walking behaviour that enables the robot to move forward. The process discussed above is for one of the four legs and occurs simultaneously for all four legs.

Figure 2 shows a walking cycle of the robot on flat ground; 2a to 2d sequentially show the steps of the left front leg, right hind leg, right front leg and left hind leg. The phase transitions of the left front leg are shown by the arrow box pointing to the leg (note the positions of the legs relative to the body); the transition phases of the other three legs are similar. A point to note here is that a pulsed mode of operation is used with the pneumatic cylinders, so the movement of a limb consists of a number of smaller pulsed movements, or jerks. For this reason, we are not yet at a stage where it would be fair to compare the walking quality of this robot to others, given that our movements have yet to be fully optimized. An MPEG video clip of the walking behaviours of the robot can be obtained via the Internet from this address: http://eng.murdoch.edu.au/~shiqi/Robotwalker.mpg.
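Continuing the earlier Behaviour sketch, a per-tick selection loop like the following (an illustration with assumed tilt thresholds of 10 and 20 degrees, as in the examples above) reproduces the arbitration just described: Forward runs in the normal case, LegDown takes over past the critical tilt, and Stand wins only when nothing else fires:

```python
# Usage sketch for the Behaviour objects above, with assumed thresholds.
stand   = Behaviour("Stand",   0, None,                           lambda: None)
forward = Behaviour("Forward", 1, lambda s: s["phase"] == 0,      lambda: None)
balance = Behaviour("Balance", 2, lambda s: s["tilt_deg"] > 10.0, lambda: None)
legdown = Behaviour("LegDown", 3, lambda s: s["tilt_deg"] > 20.0, lambda: None)

forward.suppresses = [stand]
balance.suppresses = [stand, forward]
legdown.suppresses = [stand, forward, balance]

leg = [stand, forward, balance, legdown]
for sensors in ({"phase": 0, "tilt_deg": 3.0},    # normal swing
                {"phase": 0, "tilt_deg": 25.0}):  # critical tilt
    winners = active_behaviours(leg, sensors)
    print(sensors, "->", [b.name for b in winners])
# First tick: Forward suppresses Stand. Second: LegDown suppresses all.
```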
Experiments with the robot employing different gaits, and subjecting it to different terrains, are being carried out.

4 Conclusion and Future Work

This paper has presented the design and implementation of a four-legged walking robot that incorporates some biological inspiration, which enables the robot to walk. Four parallel SAs are used in the robot to physically implement the concepts. All of the behaviours within the four parallel SAs and a simple CPG co-operate to generate emergent walking behaviours. In the future, a more complicated CPG will be incorporated for walking-phase optimization, together with machine learning, to enable the robot to carry out more sophisticated and flexible natural walking behaviours.

References
1. Honda Motor Co., "The Honda Humanoid Robot," http://www.honda.co.jp/english/technology/robot/tec1.html, 2000.
2. T. Braunl, "The EyeBot mobile robot family," presented at the International Conference on Artificial Intelligence (IC-AI), Las Vegas, 1999.
3. W. Ilg, K. Berns, H. Jedele, J. Albiez, R. Dillmann, M. Fischer, H. Witte, J. Biltzinger, R. Lehmann and N. Schilling, "BISAM: From small mammals to a four legged walking machine," presented at the Fifth International Conference of the Society for Adaptive Behaviour, 1998.
4. K. Yoneda, "TITAN VIII," Tokyo Institute of Technology, http://www.fzi.de/ipt/WMC/preface/node150.html, 1998.
5. A. Onat, "The Six Legged Walker Gokiburi," University of Tokyo, http://turbine.kuee.kyoto-u.ac.jp/staff/onat/altibacak.html, 1998.
6. M. Binnard, "Boadicea - A Small, Pneumatic Walking Robot," http://www.ai.mit.edu/projects/boadicea/boadicea.html, 1998.
7. E. von Holst, "Über relative Koordination bei Arthropoden," Pflügers Archiv, vol. 246, 1943.
8. E. R. Kandel, J. H. Schwartz, and T. M. Jessell, Principles of Neural Science. Norwalk, CT: Appleton & Lange, 1991.
9. S. Grillner, "Control of locomotion in bipeds, tetrapods, and fish," in Handbook of Physiology, 1981, pp. 1179-1236.
10. M. L. Shik and G. N. Orlovsky, "Neurophysiology of locomotor automatism," Physiological Reviews, vol. 56, pp. 465-501, 1976.
11. R. M. Alexander, Locomotion of Animals. New York: Blackie & Son Limited, 1984.
12. R. M. Alexander, Exploring Biomechanics: Animals in Motion. New York: Scientific American Library, 1992.
13. A. B. Howell, Speed in Animals. New York: Hafner Publishing Company, Inc., 1965.
14. M. H. Raibert, "Legged robots," in Robotics Science. Cambridge, MA: The MIT Press, 1989, pp. 563-594.
15. R. A. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, vol. RA-2, no. 1, pp. 14-23, 1986.
16. S. Peng, G. R. Cole and C. P. Lam, "A generic framework for implementing Subsumption Architecture," presented at RA2000, Honolulu, Hawaii, USA, 2000.
17. "SwiftX Reference," SwiftX Co., Ltd., http://www.forth.com, 2000.

CHAPTER 4
KNOWLEDGE DISCOVERY AND DATA MINING AGENTS

CM-RELVIEW: A TOOL FOR CAUSAL REASONING IN MULTIAGENT ENVIRONMENTS

BRAHIM CHAIB-DRAA
Computer Science Department, Pavillon Pouliot, Laval University, Ste-Foy, PQ, Canada G1K 7P4
Email: chaib@ift.ulaval.ca

Analytical techniques are generally inadequate for dealing with causal interrelationships among a set of individual and social concepts. In this paper, we present a software tool called CM-RELVIEW, based on relational algebra, for dealing with such causal interrelationships.
We then investigate the use of this tool in multiagent environments, particularly in the case of (1) qualitative distributed decision making and (2) the organization of agents considered as a wholistic approach. For each of these aspects, we focus on the computational mechanisms developed within CM-RELVIEW to support it.

1 Introduction

Cognitive maps follow personal construct theory, first put forward by Kelly [8]. This theory provides a basis for representing an individual's multiple perspectives. Kelly suggests that understanding how individuals organize their environments requires that subjects themselves define the relevant dimensions of that environment. He proposed a set of techniques, known collectively as a repertory grid, to facilitate empirical research guided by the theory. Personal construct theory has spawned many fields and has been used as a first step in generating cognitive maps.

Huff [7] has identified five generic "families" of cognitive maps. Among these families, there is one that shows influence, causality and system dynamics. Maps of this type, called causal maps, generally allow the map maker to focus on action: for example, how the respondent explains the current situation in terms of previous events, and what changes she expects in the future. This kind of cognitive map has been, and remains, the most popular mapping method.

Causal maps are generally used for dealing with the cause-effect relations embedded in deciders' thinking. These maps are represented as directed graphs whose basic elements are simple: the concepts an individual (a decision-maker or a group of decision-makers) uses are represented as points, and the causal links between these concepts are represented as arrows between these points. This representation gives a graph of points and arrows, called a causal map (CM). The strategic alternatives, all of the various causes and effects, goals, and the ultimate utility* of the decision-maker can all be considered as concept variables and represented as points in the CM.

*Utility means the unspecified best interests of a decision maker.

Causal relationships can take on different values based on the most basic values + (positive), - (negative), and 0 (neutral). Logical combinations of these three basic values give the following: "neutral or negative" (⊖), "neutral or positive" (⊕), "non-neutral" (±), "ambivalent" (a) and, finally, "positive, neutral, or negative" (i.e., "universal") (?) [1,5,11]. The real power of this approach appears when a CM is pictured in graph form: it is then relatively easy to see how concepts and causal relationships are related to each other, and to see the overall causal relationship of one concept with another, particularly when these concepts are the concepts of several agents.

Figure 1. An example of a causal map (concepts "Japan remains idle," "Japanese attrition," "US preparedness" and "Japanese success in war," linked by signed edges).

The CM of Fig. 1, taken from [10], explains how the Japanese made the decision to attack Pearl Harbor. Indeed, this CM states that "remaining idle promotes the attrition of Japanese strength while enhancing the defensive preparedness of the United States, both of which decrease Japanese prospects for success in war". Thus, a CM is a set of concepts such as "Japan remains idle," "Japanese attrition," and so forth, and a set of signed edges representing causal relations like "promote(s)," "decrease(s)," and so forth.
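The qualitative algebra over these sign values can be sketched as follows (a minimal illustration in Python, restricted to the basic signs; the handling of the derived values is an assumption where the paper's exact conventions are not given here). Signs multiply along a causal path and add across parallel paths:

```python
# Minimal sketch of qualitative sign algebra for causal maps, restricted
# to the basic values '+', '-', '0' plus '?' ("universal"/unknown); the
# paper's full label set is {a, +, -, 0, (+), (-), +/-, ?}.

def qmul(x, y):
    """Compose two causal links in sequence (enemy's enemy is friend)."""
    if '0' in (x, y):
        return '0'
    if '?' in (x, y):
        return '?'
    return '+' if x == y else '-'

def qadd(x, y):
    """Combine the effects of two parallel causal paths."""
    if x == y:
        return x
    if x == '0':
        return y
    if y == '0':
        return x
    return '?'   # '+' and '-' in parallel: net effect cannot be determined

# The Pearl Harbor map of Fig. 1: both paths from "Japan remains idle"
# to "success in war" agree, so the total effect is negative.
path1 = qmul('+', '-')   # idle -(+)-> Japanese attrition -(-)-> success
path2 = qmul('+', '-')   # idle -(+)-> US preparedness   -(-)-> success
print(qadd(path1, path2))  # -> '-'
```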
Note that the concepts' domains are not necessarily defined precisely, because there are no obvious scales for measuring "US preparedness," "success in war," and so forth. Nevertheless, it seems easy to catch the intended meaning of the signed relationships in this model [14]. Like any causal map, the CM of Fig. 1 can be transformed into a matrix called an adjacency or valency matrix: a square matrix with one row and one column for each concept.

The inferences we can draw from a CM are based on qualitative reasoning similar to "a friend's enemy is an enemy, an enemy's enemy is a friend," and so forth. Thus, in the case of Fig. 1, "remaining idle" decreases the prospects for Japanese success in a war along two causal paths; notice that the relationship between idleness and war prospects is negative because both paths agree. Under these conditions, Japan has an interest in starting the war as soon as possible if she believes that war is inevitable.

Thus, causal maps, and the qualitative reasoning they sustain, generally serve as the modeling language for problem resolution through decision making, particularly in multiagent systems, where decisions generally emerge from interrelationships among agents' concepts. Such is the case for the previous example, which reflects a multiagent system in the sense that "Japan" and "USA" are individual agents.

In this paper, we present an implementation of a formal model (details on this model can be found in [6]) in a system used as a computational tool supporting the relational manipulations.

2 CM-RELVIEW: An Implementation of the Relational Model of CMs

CM-RELVIEW has been built on top of the RELVIEW software,* a freeware package developed by Berghammer and Schmidt [2]. In the CM-RELVIEW system, all data are represented as binary relations, which the system visualizes in two different ways. For homogeneous relations, CM-RELVIEW offers a representation as cognitive maps, including several different algorithms for pretty-printing. As an alternative, an arbitrary relation may be displayed on the screen as a Boolean matrix. With the matrix representation, we can visually edit, and also discover, various structural properties that are not evident from the causal map representation. The CM-RELVIEW system can manage as many graphs and matrices simultaneously as memory allows, and the user may manipulate and analyze the relations behind these objects by combining them with the operators of relational algebra. The elementary operations can be accessed through a simple mouse-click, but they can also be combined into relational expressions, mappings, and imperative programs. CM-RELVIEW also allows users to store relations and CMs.

*This software can be obtained by anonymous ftp from http://www.informatik.uni-kiel.de/~progsys/relview.html.

In addition, CM-RELVIEW offers a menu window (Fig. 2) that can be divided into different parts. The first part deals with tasks such as: (1) FILES, which opens the file-chooser window; (2) INFO, which helps users by giving appropriate information; and (3) QUIT, which quits the system.

Figure 2. The menu window of CM-RELVIEW. (The window groups buttons for the editors (RELATION, GRAPH), the directories (XRV/PROG, LABEL), user-defined functions and tests (DEFINE, EVAL, ITER, TESTS), basic operations, residuals and quotients, closures (TRANS, REFL, SYMM), and products, sums and powersets.)
The "Editors" part includes the following: (a) RELATION, which opens the window of the relation editor; and (b) GRAPH, which pops up the window of the CMs editor.

By clicking the RELATION button, one opens the relation editor. One can then load a relation by simply selecting it in the first scroll list of the directory window. Typically, the window of the relation editor looks like a grid network in which a single entry of the relation, unequivocally defined by a row and a column, is represented by one element of the set C := {a, +, -, 0, ⊕, ⊖, ±, ?}. If the mouse pointer is located on an item of a relation, the mouse buttons invoke the following actions:

• the left mouse button sets the item if it was cleared, or clears it if it was set;
• the middle mouse button allows one to choose the value (which the left mouse button then sets) from the set C := {a, +, -, 0, ⊕, ⊖, ±, ?};
• the right mouse button pops up a menu offering (i) NEW, which creates a new relation; (ii) DELETE, which deletes the relation displayed in the relation editor window from the workspace (the causal map associated with the deleted relation is also deleted); and (iii) RELATION → GRAPH, which creates a CM, with the same name as the relation, from a homogeneous relation (the CM is displayed in the graph editor).

The window of the graph editor (i.e., the CM editor) can be opened by pressing the GRAPH button in the menu window. As with relations, all actions within this menu are selected with the right mouse button. By pressing this button, we reach the graph menu, within which we can invoke, in particular, the following actions:

• DELETE, which deletes all nodes of a causal map;
• NEW, which opens a dialog window allowing one to enter a name for a causal map;
• GRAPH → RELATION, which creates a relation from a causal map;
• GRAPH-DRAWING, which opens a submenu from which different graph algorithms can be chosen, particularly LAYER, which places the edges vertically; FOREST, which draws a directed forest; and WHOLISTIC-APPROACH, which draws the particular kind of causal map that we detail in Section 4.

CM-RELVIEW also offers a "Directories" part, which contains:

• XRV/PROG, which displays the directory window showing the state of the workspace and the reasoning on causal maps;
• LABEL, which opens the label directory listing the label set, in our case C := {a, +, -, 0, ⊕, ⊖, ±, ?}.

The buttons in the "User-defined functions and tests" part are those most needed while working with the CM-RELVIEW system:

• EVAL pops up the evaluation window for entering a relational term (a relational term can be a relation, a function, or a relational program);
• TESTS pops up a window for invoking tests. With this command, one can perform the following actions: (i) TEST-1-R, to execute various kinds of tests on a relation (is it empty, injective, symmetric? etc.); (ii) TEST-2-R, to execute tests on two relations (are they equal, included? etc.); (iii) SUBJECTIVE VIEWS, to run tests on CMs when reasoning about subjective views (COMPARISON, PREDICTION, EXPLANATION and NEGOTIATION); and (iv) WHOLISTIC-CM, to execute strategies of change on the particular CMs representing an organization of agents, as discussed in Section 4.

Finally, the other parts of the menu window offer a number of relational operations that are directly accessible via push buttons. Among these operations, TRANS allows one to calculate the transitive closure of a given relation.
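Conceptually, what TRANS computes over a causal map is the total effect of each concept on every other: the closure of the signed valency relation under the path and parallel-path operations sketched in the introduction. A minimal illustration (reusing qmul/qadd from that sketch; this is not CM-RELVIEW's actual algorithm):

```python
# Sketch: qualitative transitive closure of a signed valency matrix,
# reusing qmul/qadd from the earlier sketch.

def total_effect(V):
    """Accumulate effects along ever-longer paths until signs stabilize."""
    n = len(V)
    T = [row[:] for row in V]
    changed = True
    while changed:
        changed = False
        for i in range(n):
            for j in range(n):
                for k in range(n):
                    via = qmul(T[i][k], V[k][j])
                    new = qadd(T[i][j], via)
                    if new != T[i][j]:
                        T[i][j], changed = new, True
    return T

# Fig. 1 as a valency matrix over (idle, attrition, preparedness, success):
V = [['0', '+', '+', '0'],
     ['0', '0', '0', '-'],
     ['0', '0', '0', '-'],
     ['0', '0', '0', '0']]
print(total_effect(V)[0][3])   # total effect of "idle" on "success": '-'
```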
3 CMs as a Tool for Qualitative Distributed Decision Making

CMs can also help an agent, or a group of agents considered as a whole, to make a decision. Given a cognitive map with one or more decision variables and a utility variable, which decisions should be taken and which should be rejected? To answer this, the agent concerned should calculate the total effect of each decision on the utility variable. Decisions that have a + or ⊕ total effect on utility should be chosen, and decisions that have a - or ⊖ total effect should be rejected. Generally, no advice can be given about decisions with an ambivalent (a) total effect on utility, whereas a decision with a ± or ? total effect on utility should not simply be rejected, because it raises the undetermined decision problem. To solve such undetermined decisions, we propose here an original algorithm based on the principle of superposition, adapted for CMs. This principle stipulates that the result of applying two concepts C1 and C2 together is the same as applying C1 and C2 in sequence.

Algorithm for solving the undetermined decision
1. For any concept C that has an undetermined effect on the utility U, calculate all the indirect effects between C and U; then separate those indirect effects into positive and negative paths, i.e., paths with "+" and "-" total indirect effect respectively.
2. Cut off all the negative paths and evaluate the effect of the positive paths on U; call this evaluation P1.
3. Repeat the previous step for the effect of the negative paths on U (without taking the positive paths into account); call this evaluation P2.
4. Compare P1 and P2: (a) if P1 is more valuable than P2, then the sign between C and U is "+"; (b) if P1 is less valuable than P2, then the sign between C and U is "-"; (c) if P1 is as valuable as P2, then the sign between C and U is "0".

We show below how this algorithm operates on a concrete example. Before that, we illustrate the decision-making process in the context of multiagent environments using CMs. Consider, for example, the causal map (shown in Fig. 3) of a professor P1 (considered as an agent) who supervises a research group called G12 and who has to choose between two courses D1 and D2 (D1 and D2 are decision variables). The question is how P1 can choose between D1 and D2 given the facts reflected by the causal map. This causal map includes the following beliefs of P1: (i) D1 favors the theoretical knowledge of G12's students; (ii) greater theoretical knowledge gives students greater motivation; (iii) greater student motivation gives a better quality of research for group G12, which gives a greater utility of G12, which in turn has a positive effect on the utility of P1. Finally, the second decision variable D2 is an easy course that decreases the workload of P1; obviously, decreasing P1's workload increases her utility.

Figure 3. An illustrative example of decision-making in a multiagent environment (concepts D1, D2, theoretical knowledge of students, student motivation, research quality of G12, utility of G12, workload of P1 and utility of P1).

In this case, how can P1 choose between the two courses D1 and D2? Notice that, in the context of our example, P1 should reason about another agent, the group G12, to make her decision. In other contexts, and for other decisions, she can also collaborate with her group to develop her decision. In this sense, the decision-making process considered here is a multiagent process.
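The superposition algorithm can be sketched as follows (Python; the path enumeration is a straightforward depth-first search, and the impact values P1 and P2 are supplied by the decision maker, as in CM-RELVIEW; the map encoding and the numeric impacts are illustrative assumptions):

```python
# Sketch of the undetermined-decision algorithm: enumerate the causal
# paths from decision C to utility U, split them by total sign, and let
# the decision maker's valuations P1/P2 settle the sign.

def paths(cm, src, dst, seen=()):
    """All simple paths src -> dst in a signed digraph {node: [(succ, sign)]}."""
    if src == dst:
        yield ()
        return
    for succ, sign in cm.get(src, []):
        if succ not in seen:
            for rest in paths(cm, succ, dst, seen + (src,)):
                yield ((src, succ, sign),) + rest

def path_sign(p):
    return '+' if sum(s == '-' for _, _, s in p) % 2 == 0 else '-'

def resolve(cm, C, U, P1, P2):
    pos = [p for p in paths(cm, C, U) if path_sign(p) == '+']
    neg = [p for p in paths(cm, C, U) if path_sign(p) == '-']
    assert pos and neg, "the effect of C on U is not undetermined"
    return '+' if P1 > P2 else '-' if P1 < P2 else '0'

# The professor example (Fig. 3), with hypothetical impacts: P1 believes
# the benefit via her group outweighs the workload D1 induces.
cm = {'D1': [('knowledge', '+')],
      'knowledge': [('motivation', '+')],
      'motivation': [('research', '+'), ('workload', '+')],
      'research': [('utilG12', '+')],
      'utilG12': [('utilP1', '+')],
      'workload': [('utilP1', '-')]}
print(resolve(cm, 'D1', 'utilP1', P1=2, P2=1))   # -> '+'
```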
To run this process, it is useful to convert the causal map being analyzed into the form of a valency matrix V. With the valency matrix, P1 can calculate indirect paths of length 2 (i.e., V^2), length 3 (i.e., V^3), etc., and the total effect matrix Vt. In fact, Vt tells P1 how the decision variables D1 and D2 affect her utility and G12's utility. This gives the following matrix of size 2x2 (keeping only the relevant entries), involving two decision concepts (DC), D1 and D2, and two utilities considered as value concepts (VC), namely the utilities of G12 and P1:

DC\VC   Utility of G12   Utility of P1
D1      +                ?
D2      -                +

Thus, P1 perceives (1) decision D1 as having a positive effect on the utility of G12 and an undetermined effect on her own utility; and (2) decision D2 as having a negative effect on the utility of G12 and a positive effect on her own utility. Under these conditions, it is important to remove the undetermined effect of decision D1 on P1's utility. To achieve this, we apply the previous algorithm as follows:

1. To see the impact of giving the course D1 on the utility of G12, we cut off the negative path produced by "Student motivation" -(+)-> "Workload of P1" -(-)-> "Utility of P1". Practically, this means that P1 evaluates the following hypothetical situation: "if the course D1 were given by another colleague, what would be the impact (I1) of D1 on my utility, without taking into account the workload induced by D1?"

2. Similarly, we cut off the positive path produced by "Student motivation" -(+)-> "Research quality of G12" -(+)-> "Utility of G12" -(+)-> "Utility of P1". By doing so, we can see the impact (I2) of giving the course D1 on the workload of P1 without the positive impact induced by the group G12. Practically, this means that P1 evaluates the following hypothetical situation: "what would be the impact (I2) on my utility if I gave the course D1 to another group that has no connection with me?"

3. Finally, (a) if the impact I1 compensates I2, then D1 -(0)-> utility of P1; (b) if I1 is more valuable than I2, then D1 -(+)-> utility of P1; (c) if I1 is less valuable than I2, then D1 -(-)-> utility of P1.

Suppose that P1 believes that the effects which giving the course D1 produces on her utility, via her research group, are more valuable than the workload this course gives her. Under these conditions, we have:

DC\VC   Utility of G12   Utility of P1
D1      +                +
D2      0                +

It is clear here that decision D1 would be preferred over decision D2, because D1 has a positive impact both on P1's utility and on G12's utility. Conversely, D2 has only a limited impact, because it only positively influences the utility of P1.

We now indicate how the CM-RELVIEW tool can be used by decision makers for their qualitative decision making. Decision makers (DMs) can elicit causal knowledge about their decision and utility variables from different sources, including documents (such as corporate reports or memos), questionnaires, interviews, grids, and interaction and communication with other agents. They then use the relation editor of CM-RELVIEW to fill in the matrices corresponding to this causal knowledge. Next, they use the GRAPH button to transform those matrices into graphs (causal maps). Finally, they analyze those causal maps using the TRANS button.
By pressing the TRANS button in the menu window (Fig. 2), a decision maker (DM) can calculate the transitive closure, i.e., the total effect that a decision has on the utility variable. In the case where there is an undetermined result, CM-RELVIEW applies the algorithm introduced previously and asks the DM for guidance to resolve it. In particular, the DM is asked to supply (1) the impacts of the positive and negative paths and (2) the most valuable impact. A fully automated process for solving the undetermined-result problem is on the agenda of our future work.

4 CMs as a Tool for Studying Changes in an Organization of Agents

In multiagent systems, the study of an organization of agents has generally focused on structural models such as (1) centralized and hierarchical organizations, (2) organizations as authority structures, (3) market-like organizations, and (4) organizations as communities with rules of behavior. All these structures miss the dynamic aspects and influences that exist in an organization of agents.

Weick [13] suggested changing the prevalent static view of an organization of agents to a dynamic view sustained by change. Precisely, he proposed that organization and change are two sides of the same social phenomenon. His reasoning was that an organization is a process of co-evolution of agents' perceptions, cognitions and actions. In this context, Weick proposed a theory of organization and change based on the graphs of loops in evolving social systems. In the last decade, additional investigations guided by this approach [3,4] have tried to articulate how CMs provide a way to identify the loops that produce and control an organization.

As an example, consider the organization that binds researchers, grant agencies and qualified personnel in any (science and engineering) department. The causal map representing this organization is shown in Fig. 4; its meaning is clear, so we do not explain it further.

Figure 4. An organization of agents as loops.

In this causal map, concepts link together to form loops, some of which are numbered (1) to (7). Loops (1) and (4)-(7) are deviation-amplifying loops. Change in the organization is the result of such loops, because any initial increase (or decrease) in any concept loops back to that concept as an additional increase (or decrease) which, in turn, leads to further increase (or decrease). Loops (2) and (3) are deviation-countering loops [4]; the stability of the organization is the result of such loops. In the case of loop (2), for instance, an increase of resources for research can lead to an increase of salaries which, in turn, reduces the resources allotted to research. If this reduction is not enough to compensate the initial increase of resources, then a residual increase of salaries takes place which, in turn, reduces the resources, and so on, until a balance between the initial increase of resources and the salaries is reached. Thus, deviation-countering loops are useful for stabilizing the growth generated in an organization.

Notice that in a wholistic approach, the whole constrains the concepts and the relationships between them. With an organization of agents represented in a wholistic approach, we obtain a dynamic system in which deviation-amplifying loops are responsible for change and deviation-countering loops are responsible for the stability of the organization. Using these loops, an individual strategist can direct strategic change in the desired directions. This can be achieved by (1) choosing and changing a loop or (2) choosing and changing a set of loops.
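The distinction between the two kinds of loops reduces to the parity of the negative links around the loop, which makes it easy to check mechanically. A minimal sketch (the first loop encoding is taken from the loop (2) example above; the second is a hypothetical amplifying loop for contrast):

```python
# Sketch: classify a loop of a causal map by the parity of its negative
# links. An even number of '-' links means a perturbation feeds back with
# the same sign (deviation-amplifying); an odd number means the feedback
# opposes the perturbation (deviation-countering).

def classify_loop(signs):
    negatives = sum(s == '-' for s in signs)
    return ("deviation-amplifying" if negatives % 2 == 0
            else "deviation-countering")

# Loop (2): resources -(+)-> salaries -(-)-> resources.
print(classify_loop(['+', '-']))     # -> deviation-countering
# A hypothetical loop whose links all agree:
print(classify_loop(['+', '+']))     # -> deviation-amplifying
```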
Now let us see how the CM-RELVIEW tool can be used by decision makers (DMs) for reasoning about organizational changes. Here also, DMs elicit causal knowledge about their organizations from different sources, such as reports, memos, questionnaires, interviews, etc. They then use CM-RELVIEW to construct causal maps reflecting this causal knowledge. Finally, they use the CM-RELVIEW tool to analyze those causal maps. As stated in Section 2, the submenu of the graph menu called WHOLISTIC-APPROACH allows DMs to draw a "wholistic" causal map, whereas the WHOLISTIC-CM menu of TESTS allows them to test it by choosing and changing a loop. Obviously, the loop to be changed should be a weak loop, loosely coupled to the system. CM-RELVIEW offers DMs the following actions for changing a loop (from deviation-amplifying to deviation-countering, or vice versa): ADD-NODE, adding a node; REM-NODE, removing a node; REP-NODE, replacing a node; and CHG-LABEL, changing the label of a link.

5 Conclusion and Future Work

We have first proposed a tool for qualitative reasoning based on cognitive maps representing relationships between agents' beliefs. This tool allows users to determine certain quantitative and qualitative features of any cognitive map. We have then argued for the use of this tool in the context of multiagent systems, particularly for reasoning about interrelationships among a set of individual and social concepts.

There are many directions in which the proposal made here can be extended:

• The full possibilities of relational algebra have yet to be exploited. Another option is to study "fuzzy relations" between agents' concepts [15]. Our approach might be extended in this direction to take into account the many, and vague, degrees of influence between agents, such as none, very little, sometimes, a lot, usually, more or less, and so forth [9,12].
• Applications such as the following must be investigated in greater depth: (1) negotiation and mediation between agents in the case of reasoning about subjective views; (2) the knowledge available or necessary to agents in the case of nested causal maps; (3) reasoning about the wholistic approach; and (4) reasoning about social laws, particularly for qualitative decision making.

References
1. R. Axelrod, ed. Structure of Decision: The Cognitive Maps of Political Elites. Princeton University Press, (1976).
2. Berghammer, R. and Schmidt, G. RELVIEW - A computer system for the manipulation of relations. In Conf. AMAST-93, pp. 405-406, (1993).
3. Bougon, M. G. Uncovering Cognitive Maps: The Self-Q Technique, Priv. Print Handb., Penn. State Univ., (1986).
4. Bougon, M. G. and Komocar, J. M. Directing strategic change: a dynamic wholistic approach, in Mapping Strategic Thought, A. S. Huff, ed., Wiley and Sons, pp. 135-163, (1990).
5. Buede, D. M. and Ferrell, D. Convergence problem solving: a prelude to quantitative analysis, IEEE Trans. Syst., Man, Cybern. 23, pp. 746-765, (1993).
6. Chaib-draa, B. and Desharnais, J. A relational model of cognitive maps. Inter. Jour. of Human-Computer Studies 49, pp. 181-200, (1998).
7. Huff, A. S. Mapping Strategic Thought, Wiley & Sons, Eng., (1990).
8. Kelly, G. A. The Psychology of Personal Constructs, New York: Norton, (1955).
9. Kosko, B. Neural Networks and Fuzzy Systems, Prentice Hall, (1992).
10. Levi, A. and Tetlock, P. E. A cognitive analysis of Japan's 1941 decision for war. Journ. of Conflict Resolution 24, pp. 195-211, (1980).
11. Nakamura, K., Iwai, S. and Sawaragi, T. Decision support using causation knowledge base, IEEE Trans. Syst., Man, Cybern. SMC-12, pp. 765-777, (1982).
12. Park, K. S. and Kim, S. H. Fuzzy cognitive maps considering time relationships, Int. J. Human-Computer Studies 42, pp. 157-168, (1995).
13. Weick, K. E. The Social Psychology of Organizing, Reading, MA: Addison-Wesley, (1969).
14. Wellman, M. P. Inference in cognitive maps, Mathematics and Computers in Simulation 36, pp. 1-12, (1994).
15. Zhang, W. R. NPN fuzzy sets and NPN qualitative algebra: a computational framework for bipolar cognitive modeling and multiagent analysis. IEEE Trans. Syst., Man, Cybern., 26(4), pp. 561-574, (1996).

USER'S ONTOLOGY-BASED AUTONOMOUS INTERFACE AGENTS

TAREK HELMY, SATOSHI AMAMIYA, MAKOTO AMAMIYA
Graduate School of Information Science and Electrical Engineering, Kyushu University, 6-1 Kasuga-Koen, Kasuga, Fukuoka 816-8580, Japan
Email: [helmy, roger, amamiya]@al.is.kyushu-u.ac.jp

This paper proposes methods by which a user's preferences for WWW pages can be inferred from the user's behavior. Both explicit and implicit feedback are used to infer the user's preferences. In the explicit feedback mode, a user evaluates a selected page as interesting or not interesting according to the relevance of the page to the given query, and sends an explicit feedback. In the implicit feedback mode, a user browses pages by performing actions such as bookmarking, saving, printing, scrolling, enlarging, closing, reading, or jumping to another link, and the system infers from these operations how much the user was interested in the page. Users browse pages using Kodama's simple browser, in which an interaction agent monitors the user's behavior and a learning agent infers the user's preferences from the interaction agent. The results show that the proposed techniques for learning and using user preferences, in refining the given query and filtering the retrieved documents, greatly enhance the retrieval of relevant information.

1 Introduction

The number of information sources available to the Internet user has become extremely large. This information is loosely held together by annotated connections, called hyperlinks [3], [12], which makes locating relevant information consistent with the user's information need very difficult. Users are normally faced with very large hit lists of low precision when using a Traditional Search Engine (TSE). Moreover, the information gathering and retrieval processes in a TSE are independent of the user's preferences, so feedback from the latter process can hardly be used to improve the quality of the former. These factors make it necessary to investigate new techniques to address these problems. Intelligent agents, acting as active personal assistants, may be the way to improve the search and retrieval process. Researchers in the Artificial Intelligence (AI) and Information Retrieval (IR) fields have already succeeded in developing agent-based techniques to automate tedious tasks and to facilitate the management of information flooding [4], [5], [15]. Kodama(1) is a distributed multi-agent system for IR in large, dynamic and distributed environments such as the WWW. The approach is based on a distributed, adaptive and on-line agent population negotiating and making local decisions to retrieve the information most relevant to the user's query.

(1) Kodama: Kyushu University Open Distributed Autonomous Multi-Agent.
In this paper, we briefly describe the mechanism for agentifying a Web site and creating WPA communities; the main focus is on the User Interface Agent (UIA). We discuss our new methodologies for calculating relevancy against the User's Preferences (UP) in the UIA. Next, we introduce ways to model the user's interests and show how these models can be deployed for more effective information retrieval and filtering. We describe the adaptation techniques used in the UIA, and how the UIA makes use of the user's query history and bookmark files as the UP. Finally, we present experimental results and the future work on Kodama.

2 Web Site Agentification

Cooperating intelligent Kodama agents are employed to agentify the Web, where the hyper-structure pre-exists in the form of Web links [12]. Our system uses three types of Kodama agents in the agentification mechanism for searching the Web: a Server Agent (SA) assigned to each Web server, a Web Page Agent (WPA) assigned to each Web page, and a User Interface Agent (UIA) assigned to each user's machine [6], [7], [8], [9]. An SA is assigned to the one Web server for which it is responsible. The SA starts from the portal address of the Web server and creates the hyper-structure of WPA communities based on the hyperlink structure of the Web server. We introduce a definition of WPA community that enables the SA to focus effectively on a narrow but topically related subset of WPAs and to increase the precision of search results. The SA knows all the WPAs in its server and works as a gateway when WPAs communicate with each other or with a WPA in another server. The SA initiates all the WPAs in its server when it starts searching for information relevant to the user's query. Each WPA registers itself with the SA and takes the essential properties and principles given by the SA to create an Interpretation Policy (IP), an ontology that represents the context of the Web page. Each WPA has its own parser, to which it passes a URL, and a private IP, in which it keeps all the policy keywords found in its URL. In the retrieval phase, the WPAs, upon receiving a user's query from the SA, initiate the search by interpreting the query and/or either asking 'Is this yours?' or announcing 'This is yours' to their down-chain WPAs. The selected WPAs, and/or their down-chain WPAs in each Web server, in turn interpret the query based on both Query-IP and Query-URL similarities and reply 'This is mine' with some confidence, or 'Not mine' (zero confidence). For more information about the IP representation and relevancy measurement by the WPA, see [8], [9].

3 User Interface Agent Autonomy

The UIA resides on the user's machine, communicates with the WPAs via an SA to retrieve information relevant to the user's query, and shows the results returned by the WPAs to the user after filtering and re-ranking them. Monitoring the user's browsing behavior is accomplished via a proxy server that allows the UIA to inspect HTTP requests from its browser. The UIA receives the user's interested/not-interested responses to the results and regards them as rewards for the results. The UIAs in the Kodama system look over the shoulders of the users and record every action into the query history file. After enough data has been accumulated, the system uses this data to predict a user's action based on the similarity of the current query to already-encountered data. The following is the job stream of the UIA:

(1) The user starts by sending a Natural Language (NL) query to the UIA.
(2) The UIA analyzes the NL query using a simple NL processing algorithm, throws out irrelevant words, and reformulates and transforms it into Qin.
(3) The UIA calculates the similarity with the method described here and looks for relevant URLs in the UP files using equations 5 and 6.
(4) If the UIA finds relevant URLs in the UP, it shows them and asks the user whether the user is satisfied or wants to search the Web.
(5) If relevant queries are found in the UP, the UIA takes two queries from the UP whose similarity to the given query is over a predefined threshold value, and concatenates the given query with the keywords of these two queries, after removing redundant terms, to expand Qin.
(6) The UIA takes a set of queries from the UP whose similarity to the given query is over a predefined threshold value. The UIA then makes a context query from them and Qin, to be used for filtering the retrieved documents.
(7) If the user is not satisfied with the relevant URLs from the UP files, the UIA routes Qin to a relevant SA, which in turn forwards it to its community of WPAs (in the current version, the UIA routes the query to default or predefined SAs).
(8) The UIA receives the search results returned by the WPAs via the SA. The results consist of a set of contents of Web pages.
(9) The user checks the retrieved documents and either evaluates them explicitly using the UIA's feedback, or the system detects the user's response implicitly.

3.1 User's Preferences Creation and Relevancy Algorithm by the UIA

Recording and analyzing the user's access histories and bookmarks allows the UIA to capture his/her preferences. The query history file contains information about previously visited URLs for specific queries: the number of occasions on which each URL was visited, the times of visiting and leaving, and the query. The bookmark file contains the user's hot-list of Web links: the number of occasions on which a URL was visited, the bookmarking time of the URL, and its title. The query and title fields in the query history and bookmark files are represented as vectors of keywords sorted in alphabetical order, where a weight value is assigned to each keyword to reflect its correlation with the content of the page. The user's explicit response R is Useless, Not very useful, Mildly interesting, Neutral, or Interesting, each with a value between 0 and 1. The UIA maps the user's implicit responses to have the same effect as the explicit responses.
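A sketch of this mapping and of the incremental weight update (equation (1) below) follows. The particular reward values assigned to each browsing action are illustrative assumptions, since the paper does not list them here:

```python
# Sketch: mapping implicit browsing actions onto the same 0..1 reward
# scale as the explicit responses, and folding the reward into a keyword
# weight via equation (1). The numeric values are assumptions.

EXPLICIT = {"useless": 0.0, "not very useful": 0.25, "neutral": 0.5,
            "mildly interesting": 0.75, "interesting": 1.0}

IMPLICIT = {"close_quickly": 0.1, "scroll": 0.5, "read_long": 0.7,
            "print": 0.8, "save": 0.9, "bookmark": 1.0}

def update_weight(w, reward, rho=0.7):
    """Equation (1): w(t+1) = rho * w(t) + (1 - rho) * reward; rho
    controls how long the response history keeps influencing the weight."""
    return rho * w + (1.0 - rho) * reward

w = EXPLICIT["mildly interesting"]   # initial weight from first response
for action in ("scroll", "read_long", "bookmark"):
    w = update_weight(w, IMPLICIT[action])
print(round(w, 3))
```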
We calculate the similarity Sj between Qin and the query field of j th line of the user's query history file, and similarity S*between <2,and the title field ofy'th line of the bookmark file. 5*. =£w j,rg(k) (2) & S) = I Wjii••g\k.) (3) Where, g(kj) = l if k.eQ.nQ , otherwise g(£,) = 0, and g'(k) = l if ' J / j ' k. e Q. nT., otherwise g:(kj) = 0. Also, we calculate the similaritySU- between Qin and the URL of j th line using equation (4). S J' = -~f—_ (4)- in j url Where, cin =\Q\, suri -\<2,^ nURL\, dj=\URLj\, and URLj stands for the set of words in the URL of y'th line. Weighting factor 0 2 1.8 1.6 1.4 1.2 1 0.8 0.6 0.4 0.2 u -* % 1 I 1 I I 1 m 1 ki _3hup://agents.umbc.edu/Topics/Multi_Agent_Systems/index .sht ml • http:/ /agents. umbc.edu/App lie at ions_and_So ft ware/App lie al ions/index.shtml • http://lie ber.www.media.m it .edu/people/lie ber/Teach in g/Ag en t s-Tut orial/ • http://agents.umbc .edu/ • http:/ /agents, u mbc.edu/ To pic s/BDL_ Agents/ index, shtml __http://agents.umbc.edu/Topics/Natural_language_proccssin g/inde x.shtml • http://a gents.umbc.edu/Topics/Interface_Agents/index.sht ml E_http://agents.umbc.edu/kqml/ • ht t p:/ /www. utoronto.c a/ ian/so ft ware /soft ware, html • hi tp:/ /agents, u mbc . e du/Ap pi ic at ions_and_S oft ware/App lie at ions/ index, sht ml • http://agents.umbc .edu/Courses/Tutorials/index.shtml EJhtlp://www.epm.ornl.gov/ctrc/MABES .htm • http://agents.umbc .edu/Agents_for_.../P la nn in g_and_sched uling/inde x.sbt m 1 • http://www.labs.bt.c om/projec t s/a gen t s.ht m • hup://computer.org/cspress/csp-aulh.htm • http://www.mit.edu:8001/people/cdemcllo/eg.html References Figure 3 URL's correlation with the relevant keywords 1. Ballacker K., S. Lawrence, and L. Giles, "CiteSeeer: An Autonomous System for processing and organizing scientific Literature on the Web", Proc. of Automated Learning and Discovery Conference, Carnegie, Mellon University, 1998. 273 2. Budzik J. and Hammond K. ^Watson: Anticipating and Contextualizing Information Needs", in Proceedings of Sixty-second annual Meeting of the American Society for Information Science, 1999. 3. Chakrabarti S., B. Dom, D. Gibson, J. Kleinberg, P. Raghavan, and S. Rajagopalan, "Automatic Resource Compilation by analyzing Hyperlink Structure and Associated Text", Proc. of the 7th WWW Conference, 1998. 4. Chen L. and Katia S., "WebMate: A Personal Agent for Browsing and Searching", Proceedings of the Second International Conference of Autonomous Agents, Minneapolis, MN USA, May 9-13,1998, pp. 132-138. 5. Edmund S. Yu, Ping C. Koo, and Elizabth D. Liddy: Evolving Intelligent Text-based Agents, Proceedings of the 4th International Conference of Autonomous Agents, June 3- 7- 2000, Barcelona, Spain, pp.388-395. 6. Helmy T., B. Hodjat and M. Amamiya, " Multi-Agent Based Approach for Information Retrieval in the WWW", Proceedings of the First Asia-Pacific International Conference on Intelligent Agent Technology (IAT'99), Hong Kong, 15-17/12, 1999, pp. 306-316. 7. Helmy T., T. Mine, G. Zhong, M. Amamiya, "A Novel Multi-Agent KODAMA Coordination for On-line Searching and Browsing the Web", Proceedings of The Fifth International Conference and Exhibition on The Practical Application of Intelligent Agents and Multi-Agents, 10-12/4, 2000, Manchester, UK, pp. 335-338. 8. Helmy T., T. Mine and M. 
Amamiya, "Adaptive exploiting User Profile and Interpretation Policy for Searching and Browsing the Web on KODAMA System", Proceedings of the 2nd International Workshop on Natural Language and Information Systems NLIS, London, United Kingdom, September 4-8, 2000, pp. 120-124. 9. Helmy T., Amamiya S. and Amamiya M. "Collaborative Kodama Agents with Automated Learning and Adapting for Personalized Web Searching", Thirteenth International Conference on Innovative Applications of Artificial Intelligence (IAAI- 2001), AAAI Press, August 7-9, 2001, Seattle, USA. 10. Joachims Thorsten, Dayne Freitage, and Tom M. Mitchell, "WebWatcher: A tour guide for the World Wide Web", in Proceedings of International Joint Conference on Artificial Intelligence (IJCAI97), pp. 770-775, 1997. 11. Kim J., Oard D., and Romanik K. "Using Implicit Feedback for User Modeling in Internet and Intranet Searching" Technical Report [2000], Collage of Library and Information service, University of Maryland. 12. Kleinberg J., "Authoritative sources in a hyperlinked environment", ACM Journal, 46(s), PP. 604-632 1999. 13. Morita M. and Shinoda Y., "Information filtering based on user behavior analysis and best match text retrieval", Proc. of the Seventeenth International ACM-SIGIR Conference on Research and Development in Information Retrieval, pp. 272-281. 14. Pann K., A. And Sycara, K. " A Personal Text Filtering Agent", Proceedings of the AAAI Stanford Spring Symposium on Machine Learning and Information Access, Stanford, CA, March 25-27,1996. 15. Steve Lawrence, "Context and page analysis for improved Web Search", IEEE Internet Computing, July-August, pp.38-46,1998. INTEGRATION AND REUSE OF HETEROGENEOUS XML DTDS FOR INFORMATION AGENTS EUNA JEONG Computer Science and Information Eng., National Taiwan University E-mail: eajeong@agents.csie.ntu.edu.tw CHUN-NAN HSU Institute of Information Science, Academia Sinica E-mail: chunnan@iis.sinica.edu.tw This paper proposes a novel approach to integrating heterogeneous XML DTDs. With this approach, an information agent can be easily extended to integrate het­ erogeneous XML-based contents and perform federated searches. Based on a tree grammar inference technique, this approach derives an integrated view and source descriptions of XML DTDs in an information integration framework. The deriva­ tion takes advantage of naming and structural similarities among DTDs in similar domains. The complete approach consists of three main steps. (1) DTD clustering clusters DTDs of similar domains into classes. (2) Schema learning takes the DTDs in a class as input and applies a tree grammar inference technique to generate a set of tree grammar rules. (3) Minimization optimizes the rules previously generated and transforms them into an integrated view as well as source descriptions. We have implemented the proposed approach into a system called DEEP and had the system tested in artificial and real domains. Experimental results reveal that the DEEP can effectively and efficiently integrate radically different DTDs. 1 Introduction Software agents1'2 and integration systems of heterogeneous databases3'4'5'6 are widely studied and developed to allow users the ability to find, collect, filter and manage information sources spread on the Internet. The design concern of these systems vary for different domains, but all share a common need for a layer of an integrated view and source descriptions in order to seamlessly integrate heterogeneous information sources. 
The integrated view must be designed for each application domain, and source descriptions are needed to map source schemas to the integrated view. However, previous work in information integration requires both of them to be constructed manually, in a laborious and time-consuming manner.

The approach presented in this paper builds on previous work in information integration. In particular, it addresses the problem of automatically deriving the integrated view for XML DTDs (Document Type Definitions) [7]. Although XML is becoming an industrial standard for exchanging data on the Internet, it is difficult and sometimes impossible to have a common DTD when the maintenance of the information sources is independent of the integrator.

The remainder of the paper is organized as follows. Section 2 reviews XML and information integration. Section 3 describes our view inference approach. Section 4 contains the experimental results. Finally, Section 5 reviews related work and draws conclusions.

Table 1. Example DTDs

(a) COOKBOOK DTD
1 <!ELEMENT cookbook (title, author*, year, isbn, publisher)>
2 <!ELEMENT author (authorname)>
3 <!ELEMENT authorname (firstname, lastname)>
4 <!ELEMENT publisher (name, address)>

(b) BIB DTD
5 <!ELEMENT bib (title, author*, publisher, price)>
6 <!ATTLIST bib year CDATA #REQUIRED>
7 <!ELEMENT author (last, first)>
8 <!ELEMENT publisher (name, email)>

2 XML Information Integration

XML data is an instance of semistructured data. With a DTD, XML is self-descriptive and provides a semistructured data model. Thus, we model a DTD as a labeled, directed tree. The tree nodes represent objects and are labeled with an element or attribute name; the leaf nodes represent atomic objects and the internal nodes represent complex objects.

Type: Each internal node in the tree has its own type. The type of an object is defined by its label and its immediately adjacent child nodes. XML attributes are treated in the same way as element tags. Each type is denoted by t_i, where i is the type id. All leaf nodes (i.e., of #PCDATA type) belong to t0. Each internal node type has a type definition of the form [label : Type(label)], where label is a regular expression over a finite set M of names, and Type(label) is either #PCDATA for leaf nodes or a regular expression over M with type ids as subscripts.

DTD Schema: A DTD schema consists of a sequence of type definitions.

DTD Class: A DTD class consists of similar DTD schemas.

Example 1. Table 1 gives two example DTDs extracted from published papers and documents [8,9]. Here, the COOKBOOK and BIB DTDs represent two related domains. Although these DTDs were created by different authors, they reveal structural and naming similarities because the underlying concepts are closely related. Given the set V of source DTDs in Table 1, the following type set T can be constructed. An underlined label, such as year0 of t5, indicates that it corresponds to an XML attribute.

t1 = [cookbook : (title0, (author2)+, year0, isbn0, publisher4)];
t2 = [author : (authorname3)];
t3 = [authorname : (firstname0, lastname0)];
t4 = [publisher : (name0, address0)];
t5 = [bib : (title0, (author6)+, publisher7, price0, year0)];
t6 = [author : (last0, first0)];
t7 = [publisher : (name0, email0)]
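The construction of such a type set can be sketched mechanically (Python; the tiny parser below handles only the parenthesized element declarations of Table 1 and is an illustration, not DEEP's implementation):

```python
import re

# Sketch: read <!ELEMENT name (child, child*, ...)> declarations and emit
# type definitions like those of Example 1. Attributes and mixed content
# are ignored; this only covers the declarations in Table 1. Following the
# paper's printed types, a starred child x* is rendered as (x_i)+.

COOKBOOK = """
<!ELEMENT cookbook (title, author*, year, isbn, publisher)>
<!ELEMENT author (authorname)>
<!ELEMENT authorname (firstname, lastname)>
<!ELEMENT publisher (name, address)>
"""

def parse_dtd(text):
    decls = {}
    for name, content in re.findall(r'<!ELEMENT\s+(\w+)\s+\((.*?)\)>', text):
        decls[name] = [c.strip() for c in content.split(',')]
    return decls

def type_defs(decls):
    """Assign type ids: t0 for leaves, t1..tn for internal nodes."""
    tid = {name: i + 1 for i, name in enumerate(decls)}
    defs = {}
    for name, children in decls.items():
        parts = []
        for c in children:
            star = c.endswith('*')
            base = c.rstrip('*')
            i = tid.get(base, 0)          # undeclared names are leaves (t0)
            parts.append(f"({base}{i})+" if star else f"{base}{i}")
        defs[f"t{tid[name]}"] = f"[{name} : ({', '.join(parts)})]"
    return defs

for t, d in type_defs(parse_dtd(COOKBOOK)).items():
    print(t, '=', d)
# t1 = [cookbook : (title0, (author2)+, year0, isbn0, publisher4)]  etc.
```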
Figure 1. XML Information Integration Agent.

Figure 1 shows an information integration agent (IIA) for XML documents with different DTDs. The user submits a request to the system through a user interface. The request is then translated into an XML-QL [8] query by a query decomposer. Given the query, and based upon an integrated view, the server transforms the query into a set of subqueries against each integrated information source. Finally, the query executor issues the subqueries to each information source, integrates the results, and returns the requested data to the user as an XML document.

Previous work in information integration usually assumes that the integrated view is given. This paper, however, describes how to derive the integrated view automatically by way of a view inference system. The derivation is conducted offline, before the IIA starts to provide service. The view inference system serves to automatically discover the associations between closely related DTDs, identify elements with similar underlying semantics, and generate an integrated view that covers these semantically similar elements.

Figure 2. View Inference System diagram.

3 View Inference System

Our view inference approach consists of three major components (shown in Figure 2): DTD clustering, the schema learner, and the minimizer. A brief description of each module is as follows. DTD clustering takes a collection of source DTDs as input and clusters them into DTD classes based on DTD similarities. The schema learner infers general rules describing the source DTDs in each DTD class. The minimizer optimizes the learned rules: the learned rules are first adjusted to fit the characteristics of DTDs and then transformed into both the integrated view and the source descriptions to be used in the information integration agent.

3.1 Renamer

The renamer, a preprocessing step, is an optional module that requires human intervention. The internal nodes in XML DTDs offer both naming and structural hints that let the system conveniently associate related elements in different DTDs, while leaf nodes offer very limited information to the system. The renamer module is designed to allow human users to provide additional hints for associating related leaf nodes: the name of a leaf element can be manually renamed to another internal/leaf element name in a different DTD so that the two will be considered as sharing the same underlying concept. For instance, in Example 1, element first can be changed to firstname.

3.2 DTD Clustering

Since we make no assumption that the input DTDs must describe the same domain, the input DTDs may describe drastically different domains. Therefore, DTDs need to be clustered into classes of similar domains so that the system may meaningfully derive an integrated view. This
The basic idea is the following: initially start with a separate class for each DTD; successively merge the classes closest to one another, until the number of classes is sufficiently small. We extend Lu's algorithm12 to com­ pute the distance between two labeled trees. The distance is computed by calculating the minimum number of modifications required to transform the input tree into a reference tree. For computing the distance between classes, the average distance is used. 3.3 Schema Learner Now that DTDs in similar domains are clustered into classes, the next task is to generate an integrated schema covering the DTDs in each class. We address this problem with a tree grammar inference approach. Grammatical inference is the task for inducing hidden grammatical rules from a set of examples. The problem of deriving an integrated schema from similar DTD schemas can be reduced to this task. We adopt the fc-follower method,10 which applies a simple state-merging heuristic process. Given a DTD class, schema learner generates a tree automaton to describe DTDs (as trees) in the DTD class. The corresponding tree grammar of the tree automaton describes an infinite language, containing input trees in the DTD class. Definition 1 Let S be a given finite set of trees, and S a union of S and Ssuf, which is the set of all subtrees of the member trees in S. Let A; be a nonnegative integer. The k-follower H^(T) of tree T with respect to S is defined by fl$(T) = {17(6 <== %)\U e S and b/U = T) where tree U satisfies one of the following: if U £ S, then Depthu(b) < k; if U e Ssub, then Depthu(b) = k; where $ is a special character not in the set N, 279 U(b -£= $) is the replacement of the subtree of U at b with $, and Depthu(b) is the number of nodes on the path from the root of U to b, excluding b. • Our tree grammar inference algorithm is formally presented as follows: Algorithm 1 Given a set S of source DTDs, Step 1. Generate the set S of all subtrees of the member trees in S and ini­ tialize k to 0. Step 2. For each subtree T in S, generate the A;-follower with respect to the set S. If the fc-follower set of two trees are equal, then the states of the automaton corresponds to the same equivalence class. Step 3. If the equivalence classes have changed, then go to Step 2 with k increased by 1. Otherwise, go to Step 4. Step 4. Generate state transition functions A. Example 3 Suppose we are given two modified DTD schemas of Example 2. The input set S is a DTD class containing two DTD schemas and the generated set S contains 15 subtrees. Algorithm 1 is terminated when k = 2 because its equivalence classes are the same as k = 1. The inferred tree automaton is M = {{F, qx, q2,... ,913}, M, A, {F}} where the state transition functions are as follows. (The corresponding tree grammar is shown in Figure 3.) • <5«»aU = 91 1 <5pric. = 92, <5isbn = 931 ^address = 94, 1 WHERE 2 9 10 11 12 13 14 15 16 17 $title</> <*> <author1authorname> <first|firstname>$first</> <last1lastname>$last</> </> </> <publisher> <address>$address</> <name>$name </> <email>$email</> </> <year>$year</> <isbn>$price</> <price>$price</> 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 <title>$title</> <author> <authorname> <firstname>$first</> <lastname>$last</> </> </> <publisher> <address>$address</> <name>$name </> </> <year>$year</> <isbn>$isbn</> </> IN "COOKBOOK.xml" CONSTRUCT result patterns 18 CONSTRUCT result patterns domains, namely, book, play, and movie-list. 
4 Experiments

We evaluated the quality of DEEP on three test domains, namely book, play, and movie-list. The tested DTDs were prepared as follows. We started by collecting two to three seed DTDs from published papers and documents [8,9] on the test domains; the seed DTDs serve as the "golden rule" for performance evaluation. From these seed DTDs, we constructed 100 DTDs for each domain by applying various perturbations with different modification rates. The modification rate is defined as the ratio of the number of modified nodes to the total number of nodes in a given tree (i.e., DTD). Each modification is conducted by randomly selecting one node and applying a randomly selected operator to it. Each data set was used in two cases: with and without the renamer process described in Subsection 3.1.

The first performance measure is the correctness of the clustering. The precision of clustering is the average, over the DTD classes, of the ratio of correctly clustered DTDs to the number of DTDs in each class. As the modification rate increases, the precision degrades gracefully from 100% to 75% with the renamer, as shown in Figure 4 (a). Even without the renamer, the degradation from 100% to 62% is not too severe.

The second measure is the accuracy of the integrated schema; this result was obtained without the clustering step. The accuracy is the ratio of the number of similar concepts discovered by the system to the total number of similar concepts in the data set. Without the renamer, the performance is not optimal: accuracies range from 50% down to 18%. In contrast, with the renamer, DEEP performed quite well, with accuracies ranging from 100% down to 82%. In actuality, renaming only gives hints to the system; most associations between similar concepts are identified by the system itself, as shown in the shaded area of Figure 4 (b). In this experiment, renaming was assigned by human experts. The task could also be automated by codifying human heuristics, a topic currently under investigation; ontologies of common vocabulary that guide renaming may also help.

[Figure 4. Quality of DEEP: (a) clustering precision and (b) schema accuracy, with and without the renamer.]

5 Related Work and Conclusions

The most closely related work is LSD [13], a system that learns source descriptions. Given an integrated schema, the system learns from manual mappings and then proposes mappings for new data sources. LSD's focus is on finding one-to-one mappings for the leaf elements of source schemas. Another related work is XTRACT [14], a system that extracts a DTD from XML documents; the input XML documents are assumed to conform to the same DTD. Since DTDs are not mandatory, tools that can infer an accurate DTD for given XML documents are useful. It is straightforward to extend our system to extract a DTD from XML documents using the schema learner module: in this case, the set of sample trees consists of XML documents, and the inferred rules correspond to a DTD that covers all the input documents. Given a query, MIX [15] derives a view DTD that describes the query; since many candidate view DTDs may satisfy the query, the main concern there is to derive the tightest view DTD.

We have proposed a view inference approach that automatically derives an integrated view and source descriptions so that an information integration agent can access XML-based sources. The problem arises because manually constructing an integrated view and source descriptions for each application domain is error-prone and labor-intensive. We therefore conclude that our view inference approach is a feasible solution for alleviating engineering bottlenecks in the development of scalable information integration agents.
Acknowledgements

The reported research was supported, in part, by the National Science Council in Taiwan under Grants No. NSC 89-2218-E-002-014, 89-2750-P-001-007, and 89-2213-E-001-039.

References
1. O. Etzioni and D. Weld. A softbot-based interface to the Internet. Communications of the ACM, 1994.
2. C. Kwok and D. Weld. Planning to gather information. In Proceedings of the 13th National Conference on AI, 1996.
3. S. Chawathe, H. Garcia-Molina, J. Hammer, K. Ireland, Y. Papakonstantinou, J. Ullman, and J. Widom. The TSIMMIS project: Integration of heterogeneous information sources. In Proceedings of the Information Processing Society of Japan Conference, pages 7-18, Tokyo, Japan, October 1995.
4. T. Kirk, A. Y. Levy, Y. Sagiv, and D. Srivastava. The Information Manifold. In Proceedings of the AAAI Spring Symposium on Information Gathering in Distributed Heterogeneous Environments, Stanford, California, March 1995.
5. C. A. Knoblock, Y. Arens, and C. N. Hsu. Cooperating agents for information retrieval. In Proceedings of the International Conference on Cooperative Information Systems. University of Toronto Press, 1994.
6. O. Duschka and M. Genesereth. Query planning in Infomaster. In Proceedings of the ACM Symposium on Applied Computing, San Jose, CA, 1997.
7. T. Bray, J. Paoli, and C. M. Sperberg-McQueen. Extensible Markup Language (XML) 1.0, 1998. W3C Recommendation.
8. A. Deutsch, M. Fernandez, D. Florescu, A. Levy, and D. Suciu. XML-QL: a query language for XML, 1998. W3C Note.
9. M. Fernandez, J. Simeon, and P. Wadler. XML query languages: experiences and exemplars, 1999. W3C draft manuscript.
10. H. Fukuda and K. Kamata. Inference of tree automata from sample set of trees. International Journal of Computer and Information Sciences, 13, 1984.
11. E. Rasmussen. Clustering Algorithms, chapter 16. Prentice Hall, 1992.
12. S. Y. Lu. A tree matching algorithm based on node splitting and merging. IEEE Transactions on Pattern Analysis and Machine Intelligence, 6:249-256, 1984.
13. A. Doan, P. Domingos, and A. Levy. Learning source descriptions for data integration. In 3rd International Workshop on the Web and Databases, 2000.
14. M. Garofalakis, A. Gionis, R. Rastogi, S. Seshadri, and K. Shim. XTRACT: a system for extracting document type descriptors from XML documents. In Proceedings of SIGMOD, 2000.
15. Y. Papakonstantinou and P. Velikhov. Enhancing semistructured data mediators with document type definitions. In Proceedings of the International Conference on Data Engineering, Sydney, Australia, 1999.

VIRTUAL MUSEUM'S ASSISTANT*

OSVALDO CAIRO, ANA ALDECO, M.E. ALGORRI
Division Academica de Ingenieria
Instituto Tecnologico Autonomo de Mexico (ITAM)
Rio Hondo 1, Tizapan San Angel, 01000 Mexico DF
Email: cairo@itam.mx, al51578@alumnos.itam.mx

This paper focuses on an application combining three apparently separate research areas: virtual environments, intelligent agents, and museum web pages. It consists of a virtual visit to a museum guided by an intelligent agent. The agent must respond in real time to the user's requests, providing different layers of data and distinguishing between users by using different knowledge bases. The agent not only has some autonomy during the visit but also permits the user to make his own choices. The environment is created to allow immersion, so that the user can feel himself inside the museum's structure.
This kind of application works as a complementary experience: the user is introduced to the expositions in the museum, which encourages a future real visit.

Keywords: Knowledge Bases (KB), Intelligent Agent, Virtual Environments.

* This project has been partially funded by CONACYT as project number 33038-A, and by Asociacion Mexicana de Cultura, A.C.

1 Introduction

The Internet offers an enormous amount of information for every kind of user, which makes it cumbersome and sometimes plainly time-consuming to come across the desired data. Intelligent agents are revealing themselves as future Internet assistants that allow faster, intelligent queries, narrowing the user's choices to whatever information is most relevant to him and making his search more natural, enjoyable, and less time-consuming. Agent-based systems are essential in the data queries of both users and agents [4]. Research on AI has shown the capabilities of web agents such as Letizia [1], the multiagent infrastructure framework [5], and AVATARS [6]; these remain incapable of representing the human knowledge about how to use the body during communication. We attack this problem using behaviour defined by KBs: the agent perceives an action and has an established reaction to it. Meanwhile, graphics development keeps raising the level of interactivity between the user and the environment, offering different views of objects and simulating 3D perspectives to provide a high level of immersion and to give the user the impression of being inside the virtual environment.

Some museum sites are introducing virtual visits to their expositions. Le Musee du Louvre [16], for example, offers additional purchase software not available through the Internet, making it inaccessible to most people. Museums are prime application places for agents because they hold an enormous amount of information interesting to the user; but on the web pages, the average Internet user generally avoids static reading sites and privileges interactive sites where he has freedom of action, so the valuable information is left unread. In deeper layers, the user could even have information tailored to suit his own interest profile. We started this work by visiting several museum web pages ([15] to [26]), noticing that virtual visits, and intelligent agents serving as guides or information facilitators in such visits, are very rare.

The remainder of the paper is organised as follows: Section 2 gives the agent's definition, Section 3 deals with the museum's virtual environment, and finally we state some conclusions.

2 Agent: Main Elements

Our agent can be classified as hybrid: it has several characteristics of reactive agents -it constantly monitors the user's actions in order to give a proper reaction, so it can be considered an impulse-reply agent- as well as static characteristics. Multiple KBs were defined to give answers to certain questions with all the connotations of a human answer. This translates into accesses to the KBs, which are established by the environment where the agent is standing.

[Figure 1. Example of a perception-action table. Perception: the user clicks on a painting. Action: queries to the KBs for the piece's information, the authors' information, gestures and movements, the room's and museum's architecture, phrases, and the user's information.]

Based on these features, the agent randomly chooses an action according to the environment that surrounds it and the requests of the user.
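As an illustration only (none of these names appear in the paper), the perception-action tables of Figure 1 amount to a lookup from a (context, perception) pair to a set of candidate actions, one of which is chosen at random. A minimal Python sketch:

    import random

    # Hypothetical perception-action table in the spirit of Figure 1:
    # the current context and the perceived user event select a list of
    # candidate KB queries/actions.
    PERCEPTION_ACTIONS = {
        ("gallery", "click_painting"): [
            "query_piece_info", "query_author_bio", "play_gesture", "say_phrase",
        ],
        ("hallway", "idle"): [
            "suggest_next_room", "query_museum_architecture",
        ],
    }

    def react(context, perception):
        # The agent randomly chooses among the actions allowed by the
        # environment and the user's request, as described in Section 2.
        candidates = PERCEPTION_ACTIONS.get((context, perception), [])
        return random.choice(candidates) if candidates else None

    print(react("gallery", "click_painting"))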
The agent must consider the different kinds of users -adults, children, foreign or local- in order to offer different information; it therefore asks for certain data, such as name, age, and country, before the application starts (these become part of the agent's attributes). Its knowledge bases are as follows.

Information. Contains the actions concerning the user's questions about the artwork presented in the museum, its history, the authors' biographies, and architectural as well as regional knowledge. This last KB is split between local and foreign users, to provide relevant information about the country to a foreign user without repeating facts a local user would be familiar with.

Reactions. Concerns the agent's movements, gestures, and common phrases.

Museum scope. Loaded with the virtual visit; contains the actions that allow the user to go through the museum's rooms with different perspectives.

Table 1. Agent KBs

2.1 Agent Goals

Users should be able to interact with the interface the way they commonly communicate, specifying preferred forms of presentation without determining the best way of doing it or the details needed to achieve it. The criteria considered for this agent are:

Goal orientation: The user should be able to state what he wants done rather than how he wants it done. The agent should determine how to perform the necessary actions, without the user's supervision.

Expression: The user should be able to make requests without learning an artificial query language or being limited by the constraints of a menu system.

Adaptability: The agent should adapt to different users, both through direct requests and by learning the patterns of a user's behavior.

Autonomy: The agent should have some independent decision-making capabilities and should be able to choose among several strategies to accomplish the user's tasks.

Integration: The interface through which users and agents communicate should be understandable, consistent, and directed to the task. The user should not have to remember details of the task in order to specify his goals.

Table 2. Criteria considered for this agent

2.2 Agent Design

The presented system is described using the UML notation, based on the construction of abstract models of real-world ideas, notations, and specifications. The defined use cases are: register for visit; navigate through a virtual museum; select rooms for visit; request pictures information; create museum catalog; maintain rooms information; maintain user information (Table 3). Each use case is documented with a flow of events describing its required behavior.

There are two main actors, the agent and the user; one secondary actor, the Virtual Museum's Environment; and one central use case, Navigate Through the Virtual Museum. In the UML use case presented in Figure 2, the User navigates around a specific view of the Virtual Museum, in which the agent is embedded to assist the visitor throughout his trip. Every action performed by the user over the scenario is monitored by the agent's sensors. These elements allow the agent to identify which set of actions must be performed, according to its KBs. Moreover, if the user does not carry out any defined movement, the agent is able to perform an autonomous action according to the user's navigation. The agent starts the Select rooms for visit use case as soon as it needs to perform an action registered in its KBs (see Figure 3).

[Figures 2 and 3. Use case diagram, and the Select rooms for visit use case.]
[Figures 4 and 5. Class diagram of the Navigate Through a Virtual Museum's Scenario use case, and class diagram of the Select rooms for visit use case.]

The AgentManagement class models the sequencing behavior described by the Navigate Through a Virtual Museum's Scenario use case. This class monitors every action performed by the user, as defined in the UserOptions class. That class encloses the events the user can trigger, whether from the AgentOptions or from the VirtualOrganization classes; these two classes delimit the virtual navigation's potential, because the user cannot invoke any action not allowed by them. The VirtualOrganization class also restricts the behavior of the agent according to the context where the user stands. The ScenarioElement class describes all the objects included in the VirtualOrganization class, e.g. art pieces. The UserProfile class collaborates by giving a personal touch to the interaction between the agent and the user along the navigation. Finally, the AgentsActingOptions class functions as an interface used by the AgentManagement class to communicate to the agent the tasks to achieve.

Figure 5 shows the class diagram of the Select rooms for visit use case: the KnowledgeManager class is responsible for executing the actions requested by the agent; it handles the performance of the agent -phrases, attitudes, and movements- using the KnowledgeAction class, and handles the agent's requests with the AgentActionRequest class.

3 Virtual Environment

The relevant points in developing the virtual environment are:

Deepness: Allows the user to feel he is travelling inside the structure, and to view the artwork from several viewpoints.

Texture & Light: Both properties give the virtual environment a sense of reality.

Table 4. Relevant points while developing the virtual environment

To avoid speed and space limitations while navigating a full 3D virtual environment on the Internet, the virtual museum must be exported to a standard Internet format such as the Virtual Reality Modeling Language (VRML), which can dynamically download complex scenes from a server directly to a web browser. Using VRML it is possible to navigate interactively through the virtual environment in real time.

4 General Conclusions

The proposed application, according to our research, overcomes problems of space, speed, and compatibility, and essentially proposes an environment-user interaction through an agent who guides the user during navigation in the virtual environment, bringing relevant information to each kind of user and offering it pleasantly, thereby solving the problems presented in Section 1. As future work, it could be exported to an embodied environment such as REA [3], where the user has complete interaction through the interface agent. Our project is currently integrating up-to-date literature and seeks feedback from researchers in the area to generate ideas and opinions that will allow us to attack the weak points and deliver a high-quality application.

5 References

[1] Lieberman, H. Letizia: An Agent That Assists Web Browsing. Proceedings of the Fourteenth International Joint Conference on Artificial Intelligence, 1995.
[2] Finin, T., Fritzson, R., McKay, D., McEntire, R. KQML as an Agent Communication Language. 3rd International Conference on Information and Knowledge Management, Gaithersburg, Maryland, 1994.
[3] Cassell, J., Bickmore, T., Billinghurst, M., Campbell, L., Chang, K., Vilhjalmsson, H., and Yan, H. Embodiment in Conversational Interfaces: Rea. CHI'99 Proceedings, ACM Press, Pittsburgh, PA, 1999.
[4] Sycara, K., Klusch, M., Lu, J. Matchmaking Among Heterogeneous Agents on the Internet.
[5] Sycara, K., Decker, K., Pannu, A., Williamson, M. and Zeng, D. Distributed Intelligent Agents. IEEE Expert, December 1996.
[6] Cassell, J. and Vilhjalmsson, H. Fully Embodied Conversational Avatars: Making Communicative Behaviors Autonomous. Autonomous Agents and Multi-Agent Systems, Volume 2, Number 1, 1999.
[7] Hannoun, M., Boissier, O., Sichman, J. and Sayettat, C. MOISE: An Organizational Model for Multi-Agent Systems. IBERAMIA-SBIA, LNAI 1952, Springer-Verlag, Berlin Heidelberg, 2000.
[8] Castelfranchi, C. Commitments: From Individual Intentions to Groups and Organizations. ICMAS, San Francisco, USA, 1995.
[9] Lopes Cardoso, H., Oliveira, E. Using and Evaluating Adaptive Agents for Electronic Commerce Negotiation.
[10] Bryson, J., McGonigle, B. Agent Architecture as Object Oriented Design. Intelligent Agents IV, Springer, 1997.
[11] Wooldridge, M., Jennings, N. Intelligent Agents: Theory and Practice. Knowledge Engineering Review, 1995.
[12] Brooks, R. A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation, RA-2:14-23, April 1986.
[13] Verharen, E., Dignum, F., Bos, S. Implementation of a Cooperative Agent Architecture Based on the Language-Action Perspective. Intelligent Agents IV, Springer, 1997.
[14] Museo del Prado. URL: http://prado.mcu.es
[15] Musee du Louvre. URL: http://www.louvre.fr
[16] Vatican Museum. URL: http://www.christusrex.org
[17] Museum Meiji-mura. URL: http://cin.meitetsu.co.jp
[18] Israel Museum. URL: http://www.imj.org.il
[19] British Museum. URL: http://thebritishmuseum.ac.uk
[20] Smithsonian National Air and Space Museum. URL: http://www.nasm.edu
[21] Universum. URL: http://www.universum.unam.mx
[22] The Chinese University of Hong Kong Art Museum. URL: http://cuhk.edu.hk/ics/amn/index.html
[23] Hungarian National Museum. URL: http://origo.hnm.hu/english/ottlapl.html
[24] Museum of Modern Art. URL: http://www.moma.org
[25] Schilling, A. Toward Real-Time Photorealistic Rendering: Challenges and Solutions. SIGGRAPH/Eurographics Workshop on Graphics Hardware, August 1997, New York City, NY. ACM Press.
[26] Goss, M., Yuasa, K. Texture Tile Visibility Determination for Dynamic Texture Loading. SIGGRAPH/Eurographics Workshop on Graphics Hardware, August 1998, Lisbon, Portugal. ACM Press.
[27] Labrou, Y., Finin, T. Semantics for an Agent Communication Language. Intelligent Agents IV, Springer, 1997.

Index Based Document Classification with CC4 Neural Networks

ENHONG CHEN+, ZHENGYA ZHANG+, XUFA WANG+, JIE YANG++
+ Department of Computer Science and Technology
++ Department of Precision Machinery and Precision Instrumentation
University of Science and Technology of China, Hefei, Anhui 230027, P.R. China
cheneh@ustc.edu.cn

Abstract. The CC4 network [2] is an efficient neural-network-based classification algorithm used in the metasearch engine Anvish [3], which uses CC4 to classify the web pages returned by other search engines. The documents returned by search engines are very short abstracts in which almost every keyword appears only once, so it is natural to represent these documents as binary vectors and classify them with the CC4 neural network. For real-life documents, however, a binary representation is not as appropriate. This paper proposes to map all documents to points in a low-dimensional space while preserving their distance information as far as possible. Each k-index of a document is then transformed into a binary sequence that the CC4 neural network can accept as input.
Experimental results show that our method greatly improves classification precision compared with classifying documents using the CC4 neural network alone, and that its performance is very stable.

1 Introduction

With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to use web search engines to find the desired information sources [1]. Among all the techniques used by search engines, document classification is very important for helping users find the information they are interested in efficiently. The CC4 network [2] is an efficient neural-network-based classification algorithm used in the metasearch engine Anvish [3], which uses CC4 to classify the web pages returned by other search engines. The documents returned by search engines such as Yahoo, WebCrawler, Excite, and Infoseek are very short abstracts in which almost every keyword appears only once. It is therefore natural to represent these documents as binary vectors and feed them to the CC4 neural network for classification. For real-life documents, however, the frequency of each keyword in a document varies widely, so a binary representation is not as appropriate. Considering that CC4 can only accept binary vectors as input, we propose to map all documents to points in a low-dimensional space while preserving their distance information as far as possible; each k-index of a document is then transformed into a 0/1 sequence that the CC4 neural network can accept as input.

(This work was supported by National Natural Science Foundation of China research grant 60005004 and National 973 Project G1998030509.)

In the following section, we describe our document-index-based classification method, called ExtendedCC4, which extends the original CC4 neural-network-based classification. Our theoretical analysis is given in Section 3, Section 4 presents our experimental results, and the final section offers concluding remarks.

2 CC4 Network Based Classification with Document Index

2.1 Construction of Document Indexing

To map every document to a point in a low-dimensional space, we propose a BP-neural-network-based incremental data indexing approach, called the MDS-NN method. In this method, a small data set, called the sample data set, is first indexed with the MDS approach; since the sample data set is very small, the time spent on this step is low. The indexing results are then provided as training samples and supervisor signals to train a neural network, and the trained neural network is used to index newly added data. The quality of the indexing is measured by the Stress function [4], [5].

Definition 1 (k-index) Suppose there exists a mapping T that maps any original data item d to a point p in k-dimensional space; then the point p is called the k-index of d.

Our proposed MDS-NN method is as follows:
1. Build the k-indexes of the training sample data using the MDS method.
2. Construct the sample data set and supervisor signal set for the BP neural network from the results obtained in step 1.
3. Train the BP neural network with the data obtained in step 2.
4. Build the k-index of newly arriving data with the trained neural network.
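A minimal sketch of the MDS-NN pipeline, for illustration only: it assumes scikit-learn's MDS and MLPRegressor as stand-ins for the paper's MDS step and BP network, and the function names are ours.

    import numpy as np
    from sklearn.manifold import MDS
    from sklearn.neural_network import MLPRegressor

    def mds_nn_indexer(sample_vectors, k=3):
        # Step 1: index the small sample set with MDS.
        sample_indexes = MDS(n_components=k).fit_transform(sample_vectors)
        # Steps 2-3: train a BP-style network to reproduce the mapping
        # from original vectors to their k-indexes.
        net = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000)
        net.fit(sample_vectors, sample_indexes)
        return net

    # Step 4: k-indexes for newly arriving documents.
    sample = np.random.rand(50, 200)      # placeholder document vectors
    indexer = mds_nn_indexer(sample)
    new_docs = np.random.rand(5, 200)
    print(indexer.predict(new_docs))      # 5 points in 3-dimensional space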
2.2 Document Classification with ExtendedCC4

The CC4 algorithm, proposed by Tong and Kak [2], is a corner classification training algorithm for three-layered feedforward neural networks. The CC4 network maps an input binary vector X to an output vector Y; the neurons are all binary neurons with a binary step activation function. Since CC4 can only accept binary vectors as input, and since for real-life documents the frequency of each keyword varies widely (making a binary representation inappropriate), we map all documents to points in a low-dimensional space while preserving their distance information as far as possible, and then transform each k-index into a 0/1 sequence that the CC4 network can accept as input. We call our method ExtendedCC4, in contrast to the original CC4 (which we call InitialCC4) that uses the binary representation of textual documents as input. In the following, we first present the notion of the L-discretization sequence of a real number, and then that of a k-index.

Definition 2 Let x be a real number with x in [a, b]. S is an L-discretization sequence of x if the frontmost k elements of S are all ones and the remaining L - k elements are all zeroes, where L is the length of S, m = (b - a)/L, and k = [(x - a)/m].

Definition 3 Suppose the k-index of an original data item d is (x1, x2, ..., xk), with xi in [ai, bi] for i = 1, 2, ..., k. Let L be a given positive integer and let Si be the L-discretization sequence of xi over the interval [ai, bi]. Then S = <S11, S12, ..., S1L, S21, S22, ..., S2L, ..., Sk1, Sk2, ..., SkL> is the L-discretization sequence of the k-index of d, where Sij = Si[j] is the j-th element of the L-discretization sequence of xi.

When training ExtendedCC4 to classify documents, each training document is indexed with the MDS-NN method, and its L-discretization sequence is then computed as the input of ExtendedCC4; the topic of the corresponding document serves as the supervisor signal of the ExtendedCC4 neural network. For new textual documents, the MDS-NN method is applied to obtain the L-discretization sequences of their k-indexes, and the trained ExtendedCC4 network is used to classify them.
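Definitions 2 and 3 translate directly into code. The following Python sketch is our own illustration (the paper gives no implementation):

    def l_discretize(x, a, b, L):
        # Definition 2: a length-L unary ("thermometer") code for x in [a, b].
        m = (b - a) / L
        k = int((x - a) / m) if x < b else L
        return [1] * k + [0] * (L - k)

    def k_index_sequence(point, intervals, L):
        # Definition 3: concatenate the L-discretization sequences of the
        # k coordinates, giving the binary input vector for CC4.
        seq = []
        for x, (a, b) in zip(point, intervals):
            seq.extend(l_discretize(x, a, b, L))
        return seq

    # Example: a 3-index in [0,1]^3 encoded with L = 4 gives a 12-bit vector.
    print(k_index_sequence((0.2, 0.9, 0.5), [(0, 1)] * 3, L=4))
    # -> [0, 0, 0, 0, 1, 1, 1, 0, 1, 1, 0, 0]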
3 Theoretical Analysis of ExtendedCC4

We first introduce the notion of the δ-neighborhood of a k-dimensional point X, and then relate the radius of generalization to the classification ability of ExtendedCC4.

Definition 4 Suppose X is the center of a hyper-cube whose edges have length 2δ. The continuous region covered by the hyper-cube is called the δ-neighborhood of X, denoted Nδ(X); X is the representative of the region.

Definition 5 Suppose X = (x1, x2, ..., xk) in [0, 1]^k and Y = (y1, y2, ..., yk), with xi, yi in [0, 1] for i = 1, 2, ..., k. If |xi - yi| < δ for all i, where δ > 0, then Y belongs to the δ-neighborhood of X, written Y in Nδ(X).

Theorem 1 Suppose the k-index X = (x1, x2, ..., xk) is the center of the training set for class C, with xi in [0, 1] for i = 1, 2, ..., k. Let s = 1/L for the L-discretization sequence of the k-index X, and let r = [δ/s]. For any Y = (y1, y2, ..., yk) with yi in [0, 1] for i = 1, 2, ..., k, if the Hamming distance between the L-discretization sequences of xi and yi is at most n for every i, then n <= r iff Y in Nδ(X).

Proof. First, since r = [δ/s], we have rs <= δ < (r+1)s. If n <= r, then since s > 0 we have ns <= rs <= δ; hence the k-indexes of X and Y differ by at most δ in every coordinate, and we conclude Y in Nδ(X). Conversely, given Y in Nδ(X), we have |xi - yi| < δ for i = 1, 2, ..., k. Since ns <= |xi - yi| < (n+1)s, we get ns < δ; and since rs <= δ < (r+1)s, it follows that n <= r, and the theorem is proved. •

By Theorem 1, as the radius of generalization increases during the training of ExtendedCC4, more and more points are covered by the δ-neighborhood of each training center, which improves the classification precision of the trained network. The precision reaches its highest value at a certain radius of generalization. Beyond that point, as the radius keeps increasing, more and more points are covered by the δ-neighborhoods of centers belonging to other classes, and the classification precision decreases. However, once the radius of generalization exceeds a threshold value r0, the δ-neighborhood of the center used as the first training sample covers all points, and the classification precision stays at a stable level, around the percentage of test samples belonging to the first class.

4 Experimental Results and Analysis

Our experiments are performed on real data downloaded from the UCI KDD Archive site http://kdd.ics.uci.edu. We randomly selected 10 of the 20 downloaded newsgroups and picked the frontmost 50 news items in each group as our experimental data. All documents are mapped to points in 3-dimensional space. In each newsgroup, we set a value ratio = (size of training set) / (size of entire data set) and calculate the training set size accordingly; here the ratio is 10%.

Figures 1 and 2 show the relationship between the radius of generalization and the classification precision of ExtendedCC4 and InitialCC4 when the ratio of training documents is 10%. The highest classification precision of ExtendedCC4 is much better than that of InitialCC4. We also observe that when the radius of generalization is larger than a threshold value r0, the classification precision of both ExtendedCC4 and InitialCC4 stays at a stable level, around the percentage (10%) of test samples belonging to the first class.

[Figure 1. The influence of the radius of generalization on the classification precision of ExtendedCC4 (ratio = 0.1). Figure 2. The same for InitialCC4 (ratio = 0.1).]

5 Conclusion

This paper proposes a document classification approach, ExtendedCC4, that maps documents to points in a low-dimensional space and then transforms each k-index into a 0/1 sequence that the CC4 neural network can accept as input. Our experiments show that the performance of ExtendedCC4 is much better than that of InitialCC4.

References
1. V. N. Gudivada, et al. Information Retrieval on the World Wide Web. IEEE Internet Computing, September-October 1997, pp. 58-68.
2. K.-W. Tong and S. C. Kak. A New Corner Classification Approach to Neural Network Training. Circuits, Systems, and Signal Processing, Birkhäuser Boston, 1998, pp. 459-469.
3. B. Shu and S. Kak. A neural network-based intelligent metasearch engine. Information Sciences, 120, 1999, pp. 1-11.
4. H. V. Jagadish. A retrieval technique for similar shapes. Proc. ACM SIGMOD Conf., May 1990, pp. 208-217.
5. C. Faloutsos. FastMap: A Fast Algorithm for Indexing, Data-Mining and Visualization of Traditional and Multimedia Datasets. Proc. of ACM SIGMOD Conf., 1995, pp. 163-174.
PRICE WATCHER AGENT FOR E-COMMERCE

SIMON FONG
E-Netique Pte Ltd, Singapore
E-mail: simon@enetique.com.sg

AIXIN SUN
School of Computer Engineering, Nanyang Technological University, Singapore
E-mail: sunaixin@pmail.ntu.edu.sg

KIN KEONG WONG
School of Computer Engineering, Nanyang Technological University, Singapore
E-mail: askkwong@ntu.edu.sg

We report an autonomous agent for retrieving competitors' product prices over the World Wide Web, for the purpose of price comparison at an e-commerce retail shop. This price watcher model differs from the conventional price comparison services currently available on the Internet in that it collects competitors' price information without the competitors' participation or attention. It scans price information over the Internet on a regular basis, builds up a knowledge base at the user's site, and provides a price comparison facility for shoppers to use. It is an information retrieval utility that could be used as part of a business intelligence infrastructure. This paper summarises the application background as well as the technical details of the design of the prototype.

1 Introduction

The watcher agent proposed in this paper is an autonomous software program that "spies" on competitors' prices over the web. The prices collected from the competitors are stored in a local database; they can be used for price comparison at the front end of an e-commerce online shop as well as for market research at the back end. This technology offers online shops a useful new feature: it helps increase consumers' confidence in buying products by showing them competitors' prices, and hence helps improve sales. The agent can be configured so that only prices higher than (or equal to) ours are displayed. A snapshot of a shopping site with the price watcher is shown in Figure 1.

One of the barriers for e-commerce retailers to overcome is that most consumers are not convinced that the price of a product offered at a given site is the best, and it is always easy for them to surf away to other shopping sites looking for a better offer [1]. How to encourage the consumer to commit to a purchase on the spot at the current site is thus an issue to be addressed.

[Figure 1. Snapshot of the application of Price Watcher, showing a product together with its list price, our price, and competitors' prices.]

There are several price comparison services available on the web [2,3,4]. The differences between our price watcher agent and most web-based price comparison software and portals are as follows:

1. Designed for use by individual online shops. The price watcher is a price-monitoring tool used by individual online shops, whereas the usual web-based price comparison services are made publicly available for web surfers to compare prices.

2. Neither a broker nor a public database is used. Most price comparison services involve a mediator, usually the web server or service provider, and a centralized database that maintains the price information available to users. In our watcher agent strategy, a private and confidential database holding the competitors' price information is located at the local site.

3. No participation of retail shops is required. Some price comparison services work by letting the participating stores submit their latest prices to the mediator. Our approach is different because there is no need to get the competitors involved.

4. It forms part of a competitor intelligence strategy. The price watcher is to be implemented as part of a competitor intelligence strategy that includes information retrieval, filtering, analysis, and presentation.
In this paper, Section 2 covers the overall working process of the price watcher. The product name matching and price extraction algorithms are described in detail in Sections 3.1 and 3.2, respectively. The technical limitations of the price watcher are given in Section 3.3, and we conclude our work in Section 4.

[Figure 2. The architecture of the watcher agent: an information retrieval layer (URL retrieval engine over web pages), a compilation layer, a storage layer, and a presentation layer, serving the market explorer, market monitor, market watcher, price watcher, and the marketing information system.]

2 Price Watcher Working Process

The price watcher working process consists of five steps:

1. The set of competitors' URLs, configuration parameters (e.g. retrieval scheduling), and product names are obtained from the database.
2. The HTML pages are downloaded using the web retrieval engine.
3. A dollar-sign detector is used as a filter: only pages containing dollar signs such as $ and S$ are processed further.
4. The product names are searched for within each page. The price for any possible match is extracted and stored in the local database.
5. The competitors' prices (and our own price) are then queried and shown in tabular form.

3 Technical Details

To monitor a web site, the contents of the site are downloaded on some schedule [5]; in the price watcher, only the HTML text is downloaded. Finding the level of similarity between our product names and the names provided on the web, and extracting the corresponding prices, are the two main challenges. The architecture of the watcher agent is shown in Figure 2. The agent is composed of two major parts: the price watcher and the market watcher. The market watcher helps the administrator of the online shop get the latest information about his competitors' web sites; it is not covered in this paper.

3.1 Product Name Matching

A product name can usually be divided into three parts: brand, model number, and description - for example, brand: Canon, model number: BJC-4200SP, description: Color Bubble Jet Printer. The model number is believed to be unique for a specific product. The brand part may appear slightly differently on different web sites, for example Hewlett Packard versus HP for short; this problem is solved by letting users input more than one brand equivalent. The description part may differ considerably from site to site. However, this part is not critical for product name matching, although it is useful in determining where the model number or brand can be found.

In product name matching we allow users to allocate a weight to each part - for example 50%, 30%, and 20% for model number, brand, and description, respectively. Model number and brand require exact matching regardless of character case: an exact match gives a similarity level of 1, otherwise the similarity level is 0. An approximate word matching algorithm [6] is applied to compute the similarity level of the description part. The final similarity of each part is the product of its similarity level and its weight, and the overall similarity level for the whole product name is obtained by summing the final similarities of the three parts. This final value is compared with a threshold value to decide whether a match has actually been detected.
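A minimal sketch of this weighted matching scheme, for illustration only: the paper's approximate word matching follows French et al. [6], for which a simple token-overlap ratio stands in here, and all names are ours.

    WEIGHTS = {"model": 0.5, "brand": 0.3, "description": 0.2}

    def exact(a, b):
        # Model number and brand: exact match, case-insensitive.
        return 1.0 if a.lower() == b.lower() else 0.0

    def approx(a, b):
        # Stand-in for approximate word matching [6]: token overlap ratio.
        ta, tb = set(a.lower().split()), set(b.lower().split())
        return len(ta & tb) / max(len(ta | tb), 1)

    def name_similarity(ours, found, brand_equivalents=()):
        # Brand equivalents (e.g. "Hewlett Packard" vs "HP") come from users.
        brand_ok = any(exact(b, found["brand"])
                       for b in (ours["brand"], *brand_equivalents))
        return (WEIGHTS["model"] * exact(ours["model"], found["model"])
                + WEIGHTS["brand"] * (1.0 if brand_ok else 0.0)
                + WEIGHTS["description"] * approx(ours["description"],
                                                  found["description"]))

    ours = {"brand": "Canon", "model": "BJC-4200SP",
            "description": "Color Bubble Jet Printer"}
    found = {"brand": "CANON", "model": "bjc-4200sp",
             "description": "Colour Bubble Jet Printer"}
    THRESHOLD = 0.7
    print(name_similarity(ours, found) >= THRESHOLD)   # match detected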
3.2 Price Extraction

The main operation of the price watcher is to extract prices from HTML documents. HTML documents are semi-structured in nature [7], so extracting information from them differs significantly from extracting information from tables in a database. The price extraction algorithm is developed based on the KPS mining algorithm [8]. Once a product name is matched and located in an HTML document, the following rules are applied to extract the price:

- For a product name appearing in a title (i.e. <title>, <h1>-<h6>), the price is most likely to be located in the string after the product name.
- For a product name appearing in an item list, the price is most likely to be located in the same item, or in a following item up to the end of the list.
- For a product name appearing in a cell of a table, the price is most likely to be located in the same cell, or in the same row of a column-wise table, or in the same column of a row-wise table.
- For a product name appearing in a textual line, the price is most likely to be located in the same paragraph, or in a following paragraph up to the end of the page.
- If more than one price is found, the price is assumed to be the first one appearing after the product name.

For each HTML page retrieved by the system, a Semi-Structured Data Tree [7] is constructed. If a model number can be located in the tree, the brand and the description are searched for within the same data node. If neither can be located in the current data node, a super data string is formed from all the data nodes that are children of the parent of the current data node. The similarity level of the obtained product name and the defined product name is then computed. The price of the product is first searched for in the current data node, and then up to three levels higher if no price information is found.
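The last rule - take the first price after the matched name - can be sketched as follows. This is our illustration, not the paper's code; a simple regular expression stands in for the KPS-based tree search [8]:

    import re

    # Matches prices such as "$1,299.00" or "S$59.95" (S$ is Singapore dollar).
    PRICE_RE = re.compile(r"S?\$\s*([0-9][0-9,]*(?:\.[0-9]{2})?)")

    def first_price_after(text, product_name):
        # Rule: if several prices occur, take the first one appearing
        # after the matched product name.
        pos = text.lower().find(product_name.lower())
        if pos < 0:
            return None
        m = PRICE_RE.search(text, pos + len(product_name))
        return float(m.group(1).replace(",", "")) if m else None

    html_text = "Canon BJC-4200SP Color Bubble Jet Printer  Our Price: S$249.00"
    print(first_price_after(html_text, "BJC-4200SP"))   # 249.0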
3.3 Price Watcher Limitations

One technical limitation is that the price watcher cannot distinguish Singapore dollars from American dollars, because "S$" and "$" are used interchangeably in Singapore. In the current prototype implementation, the price watcher can also deal only with textual data. Another problem is that the detected product name may not be the one being monitored even though a high similarity level is calculated; for example, "Cartridge for HP DeskJet 840C Printer" is easily detected as "HP DeskJet 840C Printer". A more sophisticated algorithm is needed to resolve this problem.

4 Conclusion and Future Work

In this paper, we have reported an autonomous software program called the price watcher that collects competitors' product prices on the web. The collected price information contributes to managers' business decision making, and it can be used to enhance shoppers' confidence via price comparison. The application of price watcher technology is believed to be relatively new and should create an impact on the way retail shops market their goods online. The first online shop to apply this technology would benefit most, because it places the business in a market position one step ahead of its competitors. It is envisaged that the system can be expanded to include scanning and analysis of competitors' other information, such as news, new products, and promotions. Work can also be extended to study how this agent can be integrated into the full infrastructure of business intelligence [5].

References
1. L. Gerald and L. Spiller. Electronic shopping: The effect of customer interfaces on traffic and sales. Communications of the ACM, 41(7), pages 81-87, 1998.
2. B. Krulwich. The BargainFinder agent: Comparison price shopping on the Internet. In Agents, Bots, and Other Internet Beasties, SAMS.NET publishing, pages 257-263, 1996.
3. R. B. Doorenbos, O. Etzioni and D. S. Weld. A Scalable Comparison-Shopping Agent for the World-Wide Web. In Proceedings of the First International Conference on Autonomous Agents, pages 39-48, 1997.
4. Pricewatch for Computer Products. http://www.pricewatch.com.
5. Q. Chen, P. Chundi, U. Dayal and M. Hsu. Dynamic Software Agents for Business Intelligence Applications. ACM Autonomous Agents'98, pages 453-455, 1998.
6. J. C. French, A. L. Powell and E. Schulman. Applications of Approximate Word Matching in Information Retrieval. In Proceedings of the Sixth International Conference on Knowledge and Information Management, pages 9-15, 1997.
7. S. J. Lim and Y. K. Ng. An automated approach for retrieving hierarchical data from HTML tables. In Proceedings of the Eighth International Conference on Information and Knowledge Management, pages 466-474, 1999.
8. T. Guan and K. F. Wong. KPS: a Web Information Mining Algorithm. Computer Networks, 31(11-16):1495-1507, 1999.

AUTOMATED INFORMATION EXTRACTION FROM WEB PAGES USING AN INTERACTIVE LEARNING AGENT

JUGAL K. KALITA AND PARITOSH ROHILLA
Dept. of Computer Science, University of Colorado, Colorado Springs CO 80933, USA

Due to the dynamic nature of the Web, the layout of information on a Web page can change often. If a comparison-shopping agent relies on the programmer to detect changes in the layout and change the information extraction algorithms accordingly, the agent's efficiency and accuracy are compromised, and the process of manually changing code is cumbersome. In addition, an agent built with hard-coded logic specific to a Web site works only for that domain. We have built a GUI-based system that enables the agent to learn to extract product information from a Web page. The algorithms use machine learning to make the agent generic and easily adaptable to various product domains; we avoid any hard coding. In addition, the system is able to learn the desired information from just a few training samples, which makes adding new sites for a product category relatively easy.

1 System Overview

The following steps are involved in semi-automatically extracting relevant information from Web pages for comparison-shopping:

1. A structure for the relevant information on Web pages is specified.
2. The learning engine is given samples that fit the structure defined.
3. From the training samples, the learning engine produces extraction rules.
4. The extraction rules are applied to Web pages to extract relevant information. The results also determine whether more training samples are necessary.
5. The rules learnt can be fine-tuned if the Learner cannot capture all details.

Our system has two programs, the Learner and the Extractor, which interface with a common database. The Learner has modules for the first three tasks; the Extractor handles the last two.
Our approach uses the inherent structure of tags and the syntactic properties of plain text to infer rules, and differs from published approaches based on wrappers or other techniques [1,2,5]. The entire page, rather than being broken up into tokens, is converted into a document tree made up of tag nodes and plain text nodes. The Learner tries to identify a node of interest by exploiting the properties of this tree and of the plain text nodes.

2 The Learner

The rules learnt by the Learner for a particular page are stored in the database; the Extractor uses these rules to extract records from target Web pages.

2.1 Structure specification

Most Web pages that provide information about products have an inherent structure. Such a segment of a page can be thought of as consisting of several records, where a record is a group of coherent pieces of related information [3]. Each record has several fields, of which we may be interested in only a few selected ones; for example, the relevant fields of a record (say, a book) may be like the ones shown in Figure 1. While defining the structure of the records, we can mark some fields as mandatory: in constructing a comparison-shopping agent for books, the mandatory fields for each record might be Title and Price. Besides providing names for the various fields of the record structure, it may be helpful to provide additional information such as the data type of each field; the extraction algorithm uses the data type information to reject nodes that do not match.

[Figure 1. Desired record structure, with fields such as Title, Discount, SellPrice, Author, and ISBN.]

2.2 Providing training samples

Training samples shown to the Learner are records contained in Web pages. Several Web pages representative of the same record structure are obtained. The Learner has a GUI interface that facilitates the learning process; a screen shot is shown in Figure 2. A user loads sample pages one at a time. Once loaded, a sample page looks like a text file without tags: the entire Web page is first converted into a document tree, and the plain text nodes, which appear in the display area, are indented according to their depth in the tree. The indentation gives a feel of the rendering, which helps the human trainer recognize record boundaries.

[Figure 2. The Learner GUI, showing the numbered plain text nodes of a sample book-listing page (titles such as "Artificial Intelligence", authors, prices, and ISBNs) and, below the display area, a form for marking where the fields of a record appear.]
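A minimal sketch of the document-tree construction used throughout Sections 2.2 and 2.3. This is our illustration, built on Python's standard html.parser; the class and field names are ours, not the authors':

    from html.parser import HTMLParser

    class Node:
        def __init__(self, tag=None, text=None, parent=None):
            self.tag, self.text, self.parent = tag, text, parent
            self.children, self.number = [], None

    class DocTree(HTMLParser):
        # Builds a tree of tag nodes and plain text nodes, then numbers
        # every node depth-first, as the Learner does.
        def __init__(self):
            super().__init__()
            self.root = Node(tag="root")
            self.cur = self.root
            self.text_nodes = []

        def handle_starttag(self, tag, attrs):
            node = Node(tag=tag, parent=self.cur)
            self.cur.children.append(node)
            self.cur = node

        def handle_endtag(self, tag):
            if self.cur.parent is not None:
                self.cur = self.cur.parent

        def handle_data(self, data):
            if data.strip():
                node = Node(text=data.strip(), parent=self.cur)
                self.cur.children.append(node)
                self.text_nodes.append(node)

        def number_nodes(self):
            counter = 0
            def visit(node):
                nonlocal counter
                node.number = counter
                counter += 1
                for c in node.children:
                    visit(c)
            visit(self.root)

    def tag_sequence(node):
        # Tags from the root down to the node, e.g. "html;body;table;tr;td;b".
        tags = []
        while node.parent is not None:
            node = node.parent
            if node.tag and node.tag != "root":
                tags.append(node.tag)
        return ";".join(reversed(tags))

    t = DocTree()
    t.feed("<html><body><table><tr><td><b>Artificial Intelligence</b>"
           "</td><td>$74.50</td></tr></table></body></html>")
    t.number_nodes()
    for n in t.text_nodes:
        print(n.number, tag_sequence(n), repr(n.text))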
Figure 3 shows a document tree corresponding to parts of a Web page. Our goal is to develop rules to extract fields that comprise records. The document tree for any Web document containing several records shows a number of interesting features: 71J0»«dtB,Mtt;.. 1. 2. Artificial HeSgcac* Figure 3. A document tree Most records in the tree have a similar pattern. The fields show common properties across records, e.g., they are at the same depth, and have parent nodes with the same tag. Every node in a document tree has a unique node number. This uniqueness property helps avoid conflicts while grouping identical looking fields into their respective records. 3. All plain text nodes show up as leaf nodes. Thus, the problem of identifying a field now narrows down to identifying the appropriate leaf node. The following information is gathered for all the fields of various sample records that are shown by the trainer: • The depth of a node in the document tree is recorded. Since all records have an identical pattern, it is very likely that this field is always at the same depth across all records. • For each field, we find the sequence of tags, starting from the root. For example, in Figure 3 the node containing "Artificial Intelligence" has the tag sequence "html;body;table;tr;td;b". 303 • The relative position of a field, the difference between its node number and the node number of the first field in the record, is recorded. • Any number of word(s) or characters) that stay constant across all records of a field are keywords for that field. Keywords can help in resolving ambiguity. • Any number of word(s) or characters) that should not be part of the plain text of the field are classified as omitwords for that field. Any plain text nodes matching the omitwords are ignored at extraction time. • The entire text associated with the field is also stored. We attempt to infer characteristics of the field by examining the text of a field across all records. For example, we can find the average size of the text in the field. The rule generation algorithm uses all of the information gathered above to formulate rules for each field of the record. 3 The Extractor The Extractor extracts and displays the records from the loaded document. The trainer specifies the rule set to be applied to the document. Having two GUIs, one for the Learner and the other for the Extractor helps the trainer to immediately view results of the samples that he provides to the Learner. Based upon the results he can either stop the learning process or continue to provide more samples. 3.1 Applying Extraction Rules The extraction module is a rule-based deduction system [4]. We have established the following general antecedent-consequent rules for each field of the record structure: • if depth of node = learned depth A tag sequence of node = learned tag sequence then node is a candidate node. • if node is a candidate node A node text has the specified data type A node has learned keywords A node doesn't have learned omitwords A text length is between min and max values then node belongs to the field. The extraction process follows a bottom up approach to form records. This approach helps deal with records that do not have all fields. Every node that qualifies as a field is extracted from the page, irrespective of the record it belongs to. The extracted fields are then grouped together into records. 3.2 Rule refinement The Extractor GUI provides a facility to look at the rules and make manual changes. 
3.2 Rule refinement

The Extractor GUI provides a facility to inspect the rules and make manual changes. This facility is handy if the algorithms are unable to calculate proper values for the rules, though it is advisable that the trainer not use it often.

4 Experimental Results

We selected 11 Web sites that sell products belonging to different product categories, filled in the product search forms on each of these sites to obtain pages showing product listings, and then ran our Learner and Extractor programs. Table 1 shows the results of some of our experiments.

Table 1. Experimental results (the sites tested included Borders, Noble, and Book Closeouts; the remaining site names are illegible in the source)

Site | Time (min) | Samples | Records Expected | Records Extracted | Wrong | Incomplete | Fine-Tuned | Recall % | Precision %
1 | 15 | 4 | 182 | 182 | 0 | 0 | Y | 100 | 100
2 | 6 | 3 | 55 | 10 | 0 | 7 | N | 18 | 100
3 | 4 | 4 | 55 | 45 | 0 | 7 | N | 81 | 100
4 | 4 | 5 | 55 | 55 | 0 | 0 | Y | 100 | 100
5 | 9 | 4 | 157 | 120 | 0 | 48 | N | 77 | 100
6 | 10 | 4 | 137 | 157 | 0 | 0 | Y | 100 | 100

All the online stores considered have simple record structures. The time a trainer spends getting our system to learn to extract the records varies between 15 and 40 minutes, including the time it took to count the extracted records and determine whether they were incomplete or wrong. The experiments indicate that, given sufficient time, our system can achieve a recall rate of 100% for all stores; the precision is 100% throughout. On almost all Web sites, rule refinement involved changing only the minimum and maximum values for the length of the text that can appear in a field; we think this overhead could have been reduced by a more careful selection of sample records. For some Web documents we were able to achieve a recall rate of over 75% without fine-tuning. The final rules learnt show very impressive recall and precision rates.

References
1. B. Adelberg. NoDoSE - A tool for Semi-Automatically Extracting Structured and Semistructured Data from Text Documents. In SIGMOD '98, pp. 283-294, Seattle, WA, June 1998.
2. B. Ribeiro-Neto, A. H. F. Laender, A. S. da Silva. Extracting Semi-Structured Data through Examples. In CIKM-8, pp. 94-101, Kansas City, MO, November 1999.
3. D. W. Embley, Y. Jiang, Y. K. Ng. Record Boundary Discovery in Web Documents. In SIGMOD '99, pp. 467-478, Philadelphia, PA, June 1999.
4. P. H. Winston. Artificial Intelligence. Addison-Wesley, 1992.
5. W.-Y. Lin, W. Lam. Learning to Extract Hierarchical Information from Semi-structured Documents. In CIKM-9, McLean, VA, November 2000.

AN INTELLIGENT AGENT WITH STRUCTURED PATTERN MATCHING FOR A VIRTUAL REPRESENTATIVE

SEUNG-IK LEE AND SUNG-BAE CHO
Computer Science Department, Yonsei University,
134 Shinchon-dong, Sudaemoon-ku, Seoul 120-749, Korea
E-mail: cypher@candy.yonsei.ac.kr, sbcho@csai.yonsei.ac.kr

We propose a conversational agent that can act as a virtual representative of a web site, interacting with visitors in natural languages. The agent consists of three main components: dialogue act categorization, structured pattern matching, and knowledge construction and representation. Dialogue acts (DAs) are classified by automata that accept sequences of keywords defined for each DA, in order to identify the user's intention. Structured pattern matching is used to match queries with responses, rather than conventional natural language processing techniques. To show its usability and potential, the agent is applied to the introduction of a web site. The results show that the conversational agent is able to present adequate and friendly responses.
1 Introduction

Conversational agents have attracted attention recently because they can converse with users in natural languages, and can thus provide accurate information about a web site and respond quickly with friendly interaction. One of the first conversational agents, Eliza, was born at the Massachusetts Institute of Technology in 1966. Eliza was built for research on natural language processing and uses a simple pattern matching technique [1]. ALICE (Artificial Linguistic Internet Computer Entity, http://www.alicebot.org) is written in a language called AIML (Artificial Intelligence Markup Language), based on XML. A new idea in ALICE is to tailor the conversation to categories of individual, mainly through attempts to determine the client's age, gender, geographic location, and occupation. However, most conversational agents share a shortcoming: they do not consider the user's intention, because of simple sequential pattern matching based on keywords. This paper aims to develop a conversational agent that identifies the user's intention and uses it in matching the corresponding response.

2 Conversational Agent

The conversational agent we propose identifies the intention of a query and responds in natural language, in both Korean and English. A user query is preprocessed to correct typos and replace synonyms, and is then put through the DA categorization process, which classifies it into categories of dialogue acts (DAs) [2,3,4] and extracts the keywords for each DA. The DAs, keywords, and preprocessed query are used to match the most appropriate response in a knowledge database called a script.

Table 1. Dialogue acts
User input | Dialogue act
Primary question | Ability, Description, Fact, Location, Method, Miscellaneous, Obligation, Reason, Time, WhatIf, Who
Secondary question | Acquisition, Comparison, Confirmation, Cost, Direction, Example, More, Possession
Primary statement | Act, Fact, Message, Miscellaneous, Possession, Status, Want
Secondary statement | Cause, Condition, Feeling, Time

2.1 Dialogue Act Classification

To identify the intentions of a query, thirty domain-independent DAs are defined, as in Table 1. Each DA is identified by a corresponding automaton constructed from keywords and their sequential information, as in Fig. 1, which shows the automaton that classifies a query into the 'Location' DA. A special meta character '#' is used to allow the ending of a word to vary according to subject, tense, etc.

[Figure 1. Automaton for the location question, built over LOCATION keywords such as state, county, city, street, town, neighborhood, nation, and location.]

A query is classified into only one DA of the primary category, whereas several DAs can be assigned for the secondary category.
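A keyword-sequence automaton like the one in Fig. 1 can be approximated with a small sketch. This is our crude stand-in, not the authors' implementation; the '#' meta character maps naturally onto a word-prefix test:

    LOCATION_KEYWORDS = {"state", "county", "city", "street", "town",
                         "neighborhood", "nation", "location"}

    def matches(word, pattern):
        # 'locat#' matches 'located', 'location', ... (the '#' meta character).
        if pattern.endswith("#"):
            return word.startswith(pattern[:-1])
        return word == pattern

    def is_location_question(query):
        # Approximates the automaton of Fig. 1: accept a 'where'-style
        # opener, or an explicit location keyword in the query.
        words = query.lower().rstrip("?").split()
        if not words:
            return False
        if words[0] == "where":
            return True
        return any(matches(w, "locat#") or w in LOCATION_KEYWORDS for w in words)

    print(is_location_question("Where is your laboratory?"))        # True
    print(is_location_question("Tell me the location of the lab"))  # True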
3 Experimental Results

To show the conversational capability of the agent, it is applied to the introduction of a web site. Fig. 3 (a) shows a response of the agent to a query concerning the location of the laboratory, and (b) shows a somewhat different response to a location question with the same intention. As can be seen in Fig. 3 (a) and (b), the agent shows intelligent behavior by responding a bit differently to queries of the same intention. What follows shows a stream of conversation between a user and the agent. The agent is able to interact with users naturally, presenting the requested information to them.

Figure 3. Questions of the same intention: (a) location question 1; (b) location question 2

User  : who are you?
Agent : I am a virtual representative of this site to help you find what you want. So, what do you want to know?
User  : where is your laboratory?
Agent : It is located at the 3rd engineering building in yonsei university
User  : hmm, can I ask your name?
Agent : I am Walwalri.
User  : I want to know name of your professor
Agent : My professor's name is Sung-Bae Cho.

4 Conclusion

In this paper, we have constructed a conversational agent that can respond to user queries in natural language. The agent can accept queries in both Korean and English, and give responses more consistent with the user's intention. This consistency originates from identifying the user's intention through the classification of DAs and applying them to the structured pattern matching. Furthermore, the conversational agent has the advantage of making users feel natural and comfortable in finding information because of mutual interaction with natural language support. As future work, we plan to study the automatic construction of scripts from web pages, in order to reduce the time and effort of script construction. Maintaining contextual information in a conversation is another research topic, to guarantee more intelligent and consistent interactions. Finally, giving the initiative to both sides could make the conversation more natural than the current implementation, in which the initiative is given only to users.

References

1. Weizenbaum, J., ELIZA - a Computer Program for the Study of Natural Language Communication between Man and Machine. Communications of the ACM 9(1) (1966) pp. 36-45.
2. Austin, J.L., How to Do Things with Words. Clarendon Press, Oxford (1962).
3. Stolcke, A. et al., Dialogue Act Modeling for Automatic Tagging and Recognition of Conversational Speech. Computational Linguistics 26(3) (2000) pp. 339-373.
4. Core, M.G. and Allen, J.F., Coding Dialogs with the DAMSL Annotation Scheme. Working Notes of the AAAI Fall Symposium on Communicative Action in Humans and Machines (1997) pp. 28-35.
5. Brooks, R.A., A Robust Layered Control System for a Mobile Robot. IEEE Journal of Robotics and Automation (1986) pp. 14-23.
6. Cho, J. et al., Efficient Crawling through URL Ordering. Proceedings of the 7th International Conference on the World Wide Web (1998) pp. 161-172.

A CALENDAR MANAGEMENT AGENT WITH FUZZY LOGIC

WAYNE WOBCKE
Department of Information Systems, University of Melbourne, Parkville VIC 3052, Australia
E-mail: wobcke@staff.dis.unimelb.edu.au

In this paper, we present a prototype calendar management assistant that uses fuzzy logic for the representation and satisfaction of user preferences. The system uses ideas from fuzzy scheduling and, more loosely, from "granular computing" to schedule actions at two different levels of temporal granularity: the task level and the activity level. Tasks are specific actions that have a set start time and are performed throughout a single timeslot (such as a meeting), whereas activities are processes that may not have a fixed start time and which may be spread over many time periods (such as writing a research paper). Users specify start times, deadlines and durations using terms such as morning, late afternoon and around 1 hour, which are interpreted as fuzzy functions: the scheduler must produce a valid schedule that maximizes the overall satisfaction of the user's preferences. The present version of the system is also able to schedule plans of tasks or activities (networks of tasks or of activities ordered into an acyclic graph) in which fuzzy constraints between actions may be specified.

1 Introduction

Adopting the "personal assistant" metaphor promoted by Maes,¹ we present in this paper a prototype personal diary assistant that aims to help its user with the "time management" function of diaries by scheduling personal tasks and activities. The system is personalized in taking into account user preferences when scheduling actions: it makes use of fuzzy logic⁴ in allowing the user to specify preferences for the start time, duration and deadline of an action using simple natural language expressions such as morning, late afternoon, next week, around 1 hour, etc., each interpreted by the system using a predefined fuzzy function. The present version of the system is also able to schedule plans of tasks or activities in which fuzzy constraints between actions may be specified by the user.

The assistant schedules actions at two levels of temporal granularity, an idea related to Zadeh's notion of 'information granularity'.⁵ The task level concerns specific actions that have, after scheduling, a set start time and which are assumed to be performed throughout a single timeslot, e.g. a (multi-agent) meeting. By contrast, an activity is a process that is not assumed to have a fixed start time, even after scheduling, and whose execution may be spread over many time periods, e.g. writing a research paper.
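To make the fuzzy interpretation concrete, here is a minimal sketch (our own illustration, not the system's code) of trapezoidal membership functions for expressions such as morning and around 1 hour; the breakpoints are invented for illustration, and min is used as the standard fuzzy conjunction.

    def trapezoid(a, b, c, d):
        """Membership rises from 0 at a to 1 on [b, c], then falls to 0 at d."""
        def mu(x):
            if x <= a or x >= d:
                return 0.0
            if b <= x <= c:
                return 1.0
            if x < b:
                return (x - a) / (b - a)
            return (d - x) / (d - c)
        return mu

    # Hypothetical interpretations (hours of day / minutes of duration):
    morning       = trapezoid(6.0, 8.0, 11.0, 12.5)  # preference for a start time
    around_1_hour = trapezoid(40, 55, 65, 80)        # preference for a duration

    # Degree to which a 9:30 start with a 70-minute duration satisfies the user:
    satisfaction = min(morning(9.5), around_1_hour(70))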
Task scheduling is based on a standard "appointment diary" divided into half-hour slots, in which tasks are allocated one or more consecutive slots, while activity scheduling operates at a different level of temporal granularity, dividing the day into four fixed 2-hour periods and scheduling "work sessions" of an activity over a number of such periods. Moreover, the interpretation of such "scheduling" is much more liberal: by allocating a timeslot to a task, it is understood that the user works on that task throughout the timeslot; however, with an activity, which is typically more easily interruptible, it is understood only that the user works on the activity in a given period for the length of time allocated to the activity for the period (making no assumption about when, during the period, the user works on the activity). A major function of the system is to maintain consistency between the different views of the diary resulting from these two levels of temporal granularity.

2 Task Layer

The interface to the task layer is designed to look like a standard appointment diary, with each day divided into half-hour slots, as illustrated in Figure 1. The duration of each task is given in parentheses beside its description. The dashed lines indicate that there are activities scheduled for the corresponding period, the length of the lines giving the user an idea of how much time is allocated to activities during a period; this is explained further in Section 3.

Figure 1. Appointment Diary Interface

The system uses two scheduling methods in sequence: a depth-first "greedy" scheduler that uses heuristics to find an allocation of timeslots to a set of tasks as efficiently as possible, then a local search "hill climbing" scheduler that makes minimal adjustments to efficiently improve a given scheduling solution. There are two sorts of scheduling heuristics, for defining (i) the order in which tasks are considered by the search and (ii) the order in which assignments to tasks are explored. First, tasks are ordered according to any plan dependencies,ᵃ then from most constrained to least constrained. Second, when scheduling a single task, the possible timeslots are explored in order of degree of preference satisfaction. When there are no valid timeslots for a task prior to the commencement of scheduling but the day for the task is known, local search is invoked in order to increase the size of any gaps in the day's schedule so as to create a slot for the task, moving up to three tasks forwards or backwards, and compressing a task that has a fuzzy duration (possibly in conjunction with moving up to two preceding or succeeding tasks). Meetings between multiple agents are scheduled using the "hierarchical" protocol of Sen and Durfee² adapted to handle fuzzy preferences in conjunction with user profiles, as described in Wobcke and Sichanie.³

For efficiency, the first solution found by the scheduler is adopted, even though this may not be optimal. The scheduler then applies local search in order to improve the initial solution by making small changes to the task schedule for a single day. The heuristic employed is to consider moving a sequence of up to three tasks forwards or backwards by half an hour. The aim is to minimize changes to task sequences in the user's schedule. The typical case where the use of local search improves the solution is when there are a number of relatively unconstrained tasks that require scheduling.

ᵃ In keeping with the emphasis on efficiency, plans have a restricted structure, essentially enabling a tree-like set of dependencies to be constructed.
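In outline, the greedy stage can be sketched as below. This is our own simplified reconstruction — single-slot tasks, no plan dependencies, and no hill-climbing repair — not the system's actual code.

    def greedy_schedule(tasks, free_slots, preference):
        """tasks: task identifiers; free_slots: mutable set of half-hour slot
        indices; preference(task, slot) -> degree in [0, 1] computed from the
        task's fuzzy start-time/duration functions.
        Places most-constrained tasks first, each into its best free slot."""
        def n_options(task):
            # A task with fewer acceptable slots is more constrained.
            return sum(1 for s in free_slots if preference(task, s) > 0)

        schedule = {}
        for task in sorted(tasks, key=n_options):
            best = max(free_slots, key=lambda s: preference(task, s), default=None)
            if best is not None and preference(task, best) > 0:
                schedule[task] = best
                free_slots.discard(best)
        return schedule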
3 Activity Layer

The idea behind the activity layer of temporal abstraction is that not all actions a user may want to enter in a diary are tasks. Many are processes that may be spread over a number of days or weeks (or even months). Our aim is to provide some assistance to the user with "time management" for these types of activities. An activity should be "scheduled" at a higher level of granularity than tasks, but only in the loose sense that it is allocated some amount of time in some time periods. The idea is that this will enable the user to be sure the activity can be completed before its deadline, given the other tasks and activities in the user's diary. We call a part of an activity to be executed in a time period an activity session, and a collection of activity sessions (for a number of activities) an approximate schedule.

The use of a separate layer of temporal granularity necessitates a new interface, and this is based on some assumptions about the particular grain size for activities that will be useful. We work with activities that are of week-level granularity and which are divided into sessions of up to 2 hours duration by the scheduler. This is partly inspired by the standard "week-to-view" diary, which also forms the basis of the interface design. A sample screen, compatible with the task view from Figure 1, is shown in Figure 2.

Figure 2. Week-to-view Diary Interface

Each activity has a preferred work period and a preferred work day, both of which may be fuzzy expressions such as morning or next week. The scheduler allocates a number of periods to each activity. The value of a period is the average, over the free timeslots in the period, of the degrees to which the timeslots meet the given preference. A "capacity check" must be carried out for each period proposed by the scheduler for an activity session. The simplest form of capacity check is to ensure that the user has sufficient free time in the period to allocate to the new activity session. However, this check is more complicated if activity deadlines fall within a period. In this case it is assumed that the user can optimally distribute time from the period to the different activity sessions. This enables the scheduler to treat the activity sessions as if they were discrete tasks, and to determine the value of a period by computing the ordering of these sessions that maximizes the degree to which all deadlines are met, for each ordering using the earliest end time for an activity as the basis against which the fuzzy deadline function is evaluated.

Activity scheduling is based on task scheduling, although instead of allocating a single timeslot to a task, a set of periods is allocated to an activity, periods being allocated in order of satisfaction of the user's preferences. For plans of multiple activities, the scheduler uses depth-first search driven by the plan's dependency structure, although for efficiency reasons no backtracking is performed. Activity rescheduling is invoked whenever a task is to be added to a period in which there is insufficient capacity.
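The period valuation and the simplest capacity check described above translate directly into code; the sketch below is our own paraphrase, with assumed slot and preference representations.

    def period_value(free_slots, preference):
        """Value of a 2-hour period for an activity: the average, over the
        period's free timeslots, of the degree to which each slot meets the
        activity's fuzzy work-period preference."""
        if not free_slots:
            return 0.0
        return sum(preference(s) for s in free_slots) / len(free_slots)

    def capacity_ok(free_minutes, session_minutes):
        # Simplest capacity check: enough free time for the new session.
        return free_minutes >= session_minutes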
4 Conclusion

We have described a calendar management assistant that uses fuzzy logic for the representation and satisfaction of user preferences. The system operates at two complementary levels of temporal granularity: scheduling tasks (in timeslots) and activities (in larger time periods). The scheduler uses depth-first search with heuristics for ordering the actions and the possible timeslots, and uses local search to improve the initial solution so obtained. The assistant is also able to schedule plans of tasks or of activities that may include fuzzy constraints, and includes a "hierarchical" protocol for meeting scheduling between multiple agents.

Acknowledgements

This work was carried out at British Telecom Laboratories in the United Kingdom. We gratefully acknowledge the contribution to research and development on the project made by Ben Azvine, David Djian, K.C. Tsui, Simon Case, Heather Maclaren, Gilbert Owusu and Arash Sichanie.

References

1. Maes, P. (1994) 'Agents that Reduce Work and Information Overload.' Communications of the ACM, 37(7), 31-40.
2. Sen, S. & Durfee, E.H. (1994) 'On the Design of an Adaptive Meeting Scheduler.' Proceedings of the Tenth Conference on Artificial Intelligence for Applications, 40-46.
3. Wobcke, W.R. & Sichanie, A.G. (2000) 'Personal Diary Management with Fuzzy Preferences.' Proceedings of the Fifth International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology, 105-120.
4. Zadeh, L.A. (1965) 'Fuzzy Sets.' Information and Control, 8, 338-353.
5. Zadeh, L.A. (1973) 'Outline of a New Approach to the Analysis of Complex Systems and Decision Processes.' IEEE Transactions on Systems, Man, and Cybernetics, 3, 28-44.

XML BASED MULTI-AGENT COLLABORATION FOR ACTIVE DIGITAL LIBRARIES

YANYAN YANG, OMER F. RANA AND DAVID W. WALKER
Department of Computer Science, Cardiff University, PO Box 916, Cardiff CF24 3XF, UK
E-mail: L.Yann@cs.cf.ac.uk

ROY WILLIAMS
Center for Advanced Computing Research, California Institute of Technology, CACR 158-79, Pasadena, CA 91125, USA

GIOVANNI ALOISIO
Department of Innovative Engineering, University of Lecce, Italy

Intelligent and automatic processing of distributed data that efficiently supports scientific collaboration between both professional and casual users is a highly demanding task. It is particularly challenging when the system must cope with active data that is processed on-demand. As part of the ongoing SARA Digital Library project, the research presented here proposes an intelligent mobile agent approach to on-demand processing of remote sensing data. We discuss the agent-based infrastructure that we have developed. The design, architecture and implementation of a prototype system that applies this approach are reported here. In this experiment, the SARA system utilises cooperative software agents for data access and analysis, and uses XML to model metadata and support agent communication on clusters of servers. Although the examples presented are mainly based on the SARA system, the applicability of the proposed techniques to the potentially more rewarding active archive system should be obvious. In particular, we believe the proposed agent design can allow distributed access, concurrent querying, and parallel computing over multiple heterogeneous remote-sensing archives in a modular and scalable fashion.

1 Introduction

The general problem of managing large digital data archives is particularly challenging when the system must cope with active data which is processed on demand [2]. Active data represents data that is dynamically generated by a scientific experiment, or may be obtained from a sensor or monitoring instrument. The amount of digital spatial data available is growing rapidly.
In particular, there is a vast amount of data from earth observation satellites. This presents a challenge for the development of software systems to enable the storage, management and dissemination of these huge datasets in on-line data archives or digital libraries. Ideally, such a system should provide efficient, on-demand remote access to these datasets over the Internet, so that authorized users can easily access and utilize the data for a variety of applications including geology, image registration, resource monitoring, etc. For a number of spatial applications, such as satellite imagery, the processing requires high-performance compute servers. In addition, scientists often require integrated access to information combining retrieval, computation, and visualization of individual or multiple datasets. Scientific collaborations are already distributed across continents, and software to enable these work groups will become increasingly vital. It will be necessary for human interfaces to these archives to become simpler to use and more flexible.

In the scientific world, scientists need to deal with both data-centric and process-centric views of information. While it is important to have access to information, often it is also important to know how the information was derived. Hence, the scientist should have a technological infrastructure that can intelligently and automatically process the distributed data, thereby transforming the processed data into useful knowledge.

SARA is an active digital library of multi-spectral remote sensing images of the earth, and provides web-based on-line access to such images. As part of the ongoing SARA digital library project, this paper describes a collaborative effort to explore an XML- and agent-based framework for the distributed management and analysis of remote sensing archives. We believe our proposed techniques suggest useful guidelines that go beyond the SARA system. Our results provide further evidence of the utility of the mobile agent approach for active archive systems.

2 The multi-agent system

The SARA service is currently maintained by Caltech and the University of Lecce [1]. The data are maintained in different kinds of file systems, such as SUN NFS and IBM/Livermore HPSS, and delivered using a web front end. The web interfaces act as an integration tool for combining different server implementations. At present, a SARA user must telnet to the center hosting the parallel computer, select the computing resources to use, and perform the image processing activity. The results of computation, image or text, are written to web pages and examined with a browser, enabling further data to be imported and processed if desired. The current approach is very restrictive on the kinds of users that can access and manipulate data in SARA, as the client may not have permission or the relevant software to analyse data. Also, CGI scripts are stateless, whereby a user cannot base one request on the outputs or conditions generated from a previous one.

A multi-agent system, which comprises both intelligent and mobile agents, has been developed to manage and analyze distributed multi-agency remote sensing data. Compared with other agent applications, our emphasis is on the use of mobile agents to transfer analysis algorithms to image processing archives.
We localize the most complex functionality in non-mobile Local Interface Agents (LIA), which remain at one location, communicate with the mobile User Interface Agents (UIA), and provide resources and facilities to lightweight mobile agents that require less processor time to be serialized and are quicker to transmit. Each agent is responsible for offering a particular type of service, and the integration of services is based on a user specification. SARA mobile agents are persistent, and can wait for resources to become available. Agents allow the delivery and retrieval of data to complete without user monitoring or recovery actions.

There are two types of User Interface Agents: User Request Agents (URA) and User Assistant Agents (UAA). A URA supports the user in creating a query or operation to perform on the SARA data. A UAA manages the user's information and provides control functions to the user, such as updating their file space on a remote server and parameter settings for their visualization tool. There are many types of Local Interface Agents: a Local Assistant Agent (LAA) supports interaction with any visiting User Request Agents (URAs) by informing them about the available data and computing resources, and cooperating on the completion of the task carried by the URA. A Local Management Agent (LMA) coordinates access to other LAAs and supports negotiation among agents; it is responsible for optimizing the itineraries of mobile URAs, to minimize the bottlenecks inherent in parallel processing, and for ensuring that the URA is transferred successfully. A Local InteGration Agent (LIGA) provides a gateway to a local workstation cluster or a parallel machine. A Local Retrieval Agent (LRA) can translate query tasks and performs the actual information retrieval from the local archive. In addition to retrieval, an LRA may also perform other operations; for instance, it may save the results to a file before sending them to the user. A Local Security Agent (LSA) is responsible for authenticating and performing a validation check on the incoming URA. The URA will be allocated an access permission level: agents from registered users may use, and have access to, more information resources than agents from unregistered users.

3 XML-based agent communication and data specification

3.1 XML-based agent communication

It is essential that the agents used to access heterogeneous remote-sensing data archives communicate and co-operate with each other in order to provide services and satisfy user requests. A simple way to do this is to define an interaction protocol for communication in the particular problem area. How best to represent such a protocol, and how to define a standard message format with meaningful structure and semantics, have become key issues. We propose an XML schema for agent communication that enables agents to communicate with each other by expressing intentions in the SARA ontology. Our XML schema allows efficient parsing and is modular and extensible to support evolving classes of XML documents. In addition, it retains its simplicity and clarity, and is readable by the user. Each message has a standard structure, showing the message type, context information, message sequence, and the body of the message. Autonomous agents cooperate by sending messages and using concepts from the SARA ontology, which describes terms and concepts (such as a Track, a Latitude/Longitude coordinate, etc.) and their inter-relationships.
We represent the ontology by listing terms, their meanings and intended use in a Document Type Definition (DTD). Every specific XML specification is based on a separate DTD that defines the names of tags, their structure and content model. A DTD can define elements, attributes, types, and required, optional, or default values for those attributes. While the XML specification contains the structured information, the DTD defines the semantics of that structure, effectively defining the meaning of the XML-encoded message.

<?xml version="1.0" encoding="UTF-8"?>
<!ELEMENT message (context+, content+)>
<!ATTLIST message
    type (request | response | failure | refuse) #REQUIRED
    date CDATA #IMPLIED
    id CDATA #REQUIRED>
<!ELEMENT context EMPTY>
<!ATTLIST context
    sender CDATA #IMPLIED
    receiver CDATA #IMPLIED
    originator CDATA #IMPLIED
    returnby CDATA #IMPLIED>
<!ELEMENT content (itinerary+, querydef?, results)>
<!ELEMENT itinerary (server)+>
<!ELEMENT server (Cardiff | Lecce | Caltech, server2?)>
<!ELEMENT server2 (Cardiff | Lecce | Caltech)>
<!ENTITY query SYSTEM "query.xml">
<!ENTITY querydef (&query;)+>
<!ELEMENT results (#PCDATA)>

Figure 1. A DTD for Agent Message Communication

Message type explicitly represents intentions such as request, response, failure, and refuse, and allows the system to monitor and control the progress of the interaction. For example, we can define one message for a request to search for tracks, and another for the information passing that returns tracks. Context is used to identify the sender, the intended recipient of the message, or the originator of forwarded messages, using some form of local, regional, or global naming scheme. Returnby sets a deadline on the user's waiting time. Content defines the itinerary of the agent and the user's request wrapped in XML, as well as the form of the returned results.

We define a set of DTDs for agent communication in the SARA system that specifies all of the legal message types, constraints on the attributes, and message sequences. In XML-based messages, agents encode information with meaningful structure and commonly agreed semantics. On the receiving side, different parts of the information can be identified and used by different services. Agents may use XML to explain their beliefs, desires, and intentions (BDI). Moreover, a mobile agent can carry an XML front-end to a remote data archive for data exchange, where both queries and answers are XML-encoded. We have currently identified various types of messages for agent interaction, such as upa-ura messages, ura-lia messages, and lia-upa messages. Messaging is performed synchronously, so that the URA is launched as soon as it receives a message from the UPA. A lia-upa message is sent from a LIA to the related UPA when the tasks are finished. In our system, we use the JAXP interface to XML developed by Sun, which supports SAX and the Document Object Model (DOM).

3.2 XML-based data specification

In our system we use XML to encode system structure as metadata. The metadata base consists of four tables. The Track table houses information about the track image, such as its name, date of acquisition, unique id, width, height, and number of channels. The Coords table contains the latitude and longitude coordinates of the four vertices of the image. In the File table the filenames of the files constituting the image are recorded, and finally the Stored table contains information about where the image is actually stored, that is, one of the data servers that compose the distributed digital library.
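For illustration, a request message consistent with the DTD of Figure 1 might look as follows; the identifiers, dates and itinerary are invented, and the query body is elided:

<?xml version="1.0" encoding="UTF-8"?>
<message type="request" id="msg-0042" date="2001-05-14">
  <context sender="URA-17" receiver="LMA-Cardiff"
           originator="UAA-3" returnby="2001-05-14T18:00"/>
  <content>
    <itinerary>
      <server><Cardiff/><server2><Caltech/></server2></server>
    </itinerary>
    <!-- a querydef would expand the external entity query.xml,
         e.g. a Track search over a latitude/longitude box -->
    <results/>
  </content>
</message>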
4 Implementation and application of the system

We have developed a simple prototype of an agent-based SARA system [4]. A subset of the LIAs and UIAs has been implemented to provide the basic functionality for on-demand processing of SARA data. We also implemented the following functions: locate a mobile agent, send a URA to a specified data server or compute server, retrieve URA-related information from the management server, stop the execution of an agent and release the resources held by the agent, and report agent status.

The current system can be employed in a wide range of application domains, such as the analysis of multi-temporal images corresponding to changes in the ecology of a particular region, and studies of environmental pollution. SARA images can also be compared based on phase and amplitude differences of the backscatter radiation, to study seismic or volcanic processes, motions of ice-sheets or glaciers, or other similar geological events. Support for real-time processing can facilitate frequent overpassing of satellites over a given region in case of natural disasters such as forest fires or flash floods. The agent-based approach, for the reasons mentioned above, provides a useful system for enabling such applications to be deployed more effectively, and involves an integration of data resources, various types of physical storage media, and various compute servers and analysis algorithms. A detailed description of agent-based data analysis is given in [3].

5 Conclusions and future work

An agent-based framework for managing access to data, supporting parallel queries to data repositories, and providing an XML-based data model for integrating data from different repositories has been outlined. Our approach utilises stationary agents, which undertake specific roles, and mobile agents, which can carry analysis algorithms to data repositories. We use XML to model metadata and support agent communication. We are currently working on further enhancement of the user interface for formulating queries and visualizing data. We are integrating more sophisticated agents into the system. Another important task for future study is to investigate system scalability and response times when thousands of agents are hosted on a single data source.

References

1. Aloisio G., Milillo G., Williams R.D., An XML architecture for high-performance web-based analysis of remote-sensing archives, Future Generation Computer Systems 16 (1999) 91-100.
2. Coddington P.D., Hawick K.A., James H.A., Web-based access to distributed high-performance geographic information systems for decision support, In Proc. HICSS-32, Maui, January 1999.
3. Rana Omer F., Yang Yanyan, Georgousopoulos Christos, Walker David W., Williams Roy, Agent Based Data Analysis for the SARA Digital Library, Workshop on Advanced Data Storage / Management Techniques for HPC, Warrington, UK, 23rd-25th February 2000.
4. Yang Yanyan, Rana Omer F., Georgousopoulos Christos, Walker David W., Williams Roy, Mobile Agents and the SARA Digital Library, In Proceedings of the IEEE Advances in Digital Libraries 2000, Washington DC, Published by IEEE Computer Society, May 2000.

CHAPTER 5

DISTRIBUTED INTELLIGENCE

AN INTELLIGENT CHANNEL ALLOCATION SCHEME FOR MOBILE NETWORKS: AN APPLICATION OF AGENT TECHNOLOGY

ELIANE L. BODANESE
Centro Federal de Educacao Tecnologica do Parana, Av. Sete de Setembro, 3165 Curitiba, Parana, Brazil, 80230-901
E-mail: bodanese@dainf.cefetpr.br

LAURIE G. CUTHBERT
Dept. of Electronic Engineering, Queen Mary and Westfield College - University of London, Mile End Road, London, E1 4NS, England
E-mail: laurie.cuthbert@elec.qmw.ac.uk

As the demand for mobile services grows, techniques for increasing the efficiency of channel usage in mobile networks become more important. Radio resource flexibility is needed to cope with the limited frequency spectrum available to network operators. Frequency channel allocation in mobile networks is a complex control problem with specific goals, i.e., to minimize the call blocking probability of the network and to minimize the delay in channel assignment. This paper proposes a multi-agent system implementation to control frequency channel allocation in mobile networks. The internal agent architecture allows base stations to be more flexible and intelligent, negotiating and co-operating with others to improve the efficiency of the channel allocation scheme. The simulation results prove that the use of agent technology in controlling the allocation of channels is feasible, and that agent negotiation is an important feature of the system for improving perceived quality of service and the load balancing of the traffic.

1 Introduction

Mobile networks were first implemented following the specifications of the so-called cellular concept [1]. This cellular network architecture is composed of hexagonal cells representing geographic areas. The users, called mobile stations (MS) or mobile users, are able to start/receive communication while moving inside the cellular network. Each cell has a base station (BS) which supplies frequency channels to the mobile stations inside its boundaries. The base stations are linked to a mobile switching centre (MSC) responsible for controlling the calls and acting as a gateway to other networks. When a mobile station using a frequency channel reaches the boundary of the cell, it needs to exchange its current frequency channel for another belonging to the neighboring cell; this procedure is known as handoff or handover.

The assignment of frequency channels in the cellular concept is fixed, i.e., the total number of frequency channels allocated to a network operator is divided into sets and each set is assigned statically to a cell. The set of frequency channels used in one cell can be used in other cells distant enough to allow the reuse of the frequency channels without causing interference problems. These cells are called co-channel cells, and the distance between them is the co-channel reuse distance. The group of cells using different sets of channels forms a cluster of cells called a compact pattern. The frequency reuse layout of a cellular system is assembled following a scheme [1] that finds the nearest co-channel cells of any cell in the network. Fig. 1 illustrates the frequency reuse layout of a network with a 7-cell compact pattern.

Figure 1. Frequency reuse layout of a 7-cell compact pattern cellular network.

The frequency channel allocation scheme as introduced in the cellular concept is called Fixed Channel Assignment (FCA); it is simple, but strongly constrains channel utilization. If the traffic demand in a cell is greater than the number of nominal channels (i.e. the frequency channels assigned to that cell), all the excess demand is blocked, regardless of the traffic conditions in other cells.
This constraint is too limiting for mobile networks, and several strategies have been proposed to maximize frequency channel allocation and minimize the traffic blocking rate. The strategies are divided mainly into two groups: variants of FCA and Dynamic Channel Assignment (DCA). The strategies using FCA differ from the original concept by allowing a cell to use channels from its neighboring cells when there are no more available channels to serve incoming calls or handoff requests. One example is the channel borrowing scheme with channel locking [2], which borrows a channel from an adjacent cell but prevents the use of the borrowed channel in those co-channel cells of the lender that are near the borrower, taking into account the co-channel interference constraints. DCA strategies [3] do not have nominal channels; all channels are kept in a central pool. When there is a channel request in one cell, the MSC chooses the appropriate channel that gives maximum channel efficiency, taking into account the co-channel interference constraints. The channels are assigned for the duration of a call and are then returned to the central pool or reallocated. A detailed survey describing DCA schemes and FCA-based schemes, including those with channel borrowing, is given in [3].

Centralized schemes generally perform better, but the communications overhead is a real problem, causing several authors to propose modifications to make the schemes more distributed. One example is the Distributed Load Balancing with Selective Borrowing (D-LBSB) scheme [2], which performs better than its centralized version and also outperforms some other existing schemes.

The schemes referenced here are purely reactive algorithms: although they give better performance than FCA, there is still wasted signaling because of the high degree of failure under increasing traffic loads. Fully reactive algorithms are not able to detect the drop in their efficiency, as will be shown in Section 4. In order to avoid wasting signaling resources, base stations must detect the situations where the efficiency of the algorithm is low, and decide on different ways to improve the efficiency of the algorithm or to stop its execution. On account of this, we propose a multi-agent system that allows base stations to be more flexible, negotiating and co-operating with others to improve the efficiency of the channel assignment scheme. There is one agent in each base station, which is responsible for the control of the assignment of channels in the cell where it is located.

2 A Channel Allocation Scheme Based on Agents

Several agent frameworks have been proposed for control management in telecommunication networks; more specifically, the adoption of a layered multi-agent control system seems to fit the requirements of distribution, intelligence, robustness and concurrency of telecommunication network control, as can be seen in Somers [4]. A layered arrangement allows levels of co-ordination; each layer is defined to conduct the control of the network to a certain level of competence. In our approach, we selected a type of multi-agent system whose layered control is located inside the agents. The agents are based on a hybrid architecture called INTERRAP (INTEgration of Reactive behavior and RAtional Planning) proposed by Müller [5].
This architecture is composed of three hierarchical control layers, a knowledge base that supports the representation of different abstraction levels of knowledge, and a well-defined control architecture that ensures coherent interaction among the control layers. Each layer has different functionality; working concurrently, the layers complete the global behavior of the agent. INTERRAP was designed to react to unexpected events, to carry out long-term actions based on goals, and to cope with interactions with other agents. Its layered control approach and its mechanisms allowing negotiation among agents make this architecture suitable for optimizing channel allocation in mobile networks, and it was therefore employed and adapted for this work.

For the cellular network scenario, the layered control of the agent is structured to include a reactive layer that is responsible for fast accommodation of traffic demand, a local planning layer that uses other strategies to optimize the local load distribution of channels, and a co-operative layer responsible for load balancing across a larger area. The knowledge base is composed of a world model that contains the environment information and everything necessary for the operation of a channel allocation algorithm; a mental model that contains the complete information about the agent, about the use of frequency channels and possibly a history of the traffic load in the cell; and a social model that has relevant information about other agents' data. The agent is prepared to receive input from the environment that includes requests for channel allocation from new calls, handoff requests, channel borrowing requests and orders for locking channels. The actions performed by the agent on the environment include all execution tasks that actually allocate, release, re-allocate, lock and lend channels, manage handoffs and appropriately terminate unsuccessful requests.

3 Functional Specification of the Agent

3.1 Reactive Layer

The Reactive Layer is primarily responsible for the channel assignment and is basically composed of an FCA algorithm with channel borrowing and channel locking. The algorithm, called the "Distributed Borrowing Algorithm" (D-BA), is based on the general behavior of the D-LBSB scheme [2]. Using an algorithm similar to one of the distributed schemes described in the literature allows a better evaluation of the performance of the agent, and also allows the benefits of the Local Planning and Co-operative Planning layers to be evaluated against the same reactive scheme. In the D-BA scheme, when the channel availability in a cell decreases to a certain threshold, it becomes "hot"; cells above the threshold are "cold" cells. When a cell becomes hot, it triggers the execution of the channel borrowing algorithm. The neighboring cells with more channel availability are chosen to lend channels to the hot cell; they are the lenders. The hot cell expects to get a number X of channels (determined by the algorithm). The algorithm borrows channels until the number X is reached or none of the selected lenders is able to lend channels anymore, whichever comes first. Full details of the D-BA scheme can be found in [6].
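The hot-cell trigger and borrowing loop can be paraphrased as follows; the threshold value, the cell interface and the lender-selection rule are simplified assumptions based on the description above, not the D-BA implementation itself.

    HOT_THRESHOLD = 0.2  # assumed availability fraction below which a cell is "hot"

    def is_hot(free_channels, total_channels):
        return free_channels / total_channels < HOT_THRESHOLD

    def borrow_channels(neighbors, x_needed):
        """Borrow up to x_needed channels for a hot cell, taking lenders in
        order of channel availability (coldest first), and stopping when the
        target is met or no selected lender can lend anymore."""
        borrowed = []
        for lender in sorted(neighbors, key=lambda n: n.free_channels, reverse=True):
            while len(borrowed) < x_needed and lender.can_lend():
                # lend_channel() is assumed to also lock the channel in the
                # lender's nearby co-channel cells (channel locking).
                borrowed.append(lender.lend_channel())
            if len(borrowed) == x_needed:
                break
        return borrowed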
3.2 Local Planning Layer

This layer is responsible for the channel re-assignment scheme. Every time a channel is released, the Reactive Layer requests a re-assignment decision in order to locally improve channel usage. The Local Planning Layer also determines a departing region r close to the borders of the cell. Mobile users inside this region are considered as departing from the cell, and they have an important role in the agent negotiation. Finally, this layer monitors the efficiency of the algorithm in the Reactive Layer, and it is responsible for the decision to trigger the Co-operative Planning Layer.

3.3 Co-operative Planning Layer

The Co-operative Planning Layer is responsible for the negotiation of resources. When the use of the local channel allocation algorithm on its own is not sufficient to keep call blocking rates low, one way to alleviate the load of a hot spot is to move calls to less loaded regions. This can be done through management handoffs (or traffic handoffs, as they are known in the standards), but this is not an easy task. Only mobile users close to certain borders of the cells can attempt the management handoffs. The handoff attempts must be made in a co-ordinated manner to avoid a mobile user being shifted back and forth between two cells. The handoff of mobile users to different cells is a load-balancing problem, so the co-ordinated control needed to solve this problem depends on the collaboration of a group of cells. The co-ordination of a joint plan using management handoffs is the responsibility of the Co-operative Planning Layer of the agents. The agent negotiation has two phases: first, finding the best region to attempt the movement of calls; second, the actual execution of the handoff requests in a co-ordinated manner, i.e. the execution of the joint plan. In order to engage and collaborate in a joint plan, the agents communicate through the Contract-Net Protocol (CNP) [7]. In this work, the agent implementation uses a subset of the communicative acts specified in the FIPA Agent Communication Language (ACL) [8]. The execution of the protocol during the two-phase agent negotiation is best explained through an example. In Fig. 2, cell A is hot and the local channel algorithm is not responding efficiently. The Local Planning Layer triggers the Co-operative Planning Layer to start the negotiation with other cells. The following actions are taken in the negotiation:

1. In the hot cell, agent A is a manager candidate agent, and it sends calls for proposals (cfp(0)) to all its co-channel cell agents A'ᵢ, where i varies from 1 to the current number of co-channel cells of A (maximum 6).
2. The A'ᵢ agents advertise the need for channel availability assessment to their neighboring cell agents B'ᵢⱼ, 1 ≤ j ≤ 6, by sending cfp(1) (cell load assessment and plan engagement availability).
3. Each B'ᵢⱼ agent sends to the respective A'ᵢ agent a propose(1) act if the cell is able to engage in a joint plan in the near future, or a refuse(1) act otherwise. Inside a propose(1) act the B'ᵢⱼ agent sends its degree of coldness (dc) and the percentage (Depart) of the mobile users in the cell which are inside the departing region r of selected cell borders, called regions of movement. The regions of movement in each cell are pre-defined according to the position of the cell of agent A.
4. Each A'ᵢ agent receives the answers of its neighboring B'ᵢⱼ agents and computes the value of the utility function in Equation (3), where Reg is the minimum set containing A'ᵢ and its neighbors B'ᵢⱼ that have sent propose(1) acts (1 ≤ |Reg| ≤ 7). The factor α is introduced in order to decrease the influence of Depart over F' (0 ≤ α ≤ 0.25):

   F' = (1/|Reg|) Σₖ∈Reg (dcₖ + α · Departₖ)    (3)

5. The A'ᵢ agents that were able to perform the calculation of F' send the result to agent A in propose(0) acts.
The A'ᵢ agents that did not have enough propose(1) acts from their neighbors to calculate F' send refuse(0) acts.
6. The received propose(0) act with the biggest F' value is chosen as the region for moving the calls (if F' is greater than a minimum value). Agent A advertises the result of the auction to the winning co-channel cell agent with an accept-proposal(0) act. If there is no winning region, then agent A sends reject-proposal(0) to all A'ᵢ agents that have sent propose(0) acts and aborts the joint plan attempt for a specific duration of time.
7. If there is a winning region, then the co-channel cell agent of this region sends cfp(2) (engage joint plan) to its neighboring B'ᵢⱼ agents.
8. Each B'ᵢⱼ agent receiving the cfp(2) assesses its availability to engage in the joint plan, considering the number of plans it is already participating in and the regions of movement already considered in such plans. It sends a propose(2) act if the number of currently engaged plans is less than two and the regions of movement (if engaged in another plan) match the requested one. Otherwise, it sends a refuse(2) act.
9. If the winning co-channel cell agent receives back a minimum number of propose(2) acts from its neighboring B'ᵢⱼ agents, it sends back an inform(jp) (inform joint plan) act to agent A and sends accept-proposal(2) acts to all of its B'ᵢⱼ agents that have sent propose(2) acts. Otherwise, it sends a failure(jp) (joint plan failure) act to agent A and reject-proposal(2) acts to its B'ᵢⱼ agents that have sent propose(2) acts.
10. The winning co-channel cell agent that has just sent an inform(jp), and its B'ᵢⱼ agents, will perform all preparatory tasks to engage in the joint plan, and they will wait for an inform(activejp) (inform joint plan activation) act from agent A.
11. If agent A receives an inform(jp) act, it sends a reject-proposal(0) to all other co-channel cell agents that have sent propose(0) acts before, and a request(jp) (request joint plan engagement) act to its two neighboring cell agents in connection with the winning region. This request is mandatory. Finally, agent A sends an inform(activejp) act to all agents engaged in the joint plan (the first joint plan execution act). If agent A receives a failure(jp) act, it selects the next best F' (if it exists) and the actions from 6 to 11 are repeated.
12. An agent receiving a request(jp) act will perform all preparatory tasks to engage in the joint plan and wait for an inform(activejp) act from agent A.
13. End of the first phase of negotiation.

The second phase of the negotiation starts with the engagement into the joint plan of all agents belonging to the winning region, the manager agent A and its two neighboring cell agents (shaded region in Fig. 2). Agent A is the manager of the joint plan and the other partner agents are the contractors of the plan [7]. The manager has the responsibility to monitor the actions of the contractors and to terminate the joint plan. Each iteration of the joint plan needs to be feasible. Therefore, the proposed heuristic follows a resource-oriented approach of market-based control. The aim is to load-balance the whole region so that the difference in degree of coldness of partner cells is smaller than a certain threshold.

Figure 2. Negotiation Strategy.

The following heuristic tries to balance the region by distributing users among cells:

1. The manager agent A sends its first act to all partner agents to inform them that the joint plan is in operation (the inform(activejp) act).
2. All partner agents receiving the inform(activejp) act send an inform(ptrnjp) (partner cell in the joint plan) act to their manager agent, identifying themselves and their neighboring cells in the regions of movement.
3. Iteration:
   a) The manager agent sends a query-ref(0) act to all partner agents.
   b) Each partner agent sends its total number of channels and the number of channels in use to the manager agent through an inform-ref(0) act.
   c) The manager agent computes the rate of change (Δcᵢ) for each partner agent and itself by calculating the difference between the channel occupancy of the cell (cᵢ/Cᵢ) and the average channel occupancy of all members N of the joint plan (Lavg):

      Δcᵢ = cᵢ/Cᵢ − Lavg    (4)

      Lavg = (1/|N|) Σᵢ∈N cᵢ/Cᵢ    (5)

      cᵢ is the total number of channels in use in the cell of agent i. Cᵢ is the total number of channels (nominal + borrowed) in the cell of agent i. Lavg is the average channel occupancy of all cells of the joint plan. Δcᵢ is the rate of change in channel occupancy of cell i inside the joint plan.
   d) If the cell of agent i has Δcᵢ > 0, the manager agent sends to agent i: Δcᵢ, the Δc of the neighboring cells having borders with the regions of movement of the cell of agent i, and the total number of channels of these cells (C). It also sends Lavg. This information is sent through a request(jpaction) (joint plan action) act.
   e) Each agent i that receives the request(jpaction) act from the manager agent tries to transfer mobile users in the regions of movement (departing areas) following this algorithm:
      I. Sort the received Δc of the neighboring cells.
      II. If Δcᵢ is smaller than min Δc, then no transfers can be made; go to step f). Otherwise, go to step III.
      III. Calculate how many mobile users need to be transferred: users = Δcᵢ × Cᵢ.
      IV. If min Δc is greater than Lavg, then transfer one mobile user to the neighboring cell with min Δc; go to step VIII. Otherwise, go to step V.
      V. Sort only the Δc that are smaller than or equal to Lavg. The aim is to transfer mobile users proportionally to the number of channels available in each target neighboring cell with Δc smaller than or equal to Lavg.
      VI. For each sorted Δc, find the number of mobile users that the cell can receive; for the Δc of cell j: usⱼ = −Δcⱼ × Cⱼ.
      VII. To find the number of mobile users whose transfer will be attempted to each cell, sum all usⱼ: US = Σⱼ usⱼ. The number for each cell j is then: min(usⱼ, (usⱼ/US) × users).
      VIII. Make the handoff attempts.
   f) End of the iteration.
4. Repeat this iteration at intervals of s seconds until the manager decides to terminate the joint plan. When the plan is to be terminated, the manager agent sends a cancel(jp) (cancel joint plan) act to inform the partners of the termination of the plan. The termination of the joint plan can be determined by the completion of a certain number of iterations or by an exception.
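Equations (4)-(5) and the transfer steps translate directly into code. The sketch below is our reading of the description above, with invented container types, and it omits the single-user special case of step IV:

    def rates_of_change(in_use, totals):
        """in_use[i] / totals[i]: channels in use / total channels of member i.
        Implements equations (4) and (5): returns Lavg and each member's
        deviation delta_c_i from the joint plan's average occupancy."""
        occupancy = {i: in_use[i] / totals[i] for i in in_use}
        l_avg = sum(occupancy.values()) / len(occupancy)
        return l_avg, {i: occ - l_avg for i, occ in occupancy.items()}

    def transfer_quotas(dc_i, c_total_i, neighbor_dc, neighbor_totals, l_avg):
        """Steps III and V-VII for an overloaded cell i (dc_i > 0): transfer
        users = dc_i * C_i users towards neighbors j with dc_j <= Lavg, each
        able to absorb us_j = -dc_j * C_j users, shared in proportion to us_j."""
        users = dc_i * c_total_i                        # step III
        us = {j: -dc * neighbor_totals[j]               # step VI
              for j, dc in neighbor_dc.items() if dc <= l_avg}
        us = {j: v for j, v in us.items() if v > 0}     # only cells with spare room
        total_us = sum(us.values())
        if total_us <= 0:
            return {}
        return {j: min(v, v / total_us * users) for j, v in us.items()}  # step VII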
4 Implementation and Analysis

The results of the multi-agent system are compared against a conventional mobile network using FCA and a mobile network using only the D-BA scheme, under common traffic load scenarios. The network performance measurements used for the comparison are the traffic blocking rate and the handoff rejection rate. For simplicity, the cellular networks being compared are identified by their channel allocation schemes: FCA, D-BA and MA for the multi-agent system. The expected behavior of the MA network is to improve the performance of the D-BA scheme when the latter's borrowing algorithm starts to lose efficiency. This improvement also starts to decrease as the traffic load increases, because fewer resources are available for load balancing.

A common cellular network was modeled in OPNET™ and validated against a mathematical model. The cellular network is composed of 49 cells, and each cell has 10 nominal channels. The compact pattern is a 7-cell cluster with a reuse distance of 3 cell units. Mobile users have their own trajectories inside the mobile network. Call establishments and handoff requests are simulated as they are requested in AMPS systems [1]. A Poisson distribution is used to generate calls, which have an exponentially distributed duration with a mean of 3 min. An idle mobile inside the cell performs the call attempt. The performance of the three systems was analyzed under four layouts with different traffic distributions. The results of one layout (Fig. 3) are presented here. In Fig. 3, the number at the bottom of each cell is the cell identification; the number in the middle gives the Poisson arrival rate in calls/hour (ranging from 20 to 200 calls/hour). Mobile users inside the shaded area drive at 40 km/h or walk at 2 km/h (in both directions). The remaining mobile users have random trajectories, moving at 5 km/h. In the results, the abscissa of each graph is the percentage load increase in all cells compared to the traffic load shown in Fig. 3, called the base load (0%).

Figure 3. Cellular system layout.

The MA network outperforms the D-BA and FCA networks (Fig. 4), and the expected general behavior of the MA network is demonstrated. The number of borrowing attempts is kept at the same level as in the D-BA network, and at almost the same efficiency ((successful + partially successful outcomes) / total number of borrowing algorithm executions) (Fig. 5). This shows that the reductions in blocking rate are due to the agent negotiation performing the load balancing.

Figure 4. Traffic blocking rate for the three systems.

Figure 5. MA versus D-BA: Borrowing algorithm (BA) results.

Some of the cells with higher traffic load in the network had a better improvement in the traffic blocking rate, for example cells 0 and 26 (Fig. 6). The greater improvement in traffic blocking rate of individual highly loaded cells is a good result, because these cells have a greater need for resources.

Figure 6. Cells 0 and 26: Traffic blocking rate.

The handoff rejection rate is also lower in the MA network, thanks to the action of the agent negotiation (Fig. 7). This is an important result because it increases the QoS perceived by the mobile user.

Figure 7. Handoff rejection rate.

The agent negotiation proved to work well: almost 100% of the management handoffs were successful when the mobile station had enough signal strength to shift cells (Table 1). This shows good performance of the proposed heuristic inside the agent negotiation in choosing the right cells to receive mobile users.
Here, the success rate is defined as the ratio of successful handoffs to possible handoffs; possible handoffs exclude those where the signal strength is too low.

Table 1. Management Handoff Request Performance for the layout of Fig. 3.

Load increase (%)   Mgmt handoff   Rejected for      Successful       Success
                    requests       lack of signal    mgmt handoffs    rate (%)
-40                 551            331               220              100.0
-20                 1,781          1,115             666              100.0
0                   3,267          1,889             1,378            100.0
20                  4,780          2,846             1,934            100.0
40                  5,787          3,409             2,376            99.9
60                  6,494          3,735             2,751            99.7
80                  6,538          3,778             2,752            99.7
100                 6,914          3,843             3,052            99.4
120                 6,122          3,301             2,790            98.9

The important feature shown in the results is the behavior of the multi-agent system. There are still resources available even when the neighboring cells of a borrower cell also reach the threshold of channel availability. At this point the agent negotiation has an important role in shifting some of the traffic to less loaded regions.

5 Conclusion

The approach proposed by the authors was to use agent technology to control the channel assignment in the cellular network. A special hybrid agent architecture was adopted, consisting of three layers. The interplay of the three layers proved to be a powerful framework for improving radio resource flexibility and increasing the robustness of the cellular network as a whole. The simulation results demonstrated that the use of agent technology brought more flexibility in obtaining extra radio resources for the network than the other two approaches. Overall, the multi-agent system proved to be feasible, and the agent negotiation was an important feature of the system for improving perceived quality of service and the load balancing of the traffic.

References

1. MacDonald V.H., Advanced Mobile Phone Service: The Cellular Concept. Bell System Technical Journal 58 (January 1979).
2. Das S.K., Sen S.K., Jayaram R. and Agrawal P., A Distributed Load Balancing Algorithm for the Hot Cell Problem in Cellular Mobile Networks. In the 6th IEEE International Symposium on High Performance Distributed Computing (Portland, USA, August 1997) pp. 254-263.
3. Katzela I. and Naghshineh M., Channel Assignment Schemes for Cellular Mobile Telecommunication Systems: A Comprehensive Survey. IEEE Personal Communications Magazine 3(3) (June 1996) pp. 10-31.
4. Somers F., HYBRID: Intelligent Agents for Distributed ATM Network Management. Proceedings of the First International Workshop on Intelligent Agents for Telecoms Applications (IATA'96) at the European Conference on Artificial Intelligence (ECAI'96) (Budapest, Hungary, 1996).
5. Müller J.P., The Design of Intelligent Agents: A Layered Approach. In Lecture Notes in Artificial Intelligence 1177 (1996).
6. Bodanese E.L., A Distributed Channel Allocation Scheme for Cellular Networks using Intelligent Software Agents. PhD Thesis, Queen Mary and Westfield College, University of London (2000).
7. Smith R.G., The Contract Net Protocol: High-Level Communication and Control in a Distributed Problem Solver. In Readings in Distributed Artificial Intelligence (Morgan Kaufmann, 1988) pp. 357-366.
8. Foundation for Intelligent Physical Agents. FIPA 97 Specification, Version 2.0 - Part 2 - Agent Communication Language. October 1998. http://www.fipa.org.

AN ATOMIC APPROACH TO AGENT-BASED IMAGERY AND GEOSPATIAL PROBLEM SOLVING

JAMES J. NOLAN, ROBERT SIMON, ARUN K. SOOD
George Mason University, Center for Image Analysis, Dept. of Computer Science, Fairfax, VA 22030
{jnolan,simon,asood}@cs.gmu.edu
This paper describes our approach to building a scalable, flexible agent-based architecture for imagery and geospatial processing. The architecture implements over 100 imagery and geospatial processing agents based on the Java Advanced Imaging and OpenMap APIs. The agents are distributed over a Jini-enabled network and communicate with one another via JavaSpaces. We discuss our "atomic" approach in this paper: developing low-level processing agents that are used by application-specific agents. We discuss several concepts in this approach: agent lookup and discovery through traditional information retrieval techniques, the ability to rapidly prototype agents based on commercial software products, and a knowledge management approach that reuses prior processing approaches and results.

1 Introduction

Imagery and geospatial systems are used in the intelligence gathering, cartography, and resource management domains, among others. These systems utilize low-level imagery and geospatial services to answer high-level queries. Services might include edge detection on images or route planning on vector data sets, for example. In production-intensive environments, it is typical for these systems to process hundreds of images and geospatial data sets per day, each ranging from several megabytes to several gigabytes in size.

The low-level imagery and geospatial processing services used in these systems are usually well defined in terms of the service's name, input data types, parameters, and output data types. On the other hand, the questions posed to an imagery and geospatial processing system are usually very high level. For example, a farmer may ask "Is the soil on my farm rich enough for wheat production this year?" or a general may ask "What are my expected troop locations over the next 48 hours?". Answering such queries requires the aggregation of low-level services into higher-level services that address the query.

In today's largely manual environment, a human analyst makes the translation from high-level query to low-level processing steps, including the input data sets and parameter values. In some cases, this translation can be subjective, with a wide range of approaches, and is highly dependent on the analyst's experience. In other cases, the translation is well known, with a limited range of approaches, and can be easily codified. In general, we have a good understanding of the representation of the queries posed and of the low-level image processing tasks used to answer them. Queries are usually comprised of a location, a subject of interest, and a time frame; this structure is fairly static. However, the translation of the high-level query to low-level processing tasks can be dynamic and highly dependent on the application.

To address this dynamic translation, we have developed an agent-based architecture for imagery and geospatial computing that allows application-specific agents to be easily constructed from lower-level processing agents. This architecture, called the Agent-based Imagery and Geospatial processing Architecture (AIGA), provides a well-defined set of low-level imagery and geospatial processing agents, which we term "atomic". These agents describe themselves by using an ontology and Agent Communication Language for imagery and geospatial computing.
The ontology and ACL are critical for enabling the discovery of agents to solve a particular query, finding other agents to assist in processing, and discovering information from the agent knowledge base. In this paper, we discuss our approach, the overall agent architecture, our ontology and ACL, the discovery process and how it is used by agents and clients, and finally our Java-based implementation.

2 The Agent Architecture

The AIGA architecture is comprised of: imagery and geospatial processing agents, an information retrieval agent, agents that provide access to imagery and geospatial data, locations for hosting agents, a communication page space (used for communication and knowledge discovery), and agent communication pages (I-XML pages). A logical view of this architecture can be seen in Figure 1.

Figure 1. AIGA agent architecture.

Imagery and geospatial processing agents represent low-level functions or algorithms typically used in an image processing or geographic information system. The information retrieval agent provides the mechanism for agents to discover other resources (agents, clients, etc.) on the network. The communication page space is central to our architecture: it provides a place for agents to leave and pick up messages, and serves as a knowledge base repository. Prior agent strategies for particular problems are stored in this fully searchable repository, and other agents may utilize or leverage this knowledge. I-XML pages are the mechanism with which the agents communicate and share information. These pages are structured using the Resource Description Framework 1 and the eXtensible Markup Language (XML) 2.

3 The Ontology

An ontology provides a shared and common understanding of a domain that can be communicated between people and across application systems 3. An ontology for imagery & geospatial processing is fundamentally composed of three core elements: the data used in processing, the functions that process the data, and the query and resultant information. By defining a common vocabulary for these three core elements, we allow for interoperability among heterogeneous systems, as well as the potential to increase the number of tasks offloaded to automated processes in the form of agents.

The imagery and geospatial communities have developed many data specifications that suitably describe imagery and geospatial data concepts. For example, an image in the National Imagery Transmission Format (NITF) contains metadata that can thoroughly describe the contents of an image file. The same can be said for geospatial formats such as Vector Product Format (VPF) or the Spatial Data Transfer Standard (SDTS). While the aforementioned standards describe imagery & geospatial data concepts, what they lack are the relationships between those concepts. We have taken these standards and represented them in such a way that relationships can be easily built.

Imagery and geospatial processing services are fundamentally composed of: a name, a required and/or optional set of parameters, input data types, and output data types. In addition, there may be other descriptive information such as the service creator or documentation on the service. For example, to perform image change detection, the name of the operation is "Change Detection", the parameters are a start date and an end date, and the service requires two images.
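To make this service structure concrete, the sketch below models the parts of a processing-service description named in the text (name, parameters, input and output data types) and instantiates it for the "Change Detection" example. It is a minimal illustration only; the class and field names are ours, not part of the AIGA ontology, and the single output type is our assumption.

    import java.util.List;

    /** Minimal sketch of a processing-service description: name, parameters,
     *  and input/output data types, as described in the ontology section. */
    public class ServiceDescription {
        final String name;
        final List<String> parameters;   // required and/or optional parameters
        final List<String> inputTypes;   // data types the service consumes
        final List<String> outputTypes;  // data types the service produces

        ServiceDescription(String name, List<String> parameters,
                           List<String> inputTypes, List<String> outputTypes) {
            this.name = name;
            this.parameters = parameters;
            this.inputTypes = inputTypes;
            this.outputTypes = outputTypes;
        }

        public static void main(String[] args) {
            // The change-detection example from the text: two images in,
            // a start date and an end date as parameters.
            ServiceDescription changeDetection = new ServiceDescription(
                    "Change Detection",
                    List.of("startDate", "endDate"),
                    List.of("Image", "Image"),
                    List.of("Image"));  // output type assumed for illustration
            System.out.println(changeDetection.name + " takes "
                    + changeDetection.inputTypes.size() + " images");
        }
    }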
The query and resultant information represent the beginning and end of the agent process. The query, as described previously, represents some relatively high-level question (e.g., "What are my expected troop locations over the next 48 hours?"). The resultant information represents information presented back to the client to assist in the decision-making process. In addition to an answer to the query, the resultant information contains supporting information used during processing, such as imagery and geospatial data sets or intermediate processing results. This information provides additional context for the resultant answer.

We have chosen to represent our ontology in RDF, using XML for the encoding. This approach provides several advantages: 1) these are emerging standards, with several implementations available to easily parse data; 2) the mapping of the components of our ontology into RDF has proven straightforward, as the concept of "Resource" can be applied to the components of our ontology, as others have shown 4. An example of the RDF Schema for agent descriptions can be seen in Figure 2.

    <rdf:Description ID="Service">
      <rdf:type resource="http://www.w3.org/TR/1999/PR-rdf-schema-19990303#Class"/>
      <rdfs:subClassOf rdf:resource="http://www.w3.org/TR/1999/PR-rdf-schema-19990303#Resource"/>
      <rdfs:comment>An abstract class used as the top-level class for processing services</rdfs:comment>
    </rdf:Description>
    <rdf:Description ID="description">
      <rdf:type resource="http://www.w3.org/TR/1999/PR-rdf-schema-19990303#Property"/>
      <rdfs:domain rdf:resource="#Service"/>
      <rdfs:range rdf:resource="http://www.w3.org/TR/2000/WD-xmlschema-2-20000407#string"/>
      <rdfs:comment>A human readable description of the processing service</rdfs:comment>
    </rdf:Description>
    <rdf:Description ID="parameter">
      <rdf:type resource="http://www.w3.org/TR/1999/PR-rdf-schema-19990303#Property"/>
      <rdfs:domain rdf:resource="#Service"/>
      <rdfs:range rdf:resource="#Parameter"/>
      <rdfs:comment>A parameter used by the agent.</rdfs:comment>
    </rdf:Description>
    <rdf:Description ID="dependantUpon">
      <rdf:type resource="http://www.w3.org/TR/1999/PR-rdf-schema-19990303#Property"/>
      <rdfs:domain rdf:resource="#Service"/>
      <rdfs:comment>A Service that the agent is dependant upon prior to processing.</rdfs:comment>
    </rdf:Description>

Figure 2. Example RDF Schema for agent descriptions.

4 The Agent Communication Language

According to 5, an agent communication language that allows agents to interact while hiding the details of their internal workings will result in agent communities able to tackle problems no individual agent could. Above, we described an ontology that hides such implementation details from agents. Agents are able to describe themselves using this ontology, and other agents can interpret and use the functionality of those agents. We use the described ontology as the core part of our Agent Communication Language, which we term I-XML. In this section we describe the components of I-XML: the query, baseline representation, computational steps, processing strategy, and finally, results.

The Query section represents a question that an analyst wishes to have answered. This query is decomposed into keywords and a location, the first step in the processing chain. This is the basis, or trigger, for all actions within the AIGA architecture.

The Baseline Representation contains information about the geographic location of the query. For example, this may include a bounding rectangle of the region of interest or a place name such as a country or city.
The Computational Steps represent the steps necessary to answer the query. For example, steps for locating military troops may include: image retrieval from a database, feature detection on the imagery to identify troops, change detection to determine the movement of those troops, and an open-source literature search for ancillary information. The Computational Steps represent the necessary processing steps, not the order in which they will be done. This is essentially a listing of the steps required to determine the resultant information; the list has not yet been optimized to take advantage of any parallel processing opportunities.

The Processing Strategy refines the Computational Steps into a directed processing graph, which is the exact series of steps required to minimize the time needed to complete the task. This is an important feature of time-sensitive systems such as imagery and geospatial systems. Consider, for instance, the Computational Steps defined above. In this processing chain, there are two independent sub-chains: an image processing sub-chain and an information retrieval sub-chain. The image processing sub-chain consists of three sequential steps, each dependent on results from the previous step: image retrieval, feature detection, and change detection. The information retrieval sub-chain consists of one step: searching the open-source literature. The Processing Strategy represents these two processing chains as independent of one another and identifies them as chains that could be executed in parallel if more than one processing resource is available on the network.

The Results tag represents any outputs of agents that may help to answer the query. As the query is executed and results are returned from agents, the Results tag is updated with information that may include geospatial features, references to image sources, or open literature that was useful during processing.

5 The Discovery Process

We have found that by using a well-defined ontology as described above, we have created a mechanism that allows clients and agents to search for the data types and agents available on the network. Our agent and data set descriptions contain much descriptive information; additionally, our use of RDF in XML has made these descriptions an efficient search space.

We have implemented Salton's vector model information retrieval algorithm 6, and we use it as the basis to match queries with appropriate agents, which in turn use the algorithm to find other agents to assist in processing. The algorithm is as follows:

    sim(a_j, q) = (A_j · Q) / (|A_j| × |Q|)

where sim(a_j, q) represents the similarity of agent j to query q, A_j represents the vector of terms of agent j's description, and Q represents the vector of terms from query q. This formula states that the similarity of the capabilities of an agent a_j to a particular query q can be calculated by taking the cosine of the angle between the vectors A_j (the terms of the agent description) and Q (the terms of the query).

The terms available in the repository of agent descriptions are updated each time an agent enters or leaves the network. Using these terms, each agent has the capability to calculate its relevance to specific queries posed by a user, and also to search for agents it may require assistance from during processing. This is especially useful as we make very fine-grain agents, isolating very specific functionality, and these agents rely on other agents to fulfill processing in response to queries.
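As an illustration of this matching step, the sketch below computes the cosine similarity between two term-frequency vectors. It is a minimal sketch of the vector model only, assuming simple whitespace tokenization; it is not the authors' implementation, and the class and method names are ours.

    import java.util.HashMap;
    import java.util.Map;

    /** Minimal sketch of vector-model matching: cosine similarity between
     *  a query and an agent description, both reduced to term-frequency vectors. */
    public class VectorModelSketch {

        static Map<String, Integer> termVector(String text) {
            Map<String, Integer> v = new HashMap<>();
            for (String term : text.toLowerCase().split("\\s+")) {
                v.merge(term, 1, Integer::sum);
            }
            return v;
        }

        /** sim(a, q) = (A . Q) / (|A| * |Q|): the cosine of the angle between vectors. */
        static double similarity(Map<String, Integer> a, Map<String, Integer> q) {
            double dot = 0.0, normA = 0.0, normQ = 0.0;
            for (Map.Entry<String, Integer> e : a.entrySet()) {
                dot += e.getValue() * q.getOrDefault(e.getKey(), 0);
                normA += e.getValue() * e.getValue();
            }
            for (int f : q.values()) normQ += f * f;
            if (normA == 0 || normQ == 0) return 0.0;
            return dot / (Math.sqrt(normA) * Math.sqrt(normQ));
        }

        public static void main(String[] args) {
            Map<String, Integer> agent = termVector("change detection on two images over time");
            Map<String, Integer> query = termVector("detect troop movement change over 48 hours");
            System.out.printf("similarity = %.3f%n", similarity(agent, query));
        }
    }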
5.1 Clients Discovering Agents

In the Client-Agent discovery process, a user submits a query, by means of a client agent, for processing. This is usually some high-level query such as: "What are the expected troop movements in the next 48 hours?". When the user submits this query to the system, they are seeking the best possible solution. To make this match, we measure the similarity of the query to each agent description on the network. Each agent description is text-based, in a format defined by our ontology. Returned to the client is a list of agent "bids", ranked by relevance to the query. This process requires the human-in-the-loop to choose the most appropriate agent "bid" for the query. When the user selects the most appropriate agent, they can then fill in the details of the processing, including input data sets or agent parameters.

5.2 Agents Discovering Agents

We have defined a model for implementing low-level imagery and geospatial processing agents. These agents are well described in the form of a name, input and output types, and parameters. These agents can be used indirectly to solve high-level queries: they are used by application-specific agents that map directly from a user query to agent form. These application-specific agents theoretically come in one of three forms: 1) those that provide no additional functionality and rely solely on atomic agents for processing, 2) those that provide some additional functionality and rely partially on atomic agents for processing, and 3) those that provide all of the functionality, relying on no atomic agents for processing. We argue that our approach (agent types (1) and (2)) makes agent development easier and quicker than approach (3).

In approaches (1) and (2), agent dependency, or the ability to discover other agents that can fulfill processing, becomes very important. This is fulfilled through the use of our agent ontology, which describes an agent by certain parameters as well as its agent dependencies. By using a well-defined ontology, higher-level agents can search for the atomic agents they depend upon by agent description. In contrast to the Client-Agent model, this search is fully automated: the high-level agent chooses a threshold and selects the highest-matching atomic agent as its dependent agent, as long as its relevance meets or exceeds the threshold. A sketch of this selection rule follows.
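The sketch below illustrates this automated selection, reusing termVector and similarity from the vector-model sketch above (it must be compiled alongside it): score every candidate atomic agent against the dependency description and accept the best one only if it clears the threshold. The names are ours, for illustration only.

    import java.util.Map;

    /** Minimal sketch of automated dependency selection: pick the atomic agent
     *  whose description best matches the dependency, subject to a threshold. */
    public class DependencySelectionSketch {

        /** Returns the best-matching agent, or null if none meets the threshold. */
        static String selectDependency(Map<String, String> agentDescriptions,
                                       String dependencyDescription,
                                       double threshold) {
            String best = null;
            double bestScore = -1.0;
            Map<String, Integer> wanted = VectorModelSketch.termVector(dependencyDescription);
            for (Map.Entry<String, String> e : agentDescriptions.entrySet()) {
                double score = VectorModelSketch.similarity(
                        VectorModelSketch.termVector(e.getValue()), wanted);
                if (score > bestScore) {
                    bestScore = score;
                    best = e.getKey();
                }
            }
            return bestScore >= threshold ? best : null;
        }

        public static void main(String[] args) {
            Map<String, String> agents = Map.of(
                    "edge-detector", "edge detection on a single image",
                    "change-detector", "change detection between two images over time");
            String chosen = selectDependency(agents, "detect change between images", 0.3);
            System.out.println("selected atomic agent: " + chosen);
        }
    }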
5.3 Agents Discovering Knowledge

The last model that uses our discovery mechanism is the Agent-Knowledge process. In the AIGA architecture, we have the I-XML Page Space. This space is used for two purposes: 1) as a place for agents to exchange messages, and 2) as a place for agents to store and discover knowledge. Knowledge in this sense refers to prior I-XML pages that were used to solve problems. In these I-XML pages, the query, computational steps, processing strategy, and resultant information can all potentially be reused. Consider the following scenario:

1. Analyst A submits the following query to the system: "Have Country A's troops moved closer to the border of Country B?". To answer this query, the analyst may use a set of agents that detect troops in imagery, compare troop movements from previous days, etc.

2. Some time later, analyst B submits the query "Have Country C's troops moved closer to the border of Country D?".

In this example, query (2) is very similar to (1); the only change is the location of the query. The approach that analyst A used to solve his/her query could be of use to analyst B. As such, analyst A's approach is available in the I-XML Page Space. This illustrates one reuse strategy in our architecture: the reuse of prior computational steps or processing strategies. This reuse approach is useful from two perspectives: 1) we provide the potential for less skilled analysts to leverage and learn from a more skilled analyst's approaches, and 2) we reduce the computations necessary to develop computational steps and processing strategies.

Our other reuse strategy centers on the reuse of prior processing results. Consider the following change to the previous reuse example: instead of analyst B querying over a different geographic location, he/she is now interested in the troop movement of Country A with respect to Country B's border, as analyst A was. Only now, analyst B is interested in the answer to this question a day after analyst A. In this case, analyst B would leverage analyst A's processing result to determine troop movement: the agents tasked with the processing only need to start from the most recent result in the I-XML Page Space to calculate troop movement.

6 Implementation

We have created a Java-based prototype of AIGA. Agents are Jini-enabled services and can move to/from locations using this mechanism. We have implemented over 100 agents that represent the functionality in commercial image processing and GIS packages. Agents implemented include: imagery and geospatial processing agents representing functionality of the Java Advanced Imaging and OpenMap APIs; an information retrieval agent that implements Salton's vector model algorithm; a client agent providing the interface between the human user and the agent system; and imagery and geospatial data agents.

Figure 3. Client screenshot.

In Figure 3, we show a screenshot of the client interface to the system. First, we show a map background of the world. This map shows where the analyst using the system might have agents working on problems concerning a certain part of the world. Second, at the top of the screen, we show a box where a user may submit a query to the system. When the "Submit" button is pressed, this query is encoded in an I-XML page and written to the I-XML Page Space.

The Information Retrieval agent is then notified of the query and measures the similarity of each agent to the query. This agent ranks the relevancy and notifies the client (by writing out to the page space) of the highest-ranked agent plus any other agents within a threshold. These "bids" on the query, ranked by their associated relevance, can be seen at the bottom of Figure 3. By selecting a bid, the user selects an agent, or set of agents, to answer the query. An I-XML page (shown in the upper right) then appears on the screen. This page contains the details of the agent approach to solve the query. On this page, the user can fine-tune the computational steps and processing strategy necessary to answer the query. The user can change parameter settings in this screen before submitting the page back into the page space for agent processing.

7 Conclusions

We have presented an agent-based architecture for distributed imagery and geospatial computing. The approach allows clients and agents to discover agents for processing using well-known information retrieval techniques.
We have prototyped the system using Jini as the middleware. The system is currently used to assist the human analyst in developing processing strategies to solve typical imagery and geospatial queries. Additionally, we have introduced an agent reuse concept whereby agents can leverage prior processing approaches or results to produce better results and reduce the computational load on the system.

Acknowledgments

This work is supported under the National Imagery & Mapping Agency's (NIMA) University Research Initiative (NURI) program.

References

1. World Wide Web Consortium, Cambridge, MA. Resource Description Framework (RDF) Model and Syntax Specification, February 1999.
2. World Wide Web Consortium, Cambridge, MA. Extensible Markup Language (XML) 1.0, February 1998.
3. E. Mena, V. Kashyap, A. Illarramendi, and A. Sheth. In Formal Ontology in Information Systems, IOS Press, Amsterdam, Netherlands, 1998.
4. S. Staab, M. Erdmann, A. Maedche, and S. Decker. An extensible approach for modeling ontologies in RDF(S). In Proceedings of the ECDL-2000 Workshop "Semantic Web: Models, Architectures and Management", September 2000.
5. Y. Labrou, T. Finin, and Y. Peng. Agent communication languages: the current landscape. IEEE Intelligent Systems, 14(2):45-52, March-April 1999.
6. G. Salton. The SMART Retrieval System. Prentice Hall, Englewood Cliffs, NJ, 1971.

MODEL-BASED CREATION OF AGENTS AND DISTRIBUTION OF PROBLEM SOLVING

KATSUAKI TANAKA, SETSUO OHSUGA
Waseda University, Department of Information and Computer Science
3-4-1 Ohkubo, Shinjuku-ku, Tokyo 169-8555, Japan
E-mail: katsuaki@ohsuga.info.waseda.ac.jp

Abstract

A problem solving process is defined as 'a subject applies activities to an object'. A combination of Subject, Activity, and Object represents a problem solving process and is called a problem model. It is represented as an agent. When a problem is large, it is decomposed into sub-problems based on the problem model structure, and a structure of agents is created as a multi-agent system. Each agent solves its assigned sub-problem and cooperates with the other agents to solve the whole problem. Persons control agents via user agents.

1 Introduction

Today, the problems that human beings must solve are becoming large and complex because of the growth of social systems. Large problems are becoming unsolvable by a single person, and cooperative problem solving by many persons is necessary. How to distribute problems and how to manage the persons who join the problem solving then become big issues. There is concern that the current method of management is inadequate for keeping up with the growth of problem scale. We are required today to develop a new method of management to solve very large problems.

That persons are the main body of problem solving processes is one of the reasons why the current method is inadequate. A large number of decisions are distributed among many persons, and the decision procedures they follow tend to remain only in their heads without being recorded. Very often they make errors, as is intrinsic to human nature. The errors lower the quality of the solution, but a manager cannot follow the problem solving process afterward to check its validity. There is concern that this will cause frequent accidents in the future as the scale of problems grows rapidly.
A way to improve this situation is to introduce computers into the problem solving process much more than ever before and let them record the history of the process, especially the history of the decisions made by persons. Thus the use of computers as software agents to take over some part of the workload of persons in problem solving is discussed.

Many papers have discussed problem solving by software agents 1 2 6. It is assumed in most of the current systems that agents tailored to the problems are to be created by persons; that is, every multi-agent system is dedicated to a given problem with a specific goal. Ishida 3 discussed a multi-agent system where agents are created automatically to increase the number of inference engines working in parallel in order to reduce the load. In this paper, the authors propose a new method of agent-based problem solving in which agents are created automatically and adaptively for a given problem. The created organization of agents represents the internal structure of the problem model. A new modeling scheme, the multi-strata model 4, is used for this purpose.

2 Problem Solving

2.1 Design type problem as example

Every problem concerns some object in the world, and an object has its own structure composed of components. Its formal representation is an object model. Problem solving is defined as operations on this object model. There must be some subject as an entity that applies operations to this object in order to obtain a solution. Thus a formalized representation of the triple (Subject, Activity, Object) objectively represents a problem solving process: Subject intends to apply Activity to Object in order to arrive at a goal state. In reality, its computerization is an agent.

In real problems, however, each of the three items in the triple consists of many elements and forms a structure: a structure of subjects, a structure of activities, and a structure of objects. These structures are related in different ways and define a variety of problems; the more complex their relations are, the more complex the problem becomes. If there is a proper way to decompose these relations into a set of triples of (a subject, a simpler structure of finite activities, a simpler structure of finite objects), then the complex problem is decomposed into a set of simple sub-problems. A criterion of decomposability is that the mutual relations between sub-problems are weak and the independence of each sub-problem solving is kept as much as possible. These relations cannot be reduced to null, however; certain mutual relations remain between sub-problems. An agent is created for each triple, and a multi-agent system is formed from these agents, with relationships between agents representing the mutual relations between sub-problems. As the way of decomposing the original model depends on the problem, a multi-agent system specific to the problem is created automatically.

In many real problems there are specific priorities among the structuring rules of subjects, activities, and objects in a problem model. For example, making an object structure is the major goal in an engineering design problem: the objective of design is to build up a structure of objects that satisfies given functional requirements. In this case the problem is decomposed first based on the decomposition of the object structure.
The structures of subjects and activities are built depending on the object model building.

A basic operation for a design type problem solving is represented as follows (fig. 1). It consists of four stages. First, an incipient model is built as an embodiment of a person's idea of the problem solving; it includes the requirements to be satisfied. Second, the model is analyzed to obtain its functions and behavior; this is an analysis type of problem solving. Third, the result is evaluated as to whether the model satisfies the requirements. If the requirements are satisfied at the evaluation step, the design process stops and the model represents a solution. Otherwise, fourth, the model is modified. An analysis type problem solving is thus included in a design type problem solving. Therefore, decomposition and distribution of problems are discussed mainly for design type problems in the following.

Figure 1: Process of a design type problem solving.
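A minimal sketch of this build-analyze-evaluate-modify cycle is given below. The class and method names are hypothetical placeholders for the paper's four stages; in the authors' system these stages are carried out by inference over KAUS knowledge, not by fixed Java methods.

    /** Minimal sketch of the four-stage design loop: build an incipient model,
     *  analyze it, evaluate it against requirements, and modify it until it passes. */
    public class DesignLoopSketch {

        interface Model { }
        interface Requirements { }

        // Hypothetical stage implementations, standing in for knowledge-based inference.
        static Model buildIncipientModel(Requirements req) { return new Model() {}; }
        static Model analyze(Model m) { return m; }               // derive functions/behavior
        static boolean evaluate(Model m, Requirements req) { return true; }
        static Model modify(Model m) { return m; }

        static Model design(Requirements req, int maxIterations) {
            Model model = buildIncipientModel(req);
            for (int i = 0; i < maxIterations; i++) {
                Model analyzed = analyze(model);     // analysis problem solving
                if (evaluate(analyzed, req)) {
                    return analyzed;                 // requirements satisfied: a solution
                }
                model = modify(analyzed);            // otherwise revise and repeat
            }
            return null;                             // no solution within the budget
        }
    }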
2.2 Large Problem Solving by Persons

To solve a large problem, the problem must be decomposed into a set of smaller sub-problems. The solution of the original problem is derived as an integration of the solutions of the sub-problems. These sub-problems are distributed to different persons and solved there. Since sub-problems are generated by decomposition of the original problem, these persons cannot be assigned in advance, but must be assigned dynamically in parallel with the decomposition process. Fig. 2 shows the case of an airplane design.

Figure 2: Design process of an airplane (model decomposition into engine, wings, fuselage, etc., and model assignment to designers).

1. A chief designer builds an incipient general model based on the requirements given by clients. He/she creates the top node of a hierarchical model to represent the object and gives it the requirements.

2. The designer tentatively decides the main assemblies of the airplane, such as engine, main wing, tail wing, vertical wing, landing gear, etc. If this tentative structure does not satisfy the given requirements, he/she has to find another structure or change the required functions of the assemblies. In general, a complex object is decomposed into a number of assemblies, each assembly is further decomposed into a set of sub-assemblies, and so on. In this way, a hierarchical object is generated.

3. The designer distributes the problems of designing each assembly to experts in assembly design. For example, the engine design is assigned to an engine designer. These experts then perform a similar process for designing each assembly.

Each expert in assembly design is the subject of activity for the assigned object. Therefore, a subjects hierarchy is built similarly to, and depending on, the objects model (fig. 3). Thus many people commit to the design of common objects. It means that decisions are distributed to different persons and remain there without being recorded. This causes difficulty in tracing the design afterward for checking and maintenance. It will also cause trouble in document acquisition if the previous record is imperfect.

Figure 3: Problem decomposition (parent-child relationships among subjects and objects, and subject-object correspondences).

This human-centered problem solving method is replaced by a computer-centered method, meaning that a computer system manages the total process and the persons joining the problem solving. In this computer system, a problem is described in a knowledge representation language, and knowledge-processing agents deal with the problem cooperatively. The structure of the agents corresponds to the human organization in human-centered problem solving.

2.3 KAUS as Knowledge Representation Language

A language suited to representing this system is necessary. In order to cope with the problem model, it must be suited to representing predicates that include data structures as arguments, and to describing meta-level operations such as knowledge about other knowledge. KAUS 8 (Knowledge Acquisition and Utilization Language) has been developed for this purpose. In the following, some logical expressions appear as knowledge. These are not necessarily written as correct KAUS expressions but are locally simplified, because KAUS syntax is not included in this volume and the simplified expressions are more comprehensible than the correct ones.

3 Multi-Agent Problem Solving System

The major parts of the system are user agents, a global knowledge base, and problem-solving agents as the building blocks of a cooperatively distributed problem solving system. The overall structure of the multi-agent problem solving system is shown in fig. 4. Each agent communicates in an XML-like extended KAUS language via the HTTP protocol.

Figure 4: Multi-agent problem solving system.

A problem solving session starts when a user gives his/her requirements to his/her own user agent. The user agent generates a problem-solving agent with the requirements. The problem-solving agent then retrieves knowledge about the problem (to satisfy the requirements) from the global knowledge base and generates other agents depending upon the knowledge. The problem solving and the generation of the problem solving system progress in this way. When the problem is solved, the problem-solving agents send their results to the user agent. The user agent shows the detail of the process to its owner. If the user decides the solution does not meet his/her requirements, he/she can control the whole solving process via his/her own user agent.

3.1 User Agents

Every user who joins the system has his/her own user agent. A user agent manages information about the user, such as e-mail addresses, the domains the user has joined, the problems the user is helping to solve, etc. It provides the interface between a user and problem-solving agents. The user can change any decision made by problem-solving agents, can modify and add knowledge stored in problem-solving agents, and can also send requests to other users to join a problem solving. A user invited to the problem solving can join it and control problem-solving agents.

3.2 Global Knowledge Base

The global knowledge base supplies the knowledge necessary for problem solving to every problem-solving agent. A knowledge base management system manages a large amount of knowledge of various types and domains, and accepts requests from agents to retrieve knowledge. Knowledge retrieval must be done in a short time to assure the practicality of the system. Therefore the knowledge is divided into chunks by type information, domain information, and other information aiding rapid retrieval 5. These chunks are structured under the management system. The large knowledge base management system is itself a special agent: it accepts requests from the other agents and sends suitable knowledge back to the agent.
Figure 5: Screen of a user agent.

3.3 Problem-Solving Agents

A problem-solving agent is designed not as a special-purpose agent to achieve a special goal but as a general-purpose agent that can accept and cope with any problem. No agent has knowledge related to a specific object in advance; it retrieves the necessary knowledge from the global knowledge base when a problem is assigned, just before starting to solve it.

A problem-solving agent has three layers, in accordance with the multi-strata model. The first layer is an agent controller: it generates the problem solving system to solve the assigned problem by retrieving the necessary knowledge from the global knowledge base, and it communicates with the other agents. The second is the problem solving system, consisting of an inference engine and a local knowledge base, which actually solves the given problem. The third is the model of activities and objects that is manipulated by the second layer (fig. 6).

Figure 6: Structure of a problem-solving agent.

3.4 Large Problem Solving by Agents

When an upper agent (an upper problem-solving agent or a user agent) needs to solve a sub-problem and no agent has been assigned the sub-problem, the upper agent generates a new lower agent and assigns it the problem. Otherwise, if a lower agent that has been assigned the problem already exists, the upper agent requests that agent to retry the problem solving for another solution. The agent starts working: the controller retrieves knowledge of the problem from the global knowledge base and stores it in the local knowledge base, and then the inference engine starts its inferences. When there is not enough knowledge in the local knowledge base, the inference engine requests the agent controller to supply new knowledge; the controller requests user agents whose owners have joined the same domain as the problem to input new knowledge. If necessary, the inference engine requests the agent controller to distribute sub-problems to lower agents, or sends messages to the other agents in accordance with the relations between sub-problems (assemblies, in the case of a design problem) in the original problem model. When the problem is solved, the agent controller returns the solution to the upper agent.

A problem-solving agent destructs itself when it receives a message telling it that the problem solving has been finished. If a problem-solving agent succeeded in solving its problem, it stores the solution in the problem model; otherwise, it discards the solution.

Knowledge for a design type problem is defined as in fig. 7. The prefix "ALL" in (3) means to apply this knowledge to all items of a list; "(distribute design)" in (5) means to distribute the predicate "(design)" in this knowledge, and the inference engine distributes design sub-problems to other agents following this rule. (6) and (7) mean to retry the predicates "design" and "decompose" when the requirements are not satisfied.

    ((design functionList restrict A)                                       (1)
      [(decompose functionList restrict SubFunctionsList SubRestrictsList)  (2)
       (design ALL subFunction ALL subRestrict SubA SubRestrictA)           (3)
       (mergeModel A P ALL SubA ALL SubRestrictA)                           (4)
       (distribute design)                                                  (5)
       (retry 1 design)                                                     (6)
       (retry 2 decompose)                                                  (7)
      ]).

Figure 7: Knowledge for design.

A problem-solving agent works as follows with this knowledge. First, it decomposes the object into sub-objects (2) and assigns each sub-object to a lower agent (3), (5). Then it receives the sub-solutions from the lower agents and merges them (4). While the solution does not satisfy the requirements, it requests the lower agents to find another solution (6), (3). If a lower agent cannot find any solution, it sends a fail message to all lower agents to destruct themselves, changes the way of decomposition (7), (2), and distributes new sub-problems. If it still cannot find any solution, it returns to the upper agent the message that no solution could be found. When the solution meets the requirements, it is returned.
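Read procedurally, the rule in fig. 7 describes a recursive decompose-design-merge loop with two retry levels. The sketch below paraphrases that control flow; it is our own illustration of the reading above, with hypothetical names, not the KAUS inference machinery itself.

    import java.util.ArrayList;
    import java.util.List;

    /** Minimal sketch of the control flow of the design rule in fig. 7:
     *  decompose, distribute design of sub-functions, merge, and retry. */
    public class DesignRuleSketch {

        interface Solution { }

        /** Try each decomposition (retry level 2); within one decomposition,
         *  a failed sub-design abandons that decomposition. */
        static Solution design(String function, double restrict) {
            for (List<String> subFunctions : decompositions(function, restrict)) { // (2),(7)
                List<Solution> subSolutions = new ArrayList<>();
                for (String sub : subFunctions) {
                    Solution s = design(sub, restrict / subFunctions.size()); // (3),(5): lower agents
                    if (s == null) { subSolutions = null; break; }            // sub-design failed
                    subSolutions.add(s);
                }
                if (subSolutions == null) continue;        // change the decomposition
                Solution merged = merge(subSolutions);     // (4)
                if (satisfies(merged, restrict)) return merged;
                // (6): in the full system, lower agents would be asked for
                // alternative sub-solutions here before giving up.
            }
            return null;  // report failure to the upper agent
        }

        // Hypothetical helpers standing in for knowledge-base lookups.
        static List<List<String>> decompositions(String f, double r) { return List.of(); }
        static Solution merge(List<Solution> subs) { return new Solution() {}; }
        static boolean satisfies(Solution s, double r) { return true; }
    }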
If a lower agent cannot find any solution, send a fail message to all lower agents to destruct themselves and change the way 352 ('(design functionList restrict A) (1) [(decompose functionList restrict SubFunctionsList SubRestrictsLisi) (2) [(design ALL subF unction ALLsubRestrict Sub A SubRestrictA) (3) [(mergeModel A P ALLSubA ALL SubRestrictA) (4) (distribute deisgn) (5) (retry. 1 design) (6) (retry 2 decompose) (7) )• Figure 7: Knowledge for design (decompose [PC, forEditVideo] 1000 [[PCIO, IEEE1394] 200, [SCREEN, HighResolution] 700] ...). (8) (design [PCIO, IEEE1394] 1000 [CARD, CI]). (9) (design [PCIO, IEEE1394] 1500 [CylflD, C2]). (10) (decompose [SCREEN, HighResolution] 700 [[PC/O, HighResolutionVideoCard] 100, [CAT, C#T1] 600]). (11) Figure 8: A part of knowledge of designing PCs of decomposition (7), (2) and distribute new sub-problems. If still it cannot find any solution, it returns to the upper agent the message that any solution could not found. When the solution meets the requests, it is returned. 4 Experiments This system was applied to the design problem of personal computers. Users give requirements to own user agent such as a PC for editing video movies, for working as a web server, etc. and costs as a limitation. In a case where a user wants a computer to play DVDs under $900, he/she gives his/her requirement as "(design [PC, PlayDVD]) 900 ,4)?" to own user agent. Knowledge of personal computers is recorded in the global knowledge base. For example, a PC for editing video needs an IEEE1394 I/O port, a high-resolution screen, etc. (fig. 8), (9) The problem of designing a PC was divided into sub-problems of designing parts; these sub-problems were distributed to the different agents and the 353 ™WWB1WMIMIIII,IIIM^ ffi» d* *S ¥iy? ComsMinteator i*^ Objwi morfrl of: (design [PC, edit Video] 1000 A)7 A - [CARD, 101], [CARD, VI], [CRT, CRT1], [HDD, HD1] [PC, editVideo] 1000 [PCIO, IEEE1394],200 [CARD, IOl], 200 [SCREEN, HigKResolution],700 [VIDEOCARD, VRAM32], 200 [CARD, VI], 200 [CRT, Utah], 500 [CRT, CRT1], 500 [HDD, Large], 100 [HDD, HD1], 100 ICIoip] IS^^^^I .h* w #» cs \*| Figure 9: Result of "(design [PC, editVideo) 1000 A)T' designed parts were merged to obtain the model of a PC. Users could change decisions made by agents. When a user requires an alternate solution but there is no more knowledge in knowledge base, the agent send requests users to give new knowledge. The solution and the object model composed by the system are shown in fig. 9. It was confirmed that the different organization of agents was generated depending on the way of decomposition of a problem, and results of the past trials were used effectively. 5 Conclusions In this paper, it was discussed a way of solving a large problem by a distributed multi-agent system in cooperation with persons. Problem solving was represented by a triple (Subject, Activity, Object) and relations among them. Based on the relation, a large problem solving was decomposed into sub-problem solving. An agent was created corresponding to each sub-problem and a multi-agent system was organized according to the decomposition structure. The agent is intelligent in the sense that it can solve various type of problem autonomously, and it can create other agents as needed. Each person who joins a problem solving can control the behavior of problem-solving agents via own user agent. He/she can change any decision made by any agent, give new knowledge to any agent, and ask other persons to join a problem solving process. 
A basic idea, a way of problem solving, and a way of generating a multi-agent system were tested by an experiment using a simple example. This system is part of a larger system the authors' group is developing now; the part discussed in this paper is a central portion of the ideas in this system development.

Acknowledgment

This research was conducted under the sponsorship of the Ministry of Education, Culture, Sports, Science and Technology of the Japanese Government. The authors would like to express sincere thanks for their support.

References

1. Caroline C. Hayes, Agents in a Nutshell - A Very Brief Introduction, IEEE Transactions on Knowledge and Data Engineering, Vol. 11, No. 1, January/February 1999.
2. M. Harandi and G. Rendon, A Support Environment for Building Distributed Problem Solvers, Proc. IEEE Conf. Systems, Man, and Cybernetics, Oct. 1997.
3. T. Ishida, L. Gasser, M. Yokoo, An Organizational Approach to Real-Time Continuous Problem Solving, Journal of Japanese Society for Artificial Intelligence, Vol. 7, No. 2, Mar. 1992.
4. Setsuo Ohsuga, Toward truly intelligent information systems - from expert systems to automatic programming, Knowledge-Based Systems, pp. 363-396, Oct. 1998.
5. Setsuo Ohsuga, Hiroyoshi Ohshima, A Practical Approach to Intelligent Multi-Task Systems - Structuring Knowledge Base and Generation of Problem Solving System, European-Japanese Conference 2001 on Information Modeling and Knowledge Bases, Jun. 2001.
6. G. W. Tan, C. C. Hayes, and M. Shaw, An Intelligent-Agent Framework for Concurrent Product Design and Planning, IEEE Trans. Eng. Management, vol. 43, no. 3, pp. 297-306, Aug. 1996.
7. Katsuaki Tanaka, Michiko Higashiyama, Setsuo Ohsuga, Problem Decomposition and Multi-Agent System Creation for Distributed Problem Solving, ISMIS 2000, LNAI 1932, pp. 237-246, 2000.
8. Hiroyuki Yamauchi, KAUS User's Manual Version 6.502, RCAST, University of Tokyo, 1999.

A DISTRIBUTED ALGORITHM FOR COALITION FORMATION AMONG E-COMMERCE AGENTS

GUILLAUME VAUVERT AND AMAL EL FALLAH-SEGHROUCHNI
Laboratoire d'Informatique de Paris Nord - UPRES-A 7030 - Institut Galilée, Université Paris 13 - 99, av. J-B Clément - 93430 Villetaneuse - France
{guillaume.vauvert, elfallah}@lipn.univ-paris13.fr

Since no optimal structure exists, organizations have to be flexible to react dynamically to environment changes. In an economic context, agents are strongly autonomous and weakly rational and have to deal with both cooperation and competition, as in the task-allocation domain. This paper proposes an open and distributed protocol based on the exchange of preferences computed using qualitative and quantitative criteria: agents agree on the coalitions to form in order to fulfill tasks. We prove that our protocol converges to a consensus. Experimentation shows that the most rigid strategy is not optimal and that higher competition leads to easier consensus.

1 Introduction

In an economic context, rational agents are led to cooperate in several situations: 1) agents cannot perform tasks by themselves; 2) other agents are more efficient at performing tasks; and 3) working on the task collaboratively increases benefits or reduces costs 1. To increase the efficiency of task achievement, agents may work jointly 2,3,4,5 and may form coalitions, viewed as groups of agents who have decided to cooperate in order to carry out a common task 1.
Suitable for dynamic environments, coalition formation is usually studied from three perspectives, considered independently even though they are not: 1) coalition structure generation (partitioning or covering the set of agents); 2) solving the optimization problem (solving the agents' joint problem, eventually receiving a benefit depending on the resources used and time spent); and 3) dividing the value of the solution among agents (decided by the agents themselves or imposed before beginning the process, and addressed by game theory 6).

Many coalition formation approaches exist, but they address different problems in different domains. Sarit Kraus has proposed a classification of works in coalition formation 7, which we extend to emphasize our criteria. The main criteria concern the domain; distributed authority, communication, and negotiation are always considered:

- Individual goals 8 vs common goal 1,9 (social welfare maximizing).
- Self-interested 8,10 vs altruistic.
- Only pure software agents vs pure software agents together with people.
- Known rationality (group 1,11,6, personal 8,10, coalition 9,8) vs unknown.
- Bounded rationality 12,3.
- Positive externalities (cooperation) 8 vs negative (competition).
- Number of agents: a dozen 1, a hundred, thousands.
- Intractable size of the solution space 10,13 vs small space.
- Defined and agreed protocols 10 vs no pre-defined protocols.
- Static vs evolutionary evaluation of incomes.
- Common vs individual evaluation of incomes.
- Costly 3 vs costless computation.
- Independent tasks 9.
- Satisfy as many tasks as possible 9 vs satisfy all tasks.
- Enough competences and agents to solve the problem.
- Dynamicity: agents appear/disappear and tasks arrive constantly 1,4.
- Transferable resources (more beneficial coalitions 9,10) vs no transferability.
- Monetary system for side-payments 10.
- Set partitioning 10 vs set covering 14.
- CFG 10,13,5,12,2 vs non-CFG.
- Super-additive 5,8,2, sub-additive 12,2, no additivity (most cases).

In e-commerce, self-interested agents deal with selfish goals, and in DPS, altruistic agents deal with common goals. The type of goal is given by the problem, while the type of agent is defined by the environment (DPS, e-commerce) or by resolution choices (DAI, MAS, ...). As shown in 12, the bounded-rational value of a coalition is determined by three parameters: 1) as usual, the domain problem (tasks and resources); 2) the execution architecture (limited and costly computation); and especially 3) the possibility for agents to design their own protocols. The effects of computational limitations on coalition structure formation and stability have been studied in 3,12.

In this paper, we focus on self-interested agents acting in an economic context. They have individual goals (to increase their incomes) and might be pure software agents or interfaces for humans; hence no strategy is assumed and rationality is bounded (autonomy and rationality are studied in depth in 15). The protocol we propose is assumed to be known and agreed by agents, but the agents are completely autonomous: the protocol takes into account the possibility that agents may try to cheat. The problem of task allocation binds agents to cooperate in order to fulfill tasks (each agent is able to fulfill a part of a task). We assume that all tasks can and must be fulfilled.
A task might be dependent of an another (prece­ dence order, income decrease, same/different agent for some subtasks) and coalitional value may depend on non-member actions: this may be taken into 357 account by a modification of solution space and of subtasks incomes (but no experimentation have been made upon). Resources may be not transferable, but if they are, agents may exchange resources outside the protocol without modifying it. A monetary system is used for experimentations to simplify computation, but since the protocol is only based on preferences exchange, it is not necessary (agents need only criteria to compute their preferences). The number of agents may be large (around 25), and experimentations show that the number of turns decreases when the number of agents increases (time however increases because each turn spend more time). Evaluations of incomes are individual and may evolve during the process. Computation and communication time might be taken into account, by de­ creasing sub-task income as time elapses. Experimentations assume that agent may fulfill subtasks in different coali­ tions, but the protocol run with a partition (reduction of the solution space). The optimality of the solution have no sense here, because it depends on agents viewpoint. However, chosen solution is legitimate, because no agent is favored. In this defined context, we propose a protocol that take into account strong autonomy15, weak rationality15 and heterogeneity to reach a consensus about a subtask repartition. This article is organized as follows: section 2 formalizes the concepts we de­ fine in order to solve the consensus problem. Section 3 proposes a distributed algorithm to be executed by agents during the consensus process. It goes on to prove the convergence of the proposed algorithm. Section 4 discusses experimentation and provides some of our most significant results. Finally, section 5 concludes the paper. 2 Coalition Formation To reach a consensus, agents have to exchange information to possibly evolve their preferences. Argumentation should be used, but it needs a complex pro­ cess, it binds agents to have a common communication language and to know the rationality of others. Heterogeneous agents should prefer to exchange ba­ sic information that don't need such a formal process. Thus, at each turn, agents send their preferences to others and consider other's preferences to compute their next preferences. Because agents whom don't make conces­ sions are more likelihood to be ejected from the final solution (see 4), agents may be flexible. If they aren't enough, they may form alliances; if no alliance is formed, agents choose two agents whom are obliged to ally. Finally, alliance formation leads to facilitate a consensus to be reached. This algorithm is more broadly beared out and described in16. 358 2.1 Formalization Let us now presents the concepts of the coalition formation problem and highlight their meaning within a case study: airlines choose to cooperate to provide their passengers with a unified reservation system. The problem is that for each travel, several airlines are in competition on some stages. Definition 1 (Coalition Formation Problem (CFP)) A CFP is de­ fined as a tuple (A,T,S,C,V), where: A: the set of agents candidate to the execution of sub-tasks; T: the set of tasks to be accomplished; S: the set of sub-tasks to be carried out; C: the set of competences necessary to perform the sub-tasks; V: the set of incomes. 
An agent a ∈ A is defined by a = (C, strategy), where C ⊆ C is the agent's set of competences and the strategy contains the preference computations (see 2.2). A task t ∈ T is defined by the set of sub-tasks it contains: t = (S), S ⊆ S. A sub-task s ∈ S is defined by s = (C, p), C ⊆ C, p ∈ V, where C is the set of competences which an agent must have to be able to carry out the sub-task, and p the associated profit (used by agents to compute their preferences). A competence c ∈ C is a single item which represents what is required of an agent to carry out a piece of work; a sub-task can require more than one competence. A profit p ∈ V is used as an income, but only to simplify the agents' internal calculations: V ⊆ R+. The independence of the profit type implies that any unit could have been used.

Example 1 Agents = airlines: A = {EUropean Airlines, USAirlines, ...}. A task = a flight: T = {New York-MAdrid (via PAris and LYon), ...}. Each flight needs competences (authorization to fly a national stage, passenger capacity, range of action): EUA = ({autEU, MidC, ShrtR}); and provides incomes: V = [0, 10000] and NY-M = ({NY→P, P→L, L→M}, 8000).

Definition 2 (Solution) A solution is an assignment of each sub-task to an agent which is able to perform it. A solution σ ∈ Σ is an application S → A such that ∀s ∈ S, a = σ(s) ⇒ s.C ⊆ a.C. A sketch of this feasibility test is given at the end of section 2.2.

Definition 3 (Preference) A preference is represented by distances (in the meaning given below) δ ∈ Δ between solutions, where δ : Σ × Σ → [−1, 1] is an antisymmetric application. Thus, δ(σ1, σ2) = d is interpreted as "σ2 is preferred to σ1 with a distance d" if d > 0, and "σ1 is preferred to σ2 with a distance −d" if d < 0. A null distance means that the solutions are indifferent.

Example 2 σ15 = [NY→PA ↦ WOA, L→M ↦ BUA, P→MO ↦ EUA, ...]. Let S1 = {σ0, σ2, σ4} be the set of solutions which provide incomes and S2 = {σ1, σ3, σ5} the set of solutions which provide none. Define δ(σ, σ′) = 0 if σ and σ′
A coalition contains all the agents which take part in a task. Definition 9 (Preferences Computation) — Independent Preferences Computation: IPC £ A. —Dependent Preferences Computation: DPC '• H —»• A, h i—>• 8. Example 4 Let 8 = IPC, V(<ri,o-2) £ S2), 8(ai,a-2) = profit{a2) - profit(ai). 8 is an antisymmetrical application. Let 8 = DPC(h), h = (ut)teiN- V(cr1,(T2) £ E2, <$(<Ti,d2)=Ea(E>t(«T(a)M<Ti,(T2)]/|.4|. S is an anti- symmetrical application. 2.3 Strategy of a representative agent Definition 10 (Criteria) —Releasing Switch-over Proposal Criterion (criterion used to decide when to propose to release to switch-over mode): RSPC : H —y {False,True}. 360 —Releasing Switch-over Acceptance Criterion (criterion which decides to ac­cept or not to switch to release mode): RSAC : H —> {False, True}. —Alliance Formation Proposal Criterion (gives a list of agents to which to pro­ pose to form an alliance): AFPC : ft >->• (Ai,..., An), where AFPC(h) = 0 is allowed and is interpreted by "the agent doesn't want to form an alliance". —Alliance Formation Acceptance Criterion (allows to answer to alliance for­ mation propositions): AFAC : H,A^ {False,True}. Example 5 Let ft = (vt)i<t<T- RSPC{h) = False if T < 2 and RSPC(h) = (VT = VT-I) V (VT-I — VT-2) V (VT — VT-I) otherwise. To diminish computation complexity, only loops of length 3 or less are detected and to simplify computations, RSAC = RSPC. Let d : A x A —>• IR a dis­tance between agents preferences, for example: ¥(61,62) G A2, d(6i,62) = £(<7i,<72)eS2 IM0"!'^) -62{<ri,o-2)\. For an agent a, AFPC(h) is the set of agents which preferences are enough near to him using a threshold. We can use the same application to compute AFAC but using a greater threshold. 3 The algorithm of consensus protocol Each agent may play several roles within the system. The organizer sends datas and manages inscriptions and turns. The supervisor prevents agents to send different preferences to each agent (information can not be used before others thanks to a parallal diffusion16) by asking agents what preferences they have sent and received (penalty may be paid by culprits). The candidate re­ ceives tasks to fulfill and decides to take part in or not: if he does, he becomes an alliance of one member (himself) and the representative of this alliance. The member receives and sends his preferences when asked by the represen­ tative. The representative has been defined in section2.3 and his algorithm is given below. The representative's algorithm plays a leading role. Each representative has a list of interlocutor's InterList C A initialized with the list of the can­ didates. The following algorithm is carried out by each representative ai in a distributed way. In switch-over mode, representatives decide which alliances are going to merge (using AFPC and AFAC); if no alliance desires to merge, the system choses them. Termination. In order to be able to guarantee that the process terminates, we have to assume that the criteria of switch-over mode releasing checks the existence of a loop: if the same situation occurs twice (this case will necessar­ ily happen), then an alliance is formed. In the worst case, there will be only formations of forced alliances, what will lead to a great alliance. In fact, the number of situations is not finite (preference use real numbers). 
Definition 11 (Pseudo-equality). Let ε be a small real, δ and δ′ two preferences, and v_t and v_{t′} two sights. We shall say that:
- δ and δ′ are pseudo-equal (δ ≈ δ′) if ∀(σ₁, σ₂) ∈ Σ², |δ(σ₁, σ₂) − δ′(σ₁, σ₂)| < ε;
- v_t and v_{t′} are pseudo-equal (v_t ≈ v_{t′}) if ∀a ∈ 𝒜, v_t(a) ≈ v_{t′}(a).

Definition 12 (Cycle-like in a history). A history h = (v_t)_{1≤t≤T} contains a cycle-like if ∃(τ₁, τ₂) ∈ [1, T]², τ₁ ≠ τ₂, such that v_{τ₁} ≈ v_{τ₂}.

Definition 13 (A CFP detects cycle-likes). A CFP (𝒜, 𝒯, 𝒮, 𝒞, 𝒫) detects cycle-likes if (h contains a cycle-like) ⇒ (∃a₀ ∈ 𝒜 such that a₀.RSPC(h) = True ∧ ∀a ∈ 𝒜, a.RSAC(h) = True). In other words, a CFP detects cycle-likes if at least one agent detects the cycle and all the agents then accept to change mode.

Theorem 1. If a CFP detects cycle-likes, then the program terminates.

Proof. If a CFP detects cycle-likes and there is a cycle-like, then at least one agent will propose to change mode and all the others will accept. Agents may then form alliances. If they do not, two agents will be compelled to form an alliance. As the number n of agents and the number k of solutions are finite, the number of sights that are not pairwise pseudo-equal is finite (2nk/ε). Finally, after at worst 2kn(n−1)/ε turns, there is consensus. ∎

Complexity. The complexity depends in particular on the number of possible solutions, which is directly related to the problem data. Let us assume that our system contains n agents and that each of them is able to process a fraction 1/m of the tasks; then a task has on average n/m agents able to carry it out, which gives k = (n/m)^s solutions (s being the number of sub-tasks). In the most general case our algorithm does not change the complexity class, but experiments show that with alliance formation the number of turns is bounded.

4 Experimentation

Many parameters influence the process, but three of them have the most influence: the agents' strategies, the distribution of competences (more or less competition), and the number of agents. To measure the influence of the first parameter, the number of agents is fixed (7). The preference of agent a at turn t is δ_a(t) = (1 − w(t)) · δ_a(0) + w(t) · Σ_{b∈𝒜} δ_b(t−1)/|𝒜|, where w(t) = e^{−αt}. This weight simulates a more or less flexible strategy. The goal of this experiment is to find the best average strategy given the strategies of the others.

Figure 1. Income / strategy.    Figure 2. Number of turns / strategy.

In Fig. 1, each curve represents the strategy of the population (from 0.0 = flexible to 1.0 = rigid). The results are averages over a large number of runs (350). As expected, the agent's income begins to increase but, around 0.7, it decreases: being too rigid may lead an agent to be excluded from the chosen solution, so he will earn less income. That should lead agents to choose flexible strategies.
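A sketch of the preference update used in this experiment; the reduction of a preference to a single number and the sample values are assumptions for illustration only:

```python
import math

def next_preference(own_initial, prev_all, t, alpha):
    # delta_a(t) = (1 - w(t)) * delta_a(0) + w(t) * mean_b delta_b(t-1),
    # with w(t) = exp(-alpha * t). A small alpha keeps the agent averaging
    # with the population (flexible); a large alpha pulls it back towards
    # its initial preference (rigid).
    w = math.exp(-alpha * t)
    population_mean = sum(prev_all) / len(prev_all)
    return (1.0 - w) * own_initial + w * population_mean

prefs = [0.8, -0.3, 0.1]  # last turn's preferences, one number per agent
print(next_preference(prefs[0], prefs, t=1, alpha=0.9))  # rigid agent
print(next_preference(prefs[0], prefs, t=1, alpha=0.1))  # flexible agent
```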
Fig. 2 shows that when more agents are rigid, consensus is hardly reached. If agents are too rigid, jamming detection leads to the formation of an alliance and consequently to reaching a consensus more quickly, even if the resulting consensus is not the desired one. The more competences the agents have, the more they have to compete with the others. We studied the influence of the number of agents per sub-task (the competition level) on the incomes (no figure) and on the number of turns (Fig. 3). As expected, when competition increases, incomes decrease and consensus becomes more difficult to reach.

Figure 3. Turns / agents per sub-task.    Figure 4. Number of turns / |𝒜|.

As the number of agents increases (Fig. 4), there are more and more agents able to fulfill the sub-tasks, and competition increases. But if the number of agents is greater than 25 (this value depends on the other parameters), then reaching a consensus becomes easier, because the formed coalition contains enough agents to fulfill all the tasks: usually, one coalition fulfills all the tasks.

5 Conclusion

From the point of view of the deployment of multi-agent systems in an economic context, it is necessary to consider weakly rational, strongly autonomous and heterogeneous agents. To form coalitions within this framework, we have proposed an open, distributed and egalitarian protocol based on an exchange of preferences computed with both quantitative and qualitative criteria, since they depend on the specific strategies of the agents. Moreover, we set up releasing procedures, based on the flexible concept of alliance, to avoid paralysis of the system. We have shown that, with this protocol, being extremely rigid is not optimal and that high competition leads to a faster consensus. The next step is to lower the complexity by reducing the size of the exchanged preferences (using tabu search to quickly find good solutions) and the number of turns (by using models of the other agents).

References

1. O. Shehory, K. Sycara, and S. Jha. Multi-agent coordination through coalition formation. In A. Rao, M. Singh, and M. Wooldridge, editors, Intelligent Agents IV, Lecture Notes in Artificial Intelligence 1365, pages 143-154. Springer, 1997.
2. G. Zlotkin and J. S. Rosenschein. Coalition, cryptography, and stability: Mechanisms for coalition formation in task oriented domains. In Proc. of AAAI-94, pages 432-437, Seattle, Washington, 1994.
3. T. W. Sandholm and V. R. Lesser. Coalition formation among bounded rational agents. In Proc. of IJCAI-95, pages 662-669, Montreal, 1995.
4. O. Shehory and S. Kraus. Formation of overlapping coalitions for precedence-ordered task-execution among autonomous agents. In Proc. of ICMAS-96, pages 330-337, Kyoto, Japan, 1996.
5. S. Ketchpel. Forming coalitions in the face of uncertain rewards. In Proc. of AAAI-94, pages 414-419, Seattle, Washington, 1994.
6. A. Rapoport. N-Person Game Theory. Technical report, Michigan Univ., 1970.
7. S. Kraus. Negotiation and cooperation in multi-agent environments. Artificial Intelligence, 94(1-2):79-98, 1997.
8. O. Shehory and S. Kraus. Coalition formation among autonomous agents: Strategies and complexity. In C. Castelfranchi and J. P. Müller, editors, From Reaction to Cognition, Lecture Notes in A.I. 957, 1995.
9. O. Shehory and S. Kraus. Task allocation via coalition formation among autonomous agents. In Proc. of IJCAI-95, Montreal, August 1995.
10. O. Shehory and S. Kraus. A kernel-oriented model for autonomous-agent coalition-formation in general environments: Implementation and results. In Proc. of AAAI-96, pages 134-140, Portland, Oregon, 1996.
11. J. C. Harsanyi.
Rational Behavior and Bargaining Equilibrium in Games and Social Situations. Cambridge University Press, 1977.
12. T. Sandholm and V. Lesser. Coalitions among computationally bounded agents. Artificial Intelligence, 94(1):99-137, 1997. Special issue on Economic Principles of Multiagent Systems.
13. T. Sandholm, K. Larson, M. Andersson, O. Shehory, and F. Tohme. Coalition structure generation with worst case guarantees. Artificial Intelligence, 1999.
14. O. Shehory and S. Kraus. Methods for task allocation via agent coalition formation. Artificial Intelligence, 101(1-2):165-200, 1998.
15. G. Vauvert and A. El Fallah Seghrouchni. Coalition formation among strongly autonomous and weakly rational agents. In Proceedings of MAAMAW'2001, Annecy, France, May 2-4, 2001.
16. G. Vauvert and A. El Fallah Seghrouchni. Coalition formation among egoistic agents. In Proceedings of MAMA'2000, Wollongong, Australia, December 11-13, 2000.
17. K. Arrow. The Origins of the Impossibility Theorem, chapter 1. Elsevier Science Publishers B.V., Amsterdam, 1991.

Optimal Reward Functions in Distributed Reinforcement Learning

David H. Wolpert and Kagan Tumer
NASA Ames Research Center, Mailstop 269-1, Moffett Field, CA 94035
{dhw,kagan}@ptolemy.arc.nasa.gov

We consider the problem of designing (perhaps massively) distributed collections of adaptive agents so as to optimize a world utility function that depends on the behavior of the entire collection. We consider this problem when each agent's individual behavior is cast as striving to maximize an associated payoff utility function. The central issue in such design problems is how to initialize/update the payoff utility functions so as to induce the best possible world utility. Traditional "team game" approaches simply assign to each agent the world utility as its payoff utility function. In previous work we used the "Collective Intelligence" framework to derive a better choice of payoff utility functions, one that results in world utility performance up to orders of magnitude superior to that ensuing from use of the team game utility. In this paper we extend these results using a novel mathematical framework. We review the derivation under that framework of the general class of payoff utility functions that both are easy for the individual agents to learn and that, if learned well, result in high world utility. We then demonstrate experimentally that using these new utility functions can result in significantly improved performance over that of previously investigated collective intelligence payoff utilities, over and above those previous utilities' superiority to the conventional team game utility.

1 Introduction

In this paper we are interested in Multi-Agent Systems (MAS's) [1,2,3,4] where there is a provided world utility function that rates the possible histories of the full system. At the same time, each agent runs a reinforcement learning (RL) algorithm [5,6,7] to try to maximize its associated private utility function. In such a system we are confronted with an inverse problem: how should we initialize/update the agents' private utility functions to ensure that, as the system unfolds, the agents do not "work at cross-purposes" and their collective behavior maximizes the provided world utility function? Intuitively, solving this inverse problem requires private utility functions that the agents can each learn well, but that also are "aligned" with the world utility.
In particular, such alignment is necessary to avoid economic phenomena like the Tragedy of the Commons (TOC) [8] or Braess' paradox [9]. This problem is related to work in many other fields, including computational economics [10], mechanism design [11], reinforcement learning [7], statistical mechanics [12], computational ecologies [13], (partially observable) Markov decision processes [14] and game theory [11]. However, none of these fields is both applicable to large, real-world problems and directly concerned with the general inverse problem rather than a very special instance of it. (In particular, the field of mechanism design is not generally applicable. A detailed discussion of related fields, involving hundreds of references, is available [15].)

It is worth emphasizing that some of the previous work that does consider the general inverse problem does so by employing MAS's in which each agent uses RL [16,17]. However, in those cases each agent generally receives the world utility function as its private utility function (i.e., implements a "team game" [18]). The shortcoming of such approaches, as expounded below and in previous work, is that they scale very poorly to large problems. (Intuitively, the difficulty is that each agent can have a hard time discerning the effect of its behavior on the world utility when the system is large.)

In previous work we modified these systems by using the Collective INtelligence (COIN) framework to derive the alternative "Wonderful Life Utility" (WLU) [15], a private utility that generically avoids the pitfalls of the team game private utility [9,15,19,20]. For example, in some of that work we used the WLU as the private utility for distributed control of network packet routing [19]. Conventional approaches to packet routing have each router run a shortest path algorithm (SPA), i.e., each router routes its packets in the way that it expects will get those packets to their destinations most quickly. Unlike with a COIN, with SPA-based routing the routers have no concern for the possible deleterious side-effects of their routing decisions on the global goal (e.g., they have no concern for whether they induce bottlenecks). We ran simulations demonstrating that a COIN-based routing system has substantially better throughput than does the best possible SPA-based system [19], even though that SPA-based system has information denied the COIN system. In related work we have shown that use of the WLU automatically avoids the infamous Braess' paradox, in which adding new links can actually decrease throughput, a situation that readily ensnares SPA's.

As another example, we considered the pared-down problem domain of a congestion game [21], in particular a more challenging variant of Arthur's El Farol bar attendance problem [22], sometimes also known as the "minority game" [12]. In this problem, agents have to determine which night of the week to attend a bar. The problem is set up so that if either too few people attend (boring evening) or too many people attend (crowded evening), the total enjoyment of the attendees drops. Our goal is to design the reward functions of the attendees so that the total enjoyment across all nights is maximized. In this previous work we showed that use of the WLU can result in performance orders of magnitude superior to that of team game utilities.

In this article we extend this previous work by investigating the impact of the choice of the single free parameter in the WLU (the "clamping parameter"), which we simply set to 0 in our previous work.
In particular, we employ some of the mathematics of COINs to determine the theoretically optimal value of the clamping parameter, and then present experimental tests to validate that choice of clamping parameter. In the next section we review the relevant concepts of COIN theory. Then we sketch how to use those concepts to derive the optimal clamping parameter. To facilitate comparison with previous work, we chose to conduct our experimental investigations of the performance with this optimal clamping parameter on variations of the bar problem. We present those variations in Section 3. Finally, we present the results of the experiments in Section 4. Those results corroborate the predicted improvement in performance when using our theoretically derived clamping parameter. This extends the superiority of the COIN-based approach over conventional team-game approaches even further than had been done previously.

2 Theory of COINs

In this section we summarize the part of the mathematics of COINs that is relevant to the study in this article. We consider the state of the system across a set of consecutive time steps, t ∈ {0, 1, ...}. Without loss of generality, all relevant characteristics of agent η at time t, including its internal parameters at that time as well as its externally visible actions, are encapsulated by a Euclidean vector ζ_{η,t}, the state of agent η at time t. ζ_t is the set of the states of all agents at t, and ζ is the system's worldline, i.e., the state of all agents across all time. World utility is G(ζ), and when η is an RL algorithm "striving to increase" its private utility, we write that utility as γ_η(ζ). (The mathematics can readily be generalized beyond such RL-based agents [15].) Here we restrict attention to utilities of the form Σ_t R_t(ζ_t) for reward functions R_t.

Definition 1: A system is factored if for each agent η individually,

  γ_η(ζ) ≥ γ_η(ζ′) ⇔ G(ζ) ≥ G(ζ′),

for all pairs ζ and ζ′ that differ only for node η.

For a factored system, when every agent's private utility is optimized (given the other agents' behavior), the world utility is at a critical point (e.g., a local maximum) [15]. In game-theoretic terms, optimal global behavior occurs when the agents are at a private-utility Nash equilibrium [11]. Accordingly, there can be no TOC for a factored system [15,19,20]. In addition, off of equilibrium, the private utilities of factored systems are "aligned" with the world utility.

Definition 2: The (t = 0) effect set of node η at ζ, S_η^eff(ζ), is the set of all components ζ_{η′,t′} for which the gradient ∇_{ζ_{η,0}} ζ_{η′,t′} ≠ 0. S_η^eff with no specification of ζ is defined as ∪_ζ S_η^eff(ζ). We will also find it useful to define s̄_η^eff as the set of all components that are not in S_η^eff. Intuitively, the t = 0 effect set of η is the set of all node-time pairs which, under the deterministic dynamics of the system, are affected by changes to η's t = 0 state.

Definition 3: Let σ be a set of agent-time pairs. CL_σ(ζ) is ζ modified by "clamping" the states corresponding to the elements of σ to some arbitrary pre-fixed vector K. Then the (effect set) Wonderful Life Utility for node η (at time 0) is

  WLU_η(ζ) = G(ζ) − G(CL_{S_η^eff}(ζ)),

where conventionally K = 0.

Note the crucial fact that to evaluate the WLU one does not need to know how to calculate the system's behavior under counter-factual starting conditions. All that is needed to evaluate WLU_η is the function G(·), the actual ζ, and S_η^eff (which can often be well-approximated even with little knowledge about the system).
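To make the clamping operation concrete, here is a minimal sketch (under an assumed flat encoding of the worldline and a made-up world utility; not the paper's code) of evaluating the effect-set WLU of Definition 3:

```python
import numpy as np

def wlu(G, zeta, effect_mask, K=0.0):
    # Effect-set Wonderful Life Utility (Definition 3): the world utility of
    # the actual worldline minus the world utility of the worldline with
    # every component in eta's effect set clamped to the fixed value K.
    # `zeta` is flattened to one array of components; `effect_mask` is a
    # boolean array marking eta's effect set. This encoding is an assumption.
    clamped = np.where(effect_mask, K, zeta)
    return G(zeta) - G(clamped)

# Toy world utility: a sum of congestion-style terms over components.
G = lambda z: float(np.sum(z * np.exp(-z / 3.0)))
zeta = np.array([2.0, 5.0, 1.0, 4.0])
mask = np.array([True, False, False, False])  # agent affects only component 0
print(wlu(G, zeta, mask))         # conventional clamp to 0
print(wlu(G, zeta, mask, K=1.0))  # clamp to another fixed "action"
```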
In previous work, we showed that the effect set WLU is factored [20]. As another example, if γ_η = G ∀η (a team game), then the system is also factored. However, for large systems where G depends sensitively on all components of the system, each agent may experience difficulty discerning the effects of its actions on G. As a consequence, each η may have difficulty achieving high γ_η in a team game. We can quantify this signal/noise effect by comparing the ramifications on γ_η(ζ) arising from changes to ζ_{η,0} with the ramifications arising from changes to ζ_{η̂,0}, where η̂ represents all nodes other than η. We call this quantification learnability [15]. A linear approximation to the learnability in the vicinity of ζ is the differential learnability λ_η(ζ):

  λ_η(ζ) = ‖∇_{ζ_{η,0}} γ_η(ζ)‖ / ‖∇_{ζ_{η̂,0}} γ_η(ζ)‖.   (1)

It can be proven that in many circumstances, especially in large problems, the WLU has much higher differential learnability than does the team game choice of private utilities [15]. (Intuitively, this is because the subtraction occurring in the WLU removes a lot of the noise.) The result is that convergence to optimal G with the WLU is much quicker (up to orders of magnitude so) than with a team game.

However, the equivalence class of utilities that are factored for a particular G is not restricted to the associated team game utility and the clamp-to-0 WLU. Indeed, one can consider solving for the utility in that equivalence class that maximizes differential learnability. An approximation to this calculation is to solve for the factored utility that minimizes the expected value of [λ_η]^{-2}, where the expectation is over the values ζ_{η,0}. A number of approximations have to be made to carry out this calculation [15]. The final result is that η should clamp to its empirical expected average action, where that average is over the elements in its training set [23]. Here, for simplicity, we do not actually make sure to clamp each η separately to its own average action, a process that would involve η modifying what it clamps to in an online manner. Rather, we clamp all agents to the same average action. We then made the guess that the typical probability distribution over actions is uniform. (Intuitively, we would expect such a choice to be more accurate at early times than at later times, in which agents have "specialized".)

3 The Bar Problem

We focus on the following six more general variants of the bar problem investigated in our earlier work [20]. There are N agents, each picking one out of seven actions every week. Each action corresponds to attending the bar on some particular set of l out of the seven nights of the current week, where l ∈ {1,2,3,4,5,6}.* At the end of the week the agents get their rewards and the process is repeated. For simplicity, we chose the attendance profiles of the potential actions so that when the actions are selected uniformly, the resultant attendance profile across all seven nights is also uniform. World utility is

  G(ζ) = Σ_t R_G(ζ,t),  where  R_G(ζ,t) = Σ_{k=1}^{7} φ(x_k(ζ,t)),

x_k(ζ,t) is the total attendance on night k of week t, φ(y) = y·exp(−y/c), and c is a real-valued parameter. (To keep the "congestion" level constant, for l going from 1 to 6 we set c = 3, 6, 8, 10, 12, 15, respectively.) Our choice of φ(·) means that when either too few or too many agents attend some night in some week, the world reward R_G is low.

* In order to keep the learning difficulties faced by the agents similar for the various choices of l, the agents always have seven actions from which to choose. Each such action gets mapped to an "attendance" profile; e.g., for l = 2, so that each agent must choose two nights, action one maps to attending on days one and two, action two maps to attending on days two and three, etc.
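A short sketch of this world reward, under an assumed encoding of a week's attendance as a 7-vector:

```python
import numpy as np

def phi(y, c):
    # Per-night enjoyment phi(y) = y * exp(-y / c): low when a night is
    # nearly empty or overcrowded, peaked at y = c.
    return y * np.exp(-y / c)

def world_reward(attendance, c=3.0):
    # R_G(zeta, t) = sum over the 7 nights of phi(x_k), where x_k is the
    # total attendance on night k of the week. The vector encoding is an
    # assumption made for illustration, not the authors' code.
    return float(np.sum(phi(np.asarray(attendance, dtype=float), c)))

# A uniform week with 60 agents and l = 1 puts about 60/7 agents per night.
print(world_reward([60 / 7] * 7))
```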
Since we are concentrating on the utilities rather than on the RL algorithms that use them, we use (very) simple RL algorithms. Each agent η has a 7-dimensional vector giving its estimates of the rewards it would receive for taking each of the possible actions. At the beginning of each week, each η picks its action randomly, using a Boltzmann distribution over the seven components of η's estimated rewards vector. For simplicity, the temperature parameter of the Boltzmann distribution does not decay in time. However, to reflect the fact that each agent operates in a non-stationary environment, the reward estimates are formed using exponentially aged data: in any week t, the estimate η makes of the reward for attending night i is a weighted average of all the rewards it has previously received when it attended that night, with the weights given by an exponential function of how long ago each such reward was received. To form the agents' initial training set, we had an initial period in which all actions by all agents were chosen uniformly randomly, before the learning algorithms were used to choose the actions.

4 Experimental Results

We investigate three choices of the clamping vector K: 0, 1 = (1,1,1,1,1,1,1), and the "average" action a, each of whose components equals l/7, where l ∈ {1,2,3,4,5,6} depending on the problem. The associated WLU's are distinguished with a superscript. In the experiments reported here all agents have the same reward function, so from now on we drop the agent subscript from the private utilities. Writing them out for l = 1, the three WLU reward functions are:

  R_{WL^0}(ζ,t) = R_G(ζ,t) − R_G(CL^0(ζ,t)) = φ_{d_η}(x_{d_η}(ζ,t)) − φ_{d_η}(x_{d_η}(ζ,t) − 1),

  R_{WL^1}(ζ,t) = R_G(ζ,t) − R_G(CL^1(ζ,t)) = Σ_{d≠d_η} [φ_d(x_d(ζ,t)) − φ_d(x_d(ζ,t) + 1)],

  R_{WL^a}(ζ,t) = R_G(ζ,t) − R_G(CL^a(ζ,t)) = Σ_{d≠d_η} [φ_d(x_d(ζ,t)) − φ_d(x_d(ζ,t) + a_d)] + φ_{d_η}(x_{d_η}(ζ,t)) − φ_{d_η}(x_{d_η}(ζ,t) − 1 + a_d),

where d_η is the night picked by η and a_d = 1/7. The team game reward function is simply R_G. Note that to evaluate R_{WL^0} each agent only needs to know the total attendance on the night it attended. In contrast, R_G and R_{WL^a} require centralized communication concerning all 7 nights, and R_{WL^1} requires communication concerning 6 nights. Finally, note that when viewed in attendance space rather than action space, CL^a clamps to the attendance vector with components V_i = Σ_{d=1}^{7} u_{d,i}/7, where u_{d,i} is the i-th component (0 or 1) of the d-th action vector. So, for example, for l = 1, CL^a clamps to V_i = Σ_{d=1}^{7} δ_{d,i}/7 = 1/7, where δ_{d,i} is the Kronecker delta.

In the first experiment each agent had to select one night to attend the bar (l = 1). In this case K = 0 is equivalent to the agent "staying at home," while K = 1 corresponds to the agent attending every night. Finally, K = a = 1/7 is equivalent to the agent attending partially on all nights, in proportions equivalent to the overall attendance profile of all agents across the initial training period. (Note that none of these "actions" is actually available to the agents. They simply use these fictional actions to compute their utilities, as described in Section 2.)

Figure 1: Reward function comparison when agents attend one night (WL^a, WL^1, WL^0 and G).

Figure 1 graphs world reward against time, averaged over 100 runs, for 60 agents and c = 3. (Throughout this paper, error bars are too small to depict.) The two straight lines correspond to the optimal performance and the "baseline" performance given by uniform occupancies across all nights. Systems using WL^a and WL^0 rapidly converged to optimal and to quite good performance, respectively. This indicates that for the bar problem the "mild assumptions" mentioned above hold, and that the approximations in the derivation of the optimal clamping parameter are valid.
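For the l = 1 case the three reward signals written out above reduce to simple attendance arithmetic. An illustrative transcription (not the authors' code; the sample attendance vector is made up):

```python
import numpy as np

def phi(y, c=3.0):
    return y * np.exp(-y / c)

def wl_rewards(attendance, night, a=1.0 / 7.0, c=3.0):
    # The three WLU reward signals above, for l = 1. `attendance` holds
    # x_1..x_7 for the week; `night` is d_eta, the night the agent attended.
    x = np.asarray(attendance, dtype=float)
    d = night
    others = [k for k in range(7) if k != d]
    # Clamp to 0: only the agent's own night changes (x_d -> x_d - 1).
    wl0 = phi(x[d], c) - phi(x[d] - 1, c)
    # Clamp to 1: the clamped agent "attends" the six other nights too.
    wl1 = sum(phi(x[k], c) - phi(x[k] + 1, c) for k in others)
    # Clamp to the average action a = 1/7 on every night.
    wla = (sum(phi(x[k], c) - phi(x[k] + a, c) for k in others)
           + phi(x[d], c) - phi(x[d] - 1 + a, c))
    return wl0, wl1, wla

print(wl_rewards([9, 8, 10, 7, 9, 8, 9], night=2))
```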
Figure 2 shows how the t = 500 performance scales with N for each of the reward signals. For comparison purposes the performance is normalized: for each utility U we plot (R_U − R_base)/(R_opt − R_base), where R_opt and R_base are the optimal performance and a canonical baseline performance given by uniform attendance across all nights, respectively. Systems using R_G perform adequately when N is low. As N increases, however, it becomes increasingly difficult for the agents to extract the information they need from R_G. Because of their superior learnability, systems using the WL rewards overcome this signal-to-noise problem to a great extent. Because the WL rewards are based on the difference between the actual state and the state where one agent is clamped, they are much less affected by the total number of agents. However, the action vector to which the agents are clamped also affects the scaling properties.

Figure 2: Scaling properties of the different reward functions.

Figure 3 shows the normalized world reward obtained for the different private utilities as a function of l (i.e., when agents attend the bar on multiple nights in one week). R_{WL^a} performs well for all problems. R_{WL^1}, on the other hand, performs poorly when agents attend only a few nights, but reaches the performance of R_{WL^a} when agents need to select six nights, a situation where the two clamping vectors are very similar (1 and (6/7)·1, respectively). R_{WL^0} shows a slight drop in performance when the number of nights to attend increases, while R_G shows a much more pronounced drop. Furthermore, in agreement with our previous results [20], despite being factored, the poor signal-to-noise ratio of R_G results in poor performance with it for all problems. (Temperatures varied between .01 and .02 for the three WL rewards, and between .1 and .2 for the G reward, which provided the respective best performances for each.) These results confirm our theoretical prediction of which private utility converges fastest to the world utility maximum.

Figure 3: Behavior of the different reward functions with respect to the number of nights to attend.

5 Conclusion

In this article we considered how to design large multi-agent systems to meet a pre-specified goal when each agent in the system uses reinforcement learning to choose its actions. We cast this problem as how to initialize/update the individual agents' private utility functions so that their collective behavior optimizes a pre-specified world utility function. The mathematics of COINs is specifically concerned with this problem. In previous experiments we showed that systems based on that mathematics far outperformed conventional "team game" systems, in which each agent has the world utility as its private utility function.
Moreover, the gain in performance grows with the size of the system, typically reaching orders of magnitude for systems consisting of hundreds of agents.

In those previous experiments the COIN-based private utilities had a free parameter, which we arbitrarily set to 0. However, as synopsized in this paper, it turns out that a series of approximations in the COIN mathematics allows one to derive an optimal value for that parameter. Here we have repeated some of our previous computer experiments, only using this new value for the parameter. These experiments confirm that with this new value the system converges to significantly superior world utility values, with less sensitivity to the parameters of the agents' RL algorithms. This makes the arguments for using a COIN-based system rather than a team-game system even stronger. Future work involves improving the approximations needed to calculate the optimal private utility parameter value. In particular, given that that value varies in time, we intend to investigate calculating it in an on-line manner.

References

1. C. Boutilier, Y. Shoham, and M. P. Wellman. Editorial: Economic principles of multi-agent systems. Artificial Intelligence Journal, 94:1-6, 1997.
2. J. M. Bradshaw, editor. Software Agents. MIT Press, 1997.
3. N. R. Jennings, K. Sycara, and M. Wooldridge. A roadmap of agent research and development. Autonomous Agents and Multi-Agent Systems, 1:7-38, 1998.
4. K. Sycara. Multiagent systems. AI Magazine, 19(2):79-92, 1998.
5. J. Hu and M. P. Wellman. Multiagent reinforcement learning: Theoretical framework and an algorithm. In Proceedings of the Fifteenth International Conference on Machine Learning, pages 242-250, June 1998.
6. L. P. Kaelbling, M. L. Littman, and A. W. Moore. Reinforcement learning: A survey. Journal of Artificial Intelligence Research, 4:237-285, 1996.
7. R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
8. G. Hardin. The tragedy of the commons. Science, 162:1243-1248, 1968.
9. K. Tumer and D. H. Wolpert. Collective intelligence and Braess' paradox. In Proceedings of the Seventeenth National Conference on Artificial Intelligence, pages 104-109, Austin, TX, 2000.
10. T. Sandholm, K. Larson, M. Andersson, O. Shehory, and F. Tohme. Anytime coalition structure generation with worst case guarantees. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 46-53, 1998.
11. D. Fudenberg and J. Tirole. Game Theory. MIT Press, Cambridge, MA, 1991.
12. D. Challet and Y. C. Zhang. On the minority game: Analytical and numerical studies. Physica A, 256:514, 1998.
13. B. A. Huberman and T. Hogg. The behavior of computational ecologies. In The Ecology of Computation, pages 77-115. North-Holland, 1988.
14. M. L. Littman. Markov games as a framework for multi-agent reinforcement learning. In Proceedings of the 11th International Conference on Machine Learning, pages 157-163, 1994.
15. D. H. Wolpert and K. Tumer. An Introduction to Collective Intelligence. Technical Report NASA-ARC-IC-99-63, NASA Ames Research Center, 1999. URL: http://ic.arc.nasa.gov/ic/projects/coin_pubs.html. To appear in Handbook of Agent Technology, Ed. J. M. Bradshaw, AAAI/MIT Press.
16. C. Claus and C. Boutilier. The dynamics of reinforcement learning in cooperative multiagent systems. In Proceedings of the Fifteenth National Conference on Artificial Intelligence, pages 746-752, Madison, WI, June 1998.
17. T. Sandholm and R. Crites.
Multiagent reinforcement learning in the iterated prisoner's dilemma. Biosystems, 37:147-166, 1995.
18. R. H. Crites and A. G. Barto. Improving elevator performance using reinforcement learning. In D. S. Touretzky, M. C. Mozer, and M. E. Hasselmo, editors, Advances in Neural Information Processing Systems 8, pages 1017-1023. MIT Press, 1996.
19. D. H. Wolpert, K. Tumer, and J. Frank. Using collective intelligence to route internet traffic. In Advances in Neural Information Processing Systems 11, pages 952-958. MIT Press, 1999.
20. D. H. Wolpert, K. Wheeler, and K. Tumer. Collective intelligence for control of distributed dynamical systems. Europhysics Letters, 49(6), March 2000.
21. D. H. Wolpert, K. Wheeler, and K. Tumer. General principles of learning-based multi-agent systems. In Proceedings of the Third International Conference on Autonomous Agents, pages 77-83, 1999.
22. W. B. Arthur. Complexity in economic theory: Inductive reasoning and bounded rationality. The American Economic Review, 84(2):406-411, 1994.
23. D. H. Wolpert. The mathematics of collective intelligence. Pre-print, 2001.

POLYGONAL APPROXIMATION OF PLANAR DIGITAL CURVES USING ANT SYSTEM

PENG-YENG YIN
Department of Information Management, Ming Chuan University, Taoyuan 333, Taiwan
E-mail: pyyin@mcu.edu.tw

This paper presents a new polygonal approximation method using the ant system. The problem is represented by a directed graph such that the objective of the original problem becomes finding the shortest cycle that satisfies the problem constraints. A number of artificial ants (agents) are distributed on the graph and communicate with one another through the pheromone trails, a form of long-term memory recording the good tours previously constructed. The important properties of the proposed method are thoroughly investigated. The performance of the proposed method compared to those of the genetic-based and the tabu search-based approaches is very promising.

1 Introduction

Planar digital curve approximation is a very important topic, because digital curves often appear as region boundaries and object contours in an image. It is desirable to approximate a digital curve by its corner points to reduce the memory storage and the processing time of subsequent procedures. The polygonal approximation technique is one approach which can accomplish this and has attracted the attention of many researchers. The idea is to approximate the digital curve by an optimal polygon with the minimal number of line segments such that the approximation error between the digital curve and the polygon is no more than a specified tolerance. Most existing methods provide locally optimal approximation results due to limited computational time. They can be divided into three groups: (1) sequential approaches [1-2]; (2) split-and-merge approaches [3-4]; and (3) dominant point detection approaches [5-6]. These approaches are simple and fast, but their approximation results can be far from optimal. However, an exhaustive search for the optimal polygon results in exponential complexity. Approaches based on genetic algorithms (GA) [7, 8] and tabu search (TS) [9] have been proposed to solve the polygonal approximation problem and obtain much better approximation results than the locally optimal methods do. In this paper, we develop a more effective and efficient global search algorithm based on a heuristic called the ant system (AS) [10, 11].
To the best of our knowledge, our work is the first attempt to apply the AS to the fields of image processing and computer vision. The properties of the proposed algorithm have been thoroughly analyzed, and the approximation results are encouraging compared to those of the works using GA and TS.

2 The Proposed Method

2.1 The Ant System

The ant system (AS) algorithm was proposed by Dorigo [10, 11] in 1992 and has been used to solve many complex combinatorial optimization problems. The AS is inspired by research on real ant behavior. Ethologists have found that ants can construct the shortest path from their colony to a feeding source and back through the use of pheromone trails. An ant leaves some quantity of pheromone on the ground and marks the path by a trail of this substance. The next ant senses the pheromone laid on different paths and chooses one with a probability proportional to the amount of pheromone on it. The ant then follows the chosen path and leaves its own pheromone. This is an autocatalytic (positive feedback) process which favors the paths along which more ants previously traveled.

The general principles of the AS are as follows. First, the underlying problem should be represented as a graph. The feasible solutions of the problem usually correspond to paths on the graph satisfying some problem-specific restrictions. A number of ants are placed on randomly chosen nodes. Each of the distributed ants then walks through the graph and constructs its own path, travelling from node to node by selecting edges according to the node transition rule. The selection probability of the next edge is exponentially proportional to two factors: the pheromone quantity laid on it and the greedy merit of choosing it. After all of the ants have completed their tours (which is called a cycle), the intensity of pheromone on each edge is updated by the pheromone updating rule, such that the edges belonging to better tours receive more pheromone. Then the ants are randomly distributed again and the cycle is reiterated. The stopping criterion of the AS can be a maximal number of running cycles, a CPU time limit, or a maximal number of cycles between two improvements of the global best solution.

2.2 Graph Representation of Polygonal Approximation

The idea of polygonal approximation is to approximate a digital curve by an optimal polygon with the minimal number of vertices such that the approximation error between the digital curve and the polygon is less than a prespecified threshold ε (we shall refer to this condition as the ε-bound constraint). To apply the AS, the problem should be represented as a graph G = <N, E>, where N is the set of nodes and E the set of edges. For the polygonal approximation problem, each point on the digital curve is naturally represented as a node of the graph. The ideal edge set E* would be one with the property that any closed tour (a path which originates and ends at the same node) on the graph represents a feasible solution. However, it is very hard to generate the ideal edge set, and in most cases the ideal edge set does not exist. A more practical way is to generate a pseudo-ideal edge set E such that E* ⊆ E. For those tours which do not satisfy the ε-bound constraint, we can decrease the intensity of pheromone through a penalty function. Now, we define some notations as follows.
Let the tour completed by the k-th ant be denoted tour_k, and let the number of edges on tour_k be |tour_k|. Since a completed tour may violate the ε-bound constraint, we should compute the approximation error yielded by every tour. We use Err(tour_k) to denote the approximation error between the digital curve and the approximating polygon corresponding to tour_k.

2.3 Initial Ant Distribution

Here we prefer to place the ants on the nodes which have a better probability of finding the shortest tour. To do so, we design a special data structure called the selection table of the starting node, denoted T_i, i = 1, 2, ..., n, where n is the number of nodes. Initially, each entry of T is equal to 1. The probability of node i being chosen as a starting node is the entry value T_i normalized by the sum of all the entry values, i.e., Select_i = T_i / Σ_{j=1}^{n} T_j. At the beginning of the first cycle, every node thus has an equal probability of being chosen as the starting node. When a cycle ends, every ant has completed a tour. Let the set of ants which chose the i-th node as the starting node in the previous cycle be Ant_Start_i, and let its size be |Ant_Start_i|. At the end of each cycle, we update the entries of the selection table by

  T_i = r·Select_i + (1 − r)/|Ant_Start_i| · Σ_{k ∈ Ant_Start_i} 1/|tour_k|,   (1)

where r ∈ (0, 1) is a parameter which controls the relative importance of the previous experiences versus the current experience.

2.4 Node Transition Rule

There are two dominant factors in computing the selection probability of the edge from node i to the next node j: the pheromone intensity τ_ij and the greediness value η_ij. τ_ij is gradually changed at the end of each cycle using the pheromone updating rule. η_ij is determined by a greedy heuristic which encourages the ants to walk to the farthest accessible node; this can be accomplished by setting η_ij to the number of curve points on the arc spanned by the chosen edge. We now define the transition probability from node i to node j as

  p_ij = (τ_ij)^α (η_ij)^β / Σ_{v ∈ A_i} (τ_iv)^α (η_iv)^β,   (2)

where A_i is the set of nodes accessible from node i, and α and β are control parameters regulating the relative importance of exploitation and exploration in the search.

2.5 Pheromone Updating Rule

The intensity of the pheromone trail is updated at the end of each cycle according to the quality of the solutions corresponding to the completed tours. The edges belonging to better tours receive more pheromone in this cycle, while the edges of the tours which result in infeasible solutions are penalized by decreasing their pheromone. We define the pheromone updating rule as follows. At the end of each cycle, the pheromone intensity on the edge from node i to node j is updated by

  τ_ij = ρ·τ_ij + max(Σ_k Δτ_ij^k, 0),   (3)

where ρ ∈ (0, 1) is the persistence rate of the pheromone trails, and

  Δτ_ij^k = 1/|tour_k|   if the edge (i, j) belongs to tour_k and Err(tour_k) ≤ ε,
  Δτ_ij^k = −Err(tour_k) if the edge (i, j) belongs to tour_k and Err(tour_k) > ε,
  Δτ_ij^k = 0            otherwise.   (4)

Therefore, larger quantities of pheromone are laid on the edges along which most passing ants construct short feasible tours. In the worst case, an edge receives no positive reward, either because no ants walked through it or because most of the passing ants constructed infeasible tours.
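A compact sketch of the node transition rule (Eq. (2)) and the pheromone updating rule (Eqs. (3)-(4)); the dictionary-based graph encoding is an assumption made for brevity, not the paper's implementation:

```python
import random

def transition_probabilities(i, candidates, tau, eta, alpha=1.0, beta=5.0):
    # Eq. (2): p_ij proportional to tau_ij^alpha * eta_ij^beta over the
    # allowed next nodes. `tau` and `eta` are dicts keyed by edges (i, j).
    weights = {j: (tau[(i, j)] ** alpha) * (eta[(i, j)] ** beta)
               for j in candidates}
    total = sum(weights.values())
    return {j: w / total for j, w in weights.items()}

def choose_next(i, candidates, tau, eta, alpha=1.0, beta=5.0):
    probs = transition_probabilities(i, candidates, tau, eta, alpha, beta)
    return random.choices(list(probs), weights=list(probs.values()))[0]

def update_pheromone(tau, tours, errors, eps, rho=0.1):
    # Eqs. (3)-(4): evaporate by rho, reward edges of short feasible tours
    # with 1/|tour| and penalize edges of tours violating the eps-bound;
    # the net deposit on each edge is floored at zero. Tours are node lists
    # whose edges are assumed to appear as keys of `tau`.
    deposit = {e: 0.0 for e in tau}
    for tour, err in zip(tours, errors):
        for e in zip(tour, tour[1:] + tour[:1]):  # edges of the closed tour
            deposit[e] += 1.0 / len(tour) if err <= eps else -err
    for e in tau:
        tau[e] = rho * tau[e] + max(deposit[e], 0.0)
```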
2.6 The Proposed Algorithm and the Best Parameter Setting

Now we present the details of the proposed algorithm (which we denote AS/Poly) for solving the polygonal approximation problem.

1. Initialize. Construct the directed graph as depicted in Subsection 2.2. Set τ_ij = 1/n for every edge (i, j). Set T_i = 1 for every entry of the selection table of the starting node. Set NC = 1. Set tour_global_best = x₁x₂···x_n x₁.
2. For every ant: select the starting node according to the selection probability; then repeatedly select the next node according to the node transition rule using Eq. (2), until a closed tour is completed. (The selection of the next node cannot pass over the starting node.)
3. Find the shortest feasible tour, say tour_current_best, among the current m tours.
4. If |tour_current_best| < |tour_global_best| then set tour_global_best = tour_current_best.
5. For every edge, update the pheromone intensity according to the pheromone updating rule (see Eqs. (3) and (4)).
6. Update every entry of the selection table using Eq. (1).
7. If NC = MAX_CYCLE, output tour_global_best and stop; otherwise, set NC = NC + 1 and go to step 2.

To obtain the best performance of the AS/Poly, various parameter values were tried. The benchmark curves used in the literature [2, 4, 5, 7, 8, 9] are shown in Figs. 1(a)-1(b). The best performance of the AS/Poly was found with α = 1, β = 5, r = 0.4, ρ = 0.1, and m = 20 for both test curves.

Fig. 1 The test benchmark curves.

3 Experimental Results and Discussions

In this section we discuss further important properties of the AS/Poly through empirical studies. The performance of various strategies of the AS/Poly is compared to those of two other heuristics: genetic algorithms (GA) and tabu search (TS).

3.1 Properties of the Proposed Algorithm

3.1.1 AS/Poly vs. Random Walk

We implemented a random walk search method (referred to as random walk for simplicity) in which the distributed ants have no communication and no previous experience, and just choose the next node randomly. Fig. 2(a) shows the global shortest tour length obtained at each cycle for both the AS/Poly and the random walk. In the beginning cycles, the AS/Poly, which has not yet accumulated enough feedback experience, performs similarly to the random walk. After the 4th cycle, the global shortest tour length found by the AS/Poly keeps decreasing, while the one found by the random walk remains almost unchanged. Consequently, the mechanisms facilitating inter-ant communication and the persistence of previous experience play significant roles in the search paradigm.

Fig. 2 The important properties of the AS/Poly.

3.1.2 Node Branch Entropy

To examine the convergence behavior of the AS/Poly, we can measure the average node branch entropy. For the i-th node of the graph, the node branch entropy is computed from the transition probabilities on all of the directed edges which originate from the i-th node. That is,

  E_i = −Σ_j p_ij log p_ij.   (5)

The node transition rule becomes more deterministic as the node branch entropy approaches 0. The average node branch entropy is then computed over all the node branch entropies, i.e., E = Σ_{i=1}^{n} E_i / n. Fig. 2(b) shows the value of E at each cycle. Initially, the value of E decreases gradually, since little experience has been accumulated and the AS/Poly tries to explore new branches to accumulate more. In the middle cycles (between the 40th and the 100th cycle), the value of E drops drastically, because some edges become more preferable and take on higher transition probabilities. In the later cycles, the value of E again decreases only gradually, since the dominant edges stand out and the transition probabilities become stable. Hence, the maximal number of running cycles, which decides the stopping criterion of the AS/Poly, can be set to a value falling in the stable part.
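The entropy monitor of Eq. (5) is easy to sketch; the probability layout used here is an assumed structure, chosen for illustration:

```python
import math

def node_branch_entropy(probs):
    # E_i = -sum_j p_ij * log(p_ij) over the edges leaving node i (Eq. (5));
    # a value of 0 means the choice at node i has become deterministic.
    return -sum(p * math.log(p) for p in probs if p > 0.0)

def average_branch_entropy(graph_probs):
    # Average E over all n nodes: when this stabilizes, further cycles
    # change little. `graph_probs` maps each node to the list of its
    # outgoing transition probabilities.
    return (sum(node_branch_entropy(ps) for ps in graph_probs.values())
            / len(graph_probs))

# Early cycles: nearly uniform choices, high entropy; later: peaked, low.
print(average_branch_entropy({0: [0.25, 0.25, 0.25, 0.25],
                              1: [0.97, 0.01, 0.02]}))
```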
3.2 Searching Strategy

In this subsection we present two advanced searching strategies for the AS/Poly.

3.2.1 Elitist Strategy

The elitist strategy, which comes from GA [12], was also suggested for use in the AS by Dorigo [11]. The basic idea is that the pheromone intensity on the best tour obtained so far (denoted tour_global_best) is further reinforced, so that tour_global_best has a higher probability of exchanging edges with other tours. We have implemented the elitist strategy with the AS/Poly.

3.2.2 Hybrid Strategy

We suggest a hybrid strategy which embeds a local search method between the iterations of the AS/Poly. The local search method used here is iterative and, to save computational time, it is applied to tour_global_best only. First, one node is randomly deleted from tour_global_best. Then each of the remaining nodes on tour_global_best is moved along every position between its two neighboring nodes and finally stops at the position which gives the minimal approximation error. An iteration is completed when all of the nodes on tour_global_best have been processed. The next iteration is activated if any node has been moved to a new position, i.e., if there was an error reduction in the previous iteration. The iteration is repeated at most five times to save computation.

3.2.3 Comparative Performances

Experimentally, we found that both the elitist strategy and the hybrid strategy outperform the plain AS/Poly, and that the hybrid strategy has the best performance. As will be seen in the next subsection, the two advanced strategies cost negligible extra CPU time compared to the AS/Poly, and they bring a more significant approximation improvement when a smaller ε-bound is specified.

3.3 Comparison with GA and TS

The experiments were conducted on a PC with a Pentium 120 MHz CPU. The numerical results were obtained by averaging over 10 independent runs for each of the test methods, with various values of ε specified for each of the test curves. Table 1 lists the average number of line segments (d) of the final approximating polygon and the average computational time (t) of each comparative approach.

Table 1 The comparative performances of the GA-based approach, the TS-based approach, the AS/Poly approach, and the elitist and hybrid strategies of the AS/Poly.
Curve               ε    GA-based     TS-based     AS/Poly      Elitist      Hybrid
                         d     t      d     t      d     t      d     t      d     t
Leaf (n=120)       150  15.6  5.71   10.6  0.93   11.2  0.68   11.6  0.69   11.0  0.87
                   100  16.3  4.45   13.7  0.92   13.0  0.70   13.0  0.68   12.6  0.84
                    90  17.3  5.28   14.6  0.89   13.2  0.70   13.0  0.71   12.8  0.89
                    30  20.5  4.62   20.1  0.90   17.2  0.71   17.0  0.72   16.6  0.90
                    20  23.1  5.65   21.9  0.90   19.8  0.72   19.0  0.72   18.8  0.90
Semicircle (n=102)  60  13.2  4.56   11.0  0.87   10.0  0.59   10.0  0.59   10.0  0.78
                    30  13.9  4.80   13.6  0.79   12.6  0.59   12.4  0.57   12.0  0.75
                    25  16.8  4.29   14.9  0.78   13.4  0.59   13.0  0.61   13.0  0.74
                    20  19.2  4.67   16.2  0.78   16.4  0.60   16.0  0.62   15.8  0.73
                    15  23.0  4.44   18.3  0.76   18.0  0.61   17.4  0.63   16.8  0.73

It can be seen that, for both assessment factors, the proposed AS/Poly and its variations have the best performance, the TS-based approach ranks in the middle, and the GA-based approach is the worst. It is also observed that the elitist strategy and the hybrid strategy achieve a more prominent improvement over the AS/Poly in reducing the number of approximating line segments as the value of ε decreases. The average CPU time of the elitist strategy is similar to that of the AS/Poly, because only a few computations are needed to update the pheromone trail of tour_global_best. The extra CPU time of the hybrid strategy is also negligible if the user prefers a better approximation result with a higher compression ratio.

Figs. 3(a)-3(e) show the final approximating polygon and the corresponding number of approximating line segments for the leaf-shaped curve with the ε-bound set to 20 for each of the test methods; Figs. 3(f)-3(j) show those for the curve with four semicircles with the ε-bound equal to 15. It can be seen that the proposed algorithms produce the smallest number of approximating line segments for all test curves.

Fig. 3 The approximating polygon and the number of approximating line segments using the different test approaches: (a) GA-based (d = 23), (b) TS-based (d = 21), (c) AS/Poly (d = 20), (d) Elitist (d = 19), (e) Hybrid (d = 18).

4 Summary

Polygonal approximation of digital curves is very important, since it not only reduces memory storage and computational time but also supports feature analysis of the digital curves. Most existing approaches are local search methods and can be classified into three classes: sequential approaches, split-and-merge approaches, and dominant point detection approaches. Although they are computationally fast, their approximation results may be far from the global optimum. In this paper, we have proposed a new polygonal approximation method using a global search heuristic called the ant system (AS). The principal components of the AS, namely the graph representation, the initial ant distribution, the node transition rule, and the pheromone updating rule, have been investigated and adapted to the underlying problem. Some important properties of the proposed method were examined through empirical studies. Inspired by research on genetic algorithms, we have also proposed the elitist strategy and the hybrid strategy for our method. The performance of the proposed methods was compared to that of genetic-based and tabu search-based methods; the numerical results are very encouraging.

References

[1] J. Sklansky, V. Gonzalez, "Fast polygonal approximation of digitized curves", Pattern Recognition 12 (1980), pp. 327-331.
[2] B. K. Ray, K. S. Ray, "Determination of optimal polygon from digital curve using L1 norm", Pattern Recognition 26 (1993), pp. 505-509.
[3] U. Ramer, "An iterative procedure for the polygonal approximation of plane curves", Computer Graphics and Image Processing 1 (1972), pp. 244-256.
[4] B. K. Ray, K. S. Ray, "A new split-and-merge technique for polygonal approximation of chain coded curves", Pattern Recognition Lett. 16 (1995), pp. 161-169.
[5] C. H. Teh, R. T. Chin, "On the detection of dominant points on digital curves", IEEE Trans. Pattern Anal. Machine Intell. 11 (1989), pp. 859-872.
[6] W. Y. Wu, M. J. Wang, "Detecting the dominant points by the curvature-based polygonal approximation", CVGIP: Graphical Models and Image Processing 55 (1993), pp. 79-88.
[7] P. Y. Yin, "Genetic algorithms for polygonal approximation of digital curves", Internat. J. Pattern Recognition Artificial Intelligence 13 (1999), pp. 1-22.
[8] S. C. Huang, Y. N. Sun, "Polygonal approximation using genetic algorithms", Pattern Recognition 32 (1999), pp. 1409-1420.
[9] P. Y. Yin, "A tabu search approach to the polygonal approximation of digital curves", Internat. J. Pattern Recognition Artificial Intelligence 14 (2000), pp. 243-255.
[10] M. Dorigo, Optimization, Learning and Natural Algorithms, Ph.D. Thesis, Dip. Elettronica e Informazione, Politecnico di Milano, Italy (1992).
[11] M. Dorigo, V. Maniezzo, A. Colorni, "Ant system: optimization by a colony of cooperating agents", IEEE Trans. Syst., Man, Cybern. - Part B 26 (1996), pp. 29-41.
[12] D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.

A BIOLOGICAL VIEW ON INFORMATION ECOSYSTEMS

BENGT CARLSSON AND PAUL DAVIDSSON
Department of Software Engineering and Computer Science, Blekinge Institute of Technology, 371 25 Ronneby, Sweden
E-mail: bengt.carlsson@bth.se, paul.davidsson@bth.se

We study the dynamics of information ecosystems in which there are antagonistic agents or groups of antagonistic agents. In particular, we focus on systems that consist of exploiter agents and agents being exploited. When comparing information ecosystems with biological ecosystems, it becomes clear that both types of systems seem to support robust solutions that are hard for a single agent to violate. In the analysis of information ecosystems, it is important to take into consideration that agents may have a Machiavellian intelligence, i.e., that they take the self-interest of other agents into consideration. We conclude that in the interaction between antagonistic agents within information systems, arms race is a major force. A positive result of this is a better preparedness for innocent agents against the vigilant agents. Some examples are given to show how modelling information ecosystems in this way can explain the origin of more robust systems when antagonistic agents are around.

1 Introduction

The emerging global information infrastructure may be regarded as a universal information ecosystem. An information ecosystem is populated by infohabitants, i.e., (semi-)autonomous software entities typically acting on behalf of humans. These infohabitants, or agents, may or may not have conflicting interests. In this paper we compare the dynamic process resulting from the interaction between agents with conflicting interests in information ecosystems to the processes within biological ecosystems. Biological ecosystems are basically governed by an individually centered mechanism called natural selection [9, 10]. Normally, fitness expresses the rate of success of individuals within an ecosystem. If the antagonistic agents of an information ecosystem, acting in their own interests, are modeled as part of an independent biological system, this represents a worst-case scenario, because no cooperation can be taken for granted. As a result, the robustness of an information ecosystem may be explained from a truly distributed point of view. In section 2 we examine the interaction between agents and between agents and the humans behind them. Section 3 provides some Internet-based examples of antagonistic interactions and is followed by a discussion of the dynamics of information ecosystems compared to biological systems. Finally, in section 4 some conclusions are drawn.
Fig. 1. Part of an antagonistic information ecosystem: an exploiter and a user (antagonism between them) each give instructions to, and receive feedback from, their own agent, and the exploiter agent and the user agent interact.

2 The dynamics of antagonistic information ecosystems

Humans have the ability to represent knowledge outside the brain in the form of mind-tools [2, 3]. Computers, "intelligent" systems and agent technology within the global network may all be regarded as mind-tools, controlled by independent and selfish humans. In his book "The Prince", Machiavelli wrote about how to conquer and preserve authority: the impression of being fair and honest may, if the preservation of authority requires it, be followed by tricks, lies and violence. Humans are presumed to have such a Machiavellian intelligence, bringing out self-interest at the expense of others, mainly because we are part of the biological system. Thus, if the intentions of an agent involve some kind of conflicting goals, we should expect Machiavellian intelligence to be present.

We will here focus on a typical scenario of such an ecosystem in which there are one (or more) exploiter agent(s) and one (or more) user agent(s) being exploited. In Fig. 1 we see an example with one exploiter and one user. The goal of the human exploiter is to make a profit from the agent interaction. Besides giving the initial instructions to the exploiter agent, the exploiter most likely has to instruct the exploiter agent continually, because of the limited domain knowledge of a software agent compared to a human being. The human user and his agent will perform the same kind of reasoning when trying to avert the exploiter agent. The situation ends up in an arms race in which the second agent counters the improvement of the first agent by having its human owner give it improved instructions. The long-term outcome of a continuing arms race is an improved response to the unfriendly actions already performed by the opponent, probably reducing the number of obvious new exploiting behaviours.

Unlike the traditional descriptions of agent systems based on mental states comprising beliefs, desires and intentions [6], we here focus on the human owners. A human is capable of using knowledge outside the actual domain and of arranging it using consciousness. This knowledge is then transferred to the agents through instructions and based on feedback. We believe that this interaction is a key component for understanding the mechanisms behind antagonistic information ecosystems.

3 Examples of antagonistic information ecosystems

We will here give some examples of antagonistic conflicts within information ecosystems, ranging from illegal virus attacks to the legal registration of users.

3.1 Computer viruses

Referring to Fig. 1, it is easy to imagine an arms race between crackers (the exploiters) and users. In the first phase, the cracker finds out about a weakness of the user's computer system and then creates a program that sends out the viruses, or, in the terminology used here, an exploiter agent. To avoid virus attacks, the user has to use some filtering function or anti-virus program (the user agent). The result may be a refined arms race in which the exploiter agent changes the appearance of the virus automatically and the user agent finds these new variants. Despite its harmful purpose, a virus can be treated as a catalyst for a more robust information ecosystem.
Despite its harmful purpose, a virus can be treated as a catalyst for a more robust information ecosystem. We here refer to the biological concept of robustness, i.e., the ability of an agent to properly balance efficiency and efficacy in order to survive in many different environments. The robustness of an ecosystem is then dependent on the robustness of its members. Thus, if ecosystems can be made more robust, they can perform their functions longer and better, which will make costly redesigns unnecessary, or at least less costly and less frequent. A time-based security model [8] quantifies the exposure time related to a virus attack as a function of protection, detection and response. These variables in turn involve a quality factor, expressed as the strength of the mechanism and some active time interval. If the final result of the virus battle is a balance in which viruses do harm only in exceptional cases, because of improved time-based security, we probably manage better with viruses than without them. Just as in natural ecosystems, we should not expect systems that are easy to manipulate. An information ecosystem with humans using their Machiavellian intelligence to control perceiving agents is a much more likely consequence.

3.2 Spyware

Today there are millions of Web sites reachable by any Internet user. These users spend more and more time customizing and improving their Web browsing experience through third-party browser extensions. These extensions may be able to monitor the behavior of the users and report it to anyone needing this information. Cookies are used to register information and later retrieve or modify it from other Web browsers. A Web site may store information about the user in a cookie that the user does not know about; the cookie mechanism can thus be considered a form of spyware that abuses privacy on the Internet. In general, spyware (the exploiter agent) is any technology that aids in gathering information about a person or organization without their knowledge. On the Internet, spyware is a program that is put into someone's computer to secretly gather information about the user and relay it to advertisers or other interested parties. In a recent report, Martin et al. [4] examined business-to-consumer browser add-on products. The research focused on the ways the browser extensions collected user information, how the data flowed back to the company's database, and how the data capture techniques matched the privacy policies of each company:

"We downloaded 16 Internet Explorer browser extensions and watched them work. A number were well behaved. But others seemed to outright exploit our hospitality, watching and reporting our every move in the browser, some intercepting data sent to competitors and others reporting back to headquarters on pages we 'securely' downloaded using Secure Socket Layers." [4]

There is a lack of trust in third-party browser extensions, and recently anti-spyware tools (user agents) have been offered to Web users. These programs inform the user about any unwanted cookies present and how to get rid of them. For both spyware and anti-spyware there is a profit calculation involved. The consumer does not know exactly the purpose of the data collection, but may suspect that information is sold to third parties. Commercial anti-spyware products are replacing the first free ones. The spyware evolution is similar to the virus example but may be less destructive because:

• The consumer is normally partly informed about the collecting activity (but not about the future use of the data).
• The consumer may be offered an additional web service or freeware program. The spyware acts as the commercial part of the offered service.
• The data collection may improve the development of e-business.

An arms race within spyware may improve some e-business concepts, i.e., act as resource-generating marketing. Such an improvement may give companies using spyware an advantage over competing companies that do not.

4 Conclusions

Based on the assumption that software agents may mediate the Machiavellian intelligence of their human owners, there are many similarities between information and biological ecosystems. The main conclusion we draw from these similarities is that arms race is a major force within information ecosystems. Both examples presented in section 3 show a development of the information ecosystem through an arms race. From a system perspective, this can be seen as a positive thing, because the ecosystem becomes more robust. If a user knows about the complications caused by exploiting agents and prepares to defend against these intruders, the user will get off better than being unprepared. From the user's perspective, the disadvantage is the resources, e.g., money and time, spent on procuring anti-virus and anti-spyware programs.

Machiavellian intelligence has arisen through an arms race in the capacity to deceive, but this does not mean we have lost our (inherited) capability to cooperate. The choice between long-term cooperation and a short-term advantage gained by being selfish is called the Prisoner's Dilemma in game theory [7]. The Prisoner's Dilemma describes the rise of cooperation within a restricted domain in both social [1] and natural science [5]. The results from the analysis of the Prisoner's Dilemma can be described as follows: every agent wins by cooperation, but if everybody else cooperates, a single agent will benefit from being selfish; if no one cooperates, everybody is worse off. Most efforts to solve this dilemma today rely on legislative methods, but, as stated previously, we argue that there is a self-adjusting quality that influences the dynamics of antagonistic information ecosystems.

References

1. Axelrod, R., The Evolution of Cooperation, Basic Books Inc. (1984)
2. Dennett, D. C., Darwin's Dangerous Idea, Allen Lane/The Penguin Press, London (1995)
3. Gardenfors, P., Hur Homo blev sapiens, Bokforlaget Nya Doxa, Nora (2000)
4. Martin Jr., D. M., Smith, R. M., Brittain, M., Fetch, I., and Wu, H., The Privacy Practices of Web Browser Extensions, University of Denver (2000)
5. Maynard Smith, J., Evolution and the Theory of Games, Cambridge University Press (1982)
6. Rao, A. S., and Georgeff, M. P., "BDI agents: from theory to practice." In Proceedings of the First International Conference on Multi-Agent Systems, pp. 312-319, AAAI Press/The MIT Press (1995)
7. Rapoport, A., and Chammah, A. M., Prisoner's Dilemma: A Study in Conflict and Cooperation, The University of Michigan Press, Ann Arbor (1965)
8. Schwartau, W., Time Based Security: Practical and Provable Methods to Protect Enterprise and Infrastructure, Networks and Nation, Interpact Press (1999)
9. Williams, G. C., Adaptation and Natural Selection, Princeton University Press (1966)
10. Wilson, E. O., Sociobiology (abridged edition), Belknap Press (1980)

THE CoDAC COLLABORATION FRAMEWORK

K. W. NG AND T. O. LEE
Department of Computer Science & Engineering, The Chinese University of Hong Kong, Shatin, N.T.,
Hong Kong, China
E-mail: {kwng, tolee}@cse.cuhk.edu.hk

The Internet has been expanding rapidly over recent decades, as have the activities conducted over the World Wide Web. The complexity of online services grows along with the increasing online population. The robustness of network applications and distributed systems can no longer be sustained effectively by traditional distributed programming approaches. For this reason, the mobile agent paradigm has emerged as a promising methodology for resolving complex distributed computation problems with high scalability. In this paper, we present a Componentware for Distributed Agent Collaboration (CoDAC) as a solution to general agent coordination problems. CoDAC implements the component model to offer flexible and reliable coordination support to mobile agents distributed over the Internet.

1 Introduction

The mobile agent paradigm brings benefits in many ways. An agent may continue to operate even if it is temporarily disconnected from the network, as it essentially performs its operations locally at the data source. In fact, an agent can be kept offline and immune to any harm caused by network latency for most of its execution. In addition, it utilizes the limited bandwidth efficiently by sending only the relevant results over the network. All these benefits justify the deployment of agents in distributed computation environments. The multiagent paradigm stems from employing multiple agents to add further capabilities and performance to distributed systems, and further unravels the potential of software agents in realizing various attractive goals: for example, more elaborate services, parallel processing, and increased system throughput with high flexibility and fault tolerance.

In this paper, we present a Componentware for Distributed Agent Collaboration (CoDAC) as a solution to general agent coordination problems. CoDAC utilizes the component model [7] to offer flexible and reliable coordination services to mobile agents distributed over the network. It functions on top of the Jini infrastructure [1, 4] in order to be deployable with plug-and-play capability at runtime. CoDAC encapsulates its constituent features with respect to the enforcement of common knowledge [2] and interacts with agents through well-defined interfaces. It features modularized and interchangeable building blocks for multiagent systems. On top of that, it exercises the self-managing property to manage its own resources, adding no management burden to the associated agents.

2 Collaboration Life Cycle

2.1 Initialization

At the very beginning, the coordinator agent [8] c starts a collaboration group by instantiating a Distributed Agent Adapter (DA adapter) [8] with a unique group ID. This instance of the DA adapter, in turn, discovers all available lookup services on the network. The DA adapter opens the collaboration group to the public by registering a serialized instance of its clone as a service proxy on each lookup service it has discovered. Each registered proxy shares the same service ID [5]. Each agent p that intends to join a collaboration group first gains access to one or more nearby lookup services. Next, p searches for the desired service proxy, that is, a serialized instance of the DA adapter in our case, through the lookup service. The search criteria can be based on the group ID, the Jini service ID [5], or even the agent ID of the coordinator.
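To fix ideas, the following Python sketch (ours; CoDAC itself is built on Java and Jini, and every class name here is hypothetical) walks through the registration, discovery and join steps just described, with a shallow-copied object standing in for the serialized DA adapter clone.

```python
import copy

class LookupService:
    """Toy stand-in for a Jini lookup service: maps service IDs to proxies."""
    def __init__(self):
        self.registry = {}

    def register(self, service_id, proxy):
        self.registry[service_id] = proxy

    def lookup(self, service_id):
        return self.registry.get(service_id)

class DAAdapter:
    """Toy DA adapter for one collaboration group."""
    def __init__(self, group_id, coordinator):
        self.group_id = group_id
        self.members = [coordinator]
        self.origin = self            # the instance owned by the coordinator

    def publish(self, lookup_services):
        # Register a clone of this adapter as the service proxy everywhere;
        # all proxies share the same service ID (here, simply the group ID).
        for ls in lookup_services:
            proxy = copy.copy(self)
            proxy.origin = self       # the clone remembers the original
            ls.register(self.group_id, proxy)

    def join(self, agent):
        # The downloaded proxy contacts the original adapter on behalf of
        # the agent; verification and the membership protocol of [8] are
        # elided in this sketch.
        if agent not in self.origin.members:
            self.origin.members.append(agent)

ls = LookupService()
group = DAAdapter("group-42", "c")    # coordinator c starts the group
group.publish([ls])

proxy = ls.lookup("group-42")         # agent p discovers the service proxy
proxy.join("p")                       # and joins via the deserialized proxy
print(group.members)                  # ['c', 'p']
```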
Once the desired collaboration service is located, the relevant DA adapter is downloaded to p. After being deserialized, the DA adapter contacts the original DA adapter (the one associated with c) and issues a request to join the collaboration group on behalf of p. In response, the DA adapter of c verifies the request, checks for data consistency, and grants membership for p under mutual agreement with all available members within the group. Such mutual agreement is enforced by the group membership protocol described in [8]. If the request is granted, p becomes part of this group and is ready to collaborate.

2.2 Resources Gathering

The collaboration begins with the assembling of available resources within the collaboration context. To begin collaboration, the DA adapter of c instantiates a collaboration manager [8]. Upon initiation, the collaboration manager issues a collaboration request to each DA adapter within the collaboration context. This request signals each participating agent to deliver its individual computation results to c. When a Distributed Agent Manager (DA manager) [8] receives the collaboration request, it notifies its associated agent immediately by firing a GatherResourcesEvent. In response, the agent presents the relevant data to the DA manager as soon as the data is available, and the DA manager simply forwards the data to the collaboration manager. The resources gathering phase terminates after all the participating agents have contributed their computation results or when the collaboration manager times out. In either case, all the gathered information is delivered to the kernel [8] for analysis.

2.3 Results Delivery

After the kernel has finished evaluating the collection of assembled data, it comes up with the collaboration results and the collaboration may end. At the end of collaboration, the kernel returns the collaboration results to the collaboration manager, which, in turn, forwards them to each DA manager within the collaboration context inside a transaction. The underlying atomic commitment protocol is described in Section 3. Eventually, all collaborating agents install the same collaboration results consistently as long as the transaction commits, while the coordinator may initiate subsequent collaborations as needed.

3 Atomic Commitment Protocol

After the kernel has finished computing the collaboration results R, it returns R to the collaboration manager. The collaboration manager is then responsible for coordinating all agents within the group to deliver R consistently in order to terminate the collaboration transaction. The protocol proceeds in five rounds [3] as follows:

1. The collaboration manager sends a deliver_req predicate enclosed with R to every DA manager within the collaboration context.
2. Next, each DA manager fires a PrepareDeliveryEvent, embedded with R, to the associated agent.
3. In response, each agent checks its own state to see if it can commit to R. An agent may throw a VetoDeliveryException to vote against delivering R, or it may remain silent to indicate implicit agreement.
4. The DA managers return the appropriate vote (either yes or no) to the collaboration manager on behalf of the participating agents.
5. The collaboration manager collects all the votes within the group:
(a) If none of the participants vetoes the transaction, the decision is to deliver R.
The collaboration manager then coordinates all DA managers to deliver R by initiating a Jini transaction [6] to forward a deliver predicate to every DA manager.
(b) Otherwise, the collaboration manager coordinates the rollback of R by initiating a Jini transaction to deliver a rollback predicate to every DA manager.
6. Finally, each DA manager receives either a deliver or a rollback predicate as the transaction terminates. The DA manager then signals the agent whether to deliver or abort R by firing a CommitDeliveryEvent or an AbortDeliveryEvent, respectively.

[Figure 1. Atomic commitment protocol: the agent may answer a PrepareDeliveryEvent with a VetoDeliveryException; the DA manager forwards yes/no votes to the collaboration manager, which returns a deliver or rollback predicate, upon which the DA manager fires a CommitDeliveryEvent or AbortDeliveryEvent.]

Figure 1 summarizes the above protocol. For simplicity, only one agent and one DA manager are shown interacting with the collaboration manager. The delivery of each R is totally ordered by the transaction ID.

Whenever the delivery of R starts from step 1, there are two phases in the protocol where some CoDAC entity is waiting for remote messages: at the beginning of step 5 and in step 6. As remote messages may get lost or their delivery times may vary due to link failures or network latency, these phases are bounded by a timeout delay d to trigger fault discovery. The actions triggered by a timeout are explained as follows. In step 5, the collaboration manager is waiting for votes from all the DA managers. At this stage, the collaboration manager has not yet reached any decision, and no participating agent can have decided to commit. Therefore, if it times out without having collected all the votes needed to make the decision (e.g., because a vote is lost or delayed, an agent has crashed, or the request never reached the agent in the first place), the collaboration manager can decide to abort and proceed to step 6 by sending a rollback predicate to every DA manager. In step 6, a DA manager that voted yes is waiting for a deliver or rollback predicate in return. In this case, the DA manager cannot unilaterally decide to roll back, because the Jini transaction guarantees that one of these two predicates will eventually reach all DA managers as long as the collaboration manager (and the associated coordinator) keeps functioning, although the delivery time may vary. Therefore, the DA manager should not decide to roll back unless it receives a rollback predicate or has certified the coordinator as crashed. In other words, the timeout triggers fault discovery and the necessary recovery procedure. This is done as follows: when a DA manager dmgr_p times out in step 6 of the commitment protocol, it retrieves the coordinator channel in the space and writes a decision_req predicate to it. If the channel cannot be found in the first place (because the coordinator failed to renew the lease on its channel), then the coordinator may have failed, and dmgr_p triggers recovery through the group membership protocol described in [8]. Otherwise, dmgr_p waits for another d units of time before it re-issues the decision_req. In the meantime, dmgr_p may also break out of this loop and proceed with recovery as soon as the lease on the coordinator channel expires.
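A minimal single-process sketch of the commitment rounds above (ours, not CoDAC's actual code; real deliveries ride on distributed Jini transactions, and the timeout is simulated here by a missing vote):

```python
class VetoDeliveryException(Exception):
    pass

class ToyAgent:
    def __init__(self, name, will_veto=False):
        self.name, self.will_veto = name, will_veto
        self.state = None

    def prepare_delivery(self, result):          # step 3
        if self.will_veto:
            raise VetoDeliveryException(self.name)

    def commit_delivery(self, result):           # step 6, deliver
        self.state = result

    def abort_delivery(self):                    # step 6, rollback
        self.state = "aborted"

class ToyDAManager:
    def __init__(self, agent):
        self.agent = agent

    def deliver_req(self, result):               # steps 2 and 4
        try:
            self.agent.prepare_delivery(result)  # fire PrepareDeliveryEvent
            return "yes"                         # silence counts as agreement
        except VetoDeliveryException:
            return "no"

def collaborate(managers, result, votes_received=None):
    """Steps 1 and 5: send deliver_req, collect votes, decide deliver/rollback.

    votes_received can simulate a timeout: a vote that never arrives is None.
    """
    votes = [m.deliver_req(result) for m in managers]
    if votes_received is not None:
        votes = votes_received                   # e.g. a lost vote -> timeout
    if all(v == "yes" for v in votes):
        for m in managers:
            m.agent.commit_delivery(result)      # fire CommitDeliveryEvent
        return "committed"
    for m in managers:
        m.agent.abort_delivery()                 # fire AbortDeliveryEvent
    return "rolled back"

agents = [ToyAgent("a1"), ToyAgent("a2"), ToyAgent("a3", will_veto=True)]
managers = [ToyDAManager(a) for a in agents]
print(collaborate(managers, {"answer": 42}))        # rolled back (veto)
print(collaborate(managers[:2], {"answer": 42}))    # committed
print(collaborate(managers[:2], {"answer": 42},
                  votes_received=["yes", None]))    # rolled back (timeout)
```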
On the other hand, the collaboration manager, in response to a decision_req, checks whether it has gathered enough votes to make the decision. If it possesses enough knowledge to decide, or if it has actually decided but the decision has not yet been delivered to the agents (perhaps due to network latency), then the collaboration manager retransmits the decision to all DA managers inside a Jini transaction with the same transaction ID. Otherwise, it waits until either all votes are gathered or its timer expires, and delivers the appropriate decision by then. If, instead, the original coordinator has crashed, the new coordinator c' elected by the recovery protocol coordinates all agents to roll back. Atomicity is still preserved because the Jini transaction model guarantees that no participating agent can have committed. Hence, c' can roll back the delivery of R by distributing a rollback predicate inside a Jini transaction to all agents within the group.

4 Conclusion

We have introduced CoDAC as a comprehensive solution to general agent coordination problems. The major contribution of CoDAC is to embed atomic commitment capabilities into collaboration amongst distributed agents with enhanced fault tolerance. It delivers the core functionality to attain common knowledge within a collaboration group and signals all participants to take consistent actions. It fulfills the key properties of the component model to offer reliable coordination services to mobile agents, with plug-and-play capability, encapsulated functionality and self-managing capacity. Beyond that, CoDAC bridges the gap between different agent platforms, enabling heterogeneous agents to engage in a collaboration group as a whole.

References

1. W. Keith Edwards, Core Jini, The Sun Microsystems Press, Java Series, Prentice Hall, Inc., Sept. (1999).
2. Joseph Y. Halpern and Yoram Moses, Knowledge and Common Knowledge in a Distributed Environment, Journal of the Association for Computing Machinery, Vol. 37, No. 3, July (1990), pp. 549-587.
3. Suciu, O. and Cristian, F., Evaluating the performance of group membership protocols, Engineering of Complex Computer Systems, (1998), pp. 13-23.
4. Sun Microsystems, Jini™ Architecture Specification, Version 1.1 Alpha, Nov. (1999), http://www.sun.com/jini/
5. Sun Microsystems, Jini™ Lookup Service Specification, Version 1.1 Alpha, Nov. (1999), http://www.sun.com/jini/
6. Sun Microsystems, Jini™ Transaction Specification, Version 1.1 Alpha, Nov. (1999), http://www.sun.com/jini/
7. Clemens Szyperski, Component Software, ACM Press Books, Addison-Wesley, (1997).
8. T. O. Lee and K. W. Ng, A Componentware for Distributed Agent Collaboration, in Proc. of the First Int. Workshop on Web-Agent Systems and Applications, IEEE Computer Society, (2000), pp. 780-784.

A Multi-agent Approach to Modelling Interaction in Human Mathematical Reasoning

Alison Pease, Simon Colton, Alan Smaill, John Lee
Division of Informatics, University of Edinburgh, 80 South Bridge, Edinburgh, EH1 1HN, Scotland
{alisonp, simonco}@dai.ed.ac.uk, A.Smaill@ed.ac.uk, john@cogsci.ed.ac.uk

1 Introduction

Current work in automated reasoning does not in general model social aspects of human mathematics, with a few exceptions, for example [1]. We are interested in modelling concept and conjecture refinement, i.e., the way in which the definition of a concept evolves as a conjecture develops. Modelling this process is important because (a) it will illuminate aspects of the social nature of mathematics and (b) it may be useful for improving existing automated reasoning programs. In §2 we outline descriptions by Devlin and Lakatos of the human process.
In §3 we describe an agent architecture for this task and how it could be implemented using the HR theory formation system [2].

2 Social Aspects of Human Mathematical Reasoning

Devlin [3] challenges the viewpoint that mathematics should be seen as a purely logical endeavour. Logic seeks to abstract terms from their context and manipulate them in an objective way, independent of the motivation behind the manipulation. Instead, Devlin claims that any field of human knowledge should be interpreted in terms of communication and context, in which meaning and motivation play key roles. He argues that humans act rationally rather than logically, i.e., they act according to individual goals, desires and capabilities rather than following objective rules of logic. Therefore any sphere of human knowledge, even mathematics, should be seen as rational rather than logical.

Lakatos [4] highlights the roles that intuition, dialectic and group dynamics play in mathematics. He presents a rational reconstruction of the development of Euler's Conjecture and its proof, as a discussion between a cast of students and a teacher. The students start with different examples of polyhedra, from which someone suggests that 'for all polyhedra there is a relationship between the number of vertices (V), edges (E) and faces (F), namely that V - E + F = 2' (C). The discovery of counter-examples soon leads to a discussion of what constitutes a polyhedron and whether it is possible or useful to define a subset of polyhedra for which the equation holds. According to their intuition (influenced by their experience of objects they classify as polyhedra), the students use different methods which enable them to accept, reject or improve the concept or conjecture. We list some of the methods.

1. Induction - generalise from particulars. (Since the equation holds for all regular polyhedra, it holds for all polyhedra, i.e., C.)
2. Surrender - look for counter-examples and use them to refute C. (The hollow cube, i.e., a cube with a cube-shaped hole in it, is a counter-example since 16 - 24 + 12 = 4.)
3. Monster-barring - given a counter-example, modify the definition of the concept or subconcept so as to exclude it. (The hollow cube is not a polyhedron and therefore is not a real counter-example.) Note that Lenat's AM program was able to perform monster-barring [5].
4. Exception-barring 1: piecemeal exclusion - find those properties which make a counter-example fail C and then modify C by excluding that type of counter-example. (Generalising from the hollow cube, we say that any polyhedron with a cavity will be a counter-example. Therefore C' becomes 'for all polyhedra without cavities, V - E + F = 2'.)
5. Exception-barring 2: strategic withdrawal - instead of listing the exceptions (as above), withdraw into a much smaller domain for which C seems certain to hold. (Generalising from the examples for which the equation holds, we see that they are all convex. So C' becomes 'for all convex polyhedra, V - E + F = 2'.)

Devlin and Lakatos both stress the development of mathematics through social interaction. This indicates that an agent architecture, in which the agents are defined by their intuitions, motivations and actions, would provide an appropriate framework.

3 An Agent Architecture for Concept Refinement

We define below an architecture of equal-status agents in terms of the problem being addressed, what the agents do, and how they communicate. Following this, we suggest an implementation of this architecture using the HR theory formation system.
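As a toy rendering (ours, not HR's actual behaviour) of how two of these methods could drive agents with different example sets, anticipating the number-theory example developed below, consider:

```python
# Two of Lakatos's methods (surrender and piecemeal exclusion) applied to
# the conjecture "all integers have an even number of divisors"; the
# intuition threshold of 0.2 is an illustrative choice of ours.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def conjecture(n):
    return len(divisors(n)) % 2 == 0

def is_square(n):
    return round(n ** 0.5) ** 2 == n

class Agent:
    def __init__(self, name, examples):
        self.name, self.examples = name, list(examples)

    def react(self, conj):
        ces = [n for n in self.examples if not conj(n)]
        if not ces:
            return f"{self.name}: accepts the conjecture"
        if len(ces) / len(self.examples) > 0.2:
            # strong refuting intuition -> surrender
            return f"{self.name}: surrenders (counter-examples {ces})"
        # weak refuting intuition -> piecemeal exclusion: generalise the
        # counter-examples (here they are all squares) and exclude them
        assert all(is_square(n) for n in ces)
        return (f"{self.name}: excludes squares -> 'all non-square integers "
                f"have an even number of divisors' (found {ces})")

X = Agent("X", range(1, 11))    # squares are 30% of X's examples
Y = Agent("Y", range(11, 51))   # squares are 10% of Y's examples
for agent in (X, Y):
    print(agent.react(conjecture))
```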
The Problem: We want to model the social process of concept refinement in the face of conjectured general properties and counter-examples to them.

The Agents: The task of the agents in our architecture is to develop interesting concepts, conjectures and examples, and to react accordingly to the introduction of counter-examples to a false conjecture. The methods outlined above guide the roles that intuition, motivation and action play. Intuition is built up via experience and is used to generate concepts and conjectures via induction. Each agent is given a different set of examples, so that experience differs between them and their intuitions vary. The strength of an intuition also varies, depending on the number of examples from which it derives. Intuition is used to guide the motivation to action. Motivation includes accepting, rejecting and modifying concept definitions and conjectures. Actions specify ways in which to do this, such as the methods outlined above. For example, an agent which suggests a conjecture based on many examples would have a strong intuition about it, and in the face of counter-examples its motivation would be to modify it rather than reject it, using either method 4 or 5 above. Intuition and motivation are dynamic, since agents influence each other.

Communication: The agents communicate by sending concepts, counter-examples and conjectures whenever one is found, and by negotiating about concept definitions. In Lakatos' discussion, (temporary) agreement about concept definitions is reached by a strategy of accepting the most exclusive definition in order to continue the discussion. Another strategy might be to accept the definition suggested by the agent with the strongest intuition.

In [2] Colton describes a multi-agent approach to concept formation in pure mathematics, using the HR program, which invents concepts, makes and proves theorems and finds counter-examples to false conjectures. Four copies of HR were employed as agents in an architecture where they communicated the most interesting concepts they invented. By enabling them to communicate conjectures and counter-examples instead, we suggest that the system could model the process of concept refinement using the architecture described above. Each copy of HR must be supplied with a set of objects of interest (for example, integers in number theory) from which it makes conjectures empirically about the concepts it invents. We propose to give each agent a different set of objects of interest. This means that the conjectures they make will be more likely to be false, as they are based on a subset of the available empirical evidence. However, it will provide an opportunity for concept and conjecture refinement to occur in the following way:

1. Agent X makes a conjecture about a concept and communicates it to all agents.
2. Each agent looks at the examples it has and communicates back any counter-examples it finds.
3. X uses the counter-examples to redefine the concept and restate the conjecture. For example, all the others are asked for concepts which are true of their counter-examples. If one concept is true of all the counter-examples, X generalises the counter-examples into a concept definition and fixes the conjecture by exception-barring. It then starts the process again.

As a theoretical example in number theory, suppose agents X, Y and Z are working with the numbers 1-10, 11-50 and 51-60 respectively.
Using the method of induction, Z forms the conjecture C: all integers have an even number of divisors, and sends it to the others. X finds and communicates the counter-examples 1, 4 and 9. Since a high proportion of his knowledge refutes C, he has a strong intuition that it is false (not worth modifying). Therefore his motivation is to reject the conjecture, and he uses the method of surrender to claim ¬C. Y finds and communicates the counter-examples 16, 25, 36 and 49. Since they form quite a low proportion of his knowledge, his intuition is that it might be worth modifying C, so he uses the method of exception-barring (piecemeal exclusion) and looks for a generalisation of the known counter-examples. He finds the concept square numbers, which has already been formed, and modifies C to C_Y: all integers except square numbers have an even number of divisors. Z might use the method of exception-barring (strategic withdrawal) to modify C to C_Z: all integers between 50 and 60 have an even number of divisors. The new conjectures are now considered and counter-examples sought, with none found, as they are all true (¬C is a dead end, C_Y an interesting conjecture and C_Z trivial).

The methods described by Lakatos have thus led, in the above example, to the discovery of the concept all integers except square numbers and the conjectures all integers except square numbers have an even number of divisors and all integers between 50 and 60 have an even number of divisors. It shows how we might begin to model simplified social aspects of mathematical reasoning via agent interaction.

4 Future Work and Conclusions

We intend to implement the number theory example in the agent architecture to observe concept and conjecture refinement. Testing will be carried out to ascertain whether the architecture improves HR's performance or degrades it (by wasting time on false or trivial conjectures), where performance is measured by the number of interesting concepts, conjectures and examples found. There is also much scope for extending the architecture suggested above. Lakatos describes further methods which allow interaction between proof attempts and counter-examples. For instance, a counter-example may show where a proof is faulty (which steps of the proof it violates), and analysis of the faulty proof may suggest a counter-example. The value of a hierarchy should also be investigated. In Lakatos' discussion there is a teacher whose role is to stimulate (suggest a controversial conjecture), guide (judge when a certain topic or method has been fully explored and suggest another) and evaluate discussion (point out any inconsistencies in the students' claims). This may tie in with a notion of respect between agents (thus more realistically simulating group dynamics). Agents could record the sender's name along with a received message and build a respect measure from the value of the message. They would then give priority to messages from more highly respected agents. This extended architecture would better capture what is meant by social interaction. The dialogue involved in producing a mathematical theory should then itself be evaluated, although this will be harder (since it is a qualitative judgement). Modelling social aspects of mathematical reasoning within an agent architecture is worthwhile, since it would lead to a better understanding of the human process.
This would have theoretical value for philosophers of mathematics and practical value for students of mathematics, as a history of the proof, including failures and collaboration between experts, would avoid the mystifying steps which are often a feature of published mathematics. Additionally, in providing new methods, it may show how to model aspects of mathematics not yet automated, or provide more efficient ways of modelling those aspects already automated. The theoretical example suggests that implementation of the architecture described is a very promising approach.

Acknowledgements

We would like to thank Paul Crook for comments on an earlier draft, as well as the anonymous reviewers for their comments. This work was supported by EPSRC grants GR/M45030 and GR/M98012. The second author is also affiliated with the Department of Computer Science, University of York.

References

[1] C. Benzmüller, M. Jamnik, M. Kerber, and V. Sorge. An agent-oriented approach to reasoning. In Proceedings of the German Conference on Artificial Intelligence (2001). Springer, 2001.
[2] S. Colton, A. Bundy, and T. Walsh. Agent based cooperative theory formation in pure mathematics. In Proceedings of AISB-00, pages 11-18. The Society for the Study of Artificial Intelligence and the Simulation of Behaviour, UK, 2000.
[3] K. Devlin. Goodbye, Descartes. John Wiley & Sons, Inc., NY, 1997.
[4] I. Lakatos. Proofs and Refutations. CUP, Cambridge, UK, 1976.
[5] D. Lenat. AM: An Artificial Intelligence Approach to Discovery in Mathematics. PhD thesis, Stanford University, 1976.

SECURE ASYNCHRONOUS SEARCH

MARIUS-CALIN SILAGHI, DJAMILA SAM-HAROUD, AND BOI FALTINGS
Swiss Federal Institute of Technology Lausanne, 1015 Ecublens, Switzerland
E-mail: {silaghi, haroud, faltings}@lia.di.epfl.ch

Distributed Constraint Satisfaction (DisCSP) is a general framework for modeling distributed combinatorial problems. Practical distributed problems can involve competitive situations among agents. In such cases not all agents are needed for building a final solution, and agents may try to hamper their competitors from reaching a solution. This peculiarity cannot be modeled by current DisCSP formalisms. In particular, there is no mechanism for thwarting agents from disseminating fake nogoods in order to impede their competitors from proposing or reaching a good solution. In this paper we extend a DisCSP framework in order to model competition. We adapt one of the most recent search algorithms to offer agents means to check that received messages are legal.

1 Introduction

A wide variety of problems such as negotiation, resource allocation, design or scheduling are inherently distributed. Importing techniques from one problem to another is easier when general frameworks are used. Distributed Constraint Satisfaction (DisCSP) provides such a framework for static distributed combinatorial problems. A DisCSP is composed of a set of agents A = {A_1, A_2, ..., A_n} and a set of k variables V = {v_1, v_2, ..., v_k}, each of them under the control of the agents interested in it. The variables in V are called external variables. With each agent A_i is associated a set of external variables V_i = {v_i1, v_i2, ..., v_im_i}, V_i ⊆ V, and a set of constraints C_i = {c_i1, c_i2, ..., c_ik_i} such that any external variable constrained by a constraint in C_i is also contained in V_i. The domain of a variable v_i is D_i. All the variables x_j that are constrained by constraints in C_i but such that x_j ∉ V_i are said to be internal.
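A minimal sketch of the structures just defined, in a hypothetical Python encoding (the paper prescribes no concrete data layout):

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Constraint:
    scope: List[str]                      # names of the constrained variables
    relation: Callable[..., bool]         # True iff a value tuple is allowed

@dataclass
class DisCSPAgent:
    name: str
    external_vars: Dict[str, List[int]]   # V_i with the domains D_i
    constraints: List[Constraint] = field(default_factory=list)

# Two agents sharing the external variable v1.
a1 = DisCSPAgent("A1", {"v1": [1, 2, 3]},
                 [Constraint(["v1"], lambda v1: v1 != 2)])
a2 = DisCSPAgent("A2", {"v1": [1, 2, 3], "v2": [0, 1]},
                 [Constraint(["v1", "v2"], lambda v1, v2: v1 > v2)])

assignment = {"v1": 3, "v2": 1}
ok = all(c.relation(*(assignment[x] for x in c.scope))
         for agent in (a1, a2) for c in agent.constraints)
print("assignment consistent:", ok)   # True
```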
In problems with self-interested agents, the agents can actually be competing for a resource or state, and some agents can reach a solution without the agreement of some others (e.g., with several clients or several providers). The competitors are interested in concealing solutions they dislike. Often they can do so by illegally generating nogood messages for solutions that normally do not need their agreement. The existing distributed protocols for DisCSPs do not offer the possibility to detect these byzantine failures. In this paper, we present an extended DisCSP framework that can model these missing features. Since in practice competition between agents often occurs in conjunction with negotiation problems, the extended framework also enables the agents to attach preferences to their alternatives and to relax their constraints. We assume that the sum of preferences in solutions has to be minimized. The relaxation consists in either reducing values or, as proposed in [1], in accepting new tuples of valuations. The new algorithms can prevent agents neither from making coalitions, nor from byzantine failures that act against the failing agents themselves. However, the new technique helps agents avoid being cheated through the distributed search protocol.

2 Dynamic DisCSPs

By dynamism we understand that the participation of an agent in the solution/search process is dynamically conditioned by certain value assignments. The extended framework builds on the notion of Valued CSPs [2]. First we describe the problem of an agent A_u as a Negotiation Valued CSP, NVCSP_u. An NVCSP_u consists of a minimal increment ε, a set of external variables V(u), and an ordered set of global constraints c_1(u), ..., c_n_u(u). The domain of each external variable contains a value, F, meaning unchanged and indifferent. Each pair (valuation v, constraint c_i(u)) has an associated tuple T_i^v(u) = (feasible_i^v(u), preference_i^v(u)). T_i^v(u) is such that if n_u ≥ i > j > 0, then for any valuation v, feasible_j^v(u) ⇒ feasible_i^v(u) and preference_j^v(u) ≥ preference_i^v(u); moreover, there exists a valuation v such that either feasible_j^v(u) ≠ feasible_i^v(u), or otherwise feasible_j^v(u) = feasible_i^v(u) = T and preference_i^v(u) + ε ≤ preference_j^v(u).

A Dynamic DisCSP (DyDisCSP) is defined by a set of agents A_0, ..., A_n. The agents A_k, k ∈ [0, h), n > h > 0, are called initiators. Each agent A_i owns an NVCSP, NVCSP_i. Given a valuation v for a set of external variables, S(v) is the set of agents owning a variable that is not instantiated to F in v. By convention, the initiators always belong to S(v). An agent is active if it belongs to the minimal subset A(v) of S(v) such that S(v|vars(A(v))) ∩ (S(v) \ A(v)) = ∅, where v|vars(A(v)) denotes the restriction of v to the variables of the agents in A(v).

Definition 1 (Acceptable valuation) A valuation v is acceptable if each agent A_i in S(v) proposes for v a feasible associated tuple (feasible^v(i) = T).

Definition 2 (Solution) A solution of a DyDisCSP is an acceptable valuation v of all the external variables such that, if each agent A_i in S(v) is active and proposes for v an associated tuple (T, preference_k_i^v(i)), where k_i ≤ n_i, then

v ∈ { b | b = argmin_b Σ_{A_i ∈ S(v), i ≥ h} preference_k_i^b(i) }

and no agent A_i, i > 0, wants to reveal a constraint c_j, j > k_i. The feasibility condition is Σ_{A_i ∈ S(v), i < h} preference_k_i^v(i) ≤ 0; it verifies that the solution is acceptable to the initiators.

If v is a solution of a DyDisCSP, then S(v) is the solver set for v.
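The acceptability test of Definition 1 can be phrased compactly. The sketch below (ours, and deliberately simplified: the optimality condition and the initiator convention are omitted) illustrates how S(v) and acceptability interact with the F value:

```python
F = "F"   # the "unchanged and indifferent" value

def S(valuation, ownership):
    """Agents owning at least one variable not instantiated to F in v."""
    return {a for a, var_list in ownership.items()
            if any(valuation.get(x) != F for x in var_list)}

def acceptable(valuation, ownership, feasible):
    """Definition 1: every agent in S(v) finds v feasible."""
    return all(feasible[a](valuation) for a in S(valuation, ownership))

ownership = {"A1": ["x"], "A2": ["y"]}
feasible = {"A1": lambda v: v["x"] in (1, 2),
            "A2": lambda v: v["y"] == F or v["y"] > 0}

print(acceptable({"x": 1, "y": F}, ownership, feasible))  # True (A2 not in S)
print(acceptable({"x": 3, "y": 1}, ownership, feasible))  # False (A1 infeasible)
```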
3 Extending AASR

In this section we introduce Secure Asynchronous Search (SAS), an adaptation of Asynchronous Aggregation Search with Reordering (AASR) to the DyDisCSP framework. First we recall the basic elements of AASR [4].

Definition 3 (Aggregate) An aggregate is a triplet (v, s, h) where v is a variable, s a set of values for v, and h a history of the pair (v, s).

A history h for an aggregate a = (v, s, h) proposed by an agent A_k takes the form of a list of pairs |i : l| where i is the index of an ancestor of A_k that has made a proposal on v and l is the value of a counter. An aggregate requests higher-priority agents to comply with a proposal; it therefore defines a nogood by itself. Such nogoods are called nogoods entailed by the view.

Definition 4 An explicit nogood has the form ¬V, where V is a list of aggregates.

The agents communicate via ok, nogood, add-link and reorder messages. ok messages are sent from agent A_j to agent A_i and have as parameter a list of aggregates for variables in which A_i is interested. nogood messages have as parameter an explicit nogood. add-link messages are sent from agent A_j to agent A_i, informing A_i that A_j is interested in a set of variables, vars. The agent at position i is denoted A^i. R^i is the agent that can reorder A^{i+1} by sending reorder messages [4].

3.1 Legal messages

In AASR, both ok and nogood messages transport some kind of nogoods: the nogoods entailed by the view and the explicit nogoods, respectively. In order to allow the agents to detect messages that are potentially harmful for the quality of the computed solution, we introduce the notions of legal nogood and legal aggregate. We want to prevent the agents from disturbing the search by generating illegal messages. A message is illegal if it is generated by an inactive agent. SAS requires agents to build messages in such a way that their lawfulness can be proved.

Definition 5 (Legal explicit nogood) Any legal explicit nogood generated by an agent A_i, where A_i is not an initiator, must contain at least one aggregate (v_j, s, h), v_j ∈ V(i), such that s does not contain F.

Definition 6 (Justification) Each aggregate I_i generated by an agent A_i that is not an initiator needs a justification. The justification of the aggregate I_i consists of a pair (v, h) built from an aggregate (v, s, h) that activates A_i.

The justification of an aggregate a corresponds to a relaxation of the nogood entailed by the view given by a, and is stored in the history of the aggregate, attached to the pair corresponding to the agent that generated a. A history now has the form |i_1 : l_1, j_1 | i_2 : l_2, j_2 | ..., where i_k is the index of an agent, l_k is the value of an instantiation counter, and j_k is the justification of the corresponding instantiation.

Property 1 The space needed by an agent to store all the aggregates is O(nv), where n is the number of agents and v is the number of variables.

Corollary 1 The size of an aggregate is O(nv).

Property 2 SAS has polynomial space complexity in each agent.

The proofs are given in [3]. Besides generating illegal nogoods, the agents can also generate illegal aggregates against their competitors.

Definition 7 (Legal aggregate) An aggregate is legal if its justification is valid and the variable in the justification does not contain F in its instantiation. By convention, any aggregate generated by an initiator is legal.

3.2 The SAS protocol

In SAS the messages must prove that their sender is active. Agents must generate only legal nogoods. Any other nogood would be discarded.
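Before turning to the protocol rules, here is an illustrative check of the legality conditions of Definitions 5 and 7, under a hypothetical encoding of aggregates as (variable, value set, history) triplets:

```python
F = "F"

def legal_explicit_nogood(nogood, sender_vars, sender_is_initiator):
    """Definition 5: a non-initiator's explicit nogood must contain at
    least one aggregate on one of the sender's own external variables
    whose value set does not contain F."""
    if sender_is_initiator:
        return True
    return any(var in sender_vars and F not in values
               for var, values, _history in nogood)

def legal_aggregate(justification_values, sender_is_initiator, is_valid=True):
    """Definition 7 (simplified): the justification must be valid and its
    instantiation must not contain F; initiators' aggregates are legal."""
    return sender_is_initiator or (is_valid and F not in justification_values)

nogood = [("x", {1, 2}, "|1:3|"), ("y", {F}, "|2:1|")]
print(legal_explicit_nogood(nogood, {"x"}, sender_is_initiator=False))  # True
print(legal_explicit_nogood(nogood, {"y"}, sender_is_initiator=False))  # False
print(legal_aggregate({3, 4}, sender_is_initiator=False))               # True
print(legal_aggregate({F}, sender_is_initiator=False))                  # False
```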
The next rule shows how legal nogoods can be obtained.

Rule 1 (Nogood generation) An agent A_i may compute an explicit nogood N that is not legal while the set in the newest aggregate it has received for some variable v_j from V(i) does not contain F. A_i should then add the newest aggregate of v_j to N. If this is not possible, it should refrain from sending N to other agents. This rule does not apply to initiators. (When illegal nogoods are made legal in this way, they are in fact relaxed. Agents that must relax nogoods can use heuristics for choosing the variable v_j from V(i), e.g., choosing the variable for which the known aggregate was generated by the agent with the lowest position.)

Rule 2 (Checking) The receiver of an explicit nogood ¬N should check that ¬N is legal. The receiver of any aggregate should likewise check that the new aggregate is legal. Illegal information is discarded.

The justifications trigger add-link messages under the same conditions as the aggregates received in an explicit nogood in AASR. Justified nogoods should not be delivered to the agent and integrated in the other structures inherited from AASR before the answers to any add-link messages are received.

Rule 3 (Justification change) Whenever the justification of an agent A_i is modified, A_i has to resend all its aggregates.

Rule 4 (Justification invalidation) Whenever the justification J of a stored aggregate a_1 in A_i is invalidated by some incoming new aggregate a_2, A_i has to invalidate a_1 and apply this invalidation rule again as if a new aggregate of the variable in a_1 had been received.

Each proposal that activates or inactivates agents is broadcast to all agents with higher positions.

Rule 5 (Next active) If possible, acting as R^i, each A^i proposes new orders to ensure that A^{i+1} is active. R^0 is an initiator.

Proposition 1 Secure Asynchronous Search maintains the completeness, correctness, and termination characteristics of AASR.

SAS is an asynchronous protocol. A corresponding synchronous protocol (SSS) can be obtained in an obvious way.

4 Conclusions

We have presented an approach to distributed problems with competition and byzantine failures. The concept of Dynamic Distributed Constraint Satisfaction is proposed, and we show how it allows for modeling complex characteristics of such problems. As shown in [3], DyDisCSPs can easily model and solve (Generalized) English Auctions. The presented algorithms and framework inherit the generality and modeling flexibility of Constraint Reasoning.

References

1. K. Hirayama and M. Yokoo. Distributed partial constraint satisfaction problem. In CP, LNCS 1330, pages 222-236, 1997.
2. T. Schiex, H. Fargier, and G. Verfaillie. Valued constraint satisfaction problems: hard and easy problems. In IJCAI, pages 631-637, 1995.
3. M.-C. Silaghi, D. Sam-Haroud, and B. Faltings. Generalized English Auctions by relaxations in DyDisCSPs with private constraints. TR #01/365, EPFL, 2001.
4. M.-C. Silaghi, D. Sam-Haroud, and B. Faltings. ABT with asynchronous reordering. IAT, 2001.

FOUNDATIONS OF MARKET-DRIVEN AGENTS: AN ADAPTATION OF ZEUTHEN'S BARGAINING MODEL

KWANG MONG SIM (1) AND CHUNG YU CHOI (2)
(1) Department of Information Engineering, Chinese University of Hong Kong, Shatin, NT, Hong Kong. E-mail: kmsim@ie.cuhk.edu.hk
(2) Hongkong and Shanghai Banking Corporation, Hong Kong.

Market-driven agents are negotiation agents that react to changing market situations by making adjustable rates of concession.
In determining the amount of concession for each trading cycle, market-driven agents are guided by four mathematical functions of eagerness, remaining trading time, trading opportunity and competition. At different stages of trading, agents may adopt different trading strategies and make different rates of concession. Four classes of strategies with respect to remaining trading time are discussed. Trading opportunity is determined by considering: (i) the number of trading partners, (ii) spreads (differences in utilities between an agent and its trading partners), and (iii) the probability of completing a deal. While eagerness represents an agent's desire to trade, trading competition is determined by the probability that it is not considered the most preferred trader by its trading partners.

1 Introduction

One of the most crucial issues in both conventional and electronic commerce is for both sellers and buyers to reach a consensus on pricing and other terms of transactions. While there are extant agent-based negotiation systems [1, 2, 3, 4], agents in these systems adopt fixed (pre-specified) negotiation strategies, which are not necessarily the most appropriate for changing market situations. As products/services become available and traders enter and leave a market, the conditions for deliberation change as new opportunities and threats are constantly being introduced. In addition, deliberation may also be bounded by time. Fixed negotiation strategies do not address these issues, and agents increase/relax their bids at a constant rate. Preliminary results from [5] showed that by dynamically adjusting bids, market-driven agents outperformed fixed-strategy agents in most situations. The motivating consideration of market-driven agents is to assist human users in making optimal trading decisions in response to changing market situations. The goal of this research is to design and engineer agents that are guided by market-driven strategies adapted from Zeuthen's bargaining model [6]. While Sim and Wong's agents [5] search for optimal deals in a given market situation by considering market factors such as opportunity, competition and remaining trading time, Zeuthen's model can be used to analyze the probability of generating a successful deal. In particular, it seems prudent to supplement the market-driven approach by also considering the risk of not completing a deal if an agent insists on its bid/offer.

Market-driven strategy and Zeuthen's model: This research extends Sim and Wong's market-driven strategy [5] by considering the spread k (difference) between an agent's and its trading partners' bids/offers. In making concessions, an agent may need to assess the probability p of successfully completing a deal when bargaining for or giving up some items/commodities in different market situations. One way to assess p is to consider the value of k. Let v be the utility of an agent and w be the best utility among all its trading partners in the current trading cycle. The (best) spread in the current cycle is k = v - w. A market-driven agent makes a concession by determining the utility of its next bid/offer v' as the sum of the spread in the next cycle k' and w. Hence, v' = k' + w. An agent strives to attain the highest possible utility while maintaining a reasonable level of p in the next trading cycle. It is more probable to complete a deal when k' is small, and vice versa. k' is determined by assessing the current market situation:

k' = E(e) T(t, t', T, λ) O(n, ⟨w_i⟩, v) C(m, n) k.
E(e) is a function of an agent's eagerness that determines the amount of concession (section 2). T(t, t', T, λ) is a function of trading time that determines an agent's rate of concession (section 3). O(n, ⟨w_i⟩, v) is a function that determines the amount of concession according to an agent's own expectation, the number of trading partners, and (the differences in) their offers/bids (section 4). C(m, n) is a function of the number of competitors m and trading partners n that determines the probability that an agent is ranked as the most preferred trader by at least one of its trading partners (section 5).

2 Eagerness and Convergence

The eagerness e of an agent is the percentage of convergence of the spread k. e represents an agent's desire to make concessions, that is, to narrow k in each trading cycle, independent of the current trading time and the numbers of competitors and trading partners. If the market situation remains unchanged, an agent's spread k' in its next bid/offer is given by k' = (1 - e)k, where 0 ≤ e ≤ 1. e is a user-supplied value that is assumed to be constant throughout the entire trading process. The function E(e) = 1 - e, 0 ≤ e ≤ 1, determines the ratio between k' and k, hence the amount of concession. A high value of e corresponds to a strong desire of an agent to narrow k' and move closer to a consensus. For instance, with other market factors constant, if e = 0, then k' = (1 - 0)k = k and the agent does not make any concession; if e = 1, then k' = (1 - 1)k = 0 and a consensus is reached.

3 Trading Time and Strategies

At different stages of trading, agents may make different rates of concession. Their strategies with respect to remaining trading time can be classified as follows:

1. An agent that is keen to complete a deal quickly may make large concessions in the first few trading cycles. However, if a consensus is not reached rapidly, there will be little room for negotiation in later stages.
2. An agent may choose to make minimal concessions in early trading cycles and large concessions when the trading time is expiring.
3. An agent may make a constant rate of concession throughout the trading process.
4. Remaining trading time does not affect an agent's next bid/offer; it sticks to its original bid/offer throughout the entire trading process.

The formulation of the next spread k' with respect to trading time is given as

k' = k_0 (1 - (t/T)^λ), t ≤ T,

where k_0 is the initial spread, t is the current trading time, T is the agent's closing time, and λ is a non-negative temporal sensitivity factor that models the four classes of strategies (see Figure 1). Since

dk'/dt = -λ (t^(λ-1)/T^λ) k_0 and d²k'/dt² = -λ(λ - 1)(t^(λ-2)/T^λ) k_0,

and λ, t, and T are non-negative, the slope dk'/dt is never positive. Hence k', and with it the utility of the agent, either decreases with time (since concessions are being made) or remains constant throughout the trading process. The rate of change of the slope, d²k'/dt², differentiates the four strategies as follows:

1. When 0 < λ < 1, the rate of change of the slope is increasing, corresponding to larger concessions in the early cycles and smaller concessions in later cycles.
2. When λ > 1, the rate of change of the slope is decreasing, corresponding to smaller concessions in the early cycles and larger concessions in later cycles.
3. When λ = 1, the rate of change of the slope is zero, corresponding to a constant concession throughout the trading process.
4. When λ = 0, the rate of change of the slope and the slope itself are always zero, corresponding to not making any concession throughout the entire trading process.

This is based on the assumption that the number of trading partners and their bids/offers remain unchanged. λ is supplied by the user and is assumed to remain constant throughout the entire trading process. Let the spread at time t (when the last bid/offer was made) be k, and the next spread at time t' (when the next bid/offer is to be made) be k'. With other market factors unchanged, an agent's next spread is

k' = T(t, t', T, λ) k, where T(t, t', T, λ) = (1 - (t'/T)^λ) / (1 - (t/T)^λ).
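A small numeric illustration (ours, with made-up values of k_0, T and λ) of the strategy classes; the λ = 0 case degenerates in this closed form and is read in the paper as making no concession at all:

```python
# k'(t) = k0 * (1 - (t/T)**lam) for three of the four strategy classes.
k0, T_close = 10.0, 10.0

def spread(t, lam):
    return k0 * (1 - (t / T_close) ** lam)

for lam, label in [(0.5, "large concessions early"),
                   (1.0, "constant-rate concession"),
                   (2.0, "large concessions late")]:
    trace = [round(spread(t, lam), 2) for t in (0, 2, 5, 8, 10)]
    print(f"lambda={lam}: {label:<26} k' at t=0,2,5,8,10 -> {trace}")
```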
4 Trading Opportunities

In determining trading opportunities, (i) the number of trading partners n, (ii) the spreads k between an agent and its trading partners, and (iii) the probability p of completing a deal are considered. The probability p' of completing a deal in the next cycle can be assessed by considering the value of the next spread k'. If k' is very large, it is less likely that a consensus is reached in the next cycle, and vice versa. Hence, p' and k' are inversely proportional:

(3.1) p' ∝ 1/k'.

p is determined by considering the notion of a conflict probability [6].

Conflict probability: Suppose that at any stage of negotiation, an agent B_i's last bid is represented as a utility vector v = (v_b, v_s) and its trading partner S_i's offer is a utility vector w = (w_b, w_s), with v_b > w_b and v_s < w_s (since B_i and S_i are utility-maximizing agents). Based on Zeuthen's analysis [6], if B_i accepts S_i's last offer, then it will obtain w_b with certainty. If B_i insists on its last bid and (i) S_i accepts it, B_i obtains v_b, and (ii) S_i does not accept it, B_i may be subjected to a conflict utility c_b. c_b is the worst acceptable utility for B_i (e.g., its reserve price). If S_i does not accept B_i's last bid, B_i may ultimately have to settle for lower utilities (the lowest possible being the conflict utility) if the market situation changes in subsequent cycles. For instance, B_i may face more competition in the next or subsequent cycles and may ultimately have to accept a utility lower than w_b (possibly as low as c_b). If the subjective probability of B_i obtaining c_b is p_c (the conflict probability) and the probability of B_i achieving v_b is (1 - p_c), then according to Zeuthen's analysis [6], if B_i insists on holding its last bid, it will obtain a payoff of (1 - p_c)v_b + p_c c_b. Hence, B_i will find it advantageous to insist on its last bid only if [(1 - p_c)v_b + p_c c_b] ≥ w_b. Consequently,

p_c ≤ (v_b - w_b)/(v_b - c_b) = k/(v_b - c_b)

is the highest probability of conflict that B_i may encounter. p_c is a ratio of two utility differences: while (v_b - w_b) measures the cost of accepting the trading partner's last offer (the spread k, i.e., the difference between the bid of B_i and the offer of S_i), (v_b - c_b) measures the cost of provoking a conflict. (v_b - c_b) represents the range of possible utilities between the best-case utility and the worst-case (conflict) utility.
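The bound above yields a simple insist-or-concede test; the following sketch (ours, with made-up utilities) checks it numerically:

```python
def max_conflict_probability(v_b, w_b, c_b):
    """Highest conflict probability under which insisting still pays off."""
    return (v_b - w_b) / (v_b - c_b)

def should_insist(p_c, v_b, w_b, c_b):
    # Insisting yields (1 - p_c)*v_b + p_c*c_b; accepting yields w_b.
    return (1 - p_c) * v_b + p_c * c_b >= w_b

v_b, w_b, c_b = 10.0, 8.0, 2.0   # bid utility, partner offer, conflict utility
print(round(max_conflict_probability(v_b, w_b, c_b), 3))  # 0.25
print(should_insist(0.2, v_b, w_b, c_b))   # True: conflict unlikely enough
print(should_insist(0.4, v_b, w_b, c_b))   # False: better to concede
```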
Aggregated probability of conflict: Let p_i be the conflict probability of B_i with any one of its trading partners S_i; then the aggregated conflict probability of B_i with all of its n trading partners is

(3.2) P_c = Π_{i=1}^{n} p_i = [Π_{i=1}^{n} (v_b - w_i)] / (v_b - c_b)^n, with k_i = v_b - w_i.

Probability of completing a deal: The probability p that B_i will obtain a utility v_b with at least one of its n trading partners is

(3.3) p = 1 - P_c = 1 - [Π_{i=1}^{n} (v_b - w_i)] / (v_b - c_b)^n.

From (3.1) and (3.3), the function that determines trading opportunities is given as

(3.4) O(n, ⟨w_i⟩) = [Π_{i=1}^{n} (v_b - w_i)] / (v_b - c_b)^n.

p' can be a user-defined parameter, as a trading agent may try to maintain a certain level of probability of completing the transaction while demanding the highest possible utility in a given market situation. Although the notion of p' appears to resemble the notion of eagerness e, they are different: while p' models the extent to which an agent will make concessions in response to a market situation in order to complete a deal, e models an agent's desire to acquire a product/service regardless of the market condition. Furthermore, the market-driven strategy in this research is designed for both buyer and seller agents, so (3.4) can be rewritten as

(3.5) O(n, ⟨w_i⟩, v) = [Π_{i=1}^{n} (v - w_i)] / (v - c)^n,

where v and c are the utility of the last bid/offer of a trading agent and its conflict utility, respectively.

5 Trading Competition

The amount of competition faced by a market-driven agent is determined by the probability that it is not considered the most preferred trading partner. Suppose an agent B_1 has m - 1 competitors, B = {B_2, ..., B_m}, and n trading partners, S = {S_1, ..., S_n}. The probability that B_1 is not the most preferred trading partner of a given S_i ∈ S is (m - 1)/m, and the probability of B_1 not being the most preferred trading partner of all S_i ∈ S is ((m - 1)/m)^n. Hence, the probability of B_1 being considered the most preferred trading partner by at least one S_i ∈ S is C(m, n) = 1 - ((m - 1)/m)^n.
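A quick numeric check (ours) of C(m, n) against the special cases walked through below:

```python
def C(m, n):
    """Probability of being the most preferred partner of at least one
    of n trading partners, given m - 1 competitors (m traders in all)."""
    return 1 - ((m - 1) / m) ** n

print(C(1, 1))             # 1.0   : one buyer, one seller
print(C(4, 1))             # 0.25  : m buyers, one seller -> 1/m
print(round(C(4, 3), 4))   # 0.5781: preferred by at least one of three sellers
```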
6 Conclusion

This paper has presented the foundations of engineering market-driven agents. It extends the work in [5] by considering the additional notions of spread and probability of completing a deal. The mathematical functions in Sections 2 through 5 were used to guide the design of market agents that can automatically control the amount of concession in a negotiation for a given market situation. Experimental results in [7] demonstrated that the reactions of market-driven agents coincide with intuitions in real-life trading (space limitations preclude their inclusion here). It is hoped that the foundations discussed in this paper can shed new light on engineering negotiation agents.

References

1. J. Rodriguez-Aguilar et al. Towards a Testbed for Trading Agents in Electronic Auction Markets. AI Communications, Vol. 11, No. 1, pp. 5-19, 1998.
2. P. Wurman et al. The Michigan Internet AuctionBot: A Configurable Auction Server for Human and Software Agents. Proceedings of the 2nd Int. Conf. on Autonomous Agents, May 1998.
3. D. Chavez et al. A Real-Life Experiment in Creating an Agent Marketplace. Proceedings of the 2nd Int. Conf. on the Practical Application of Intelligent Agents and Multi-Agent Technology, London, April 1997.
4. R. Guttman and P. Maes. Cooperative vs. Competitive Multi-Agent Negotiations in Retail Electronic Commerce. Proceedings of the 2nd Int. Workshop on Cooperative Information Agents (CIA'98), Paris, July 1998.
5. K.M. Sim and E. Wong. Engineering Market-driven Agents for E-auction. Proceedings of the Pacific Asia Conference on Information Systems 2000, pp. 1128-1138.
6. J.C. Harsanyi. Bargaining. In Eatwell, Milgate, and Newman (eds.), The New Palgrave: Game Theory, Macmillan Press Limited, 1989, pp. 60-62.
7. C. Choi. Towards Strategic Agents for Auction and Negotiation. MSc thesis, Dept. of Computing, Hong Kong Polytechnic University, Hong Kong, 2001. Supervised by Prof. K.M. Sim.

CHAPTER 6
AGENT BASED APPLICATIONS

KAVANAH: AN ACTIVE USER INTERFACE INFORMATION RETRIEVAL APPLICATION

EUGENE SANTOS JR. AND HIEN NGUYEN
Intelligent Distributed Information System Lab
Department of Computer Science and Engineering
University of Connecticut
{eugene,hien}@cse.uconn.edu

SCOTT M. BROWN
Air Force Research Laboratory
Crew System Interface Division
Wright-Patterson AFB, OH
sbrown777@acm.org

This paper reports our implementation and evaluation of an active user interface in an information retrieval application called Kavanah. The goal of the active user interface is to improve the quality of information retrieval and to reduce the user's cognitive workload while searching for information. Our underlying concept is to dynamically construct the search queries based on a dynamic representation that captures user interests, preferences, and searching context (as represented in a user ontology). Our approach of disaggregating the essential aspects of a user's intent for searching allows for focused, multi-agent-based construction and correction of the overall user model that captures the user's intent, thus promoting increased effectiveness and efficiency. We evaluate the effectiveness of the active user interface with commonly used metrics from the information retrieval community by measuring retrieval performance with and without the presence of an active user interface. Furthermore, we measure the ability to discover new knowledge by evaluating our dynamic online ontology construction. The evaluations use the Unified Medical Language System knowledge base as a test bed.

1 Introduction

During the last few years, as a result of the overwhelming number of choices of online and offline information resources, we have witnessed an increasing trend towards the construction of personal assistant agents in information filtering, recommender systems, and agent communities [2, 9, 11]. The main focus of these approaches is to capture user interests by analyzing the user's interactions with the system and to use these interactions to guide the system's reactions accordingly, improving the quality of the user's work.
In this paper, we hypothesize that constructing a unified model of the user's interests, preferences, and context in an information-seeking task provides a fine-grained model that more effectively captures the user's information-seeking intent than a model addressing only a subset of these salient characteristics. While previous efforts have focused exclusively on learning one aspect of information seeking, none has attempted to integrate all three aspects together for determining a user's intent in seeking information. We refer to our personal assistant agent as an active user interface (AUI) in this paper. Active user interfaces not only capture user interests, preferences, and contexts but also focus on the interactions among them in a dynamic fashion. In particular, our focus is on deriving and learning the context, or user ontology. Most existing methods assume that all users share a single common ontology [13]. This implicitly assumes that all users have the same level of understanding and the same beliefs as expressed in the common ontology. We believe that users understand information, and how pieces of information interact, in their own individual way. This arises from many factors ranging from user experience and expertise to basic differences in user style and operation. We show that by using our model, we can do more than just elicit user interests and preferences: we provide a learning capability for the system to discover new knowledge by analyzing the documents relevant to the user together with the context, i.e., why the user is focusing on the given information. This work is derived from our earlier research with a predecessor system, Clavin [4, 15, 16]. We evaluate our hypothesis by constructing an AUI in an information retrieval application called Kavanah. The implementation of our AUI is a multi-agent-based system in which the main agent contains the user model, consisting of user preferences, interests, and context, and the supporting agents are used to dynamically construct and maintain the user model based on changes in the user's intent as well as incorrectness and incompleteness in the user model. Our evaluation goal is to show the effectiveness of this model by comparing system performance with and without an AUI using commonly used metrics in information retrieval. The rest of the paper is organized as follows: the next section discusses the architecture of the system, followed by a detailed description of our implementation. Next, we discuss our preliminary empirical evaluation. Finally, related work and future research issues are considered.

2 System architecture

The main goal of Kavanah is to use its AUI to assist users in getting the right information at the right time using the right tools [4]. The goal of the AUI is to accurately represent a user's intent. Intent inference involves deducing an individual's goals based on observations of that individual's actions [12]. In automated intent inference, this process is typically implemented through one or more behavioral models that have been constructed and optimized for the individual's behavior patterns. In an automated intent inference system, data representing observations of an individual, the individual's actions, or the individual's environment (collectively called observables) are collected and delivered to the model(s), which match the observables against patterns of behavior and derive inferred intent from those patterns.
These inferences can then be passed to an application for generation of advice, definition of future information requirements, or proactive aiding. We partition intent inference into three formative components. The first, interests, captures at a high level the focus and direction of the individual's attention. The second, preferences, describes the actions and activities that can be used to carry out the goals that currently hold the individual's attention, with a focus on how the individual tends to carry them out. The third, context, provides insight into the user's knowledge and deeper motivations behind the goals upon which the individual is focused, and illuminates connections between goals. In other words, the first component captures what the individual is doing, the second captures how the individual might do it, and the third infers why the individual is doing it. With regard to the research presented in this paper, the AUI needs to provide the right assistance to the information retrieval application on what the user is currently interested in, how a query needs to be constructed and returned results need to be portrayed, and why the user dwells on a search topic. We assume that the interests are influenced by the ultimate goal that the user is trying to reach and the methods she uses to accomplish that goal. For example, suppose that the user's goal is to study lung cancer and her approach is to scan materials from general definitions to specific methods used to treat this disease. Her interests will thus vary from general treatments to specific chemotherapy processes; in particular, her interests may change from a certain drug to a more general approach to treatment. The user's interests, in turn, influence user preferences and context. If her interests appear to be far off the goal she is trying to reach, she may change her search strategies and understanding of the subject accordingly. In our AUI, we capture the interest, preference, and context aspects of user intent with an interest relevancy set, a user ontology network, and a preference network, respectively. The interest relevancy set determines what is currently relevant to the user. It is generated by reasoning over the user ontology network: based on the utility values of each concept node in the user ontology network, we obtain a rank ordering of the concepts from which the interest relevancy set is built. Since user interests change over time, we incorporate a fading function to make irrelevant interests fade away. We will describe this process in more detail in Section 3. The user ontology network captures the user's knowledge of concepts and the relations among concepts in a specific domain. Before discussing the user ontology network further, we briefly introduce the concept of a domain ontology. The domain ontology captures the domain knowledge, containing the concepts and the relations among them in a specific domain. The user ontology exploits the domain ontology by extracting the missing information it needs in order to help identify the concepts in which the user is interested. Therefore, the user ontology will be similar to a subgraph of the domain ontology. However, in the user ontology network, additional relations not found in the domain ontology may exist as a result of user misconceptions or user-specific expert knowledge that is not fully captured in the domain ontology.
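To make this relationship concrete, here is a minimal sketch (our own illustration, not Kavanah code; the example relations, including the extra user-specific one, are hypothetical) of a user ontology as a subgraph of the domain ontology plus user-specific relations:

```python
# Illustrative sketch: a user ontology as (roughly) a subgraph of the domain
# ontology, possibly extended with relations absent from the domain ontology
# (user misconceptions or user-specific expertise). All edges are examples.

domain_ontology = {                       # edges: (concept, relation, concept)
    ("urate oxidase", "isa", "enzyme"),
    ("cosmids", "isa", "enzyme"),
    ("enzyme", "causes", "liver damage"),
}

# The user knows only part of the domain...
user_ontology = {("urate oxidase", "isa", "enzyme")}

# ...plus a relation the domain ontology does not contain (hypothetical).
user_ontology.add(("urate oxidase", "treats", "gout"))

shared = user_ontology & domain_ontology  # the subgraph of the domain ontology
extra = user_ontology - domain_ontology   # user-specific relations
print(shared, extra)
```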
The user ontology network in Kavanah is represented by a Bayesian network in which each node represents either a concept or a relation among concepts. The preference network represents how the user wants to form the query, how this query should be answered, and how results should be portrayed. The user's preference in Kavanah is reflected by how the user prefers to use a class of tools. A tool is defined as an operator that performs specific actions to transform the information the user needs, based on preference. Each node in the preference network represents a tool, an action associated with that specific tool, or a pre-condition representing the requirements of the tool connected with it. An example of a tool is a filter that removes documents that do not match certain criteria. Another example is an expander that searches for documents that broaden the search topic. Figure 3(a) shows an example of a preference network. The AUI uses correction adaptation agents to maintain the preference network. Each correction adaptation agent offers a bid to the AUI to change the preference network. These agents maintain a user model identical to that of the AUI until the AUI requests bids, at which time the AUI adapts its own user model based on its bidding behavior component. In the bidding process, the correction adaptation agent that most likely improves the AUI's effectiveness wins the bid, and this winning agent is permitted to correct the user model. We evaluate the user model by a set of metrics that measure its adaptivity, autonomy, collaboration, and robustness requirements. We capture the user's utility for having the AUI perform an action on his behalf to achieve a goal by a utility function over that set of requirements. An example of such a utility function is:

U_{req} = 0.14 U_{reactive} + 0.14 U_{predictive} + 0.14 U_{perceptive} + 0.14 U_{autonomous} + 0.14 U_{collaborative} + 0.14 U_{capability} + 0.14 U_{misconception}

For more information about the correction adaptation agents, the bidding process, and the metrics, please see our previous paper [5].

3 System implementation

We start this section by describing the overall process in Kavanah and then describe in detail how the AUI helps the system build the adapted query. Kavanah consists of five modules, as shown in Figure 1(a). The input module accepts the user's natural language queries and transfers them to the query module, where they are parsed and converted into a query graph (QG). A QG is similar in construction to the user ontology network except that it may contain one or more nodes representing a variable (usually denoted X) that is necessary to represent unknown concepts in the user query. A query graph is a directed acyclic graph in which each node represents a concept or a relation among concepts; a relation node has concept nodes as parents and children. A concept node represents a noun phrase, while a relation node represents a verb phrase in a user query or natural language sentence. An example of the QG of the query "What causes liver damage?" is shown on the left side of Figure 3(b). The AUI uses the query graph and generates a new adapted query for the search module based on the current user model. An example of an adapted query is shown on the right side of Figure 3(b).
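The paper does not give a concrete data structure for query graphs; the following minimal sketch (our own illustration) encodes the QG of "What causes liver damage?" as a directed acyclic graph with concept and relation nodes, using a variable node X for the unknown concept:

```python
# Illustrative encoding of a query graph (QG): a DAG whose nodes are either
# concepts (noun phrases) or relations (verb phrases); relation nodes have
# concept nodes as parents and children. This structure is our own sketch.
from dataclasses import dataclass, field

@dataclass
class Node:
    label: str
    kind: str                  # "concept" or "relation"
    is_variable: bool = False  # True for unknown concepts such as X

@dataclass
class QueryGraph:
    nodes: list[Node] = field(default_factory=list)
    edges: list[tuple[int, int]] = field(default_factory=list)  # parent -> child

# "What causes liver damage?": X --(causes)--> liver damage
x = Node("X", "concept", is_variable=True)
causes = Node("causes", "relation")
liver_damage = Node("liver damage", "concept")
qg = QueryGraph(nodes=[x, causes, liver_damage], edges=[(0, 1), (1, 2)])
```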
The search module matches the QG of the adapted query against each document graph representing a record in the database of documents, chooses those records whose number of matches is greater than a user-defined threshold, and displays the output to the user. A document graph (DG) is a directed graph that contains concept and relation nodes and is also similar to the user ontology network (e.g., Figure 2(a)). Note that all of the common concepts in all of the documents are found in a global dictionary and the domain ontology. A match between a QG and a DG is defined as the number of concept and relation nodes of the QG found in the DG over the total number of nodes of the QG. After the search module returns the search results, the feedback module allows the user to indicate whether each result is relevant or not. The AUI constructs the adapted query in Kavanah by maintaining updated user interests, preferences, and context. The logical architecture of the AUI is shown in Figure 1(b).

[Figure 1. (a) The overall Kavanah architecture. (b) The AUI architecture.]

The AUI determines the current interests by reasoning over the user ontology network with the concepts found in the user query set as evidence. Each element of the interest relevancy set consists of an interest concept and an interest level. The interest concept represents a topic that the user is currently interested in; it is determined from the user query and the set of documents that the user has indicated as relevant in the recent search. The interest level is a real number indicating how much the user is interested in the corresponding interest concept. Denote each interest concept as a and its associated interest level as L(a). We compute L(a) after every query by

L(a) = 0.5 (L(a) + n/m)

with n the number of relevant documents containing the concept a and m the number of relevant documents. If L(a) falls below the user-defined threshold value, the corresponding interest concept a is removed from the interest relevancy set. To compute the new set of interests, we set as evidence in the user ontology network those concepts found in the query and the interest relevancy set, and perform belief updating on the user ontology network. We construct the user ontology network dynamically by finding a common set of subgraphs of all relevant documents. Each document is represented as a DG (e.g., Figures 2(a) and 2(c)). For each relevant document, we build the set of its subgraphs. A subgraph X of a DG Y is a DG such that each node belonging to X also belongs to Y. The sets of subgraphs of the concepts "urate oxidase" and "cosmids" are shown in Figures 2(b) and 2(d). After generating all of the subgraphs, we compute the number of occurrences of each subgraph in the entire set. We select those subgraphs whose number of occurrences is greater than a user-defined threshold and consider them the common subgraphs of the relevant documents. The common set of subgraphs of the two above concepts is shown in Figure 2(f). This set is used by an agent to update the user ontology network: the agent checks whether a subgraph is not currently in the user ontology network and adds it accordingly. The agent ensures that the update will not create a loop in the existing ontology network; if it would, we skip the addition.
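A minimal sketch of the common-subgraph selection just described (our own illustration; for brevity, subgraphs are represented as hashable canonical forms rather than enumerated from full document graphs):

```python
# Sketch of common-subgraph selection: count each candidate subgraph's
# occurrences across the relevant documents' subgraph sets and keep those
# above a user-defined threshold. Canonical forms stand in for real subgraphs.
from collections import Counter

def common_subgraphs(doc_subgraph_sets: list[set[frozenset]],
                     threshold: int) -> set[frozenset]:
    """doc_subgraph_sets holds one set of canonical subgraphs per relevant
    document; returns subgraphs occurring in more than `threshold` documents."""
    counts = Counter(sg for doc in doc_subgraph_sets for sg in doc)
    return {sg for sg, c in counts.items() if c > threshold}

# Two toy documents sharing the edge "cosmids -isa-> enzyme" (cf. Figure 2):
doc1 = {frozenset({("cosmids", "isa", "enzyme")}),
        frozenset({("urate oxidase", "isa", "enzyme")})}
doc2 = {frozenset({("cosmids", "isa", "enzyme")})}
print(common_subgraphs([doc1, doc2], threshold=1))
```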
A new link between two existing concepts in the user ontology network will also be created if the two concepts are linked in the set of common subgraphs and the frequency of these links exceeds a certain user-defined threshold. The preference network is built when the user issues a new query and gives relevance feedback. Each user query is considered as a pre-condition node in the preference network.

[Figure 2. (a) The graph representing "cosmids". (b) Subgraphs of the concept "cosmids". (c) The graph representing "urate oxidase". (d) Subgraphs of the concept "urate oxidase". (f) The set of common subgraphs of the concepts "cosmids" and "urate oxidase".]

[Figure 3. (a) An example of a preference network. (b) The query graphs associated with the user query and with the adapted query generated by the AUI.]

If this query, or part of it, has already been asked, the existing node in the preference network whose QG matches the QG of the new query (or of its part) is set as evidence. Each interest concept from the interest relevancy set is added to the preference network as a pre-condition node and set as evidence. If the user query is totally new, the tool being used is set to the default value (a filter) and a goal node representing the filter tool is added to the preference network. Otherwise, it is set to the tool represented by the goal node with the highest utility value. Each action node represents a way to construct an adapted query based on the current tool, interests, and user query. Figure 3(a) shows an example of a preference network in which the user is using an expander, is currently interested in the concept "urate", and wants to find out the causes of liver damage. Note that each user query and each adapted query generated by the AUI is associated with a QG. The current way of forming a new query is determined by reasoning over the preference network, taking the top n goal nodes, where n is defined by the user. We compute the expected utility for a goal g with associated actions a_i as follows:

EU(g) = P(g = T) U(a \mid g = T) + P(g = F) U(a \mid g = F)

in which U(a \mid g = T/F) = \sum_i P(a_i \mid g = T/F) U(a_i) and U(a_i) = \sum U(\text{attributes}). The set of attributes in the formulas above includes workload, user expertise, temporal memory, and spatial memory. For more detailed information, please refer to our earlier work [6].

4 Empirical Evaluation

We empirically evaluate the system using the definitions of 100 concepts extracted from the Unified Medical Language System (UMLS). In the first evaluation, we focus on the quality of the retrieval process. We constructed a set of queries and processed this set through the system with and without the AUI. This query set mainly uses "wh" questions to find the definitions of concepts or to identify concepts that match certain requirements, for example, "what is urate oxidase?" or "which enzyme inhibits monoamine oxidase and causes liver damage?". We made the assumption that the user does not just explore concepts randomly, but focuses on what he is studying. We used the precision and recall metrics commonly used in information retrieval [14] as our evaluation criteria.
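For reference, a minimal sketch of the two metrics as used here (standard information retrieval definitions; the function names are ours):

```python
# Standard precision/recall over a returned document list.

def precision(returned: set[str], relevant: set[str]) -> float:
    """Fraction of returned documents that are relevant."""
    return len(returned & relevant) / len(returned) if returned else 0.0

def recall(returned: set[str], relevant: set[str]) -> float:
    """Fraction of relevant documents that were returned."""
    return len(returned & relevant) / len(relevant) if relevant else 0.0

returned = {"d1", "d2", "d3"}
relevant = {"d2", "d3", "d4", "d5"}
print(precision(returned, relevant))  # ~0.667
print(recall(returned, relevant))     # 0.5
```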
Figure 4 shows the precision and recall for all of the questions in the cases with and without the AUI.

[Figure 4. Precision and recall for Kavanah with and without the active user interface (AUI), over questions Q1-Q10.]

As we can see, the precision and recall in the cases with an AUI are better than those without any help. When Kavanah works without an AUI, it simply matches the QG of the user query against each DG representing a record in the database. Depending on how well the user manipulates the keywords in a query, the search may return more, fewer, or even no documents. This process requires that the user either know the contents of the database or be very familiar with the search topics to achieve a decent result, and the user's feedback is not used to adapt the search query. With an AUI, depending on the user's feedback, Kavanah helps the user construct an appropriate search query that satisfies the user's searching intent. For example, if the user does not indicate any documents from the returned list as relevant, Kavanah knows that perhaps a wrong tool has been used, the interests are not up to date, or the ontology is far off the mark, and it automatically corrects those misses in order to improve the quality of the search.

We also evaluated the process of constructing and updating the user ontology network by building simulated user ontologies from the domain ontology. We randomly choose some concept nodes from the domain ontology (referred to in this experiment as testing concepts) and randomly remove some links associated with them, to see whether our system can reconstruct those missing links in the user ontology network. For each testing concept, we construct a set of queries that reflect the relations between the testing concept and the removed links. We compute the link error as LinkError = n/m, in which n is the number of links in the user ontology network matched against the target user subgraph's links and m is the total number of links of the user ontology network constructed by the AUI. First, we performed this experiment using the testing database mentioned above and found that there is a large mismatch between the domain ontology and the set of concepts used to construct our testing database. As shown in Figure 5, the matching percentage of the links created in the user ontology against the real user subgraph in the domain ontology is low for this case.

[Figure 5. Link matching percentage of ontology construction over five test cases: with a separate database and domain ontology, and with the domain ontology as the database.]

Next, we used a portion of the domain ontology as the testing database and performed the same experiment, to see whether the matching percentage of those links changes without the mismatch between the database and the domain ontology. As expected, the results are significantly better.

5 Related work

In the information retrieval and information filtering communities, dynamically reacting to changes in the user's interests, preferences, and context (i.e., ontology) to reduce the user's cognitive workload is a challenging problem.
Most of the research in information retrieval has focused on capturing user interests only [7, 18, 3]. Work by the InfoSleuth Group, which exploits ontologies to improve the retrieval process [10], is closely related to ours. In that work, an ontology is dynamically constructed from text and constantly updated as the retrieval process goes on. However, the technique does not address preferences and interests, and the paper does not provide enough detail to compare the approach empirically with ours. Another related work is IRIA [1], which unobtrusively builds a map of accessible relevant information and uses it to enable users to find information quickly. The difference between that work and ours is that the IRIA approach is based on a context-sensitive search that spreads the user's interests over the relevant information, while ours is based on a decision-theoretic approach to maintaining a model of the user's preferences, interests, and context.

6 Future work

This paper has described our ongoing work to construct an active user interface that provides intelligent assistance to the user in an information retrieval system. A number of issues arise from our design and empirical evaluation. We want to extend our evaluation to more complex scenarios with different kinds of questions and search strategies. Unfortunately, the current database suffers from low term frequency, usually referred to as the data sparseness problem in information retrieval [17]. We are also looking for another supplementary database or semantic network in UMLS that will help us overcome the problem of disjointness between the domain ontology and the datab