Proceedings ACM SIGMOD International Conference on Management of Data

May 13-15, 1997, Tucson, Arizona, USA

Edited by Joan M. Peckham
SIGMOD Record, Volume 26, Issue 2, June 1997

does not depend on their presence. Information-gathering tasks are thus defined generically, and their results are sensitive to the available resources. InfoSleuth must consequently provide flexible, extensible means to locate information during task execution, and must deal with incomplete information and partial results.
To achieve this flexibility and openness, InfoSleuth integrates the following new technological developments in supporting mediated interoperation of data and services over information networks:
`
1. Agent Technology. Specialized agents that represent the users, the information resources, and the system itself cooperate to address the information processing requirements of the users, allowing for easy, dynamic reconfiguration of system capabilities. For instance, adding a new information source merely implies adding a new agent and advertising its capabilities. The use of agent technology provides a high degree of decentralization of capabilities, which is the key to system scalability and extensibility.
`
2. Domain models (ontologies). Ontologies give a concise, uniform, and declarative description of semantic information, independent of the underlying syntactic representation or the conceptual models of information bases. Domain models widen the accessibility of information by allowing the use of multiple ontologies belonging to diverse user groups.
`
3. Information Brokerage. Specialized broker agents semantically match information needs (specified in terms of some ontology) with currently available resources, so retrieval and update requests can be routed only to the relevant resources.
`
4. Internet Computing. Java and Java applets are used extensively to provide users and administrators with system-independent user interfaces, and to enable ubiquitous agents that can be deployed at any source of information regardless of its location or platform.
`
In this paper, we present our working prototype version of InfoSleuth, which integrates the aforementioned technologies with more classic approaches to querying (SQL) and schema mapping. We also describe an application of InfoSleuth in the domain of health care.

This paper is organized as follows. The overall architecture is described in section 2. Detailed descriptions of the agents are given in section 3. Section 4 describes the InfoSleuth and domain ontology design. Brokering and constrained information matching are described in section 5. A data mining application in the health care domain is briefly presented in section 6. Related work is discussed in section 7. Finally, section 8 gives conclusions and future work.
`
2 Architecture
`
2.1 Architectural Overview
`
InfoSleuth is composed of a network of cooperating agents communicating by means of the high-level agent query language KQML [11]. Users specify requests and queries over specified ontologies via applet-based user interfaces. Dialects of the knowledge representation language KIF [13] and the database query language SQL are used internally to represent queries over specified ontologies. The queries
`
`
`
`
Figure 1: The InfoSleuth architecture
`
are routed by mediation and brokerage agents to specialized agents for data retrieval from distributed resources, and for integration and analysis of results. Users interact with this network of agents via applets running under a Java-capable Web browser that communicates with a personalized intelligent User Agent.

Agents advertise their services and process requests either by making inferences based on local knowledge, by routing the request to a more appropriate agent, or by decomposing the request into a collection of sub-requests, routing these requests to the appropriate agents, and integrating the results. Decisions about routing of requests are based on the "InfoSleuth" ontology, a body of metadata that describes agents' knowledge and their relationships with one another. Decisions about decomposition of queries are based on a domain ontology, chosen by the user, that describes the knowledge about the relationships of the data stored by resources that subscribe to the ontology.

Construction of ontologies for use by InfoSleuth is accomplished most easily by the use of the Integrated Management Tool Suite (IMTS, not discussed in this paper), which provides a set of graphical user interfaces for that purpose.

Figure 1 shows the overall architecture of InfoSleuth in terms of its agents. The functionalities of each of the agents are briefly described below. Detailed descriptions are given in the following section.
`
User Agent: constitutes the user's intelligent gateway into InfoSleuth. It uses knowledge of the system's common domain models (ontologies) to assist the user in formulating queries and in displaying their results.

Ontology Agent: provides an overall knowledge of ontologies and answers queries about ontologies.

Broker Agent: receives and stores advertisements from all InfoSleuth agents on their capabilities. Based on this information, it responds to queries from agents as to where to route their specific requests.

Resource Agent: provides a mapping from the common ontology to the database schema and language native to its resource, and executes the requests specific to that resource, including continuous queries and notifications. It also advertises the resource's capabilities.

Data Analysis Agent: corresponds to resource agents specialized for data analysis/mining methods.

Task Execution Agent: coordinates the execution of high-level information-gathering subtasks (scenarios) necessary to fulfill the queries. It uses information supplied by the Broker Agent to identify the resources that have the requested information, routes requests to the appropriate Resource Agents, and reassembles the results.

Monitor Agent: tracks the agent interactions and the task execution steps. It also provides a visual interface to display the execution.
`
2.2 Agent Communication Languages
`
KQML [11] is a specification of a message format and protocol for semantic knowledge-sharing between cooperative agents. Agents communicate via a standard set of KQML performatives, which specify a set of permissible actions that can be performed on the recipient agent, including basic query performatives ("evaluate," "ask-one," "ask-all"), informational performatives ("tell," "untell"), and capability-definition performatives ("advertise," "subscribe," "monitor"). Since KQML is not tied to any one representation language, it can be used as a "shell" to contain messages in various languages and knowledge representation formats, and permits routing by agents that do not necessarily understand the syntax or semantics of the message content.

The Knowledge Interchange Format, KIF [13], provides a common communication mechanism for the interchange of knowledge between widely disparate programs with differing internal knowledge representation schemes. It is human-readable, with declarative semantics. It can express first-order logic sentences, with some second-order capabilities. Several translators exist that convert existing knowledge representation languages to and from KIF. InfoSleuth agents currently share data via KIF. Typically, an agent converts queries or data from its internal format into KIF, then wraps the KIF message in a KQML performative before sending it to the recipient agent.

Both languages have been extended to provide additional functionalities required by the design of InfoSleuth.
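As a concrete illustration of this layering, the sketch below builds a KQML ask-all message whose :content is a KIF expression. This is an illustrative Python sketch, not InfoSleuth's code (the system's agents are written in Java), and the agent names and the KIF query are hypothetical; the parameter keywords (:sender, :receiver, :language, :ontology, :content) are standard KQML.

```python
def kqml_message(performative, sender, receiver, content,
                 language="KIF", ontology=None):
    """Render a KQML performative as an s-expression string.

    Illustrative only; field names follow the standard KQML
    parameter keywords.
    """
    fields = [(":sender", sender), (":receiver", receiver),
              (":language", language)]
    if ontology:
        fields.append((":ontology", ontology))
    fields.append((":content", '"%s"' % content))
    body = " ".join("%s %s" % (k, v) for k, v in fields)
    return "(%s %s)" % (performative, body)

# A KIF query wrapped in an ask-all performative, as an agent might send it
# (agent names and query are hypothetical):
msg = kqml_message("ask-all", "user-agent-1", "broker",
                   "(and (agent ?a) (agent-type ?a resource))",
                   ontology="InfoSleuth")
```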
`
2.3 Agent Interactions
`
In the following, we demonstrate a scenario of interaction among the InfoSleuth agents in the context of a simple query execution.

During system start-up, the Broker Agent initializes its InfoSleuth ontology and commences listening for queries and advertisement information at a well-known address. Each agent advertises its address and function to the Broker Agent using the InfoSleuth ontology.

When a Resource Agent initializes, it sets up its connection to its resource and advertises the components of the ontology(ies) that it understands to the Broker Agent. One specialized Resource Agent, the Ontology Agent, deals with the information system's metadata.
`
A user commences interaction with InfoSleuth by means of a Web browser or other Java applet viewer interacting with her personal User Agent. The user poses a query by means of the viewer applet. At this point, the User Agent queries the Broker Agent for the location of an applicable Execution Agent. The User Agent then issues the query to that Execution Agent.

On receiving a request, the Execution Agent queries the Broker Agent for the location of the Ontology Agent (if it does not already know it), and queries the Ontology Agent for the ontology appropriate to the given query. Based on the ontology for the domain of the query, the Execution Agent queries the Broker Agent for currently appropriate Resource Agents. The Broker Agent may return a different set of Resource Agents if the same query is posted at a different time, depending on the availability of the resources.

The Execution Agent takes the set of appropriate Resource Agents, decomposes the query, and routes the sub-queries appropriately. Each Resource Agent translates the query from the query domain's global ontology into the resource-specific schema, fetches the results from the resource, and returns them to the Execution Agent. The Execution Agent reassembles the results and returns them to the User Agent, which then returns the results to the user's viewer applet for display.

The above scenario of a simple query execution is chosen for brevity. Other common scenarios of interaction in InfoSleuth would involve complex queries with multiple-task plans and data mining queries that require knowledge discovery tasks.
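The brokered routing in this scenario can be sketched as a minimal simulation. This is an illustrative Python sketch with hypothetical agent names; in InfoSleuth the agents are distributed Java processes exchanging KQML messages rather than objects calling one another directly.

```python
class Broker:
    """Minimal stand-in for the Broker Agent's advertisement registry."""
    def __init__(self):
        self.ads = {}          # agent name -> advertised capability

    def advertise(self, name, capability):
        self.ads[name] = capability

    def recommend(self, capability):
        # Return agents whose advertisement matches the requested capability.
        return [n for n, c in self.ads.items() if c == capability]

broker = Broker()
broker.advertise("exec-1", "task-execution")
broker.advertise("hospital-db", "resource")
broker.advertise("clinic-db", "resource")

# The User Agent asks the broker where to send the query, then the
# Execution Agent asks which Resource Agents are currently available.
execution_agents = broker.recommend("task-execution")
resource_agents = broker.recommend("resource")
```

Because recommendations reflect whatever is advertised at the moment of the query, re-running the last line after more agents advertise (or withdraw) can return a different set, as the text notes.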
`
3 Agent Design and Implementation
`
In this section, we describe the functionality, design rationale, and implementation of each of the InfoSleuth agents.
`
3.1 User Agent
`
The User Agent is the user's intelligent interface to the InfoSleuth network. It assists the user in formulating queries over some common domain models, and in displaying the results of queries in a manner sensitive to the user's context.

Upon initialization, the User Agent advertises itself to the broker, so that other agents can find it based on its capabilities. It then obtains information from the Ontology Agent about the common ontological models known to the system. It uses this information to prompt its user in selecting an ontology in which a set of queries will be formulated. After a query is formulated in terms of the selected common ontology, it is sent to the task execution agent that best meets the user's needs with respect to the current query context. When the task execution agent has obtained a result, it engages in a KQML "conversation" with the user agent, in which the results are incrementally returned and displayed. The User Agent is persistent and autonomous, storing information (data and queries) for the user, and maintaining the user's context between browser sessions.
`
Implementation. The User Agent is implemented as a stand-alone Java application. As with the other agents in the architecture, explicit thread management is used to support concurrent KQML interactions with other agents, so that the User Agent does not suspend its activity while waiting for the result of one query to be returned. Currently, the user agents query the task execution agents using KQML with SQL content.
`
A user interface is provided via Java applets for query formulation, ontology manipulation, and data display, which communicate with the User Agent by means of Java's Remote Method Invocation (RMI). The applets provide a flexible, platform-independent, and context-sensitive user interface, where query formulation can be based on knowledge of the concepts in the relevant common ontology, the user's profile, and/or application-specific knowledge. Various sets of applets may be invoked based on these different contexts. The User Agent is capable of saving the queries created via applets, as well as results of queries. As the complexity of the InfoSleuth knowledge domain grows, this set of applets may eventually be maintained as reusable modules in a warehouse separate from the User Agent.
`
3.2 Task Execution Agent
`
The Task Execution Agent coordinates the execution of high-level information-gathering tasks. We use the term "high-level" to suggest workflow-like or data mining and analysis activities. Such high-level tasks can potentially include global query decomposition and post-processing as sub-tasks carried out by decomposition sub-agents, where the global query is couched in terms of a common ontology; sub-queries must be generated based on the schemas and capabilities of the various resources known to the system, and the results then joined.
The Execution Agent is designed to deal with dynamic, incomplete, and uncertain knowledge. We were motivated in our design by the need to support flexibility and extensibility in dynamic environments. This means that task execution, including interaction with users via the user agents, should be sensitive both to the query context and to the currently available information.

The approach we have taken for the Task Execution Agent is based on the use of declarative task plans, with asynchronous execution of procedural attachments. Plan execution is data-driven, and supports flexibility in reacting to unexpected events and handling incomplete information. The declarative specification of the agent's plan and sub-task knowledge supports task plan maintenance, as well as the opportunity for collaborative task execution via the exchange of plan fragments. This declarative specification resides in the agent's knowledge base, and consists of several components, including: (1) domain-independent information about how to execute task plan structures; (2) knowledge of when it is acceptable to invoke a task operator (including its preconditions) and how to instantiate it; (3) knowledge of how to execute the operator; (4) a task plan library; and (5) agent state.

The task plans are declarative structures that can express partial orders of plan nodes, as well as simple execution loops. Plans are currently indexed using information about the domain of the query and the KQML "conversational" context for which the task has been invoked.
`
Task Plan Execution Using Domain-independent Rules.
After an agent's knowledge base has been populated with operator descriptions and declarative task plans, it uses its domain-independent task execution knowledge to carry out the plans. Its knowledge, in the form of rules, supports the following functionality:

• Multiple plans and/or multiple instantiations of the same plan may execute concurrently.

• For a given node in a plan, multiple instantiations of the node may be created.

• Task execution is data-driven: a plan node is not executed until its preconditions are met [34].

• Execution of a plan node can be overridden by rules for unusual situations.

• Reactive selection of operations not in the current explicit plan can occur based on domain heuristics.

• Information-gathering operators [21, 8] and conditional operator execution are supported.
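The data-driven rule in the list above (a plan node fires only once its preconditions are met) can be sketched as follows. The structures, node names, and fact names are hypothetical, and the actual agent encodes this behavior as CLIPS rules rather than Python.

```python
# Sketch of data-driven plan execution: a node is runnable only when all
# of its precondition facts are present in the agent's fact base.
# Hypothetical plan and fact names; the real agent uses CLIPS rules.

def runnable(plan, facts):
    """Return the plan nodes whose preconditions are all satisfied."""
    return [node for node, preconds in plan.items()
            if preconds <= facts]

plan = {
    "parse-query":     set(),                                # no preconditions
    "find-resources":  {"ontology-cached"},
    "compose-results": {"subquery-1-done", "subquery-2-done"},
}

facts = {"ontology-cached", "subquery-1-done"}
ready = runnable(plan, facts)
# "compose-results" is withheld until subquery-2 also completes.
```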
`
Each time a query from the user agent is received, a new instantiation of the appropriate plan from the plan library is initialized by the rule-based system. A task execution agent can concurrently carry out multiple instantiations of one or more plans, with potentially multiple instantiations of steps in each plan. The plan execution process is what defines the Task Execution Agent's behavior. The sequences of interactions with other agents are determined by the task plans the agent executes, and the conversations with a given agent are determined by the KQML protocols and supported primarily by the procedural attachments to the task operators. For example, a user agent can request that the results of the query be returned incrementally.
`
Example: General Query Task Plan. Executing a general query task plan causes the Task Execution Agent to carry out the following steps.

• Advertise to the Broker, using a tell performative, and wait to receive a reply (done at agent initialization).

• Wait to receive queries from User Agents. These will typically be encoded as KQML directives (such as ask-all, standby, or subscribe). The query as well as the domain context determines the task plan that is instantiated to process the query.

• Parse the query, and decompose it if appropriate.¹ Parsing involves getting an ontological model from the Ontology Agent; once this model is obtained, it is cached for future use.

• Construct KIF queries based on the SQL queries' contents, and query the Broker using the KIF queries and the ask-all performative to find relevant resources.

• Query the relevant resource agents specified by the broker.

• Compose the results.

• Incrementally return the results to the user agent using a streaming protocol. Using this protocol, the user agent successively requests additional result tuples.
`
¹Only query union decomposition is performed at the task plan level. Previous work in the InfoSleuth project has focused on techniques for global query decomposition and post-processing. Work is currently in progress to port this functionality to the agent architecture while supporting the dynamic nature of resource availability, via decomposition agents invoked from the task level. See Section 8.3.
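The streaming protocol in the final step can be sketched as a pull-style exchange in which the user agent repeatedly requests the next batch of tuples. This is an illustrative Python sketch; the real exchange is carried out with KQML performatives, and the class and method names here are hypothetical.

```python
# Sketch of the pull-style streaming protocol: the consumer repeatedly
# requests the next chunk of result tuples until the stream is exhausted.
# Hypothetical names; the actual protocol rides on KQML messages.

class ResultStream:
    def __init__(self, tuples, chunk_size=2):
        self.tuples = tuples
        self.chunk_size = chunk_size
        self.pos = 0

    def next_chunk(self):
        """Return the next chunk of tuples, or [] when exhausted."""
        chunk = self.tuples[self.pos:self.pos + self.chunk_size]
        self.pos += len(chunk)
        return chunk

stream = ResultStream([("a", 1), ("b", 2), ("c", 3)])
received = []
while True:
    chunk = stream.next_chunk()
    if not chunk:
        break
    received.extend(chunk)
```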
`
`
`
Implementation. The Task Execution Agent is implemented by embedding a CLIPS [32] agent in Java, using Java's "native method" facility. CLIPS provides the rule-based execution framework for the agent, and, as described above, the declarative specification of plan and operator knowledge. The Java wrapper supports procedural attachments for the plan operators, as well as providing the Java KQML communications packages used by all the agents in the InfoSleuth system. Thus, all communication with other agents takes place via procedural operator implementations.

A CLIPS/Java API has been defined to send information from CLIPS to the Java sub-task implementations; for each plan operator (in CLIPS) that invokes a Java method, a new thread is created to carry out the sub-task, parameterized via this API. During sub-task execution, new information (in the form of CLIPS facts and objects) may be passed back to the CLIPS database; this is how the Java sub-task methods communicate their results. The sub-task execution is asynchronous, and results may be returned at any time. Because task execution is data-driven, new task steps will not be initiated until all the required information for those steps is available.
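The asynchronous hand-off described above can be outlined as follows: each operator runs in its own thread and communicates its result by posting a fact. This is an illustrative Python sketch only; the actual system spawns Java threads that post CLIPS facts back through the CLIPS/Java API.

```python
import threading
import queue

# Sketch of asynchronous sub-task execution: each plan operator runs in
# its own thread and posts its result to a shared fact queue, standing in
# for the CLIPS fact base. All names here are illustrative.

facts = queue.Queue()

def run_operator(name, work):
    """Run one sub-task in its own thread; its result comes back as a fact."""
    def body():
        facts.put((name + "-done", work()))
    t = threading.Thread(target=body)
    t.start()
    return t

threads = [run_operator("subquery-1", lambda: [("a", 1)]),
           run_operator("subquery-2", lambda: [("b", 2)])]
for t in threads:
    t.join()

results = dict(facts.get() for _ in range(2))
```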
`
3.3 Broker Agent
`
The Broker Agent is a semantic "match-making" service that pairs agents seeking a particular service with agents that can perform that service. The Broker Agent, therefore, is responsible for the scalability of the system as the number and volume of its information resources grow. The Broker Agent determines the set of relevant resources that can perform the requested service. As agents come on line, they advertise their services to the broker via KQML. The Broker Agent responds to an agent's request for service with information about the other agents that have previously advertised relevant services. Details of the Broker protocols describing the exchanged information are given in section 5.2.

In effect, the Broker Agent is a cache of metadata that optimizes access in the agent network. Any individual agent could perform exactly the same queries on an as-needed basis. In addition, the existence of the Broker Agent both reduces the individual agent's need for knowledge about the structure of the network and decreases the amount of network traffic required to accomplish an agent's task.

Minimally, an agent must advertise to the Broker its location, name, and the language it speaks. Additionally, agents may advertise meta-information and domain constraints based on which it makes sense to query a given agent. The purpose of domain advertising is to allow the Broker to reason about queries and to rule out those queries which are known to return null results. For example, if a Resource Agent advertises that it knows about only those medical procedures relating to heart surgery, it is inappropriate to query it regarding liver resection, and the Broker would not recommend it to an agent seeking liver resection data. The ontology used to express advertisements is called the "InfoSleuth" ontology because the metadata the Broker Agent stores is a description of the relationships between agents.
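The broker's reasoning in the heart-surgery example can be sketched as a consistency check between advertised constraints and query constraints. The representation below (simple set-valued constraints in Python) is hypothetical; the actual Broker Agent performs rule-based matching over its metadata in LDL++.

```python
# Sketch of the broker's constraint matching: a resource is recommended
# only if its advertised domain constraints are consistent with the
# query's constraints (here, non-empty overlap of allowed values).
# Hypothetical agents and fields; the real broker uses LDL++ rules.

ads = {
    "cardio-db": {"procedure": {"heart-surgery", "angioplasty"}},
    "hepato-db": {"procedure": {"liver-resection", "liver-biopsy"}},
}

def relevant(query_constraints):
    matches = []
    for agent, constraints in ads.items():
        # Consistent iff every constrained field has a non-empty overlap;
        # a field the agent did not advertise is treated as unconstrained.
        if all(constraints.get(field, values) & values
               for field, values in query_constraints.items()):
            matches.append(agent)
    return matches

# A query about liver resection is never routed to the cardiology source.
hits = relevant({"procedure": {"liver-resection"}})
```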
`
Implementation. The Broker Agent is written in Java and the deductive database language LDL++ [38]. It supports queries from other agents using KQML for the communication layer and KIF for the semantic content (based on the "InfoSleuth" ontology). The constraint matching and data storage for the Broker Agent are implemented in LDL++. The Broker translates the KIF statements into LDL++ queries and then sends them off to the LDL server to be processed. The use of the deductive database allows the broker to perform rule-based matching of advertisements to user requests.
`
3.4 Resource Agent
`
The purpose of the Resource Agent is to make information contained in an information source (e.g., a database) available for retrieval and update. It acts as an interface between a local data source and other InfoSleuth agents, hiding specifics of the local data organization and representation.

To accomplish this task, a Resource Agent must be able to announce and update its presence, location, and the description of its contents to the Broker Agent. There are three types of contents information that are of potential interest to other agents: (1) metadata information, i.e., the ontological names of all data objects known to the Resource Agent; (2) the values (ranges) of chosen data objects; and (3) the set of operations allowed on the data. The operations range from a simple read/update to more complicated data analysis operations. The advertisement information can be sent by the Resource Agent to the broker at start-up time or extracted from the Resource Agent during the query processing stage.
The Resource Agent also needs to answer queries. The Resource Agent has to translate queries expressed in a common query language (such as KQML/KIF) into a language understood by the underlying system. This translation is facilitated by a mapping between ontology concepts and terms and the local data concepts and terms, as well as between the common query language's syntax, semantics, and operators and those of the native language. Once the queries are translated, the Resource Agent sends them to the information source for execution, and translates the answers back into the format understood by the requesting agent. Additionally, the Resource Agent and the underlying data source may group certain operations requested by other agents into an atomic (local) transaction. Also, the Resource Agent provides limited transactional capabilities for (global) multi-resource transactions.

The capability of a Resource Agent can be enhanced in many ways. For example, it may be able to keep the query context and thus allow for retrieval of results in small increments. Handling of event notifications (e.g., new data is inserted, an item is deleted) can be another important functionality of a Resource Agent.

The components of an example Resource Agent are presented in Figure 2. The communication module interacts with the other agents. The language processor translates a query expressed in terms of the global ontology into a query expressed in terms of the Oracle database schema. It also translates the results of the query into a form understood by other agents. The mapping information necessary for this process is created at agent installation time, as it requires specialized knowledge of both the local data and the global ontology. The task of the event detection module is to monitor the data source for the events of interest and prepare the notifications to be sent to the agents interested in those events.
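The language processor's translation step can be sketched as a term rewrite driven by the mapping information. The ontology terms, table, and column names below are hypothetical, and a real mapping must also bridge operators and semantics, not just names.

```python
# Sketch of ontology-to-schema query translation: ontology terms are
# rewritten into the local relational schema before the SQL is sent to
# the database. All ontology and schema names here are hypothetical.

mapping = {
    "Patient":     "PAT_REC",    # ontology concept -> local table
    "patient_age": "AGE_YRS",    # ontology slot    -> local column
}

def translate(concept, slots, condition):
    """Rewrite a simple ontology-level selection into local SQL."""
    cols = ", ".join(mapping[s] for s in slots)
    table = mapping[concept]
    # Rewrite any ontology term appearing in the condition; leave the rest.
    where = " ".join(mapping.get(tok, tok) for tok in condition.split())
    return "SELECT %s FROM %s WHERE %s" % (cols, table, where)

sql = translate("Patient", ["patient_age"], "patient_age > 65")
```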
The InfoSleuth architecture has a specialized resource agent, called the Ontology Agent, which responds to queries related to ontologies. It uses the same message exchange as other agents, but unlike resource agents that are associated with databases, it only interprets
`
`
`
`
KIF queries. This agent is designed to respond to queries concerning the available list of ontologies, the source of an ontology, and searching the ontologies for concepts.

We are currently researching the possibility of adding inferencing capability to respond to more sophisticated queries. There is also a need for maintaining different versions of the same ontology as the agent architecture is scaled up. These two capabilities become particularly relevant as the number of served ontologies increases, especially when multiple ontologies are integrated for more complex query formulation.

[Figure 2 (diagram): the components of an example Resource Agent: a KQML communication module, a language processor with its mapping information, an event detection module, and local data access to the underlying source.]

Figure 2: An example of a resource agent

Implementation. The Resource Agent is written in Java and provides access to relational databases via JDBC and ODBC interfaces. We have run it successfully with Oracle and with Microsoft Access and SQL Server databases. The functionality of the implemented agent covers advertisements about the data (both metadata information and ranges/values of the data contained in the database), and processing of queries expressed in either global ontology terms or local database schema terms. We implemented three types of query performatives: ask-one, ask-all, and standby, thus giving the other agents the option of retrieving one reply, all replies, or all replies divided into smaller chunks.

4 Ontologies in the InfoSleuth Architecture

The InfoSleuth architecture as discussed in the previous section is based on communication among a community of agents, cooperating to help the user find and retrieve the needed information. A critical issue in the communication among the agents is that of ontological commitments, i.e., agreement among the various agents on the terms for specifying agent context and the context of the information handled by the agents.

An ontology may be defined as the specification of a representational vocabulary for a shared domain of discourse, which may include definitions of classes, relations, functions, and other objects [15]. Ontologies in InfoSleuth are used to capture database schemas (e.g., relational, object-oriented, hierarchical), conceptual models (e.g., E-R models, Object Models, Business Process models), and aspects of the InfoSleuth agent architecture (e.g., agent configurations and workflow specifications). The motivations for using ontologies are two-fold:

1. Capturing and reasoning about information content. In an open and dynamic environment, the volume of data available is a critical problem affecting the scalability of the system. Ontologies may be used to:

• Determine the relevance of an information source without accessing the underlying data. This requires the ability to capture and reason with an intensional declarative description of the information source contents. Object-oriented and relational DBMSs do not support the ability to reason about their schemas. Ontologies specified in a knowledge representation or logic programming language (e.g., LDL) can be used to reason about information content and hence enable determination of relevance.

• Capture new and different world views in an open environment as domain models. Wider accessibility of the data is obtained by having multiple ontologies describe data in the same information source.

2. Specification of the agent infrastructure. Ontologies are used to specify the context in which the various agents operate, i.e., the information manipulated by the various agents and the relationships between them. This enables decisions on which agents to route the various requests to. This information is represented in the InfoSleuth ontology and represents the world view of the system as seen by the Broker Agent. As the functionality of the various agents evolves, it can be easily incorporated into the ontology.

Thus, ontologies are used both to specify the infrastructure underlying the agent-based architecture and to characterize the information content in the underlying data repositories.

4.1 A Three-layer Model for Representation and Storage of Ontologies

Rather than choose one universal ontology format, InfoSleuth allows multiple formats and representations, representing each ontology format with an ontology meta-model, which makes it easier to integrate between different ontology types. We now discuss an enhancement of the three-layer model for representation of ontologies presented in [20]. The three layers of the model (shown in Figure 3) are: Frame, Meta-model, and Ontology.

The Frame layer (consisting of the Frame, Slot, and Meta-Model classes) allows creation, population, and querying of new meta-models. Meta-model layer objects are instances of frame layer objects, and simply require instantiating the frame layer classes. Ontology layer objects are instances of meta-model objects.

The objects in the InfoSleuth ontology are instantiations of the entity, attribute, and relationship objects in the Meta-model layer. In our architecture, agents need to know about other entities, called "agents". Each "agent" has an attribute called "name" that is used to identify an agent during message interchange. The "type" of an agent is relevant for determining the class of messages it handles and its general functionality.

A key feature of the InfoSleuth ontology is that it is self-describing. As illustrated in Figure 3, the entity agent has ontologies associated with it. The entity ontology is an object in the meta-model layer, and the various ontologies of the system are its instantiations. However, in the case of the InfoSleuth ontology, the instantiation "InfoSleuth" of the ontology object is also a part of the InfoSleuth ontology. This is required because the InfoSleuth ontology is the ontology associated with the Broker Agent.
`
`
`
`
`
[Figure 3 (diagram): the Frame, Meta-model, and Ontology layers, with instantiation links between adjacent layers.]
`
Figure 3: The three-layer ontology model
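The instantiation relationship between the layers can be sketched with a minimal class, shown below in illustrative Python; the slot and instance names beyond Frame are hypothetical.

```python
# Sketch of the three-layer ontology model: meta-model objects are
# instances of frame-layer classes, and ontology-layer objects are
# instances of meta-model objects. Names beyond Frame are illustrative.

class Frame:
    def __init__(self, name, slots):
        self.name = name
        self.slots = slots              # slot name -> description

    def instantiate(self, **values):
        """Create a lower-layer object by filling this frame's slots."""
        assert set(values) <= set(self.slots)
        return {"frame": self.name, **values}

# Frame layer -> meta-model layer: define an "entity" meta-model object.
entity = Frame("entity", {"entity_name": "name of the modeled concept"})

# Meta-model layer -> ontology layer: 'patient' is an instance of entity.
patient = entity.instantiate(entity_name="patient")
```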
`
[Figure 4 (listing): the same ontology fragment in two representations: LDL facts used by the Broker Agent (ontology, ontology_name, frame, slot, and constraint facts for a healthcare ontology with a patient_age slot and its constraint expression) and the corresponding KIF expressions used by the Resource Agent.]
`
Figure 4: Multiple representations of the same ontology
`
4.2 Utilization of Multiple Representations of Ontologies

One of the reasons for representing ontologies is the ability to reason about them. For this purpose, different agents might represent them in different languages depending on the type of inferences to be made. Figure 4 shows an example of the same piece of ontology represented by the Resource Agent in KIF and by the Broker Agent in LDL. The Broker Agent uses this representation to determine whether a Resource Agent is relevant for a particular query.

The Broker Agent utilizes a representation of the ontology exported by the Resource Agent (shown in Figure 4) in LDL [38]. The deductive mechanisms of LDL help determine the consistency of the constraints in the user query and those exported by the Resource Agent, which in turn determines the relevance of the information managed by the Resource Agent. The Resource Agent, on the other hand, translates this