Abstract. Commitments among agents are widely recognized as an important basis for organizing interactions in multiagent systems. We develop an approach for formally representing and reasoning about commitments in the event calculus. We apply and evaluate this approach in the context of protocols, which represent the interactions allowed among communicating agents. Protocols are essential in applications such as electronic commerce where it is necessary to constrain the behaviors of autonomous agents. Traditional approaches, which model protocols merely in terms of action sequences, limit the flexibility of the agents in executing the protocols. By contrast, by formally representing commitments, we can specify the content of the protocols through the agents' commitments to one another. In representing commitments in the event calculus, we formalize commitment operations and domain-independent reasoning rules as axioms to capture the evolution of commitments. We also provide a means to specify protocol-specific axioms through the agents' actions. These axioms enable agents to reason about their actions explicitly to flexibly accommodate the exceptions and opportunities that may arise at run time. This reasoning is implemented using an event calculus planner that helps determine flexible execution paths that respect the given protocol specifications.
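The commitment operations described above can be sketched in a minimal, illustrative way: commitments behave like fluents that operations such as create and discharge initiate and terminate. This is a toy model, not the paper's event calculus axiomatization; all names are hypothetical.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Commitment:
    debtor: str      # agent who is committed
    creditor: str    # agent to whom the commitment is directed
    condition: str   # what the debtor commits to bring about

class CommitmentStore:
    """Tracks which commitment fluents currently hold as events occur."""
    def __init__(self):
        self.active = set()

    def create(self, c: Commitment):
        # A create event initiates the commitment fluent.
        self.active.add(c)

    def discharge(self, c: Commitment):
        # Bringing about the condition terminates the commitment.
        self.active.discard(c)

    def holds(self, c: Commitment) -> bool:
        return c in self.active

# Usage: a merchant commits to deliver goods, then discharges it.
c = Commitment("merchant", "customer", "deliver-goods")
store = CommitmentStore()
store.create(c)
assert store.holds(c)
store.discharge(c)
assert not store.holds(c)
```

A real event calculus formalization would derive `holds` from initiation and termination axioms over a timeline rather than mutating a set directly.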
Abstract. Commitments are a powerful representation for modeling multiagent interactions. Previous approaches have considered the semantics of commitments and how to check compliance with them. However, these approaches do not capture some of the subtleties that arise in real-life applications, e.g., e-commerce, where contracts and institutions have implicit temporal references. The present paper develops a rich representation for the temporal content of commitments. This enables us to capture realistic contracts and institutions rigorously, and avoid subtle ambiguities. Consequently, this approach enables us to reason about whether and when exactly a commitment is satisfied or breached and whether it is or ever becomes unenforceable.
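A commitment with explicit temporal content, as motivated above, can be illustrated with a deadline-bearing commitment whose status (active, satisfied, or breached) is determined by when the condition is brought about. This is a hypothetical simplification, not the paper's representation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class TimedCommitment:
    debtor: str
    creditor: str
    condition: str
    deadline: int  # abstract time point by which the condition must hold

def status(c: TimedCommitment, satisfied_at: Optional[int], now: int) -> str:
    """Classify the commitment given when (if ever) its condition held."""
    if satisfied_at is not None and satisfied_at <= c.deadline:
        return "satisfied"
    if now > c.deadline:
        return "breached"
    return "active"

c = TimedCommitment("merchant", "customer", "deliver", deadline=10)
assert status(c, None, 5) == "active"     # deadline not yet passed
assert status(c, 8, 9) == "satisfied"     # condition held in time
assert status(c, None, 11) == "breached"  # deadline passed unmet
```

Distinguishing these cases by time point is what lets a reasoner say exactly when a breach occurred rather than merely that one occurred.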
Abstract-Developing, maintaining, and disseminating trust in open, dynamic environments is crucial. We propose self-organizing referral networks as a means for establishing trust in such environments. A referral network consists of autonomous agents that model others in terms of their trustworthiness and disseminate information on others' trustworthiness. An agent may request a service from another; a requested agent may provide the requested service or give a referral to someone else. Possibly with its user's help, each agent can judge the quality of service obtained. Importantly, the agents autonomously and adaptively decide with whom to interact and choose what referrals to issue, if any. The choices of the agents lead to the evolution of the referral network, whereby the agents move closer to those that they trust. This paper studies the guidelines for engineering self-organizing referral networks. To do so, it investigates properties of referral networks via simulation. By controlling the actions of the agents appropriately, different referral networks can be generated. This paper first shows how the exchange of referrals affects service selection. It identifies interesting network topologies and shows under which conditions these topologies emerge. Based on the link structure of the network, some agents can be identified as authorities. Finally, the paper shows how and when such authorities emerge. The observations of these simulations are then formulated into design recommendations that can be used to develop robust, self-organizing referral networks.
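The core loop described above, in which agents model others' trustworthiness and choose whom to ask, can be sketched with a simple exponential-smoothing trust update. The update rule and names are illustrative assumptions, not the paper's model.

```python
class Agent:
    """Toy referral-network agent: keeps trust ratings of neighbors,
    asks the most trusted one for service, and learns from outcomes."""

    def __init__(self, name: str):
        self.name = name
        self.trust = {}  # neighbor name -> trust rating in [0, 1]

    def pick_neighbor(self) -> str:
        # Ask the neighbor currently trusted most.
        return max(self.trust, key=self.trust.get)

    def update_trust(self, other: str, observed_quality: float, rate: float = 0.3):
        # Blend the old rating with the observed service quality.
        old = self.trust.get(other, 0.5)
        self.trust[other] = (1 - rate) * old + rate * observed_quality

# Usage: after one good and one bad experience, the agent prefers
# the good provider; repeated choices like this reshape the network.
a = Agent("a")
a.trust = {"good_provider": 0.5, "bad_provider": 0.5}
a.update_trust("good_provider", 1.0)
a.update_trust("bad_provider", 0.0)
assert a.pick_neighbor() == "good_provider"
```

Because each agent rewires toward those it trusts, local updates like this are what drive the network-level self-organization the abstract describes.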
Preserving users’ privacy is important for Web systems. In systems where each transaction is managed by a single user, such as e-commerce systems, preserving the privacy of transactions amounts to access control. In online social networks, however, where each transaction is managed by and affects multiple users, preserving privacy is difficult. In many cases, the users’ privacy constraints are distributed, expressed in a high-level manner, and depend on information that only becomes available through interactions with others. Hence, when content is shared by a user, the others who might be affected by it should discuss and agree on how the content will be shared online, so that none of their privacy constraints are violated. To enable this, we model the users of a social network as agents that represent their users’ privacy constraints as semantic rules. Agents argue with each other over the propositions that enable their privacy rules by generating facts and assumptions from their ontologies. Moreover, agents can seek help from others by requesting new information to enrich their ontologies. Using assumption-based argumentation, agents decide whether a piece of content should be shared. We evaluate the applicability of our approach on real-life privacy scenarios in comparison with user surveys.
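The agreement idea above can be illustrated with a deliberately simplified check: each affected agent contributes privacy rules over a post's attributes, and sharing proceeds only if no agent's rule is violated. This sketch uses a hypothetical rule format (forbidden attribute combinations) and omits the assumption-based argumentation machinery the paper actually employs.

```python
def violates(rule: set, post: dict) -> bool:
    # A rule is a set of attributes that must NOT all appear together.
    return rule.issubset(post["tags"])

def agree_to_share(post: dict, agents_rules: dict) -> bool:
    """Share only if no affected agent's privacy constraint is violated."""
    return all(not violates(rule, post)
               for rules in agents_rules.values()
               for rule in rules)

# Usage: alice objects to being tagged together with her location.
rules = {
    "alice": [{"location", "alice"}],
    "bob": [],
}
post = {"tags": {"photo", "location", "alice"}}
assert not agree_to_share(post, rules)   # alice's rule fires

post2 = {"tags": {"photo", "alice"}}
assert agree_to_share(post2, rules)      # no rule violated
```

In the full approach, agents would instead exchange arguments and counterarguments about such rules, potentially requesting missing information before deciding.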