Autonomous intelligent agents are playing increasingly important roles in our lives. They contain information about us and have begun to perform tasks on our behalf. Chatbots are an example of such agents that need to engage in complex conversations with humans. Thus, we need to ensure that they behave ethically. In this work, we propose a hybrid logic-based approach for ethical chatbots.
Transparency is a key requirement for ethical machines. Verified ethical behavior is not enough to establish justified trust in autonomous intelligent agents: it needs to be supported by the ability to explain decisions. Logic Programming (LP) has great potential for developing such transparent ethical systems, since logic rules are easily comprehensible to humans. Furthermore, LP is able to model causality, which is crucial for ethical decision making.
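To make the explanation argument concrete, the following minimal sketch (not taken from the paper; all predicates and rules are hypothetical) shows how LP-style "head holds if body holds" rules can be evaluated by forward chaining while recording a derivation trace, so every conclusion comes with the rule and premises that caused it:

```python
# Minimal sketch: forward chaining over LP-style rules with a derivation trace.
# All predicates and rules are hypothetical, for illustration only.

RULES = [
    # (head, body): "head holds if every atom in body holds"
    ("refuse(share_address)", ["requested(share_address)", "personal_data(share_address)"]),
    ("explainable(refuse(share_address))", ["refuse(share_address)"]),
]

FACTS = {"requested(share_address)", "personal_data(share_address)"}

def derive(facts, rules):
    """Forward-chain until a fixpoint, recording why each atom was derived."""
    trace = {}
    changed = True
    while changed:
        changed = False
        for head, body in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                trace[head] = body
                changed = True
    return facts, trace

facts, trace = derive(set(FACTS), RULES)
for atom, because in trace.items():
    print(f"{atom} because {', '.join(because)}")
```

The trace printed at the end is exactly the kind of human-readable justification the abstract argues is needed alongside verified behavior.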
In this paper, we discuss the potential role of answer set programming (ASP) in the context of approaches to the development of agents and multi-agent systems, especially in the realm of Computational Logic. After briefly recalling the main (computational-logic-based) agent-oriented frameworks, we introduce ASP; then, we discuss the usefulness and feasibility of integrating the two paradigms in a modular heterogeneous framework. This is also considered in the more general view of improving and extending the flexibility of agent-oriented frameworks. Relevant literature is mentioned and discussed, and possible future directions and potential developments are outlined.
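As a rough illustration of what such a modular integration could look like, the sketch below embeds an ASP solver as a reasoning component that an agent framework might call. It assumes the clingo Python package is available; the program and predicates (permitted/1, violates/1, discloses/2) are hypothetical and not drawn from the paper:

```python
# Sketch: embedding an ASP solver as a reasoning module inside an agent.
# Assumes the clingo Python package is installed; all predicates are hypothetical.
import clingo

ETHICAL_RULES = """
% An action is permitted unless it is known to violate a norm (default negation).
permitted(A) :- action(A), not violates(A).
violates(A)  :- action(A), discloses(A, personal_data).
#show permitted/1.
"""

def permitted_actions(observations: str):
    """Ground and solve the ethical program together with the agent's observations."""
    ctl = clingo.Control()
    ctl.add("base", [], ETHICAL_RULES + observations)
    ctl.ground([("base", [])])
    permitted = []
    with ctl.solve(yield_=True) as handle:
        for model in handle:
            permitted = [str(sym) for sym in model.symbols(shown=True)]
    return permitted

print(permitted_actions("""
action(answer_order_status).
action(forward_home_address).
discloses(forward_home_address, personal_data).
"""))
```

Keeping the declarative rules separate from the host-language control loop is one way a heterogeneous framework could combine ASP reasoning with an existing agent architecture.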
Dialogue Systems are tools designed for various practical purposes concerning human-machine interaction. These systems should be built on ethical foundations because their behavior may heavily influence a user (especially children). The primary objective of this paper is to present the architecture and prototype implementation of a Multi Agent System (MAS) designed for ethical monitoring and evaluation of a dialogue system. A prototype application is developed and presented for monitoring and evaluating the ethical behavior of chatting agents (human or artificial) in an online customer service chat point with respect to their institution's or company's codes of ethics and conduct. Future work and open issues of this research are discussed.
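The following schematic sketch shows one possible shape of such a monitoring component: an agent that observes chat messages and reports violations of code-of-conduct rules. It is not the paper's implementation; every class, rule, and string here is hypothetical:

```python
# Schematic sketch of an ethical-monitoring agent for a customer-service chat.
# All names and rules are hypothetical; the paper's MAS is not reproduced here.
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ChatMessage:
    sender: str        # "operator" or "customer"
    text: str

# A code-of-conduct rule: returns a violation description, or None if the message is fine.
Rule = Callable[[ChatMessage], Optional[str]]

def no_personal_data(msg: ChatMessage) -> Optional[str]:
    if msg.sender == "operator" and "home address" in msg.text.lower():
        return "operator disclosed personal data"
    return None

class MonitorAgent:
    """Observes the chat and reports messages that violate the code of conduct."""
    def __init__(self, rules: List[Rule]):
        self.rules = rules

    def evaluate(self, msg: ChatMessage) -> List[str]:
        return [v for rule in self.rules if (v := rule(msg)) is not None]

monitor = MonitorAgent([no_personal_data])
print(monitor.evaluate(ChatMessage("operator", "Your home address on file is ...")))
```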