Humans and machines collaborating harmoniously and benefiting from each other is a long-standing dream of researchers in robotics and artificial intelligence. An important feature of efficient and rewarding cooperation is the ability to anticipate possibly problematic situations and act in advance to prevent negative outcomes. This concept of assistance is known as proactivity. In this article, we investigate the development and implementation of proactive dialogues for fostering a trustworthy human-computer relationship and providing adequate and timely assistance. We make several contributions. First, we provide a formalisation of proactive dialogue in conversational assistants, which forms a framework for integrating proactive dialogue into conversational applications. Additionally, we present a study showing the relations between proactive dialogue actions and several aspects of a system's perceived trustworthiness, as well as effects on the user experience. The experimental results contribute significantly to the line of proactive dialogue research. In particular, we provide insights into the effects of proactive dialogue on the human-computer trust relationship, and into dependencies between proactive dialogue and user-specific and situational characteristics.
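The abstract does not spell out the formalisation itself, but a common scheme in the proactive-dialogue literature distinguishes four levels of system initiative. The sketch below illustrates what such a framework might look like; the level names, the trigger heuristic, and the thresholds are illustrative assumptions, not the paper's actual model.

```python
# A minimal sketch of one possible formalisation of proactive dialogue acts.
# The four-level scale (none / notification / suggestion / intervention) and
# the scoring policy below are assumptions for illustration only.
from enum import IntEnum
from dataclasses import dataclass

class ProactivityLevel(IntEnum):
    NONE = 0          # system stays reactive and acts only on request
    NOTIFICATION = 1  # system points out a potential problem
    SUGGESTION = 2    # system proposes a concrete action, user decides
    INTERVENTION = 3  # system acts autonomously and reports afterwards

@dataclass
class DialogueState:
    task_criticality: float  # 0..1, how costly a failure would be
    user_trust: float        # 0..1, estimated user trust in the system

def select_proactivity(state: DialogueState) -> ProactivityLevel:
    """Toy policy: escalate proactivity with criticality, damped by low trust."""
    score = state.task_criticality * (0.5 + 0.5 * state.user_trust)
    if score > 0.75:
        return ProactivityLevel.INTERVENTION
    if score > 0.5:
        return ProactivityLevel.SUGGESTION
    if score > 0.25:
        return ProactivityLevel.NOTIFICATION
    return ProactivityLevel.NONE

print(select_proactivity(DialogueState(task_criticality=0.8, user_trust=0.6)))
```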
In mixed-initiative interactions, a user and an autonomous agent collaborate on solving tasks by taking interleaving actions. However, this shift of control towards the agent requires the user to form trust, otherwise the assistance may be rejected and become obsolete. One approach to fostering a trustworthy interaction is to equip an agent with proactive dialogue capabilities. However, the development of adequate proactive dialogue strategies is complex and highly user- as well as context-dependent. Inappropriate use of proactive conversation may even do more harm than good and corrupt the human-computer trust relationship. To alleviate this problem, modelling and predicting a proactive system's perceived trustworthiness during an ongoing interaction is essential. Therefore, this paper presents novel work on the development of a user model for live prediction of trust during proactive interaction, incorporating user-, system-, and context-dependent features. For predicting trust, three machine-learning algorithms (support vector machine, eXtreme Gradient Boost, and gated recurrent unit network) are trained and tested on a proactive dialogue corpus. The experimental results show that the support vector machine had the most well-rounded performance among the classifiers, while the gated recurrent unit achieved the best accuracy. The results show the developed user model to be reliable for predicting trust in proactive dialogue. Based on these outcomes, the usability of the proposed method in real-life scenarios is discussed and implications for developing user-adaptive proactive dialogue strategies are described.
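As a rough illustration of the described pipeline, the sketch below trains a support vector machine (the most well-rounded classifier in the reported experiments) on synthetic stand-in data. The feature layout, labels, and random data are placeholders assumed here for illustration; the paper's actual corpus and feature set are not reproduced.

```python
# Hedged sketch: live trust prediction from per-turn interaction features,
# assuming a feature layout inspired by the abstract (user-, system-, and
# context-dependent features). All data below is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import classification_report

rng = np.random.default_rng(0)

# Placeholder features per dialogue turn, e.g.:
#   user:    propensity to trust, domain expertise
#   system:  proactivity level of last action, task difficulty
#   context: turn index, prior task success rate
X = rng.normal(size=(500, 6))
# Synthetic binary label: 1 = user trusts the system at this turn
y = (X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# An RBF-kernel SVM with standardised inputs is a common default choice.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```

The same train/test split could be fed to an XGBoost classifier or a GRU network over turn sequences to mirror the three-way comparison described above.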
CCS CONCEPTS: • Human-centered computing → Human computer interaction (HCI); HCI design and evaluation methods; User models; HCI theory, concepts and models.
Mental health and mental wellbeing have become an important factor for many citizens navigating their environment and the workplace. New technology solutions such as chatbots are potential channels for supporting and coaching users to maintain a good state of mental wellbeing. Chatbots have the added value of providing social conversations and coaching 24/7, outside of conventional mental health services. However, little is known about the acceptability and user-led requirements of this technology. This paper uses a living lab approach to elicit requirements, opinions, and attitudes towards the use of chatbots for supporting mental health. The data were collected from people living with anxiety or mild depression in a workshop setting. The audio of the workshop was recorded and a thematic analysis was carried out. The results are co-created functional requirements and a number of use case scenarios that can guide the future development of chatbots in the mental health domain.