While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility, especially for end-users. In this paper, we explore how incorporating virtual agents into explainable artificial intelligence (XAI) designs affects the perceived trust of end-users. For this purpose, we conducted a user study based on a simple speech recognition system for keyword classification. In this experiment, we found that the integration of virtual agents leads to increased user trust in the XAI system. Furthermore, we found that the user's trust depends significantly on the modalities used within the user-agent interface design. Our results show a linear trend in which the visual presence of an agent combined with voice output yielded greater trust than text output or voice output alone. Additionally, we analysed the participants' feedback regarding the presented XAI visualisations and found that increasing the human-likeness of the virtual agent and the degree of interaction with it were the two most frequently mentioned suggestions for improving the proposed XAI interaction design. Based on these results, we discuss current limitations and promising directions for further research in the field of XAI. Moreover, we present design recommendations for virtual agents in future XAI systems.
As the complexity of work tasks rises for maintenance workers in modern production facilities, new technologies will be required to support and integrate the service worker of tomorrow. This paper gives insight into an ongoing research project examining the potential of smart glasses used as a component of assistance systems for workers performing maintenance tasks in an Industry 4.0 context. A human-centered design process is used to identify the needs of workers and to specify requirements for the assistance system being developed. The maintenance of a CNC lathe serves as an example, and assistance functions were developed for one specific maintenance task. The architecture of the assistance system proposed in this paper is based on an analysis of the work system, including the tasks of the maintenance worker. Finally, the implementation of a first prototype, using state-of-the-art augmented reality smart glasses, is described.
Creativity as a skill is associated with a potential to drive both productivity and psychological wellbeing. Since multimodality can foster cognitive ability, multimodal digital tools should also be well suited to supporting creativity as an essentially cognitive skill. In this paper, we explore this notion by presenting a multimodal pen-based interaction technique and studying how it supports creativity. The multimodal solution uses microcontroller technology to augment a digital pen with RGB LEDs and a Leap Motion sensor to enable bimanual input. We report on a user study with 26 participants demonstrating that the multimodal technique is indeed perceived as supporting creativity significantly more than a baseline condition. We conclude with a critical discussion of our results, considering implications for creativity support through multimodal interaction techniques as well as the culture and materiality surrounding lived practices of pen-based sketching. To this end, we draw on insights from our own experience observing and engaging with various sketching communities in our town, including urban sketchers.
Recent pandemic-related contact restrictions have made it difficult for musicians to meet in person to make music. As a result, there has been an increased demand for applications that enable remote and real-time music collaboration. One desirable goal here is to give musicians a sense of social presence, to make them feel that they are "on site" with their musical partners. We conducted a focus group study to investigate the impact of remote jamming on users' affect. Further, we gathered user requirements for a Mixed Reality system that enables real-time jamming and developed a prototype based on these findings.
While the research area of artificial intelligence has benefited from increasingly sophisticated machine learning techniques in recent years, the resulting systems suffer from a loss of transparency and comprehensibility. This development has led to an ongoing resurgence of the research area of explainable artificial intelligence (XAI), which aims to reduce the opaqueness of such black-box models. However, much of current XAI research focuses on machine learning practitioners and engineers while omitting the specific needs of end-users. In this paper, we examine the impact of virtual agents within the field of XAI on the perceived trustworthiness of autonomous intelligent systems. To assess the practicality of this concept, we conducted a user study based on a simple speech recognition task. As a result of this experiment, we found significant evidence suggesting that the integration of virtual agents into XAI interaction design leads to an increase in trust in the autonomous intelligent system.