Research in social robotics commonly focuses on designing robots that imitate human behavior. While this might increase a user's satisfaction and acceptance of robots at first glance, it does not automatically help a non-expert user interact naturally with robots, and it might hurt their ability to correctly anticipate a robot's capabilities. We argue that a faulty mental model that the user holds of the robot is one of the main sources of confusion. In this work, we investigate how communicating technical concepts of robotic systems to users affects their mental models, and how this can increase the quality of human-robot interaction. We conducted an online study investigating possible ways of improving users' mental models. Our results underline that communicating technical concepts can form an improved mental model. Consequently, we show the importance of consciously designing robots that express their capabilities and limitations.
The deployment of versatile robot systems in diverse environments requires intuitive approaches for humans to flexibly teach them new skills. In the present work, we investigate different types of user feedback for teaching a real robot a new movement skill. We compare feedback given as star ratings on an absolute scale for single roll-outs against preference-based feedback for pairwise comparisons, with corresponding optimization algorithms (a variation of the covariance matrix adaptation evolution strategy (CMA-ES) and random optimization), to teach the robot the skill game cup-and-ball. In an experimental investigation with users, we examined the influence of the feedback type on the user experience of interacting with the different interfaces and on the performance of the learning systems. While there is no significant difference in subjective user experience between the conditions, there is a significant difference in learning performance: the preference-based system learned the task more quickly, but this did not influence the users' evaluation of it. In a follow-up study, we confirmed that the difference in learning performance can indeed be attributed to the human users' performance.
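To make the comparison concrete, below is a minimal, self-contained sketch of the two feedback channels driving a simple random-optimization loop (the random-optimization baseline named above; the CMA-ES variant would replace the proposal and update steps). The simulated user, the hidden TARGET vector, and all function names are hypothetical stand-ins for illustration, not the study's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([0.6, -0.2, 0.9])  # hypothetical optimum (stand-in for a good cup-and-ball movement)

def quality(theta):
    """Hypothetical stand-in for the true task performance of a roll-out."""
    return -np.linalg.norm(theta - TARGET)

def star_rating(theta, noise=0.3):
    """Simulated user: map quality to a noisy 1-5 star rating (absolute scale)."""
    q = quality(theta) + rng.normal(0, noise)
    return int(np.clip(np.round(5 + 2 * q), 1, 5))

def prefers(theta_a, theta_b, noise=0.3):
    """Simulated user: pick the roll-out that looks better (pairwise comparison)."""
    return quality(theta_a) + rng.normal(0, noise) > quality(theta_b) + rng.normal(0, noise)

def random_opt_with_ratings(iters=50, sigma=0.3):
    """Random optimization driven by absolute star ratings."""
    best = rng.normal(0, 1, 3)
    best_rating = star_rating(best)
    for _ in range(iters):
        cand = best + rng.normal(0, sigma, 3)
        r = star_rating(cand)
        if r >= best_rating:
            best, best_rating = cand, r
    return best

def random_opt_with_preferences(iters=50, sigma=0.3):
    """Random optimization driven by pairwise preferences."""
    best = rng.normal(0, 1, 3)
    for _ in range(iters):
        cand = best + rng.normal(0, sigma, 3)
        if prefers(cand, best):
            best = cand
    return best

print("ratings     ->", quality(random_opt_with_ratings()))
print("preferences ->", quality(random_opt_with_preferences()))
```

Note how the coarse 1-5 star scale discards information that the pairwise comparison retains; this is one plausible reason a preference-based system could learn faster.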
In recent years, increased effort has been invested in improving the capabilities of robots. Nevertheless, human-robot interaction remains a complex field of application in which errors occur frequently. The reasons for these errors fall primarily into two classes. First, the recent increase in capabilities has also widened the range of possible error sources on the robot's side. These include problems in perceiving the world, but also faulty behavior caused by errors in the system. Second, non-expert users frequently hold incorrect assumptions about the functionality and limitations of a robotic system. This leads to incompatibilities between the user's behavior and the functioning of the robot's system, causing problems on the robot's side and in the human-robot interaction. While engineers constantly improve the reliability of robots, the user's understanding of robots and their limitations has to be addressed as well. In this work, we investigate ways to improve that understanding. To this end, we employ FAMILIAR (FunctionAl user Mental model by Increased LegIbility ARchitecture), a robot architecture that is transparent with regard to the robot's behavior and decision-making process. We conducted an online simulation user study to evaluate two complementary approaches for conveying knowledge about this architecture to non-expert users: a dynamic visualization of the system's processes and a visual programming interface. The results of this study reveal that visual programming improves knowledge about the architecture. Moreover, we show that with increased knowledge about the robot's control architecture, users were significantly better at reaching the interaction goal. Finally, we show that anthropomorphism may reduce interaction success.
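FAMILIAR itself is not reproduced here; as a loose illustration of the kind of legibility such an architecture targets, the toy controller below exposes its decision-making as explicit condition-action rules and logs which rule fired, the sort of structure a visual programming interface could render as draggable blocks. All names and the rule base are invented for this sketch.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    condition: Callable[[dict], bool]
    action: str

# Toy rule base, roughly the kind of structure a visual-programming
# interface could expose as condition -> action blocks.
RULES = [
    Rule("greet", lambda s: s["person_visible"] and not s["greeted"], "say_hello"),
    Rule("fetch", lambda s: s["command"] == "fetch", "grasp_object"),
    Rule("idle",  lambda s: True, "wait"),
]

def decide(state: dict) -> str:
    """Pick the first matching rule and log the decision, so a user
    can trace *why* the robot acted the way it did."""
    for rule in RULES:
        if rule.condition(state):
            print(f"[trace] rule '{rule.name}' fired -> action '{rule.action}'")
            return rule.action
    return "wait"

decide({"person_visible": True, "greeted": False, "command": None})
```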
Machine learning is a double-edged sword: it gives rise to astonishing results in automated systems, but at the cost of tremendously large data requirements. This makes many successful machine learning algorithms unsuitable for human-machine interaction, where the machine must learn from a small number of training samples that a user can provide within a reasonable time frame. Fortunately, the user can tailor the training data they create to be as useful as possible, severely limiting its necessary size, as long as they know about the machine's requirements and limitations. Of course, acquiring this knowledge can in turn be cumbersome and costly. This raises the question of how easy machine learning algorithms are to interact with. In this work, we address this issue by analyzing the intuitiveness of certain algorithms when they are actively taught by users. After developing a theoretical framework of intuitiveness as a property of algorithms, we present and discuss the results of a large-scale user study into the performance and teaching strategies of 800 users interacting with prominent machine learning algorithms. Via this extensive examination, we offer a systematic method to judge the efficacy of human-machine interactions and thus to scrutinize how accessible, understandable, and fair a system is.
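As a toy illustration of how knowing a learner's requirements lets a teacher shrink the training set, the sketch below teaches a one-dimensional threshold classifier. The learner, the two teacher strategies, and all numbers are hypothetical and not taken from the study.

```python
import numpy as np

rng = np.random.default_rng(1)
BOUNDARY = 0.37  # hidden concept: x > BOUNDARY is class 1

def label(x):
    return int(x > BOUNDARY)

def fit_threshold(xs, ys):
    """Learner: midpoint between the largest 0-example and smallest 1-example."""
    lo = max((x for x, y in zip(xs, ys) if y == 0), default=0.0)
    hi = min((x for x, y in zip(xs, ys) if y == 1), default=1.0)
    return (lo + hi) / 2

# Naive teacher: random examples, unaware of how the learner generalizes.
xs = rng.uniform(0, 1, 4)
naive = fit_threshold(xs, [label(x) for x in xs])

# Informed teacher: knows the learner wants boundary-straddling examples,
# so two well-chosen samples suffice.
xs = [BOUNDARY - 0.01, BOUNDARY + 0.01]
informed = fit_threshold(xs, [label(x) for x in xs])

print(f"naive error:    {abs(naive - BOUNDARY):.3f}")
print(f"informed error: {abs(informed - BOUNDARY):.3f}")
```

The informed teacher recovers the boundary almost exactly from two samples, while the naive teacher's accuracy depends on luck; this is the sense in which knowledge of a machine's requirements can substitute for data volume.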