Understanding the impact of robot errors in child-robot interaction (CRI) is critical, as current technological systems are still limited and may unpredictably make a variety of mistakes during interactions with children. In this study, we manipulated a task-based error of a NAO robot during a semi-autonomous computational thinking task implemented with the Cozmo robot. Data from 72 children aged 7–10 were analysed regarding their attitudes towards NAO (social trust, competency trust, liking, and perceived agency), their behaviour towards the robot (self-disclosure, following recommendations), and their task performance. We found no quantitative effects of the robot’s error on children’s self-reported attitudes, behaviour, or task performance. Age was also not significantly related to either social attitudes or behaviours towards NAO, although there were some age-related differences in task performance. Potential reasons behind the lack of statistical effects and limitations of the study with regard to the manipulation of robot errors are discussed, and insights into the design of future CRI studies are provided.
Conversational AI systems, like Amazon’s Alexa, are often marketed as tools that assist their owners, but humans anthropomorphize computers, suggesting that they bond with their devices beyond an owner–tool relationship. Little empirical research has studied human-AI relationships beyond relational proxies such as trust. We explored the relationships people form with conversational AI based on the Relational Models Theory (RMT; Fiske, 1992). Results of factor analyses among frequent users (Ntotal = 729) suggest that they perceive the relationship more as a master-assistant relationship (i.e., authority ranking) and an exchange relationship (i.e., market pricing) than as a companion-like relationship (i.e., peer bonding). The correlational analysis showed that authority ranking barely correlates with system perception or user characteristics, whereas market pricing and peer bonding do. The relationship perception proved to be independent of demographic factors and of the label given to the digital device. Our research enriches the traditional dichotomous approach: the extent to which users see their conversational AI as an exchange partner or peer predicts human-like system perception more strongly than seeing it as a servant does.