Source credibility is known to be an important prerequisite for effective communication (Pornpitakpan, 2004). Nowadays, not only humans but also technological devices such as humanoid robots can communicate with people and can likewise be rated as credible or not, as reported by Fogg and Tseng (1999). While research on the machine heuristic suggests that machines are rated as more credible than humans (Sundar, 2008), an opposite effect in favor of human-provided information is assumed to occur when algorithmically produced information is wrong (Dietvorst, Simmons, and Massey, 2015). However, humanoid robots may be evaluated more like humans than non-human-like technological devices because of their anthropomorphically embodied exterior. To examine these differences in credibility attributions, a 3 (source type) × 2 (correctness of information) online experiment was conducted in which 338 participants rated the credibility of either a human, a humanoid robot, or a non-human-like device based on either correct or false communicated information. This between-subjects approach revealed that humans were rated as more credible than social robots and smart speakers in terms of trustworthiness and goodwill. Additionally, the results show that people attributed lower theory of mind abilities to robots and smart speakers than to humans, and that these attributions partly influence the attribution of credibility, alongside people's reliance on technology, attributed anthropomorphism, and morality. Furthermore, no main or moderating effect of the information's correctness was found. In sum, these findings point to a human superiority effect and offer relevant insights into the process of attributing credibility to humanoid robots.
Robots are used in various social interactions that require them to be perceived as credible agents (e.g., as product recommenders in shopping malls). To be rated as credible (i.e., competent, trustworthy, and caring), a robot's mentalizing abilities have been shown to be beneficial because they allow the robot to infer users' inner states, thus serving as a prerequisite for understanding their beliefs and attitudes. However, social robots are often deployed by private and thus profit-oriented companies. In cases where such an organization's implied manipulative intent is salient, the effect of a robot's mentalizing abilities might be reversed: mentalizing abilities could pose a persuasive threat to users rather than a feature enabling better understanding, thereby decreasing credibility attributions. These assumptions were tested in a 3 (robot's mentalizing abilities) × 2 (external manipulative intent) between-subjects, pre-registered laboratory experiment in which participants interacted with a social robot that recommended experience vouchers as potential gifts for participants' target persons. Contrary to our assumptions, inferential statistics revealed no significant differences in explicit or indirect credibility attributions caused by the experimental manipulation. The external manipulative intent of the organization using the robot caused no differences in participants' behavioral intentions toward the robot or their evaluations of it. Furthermore, only participants' attribution of empathic understanding to the robot varied significantly across the three mentalizing conditions. Our results suggest that people focus more on the robot than on the organization using it, creating potential opportunities for such organizations to hide their economic interests from users.
During interactions with others, people hold expectancies about their communication partners' behaviors. Negative violations of these expectancies are known to exert an adverse impact on people's perceptions and attitudes. Likewise, with regard to human-robot interaction, previous research on one-time interactions indicates that people's evaluations of robots can deteriorate in the same way. To simultaneously investigate possible compensation for this effect, the concept of idiosyncrasy credit was tested for its transferability to human-robot interaction. It postulates that compensation can be achieved through an individual's consistent, beneficial behavior over time, which results in a credit for social deviation. In an experimental medium-term study with a 2 × 2 between-subjects design, 80 participants interacted with a humanoid social robot in three time-separated sessions distributed over an average time span of ten days. Both the robot's acquisition of idiosyncrasy credit and the occurrence of a negative expectancy violation were manipulated by systematically varying the robot's polite communication style. Results of repeated measures ANOVAs support the assumption that negative expectancy violations decrease people's evaluations of a robot. However, this effect did not persist over time. Furthermore, no support for the robot's acquisition of idiosyncrasy credit was found.