Objective: The objective of this meta-analysis is to explore the presently available empirical findings on the antecedents of trust in robots and to use this information to expand upon a previous meta-analytic review of the area.

Background: Human–robot interaction (HRI) represents an increasingly important dimension of everyday life. Whether the human trusts the robot is currently proposed to be the most important element of these interactions. We have identified three overarching categories that exert effects on the expression of trust: factors associated with (a) the human, (b) the robot, and (c) the context in which any specific HRI event occurs.

Method: The current body of literature was examined, and all qualifying articles pertaining to trust in robots were included in the meta-analysis. A previous meta-analysis on HRI trust served as the basis for this extended, updated, and evolving analysis.

Results: Multiple additional factors that have now been demonstrated to significantly influence trust were identified. The present results, expressed as points of difference and points of commonality between the current and previous analyses, are identified, explained, and set in the context of the emerging wave of HRI.

Conclusion: The present meta-analysis expands upon previous work and validates the overarching categories of trust antecedents (human-related, robot-related, and contextual), as well as identifying the significant individual precursors to trust within each category. A new and updated model of these complex interactions is offered.

Application: The identified trust factors can be used to promote appropriate levels of trust in robots.
Objective: The present meta-analysis sought to determine significant factors that predict trust in artificial intelligence (AI). These factors were divided into those relating to (a) the human trustor, (b) the AI trustee, and (c) the shared context of their interaction.

Background: Many factors influence trust in robots, automation, and technology in general, and there have been several meta-analytic attempts to understand the antecedents of trust in these areas. However, no targeted meta-analysis has examined the antecedents of trust in AI.

Method: Data from 65 articles were examined across the three predicted categories, as well as the subcategories of human characteristics and abilities, AI performance and attributes, and contextual tasking. Lastly, four common uses for AI (i.e., chatbots, robots, automated vehicles, and nonembodied, plain algorithms) were examined as further potential moderating factors.

Results: All of the examined categories were significant predictors of trust in AI, as were many individual antecedents such as AI reliability and anthropomorphism, among others.

Conclusion: Overall, this meta-analysis identified several factors that influence trust in AI, including some that have no bearing on AI performance. Additionally, we highlight areas where there is currently no empirical research.

Application: Findings from this analysis will allow designers to build systems that elicit higher or lower levels of trust, as they require.
We explore the applications of our conceptualization of human–robot trust and human–automation trust, and develop a theoretical model of wider human–human trust. Exploring the similarities and differences between trust in robots and trust in general automation helps establish the foundation for this comprehensive model of interpersonal trust. Our proposed model is described, and its implications for research, design, and applications in applied behavioral research are outlined.