With the rise of increasingly complex artificial intelligence (AI), new methods are needed to monitor AI in a transparent, human-aware manner. Decades of research have demonstrated that people who are unaware of an automated algorithm's actual performance often experience a mismatch between their expectations and the algorithm's behavior. Consequently, they tend to place either too little or too much trust in the algorithm. Detecting such expectation mismatches and calibrating trust accordingly remains a fundamental challenge in automation research. Because trust is highly context dependent, no universal measure of trust has been established. Trust is also a difficult construct to investigate because even the act of reflecting on how much one trusts an agent can change one's perception of that agent. We hypothesized that electroencephalography (EEG) could provide such a universal index of trust without the need for self-report. In this work, EEG was recorded from 21 participants (mean age = 22.1; 13 female) while they observed a series of algorithms perform a modified version of a flanker task. Each algorithm's credibility and reliability were manipulated. We hypothesized that neural markers of action monitoring, such as the observational error-related negativity (oERN) and the observational error positivity (oPe), are candidate indices for monitoring computer algorithm performance. Our findings demonstrate that (1) both the oERN and the oPe can be reliably elicited while participants monitor these computer algorithms, (2) the oPe, unlike the oERN, significantly distinguished between high- and low-reliability algorithms, and (3) the oPe correlated significantly with subjective measures of trust. This work provides the first evidence for the utility of neural correlates of error monitoring in examining trust in computer algorithms.
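The ERP analysis summarized above can be sketched in a few lines of MNE-Python. The snippet below is a minimal illustration, not the authors' pipeline: it epochs observer EEG around the algorithm's responses, forms an error-minus-correct difference wave, and reads out mean amplitudes in assumed oERN and oPe windows. The file name, event codes, channels, and latency windows are all hypothetical.

```python
# Minimal, illustrative sketch (not the authors' analysis pipeline) of extracting
# observational ERP components (oERN, oPe) with MNE-Python. File name, event
# codes, channels, and time windows are assumptions for demonstration only.
import mne

raw = mne.io.read_raw_fif("observer_eeg_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=0.1, h_freq=30.0)  # typical ERP band-pass

# Hypothetical event codes: 1 = algorithm responds correctly, 2 = algorithm errs
events = mne.find_events(raw)
epochs = mne.Epochs(raw, events, event_id={"algo_correct": 1, "algo_error": 2},
                    tmin=-0.2, tmax=0.8, baseline=(-0.2, 0.0), preload=True)

# Error-minus-correct difference wave; the oERN is typically a fronto-central
# negativity (~200-300 ms) and the oPe a later centro-parietal positivity.
diff = mne.combine_evoked([epochs["algo_error"].average(),
                           epochs["algo_correct"].average()], weights=[1, -1])

def mean_amp(evoked, ch, tmin, tmax):
    """Mean amplitude (volts) for one channel within a time window."""
    idx = evoked.ch_names.index(ch)
    return evoked.copy().crop(tmin, tmax).data[idx].mean()

print("oERN-like amplitude (FCz):", mean_amp(diff, "FCz", 0.20, 0.30))
print("oPe-like amplitude (Pz):", mean_amp(diff, "Pz", 0.30, 0.50))
```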
With the rise of automated and autonomous agents, research examining Trust in Automation (TiA) has attracted considerable attention over the last few decades. Trust is a rich and complex construct that has sparked a multitude of measures and approaches for studying and understanding it. This comprehensive narrative review addresses known methods that have been used to capture TiA. We examined measurements deployed in existing empirical work, categorized those measures into self-report, behavioral, and physiological indices, and examined them within the context of an existing model of trust. The resulting work serves as a reference guide for researchers, listing available TiA measurement methods along with the model-derived constructs they capture, including judgments of trustworthiness, trust attitudes, and trusting behaviors. The article concludes with recommendations on how to improve the current state of TiA measurement.
Trust is important for any relationship, and especially so with self-driving vehicles: passengers must trust these vehicles with their lives. Given the criticality of maintaining passengers' trust, yet the dearth of self-driving trust repair research relative to the growth of the self-driving industry, we conducted two studies to better understand how people view errors committed by self-driving cars and what types of trust repair efforts may be viable for use by self-driving cars. Experiment 1 manipulated error type and driver type to determine whether driver type (human versus self-driving) affected how participants assessed errors. Results indicate that errors committed by the two driver types were not assessed differently. Given this similarity, Experiment 2 focused on self-driving cars, using a wide variety of trust repair efforts to test whether patterns from human-human research hold and to determine which repairs were most effective at mitigating the effect of violations on trust. We replicated the pattern of trust repair found in human-human research and found that some apologies were more effective at repairing trust than some denials. These findings help focus future research while providing broad guidance on potential methods for approaching trust repair with self-driving cars.
This paper reviews current human-automation trust and trust repair literature as it applies to health-care systems. In addition, we examine the increased use and relevance of social agents, such as robots and virtual agents, within the medical field and consider the importance of using social agents in this particular domain. Furthermore, we examine strategies for trust repair following errors in health-care settings and provide a conceptual framework for repairing trust with social automation. Most literature to date stems from a human-human perspective, and we hope to extend this work to the field of social automation. If these strategies are effective, human-automation systems in health care can maintain appropriate levels of trust, ensuring effective and efficient long-term collaboration in critical work areas.