Human-automation interactions are rapidly transitioning from single-component automated systems to multiple-component systems. The human-automation literature has yet to adequately explore trust within multiple-component systems. A currently unanswered question is whether one faulty component causes an operator to lose trust in that one component (Component-Specific Trust; CST) or in every component in the system (System-Wide Trust; SWT). The goals of this paper were to 1) summarize the current work on trust in multiple-component systems, and 2) identify any trends that emerge during the literature review. We reviewed 17 experimental studies that tested whether operators tend to adopt CST or SWT under different conditions. Overall, most studies suggest that operators adopt SWT. However, studies that provided the operator with high decisional freedom and more time with the automated systems suggest that CST is the dominant strategy. Future work should explicitly test these and other variables that may lead operators to adopt CST.
Objective: To determine the efficacy of two trust repair strategies (apology and denial) for trust violations of an ethical nature by an autonomous teammate. Background: While ethics in human-AI interaction is extensively studied, little research has investigated how decisions with ethical implications impact trust and performance within human-AI teams, or how the resulting trust damage can be repaired. Method: Forty teams of two participants and one autonomous teammate completed three team missions within a synthetic task environment. The autonomous teammate made an ethical or unethical action during each mission, followed by an apology or denial. Measures of individual team trust, autonomous teammate trust, human teammate trust, perceived autonomous teammate ethicality, and team performance were taken. Results: Teams with unethical autonomous teammates had significantly lower trust in the team and in the autonomous teammate. Unethical autonomous teammates were also perceived as substantially more unethical. Neither trust repair strategy effectively restored trust after an ethical violation. Autonomous teammate ethicality was not related to team score, although teams with unethical autonomous teammates did have shorter mission times. Conclusion: Ethical violations significantly harm trust in the overall team and in the autonomous teammate but do not negatively impact team score. However, current trust repair strategies such as apologies and denials appear ineffective in restoring trust after this type of violation. Application: This research highlights the need to develop trust repair strategies specific to human-AI teams and to trust violations of an ethical nature.
Advancements and implementations of autonomous systems coincide with an increased concern for the ethical implications resulting from their use. This is increasingly relevant as autonomy fulfills teammate roles in contexts that demand ethical considerations. As AI teammates (ATs) enter these roles, research is needed to explore how an AT's ethics influences human trust. The current research presents two studies exploring how an AT's ethical or unethical behavior impacts trust in that teammate. In Study 1, participants responded to scenarios of an AT recommending actions that violated or abided by a set of ethical principles. The results suggest that both ethicality perceptions and trust are influenced by ethical violations, but only ethicality perceptions depend on the type of ethical violation. Participants in Study 2 completed a focus group interview after performing a team task with a simulated AT that committed ethical violations and attempted to repair trust (apology or denial). The focus group responses suggest that the ethical violations worsened perceptions of the AT and decreased trust, although the AT could still be trusted to perform tasks. The AT's apologies and denials did not repair the damaged trust. The studies' findings suggest a nuanced relationship between trust and ethics and a need for further investigation of trust repair strategies following ethical violations.
Introduction: The use of shared automated vehicles (SAVs) should lead to several societal and individual benefits, including reduced greenhouse gas emissions, reduced traffic, and improved mobility for persons who cannot safely drive themselves. We define SAVs as on-demand, fully automated vehicles in which passengers are paired with other riders traveling along a similar route. Previous research has shown that younger adults are more likely to report using conventional ridesharing services and are more accepting of new technologies, including automated vehicles (AVs). However, older adults, particularly those who may be close to retiring from driving, stand to benefit greatly from SAV services. In order for SAVs to deliver on their aforementioned benefits, they must be viewed favorably and utilized. We sought to investigate how short educational and/or experiential videos might impact younger, middle-aged, and older adult respondents' anticipated acceptance of and attitudes toward SAVs. Knowing what types of introductory experiences improve different age groups' perceptions of SAVs will be beneficial for tailoring campaigns aiming to promote SAV usage. Methods: We deployed an online survey using the platform Prolific for middle-aged and older respondents, and our departmental participant pool for younger adults, collecting 585 total responses, of which 448 were valid. Respondents answered questions regarding their demographic attributes, ridesharing history, and preconceptions of technology, as well as their anticipated acceptance attitudes toward SAVs as measured by the dimensions of the Automated Vehicle User Perception Survey (AVUPS).
After this, respondents were randomly assigned to an intervention condition in which they watched 1) an educational video about how SAVs work and their potential benefits, 2) an experiential video showing an AV navigating traffic, 3) both the experiential and educational videos, or 4) a control video explaining how ridesharing works. Anticipated acceptance attitudes toward SAVs were measured again after this intervention, and difference scores were calculated to investigate the effect of the intervention conditions. Prolific respondents were paid at a rate of $9.50/hour, and younger adults received course credit. Results: Controlling for preconceptions of technology and ridesharing experience, a MANOVA was run on the difference scores of the dimensions of the AVUPS (intention to use, trust/reliability, perceived usefulness (PU), perceived ease of use (PEOU), safety, control/driving-efficacy, cost, authority, media, and social influence). Both older and middle-aged adults expressed significantly greater increases in PEOU and PU of SAVs than younger adults. We also observed an interaction between age and condition for both PU and PEOU. For PU, older adults' difference scores were significantly greater than younger adults' in the control video condition. For PEOU, older adults' difference scores were significantly greater than younger adults' in the control video condition, and middle-aged adults had greater difference scores in the educational-only video condition than younger or older adults. Discussion: The increases in PU observed for older adults in the control condition suggest that educating them on how to use currently available ridesharing services might transfer to, and/or highlight, the benefits that automated ridesharing might provide. The PEOU interactions also suggest that middle-aged adults might respond more positively than younger or older adults to an educational introduction to SAVs.
Conclusion: The positive findings pertaining to PU and PEOU show that exposure to information related to SAVs has a positive impact on these attitudes. The positive relationship of PU and PEOU to behavioral intentions (BI) in the Technology Acceptance Model, coupled with the findings from this study, bodes well for higher-fidelity interventions seeking to inform and/or give individuals experience with SAVs. Providing information on how currently available ridesharing services work helped our older adult respondents recognize the potential usefulness of SAVs. Knowing that different age groups may respond better to educational versus experiential interventions (for example, middle-aged adults in this study responded more positively to the educational video condition than younger or older adults) may be useful for targeted promotional campaigns.