Acknowledgment: We would like to thank Kevin Li and Benjamin Pinzone for programming the simulator.

Précis: We examined the effects of disclosing likelihood information on trust, compliance and reliance, and task performance. Results indicate that operators informed of the predictive values or the overall likelihood value, rather than the hit and correct rejection rates, relied on the decision aid more appropriately and obtained higher task scores.

Abstract

Objective: The study examines the effects of disclosing different types of likelihood information on human operators' trust in automation, their compliance and reliance behaviors, and the human-automation team performance.

Background: To facilitate appropriate trust in and dependence on automation, explicitly conveying the likelihood of automation success has been proposed as one solution. Empirical studies have been conducted to investigate the potential benefits of disclosing likelihood information in the form of automation reliability, (un)certainty, and confidence. Yet, results from these studies are rather mixed.

Method: We conducted a human-in-the-loop experiment with 60 participants using a simulated surveillance task. Each participant performed a compensatory tracking task and a threat detection task with the help of an imperfect automated threat detector. Three types of likelihood information were presented: overall likelihood information, predictive values, and hit and correct rejection rates. Participants' trust in automation, compliance and reliance behaviors, and task performance were measured.

Results: Human operators informed of the predictive values or the overall likelihood value, rather than the hit and correct rejection rates, relied on the decision aid more appropriately and obtained higher task scores.

Conclusion: Not all likelihood information is equal in aiding human-automation team performance. Directly presenting the hit and correct rejection rates of an automated decision aid should be avoided.

Application: The findings can be applied to the design of automated decision aids.
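To make the distinction between the likelihood formats concrete, the following is a minimal sketch of how an aid's hit and correct rejection rates relate to its predictive values through the base rate of threats, via Bayes' rule. The numbers are assumed for illustration only and are not the parameters used in the study.

# Illustrative sketch: converting an aid's hit and correct rejection rates
# into its predictive values. All values below are assumed, not the study's.
hit_rate = 0.90            # P(alarm | threat present)
correct_rejection = 0.85   # P(no alarm | threat absent)
base_rate = 0.30           # P(threat present) in the task environment

false_alarm_rate = 1 - correct_rejection

# Probability that the aid raises an alarm at all
p_alarm = hit_rate * base_rate + false_alarm_rate * (1 - base_rate)

# Positive predictive value: P(threat present | alarm)
ppv = hit_rate * base_rate / p_alarm

# Negative predictive value: P(threat absent | no alarm)
npv = correct_rejection * (1 - base_rate) / (1 - p_alarm)

print(f"PPV = {ppv:.2f}, NPV = {npv:.2f}")  # PPV = 0.72, NPV = 0.95 here

The point of the sketch is that two aids with identical hit and correct rejection rates can have very different predictive values depending on the base rate of threats, which is one reason the two framings may support operators' reliance decisions differently.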
Human-autonomy teaming is a major emphasis in the ongoing transformation of the future workspace, wherein human agents and autonomous agents are expected to work as a team. While increasingly complex algorithms empower autonomous systems, one major concern arises from a human factors perspective: human agents have difficulty deciphering autonomy-generated solutions and increasingly perceive autonomy as a mysterious black box. This lack of transparency can lead to a lack of trust in autonomy and sub-optimal team performance (Chen and Barnes, 2014; Endsley, 2017; Lyons and Havig, 2014; de Visser et al., 2018; Yang et al., 2017).

In response to this concern, researchers have investigated ways to enhance autonomy transparency. Existing human factors research on autonomy transparency has largely concentrated on conveying automation reliability or likelihood/(un)certainty information (Beller et al., 2013; McGuirl and Sarter, 2006; Wang et al., 2009; Neyedli et al., 2011). Providing explanations of automation's behaviors is another way to increase transparency, and doing so leads to higher performance and trust (Dzindolet et al., 2003; Mercado et al., 2016). Specifically, in the context of automated vehicles, studies have shown that informing drivers of the reasons for an automated vehicle's actions decreased drivers' anxiety and increased their sense of control, preference, and acceptance (Koo et al., 2014, 2016; Forster et al., 2017). However, the studies mentioned above largely focused on conveying simple likelihood information or used hand-crafted explanations, with only a few exceptions (e.g., Mercado et al., 2016). Further research is needed to examine potential design structures for autonomy transparency.

In the present study, we propose an option-centric explanation approach, inspired by research on design rationale. Design rationale is an area of design science focusing on the "representation for explicitly documenting the reasoning and argumentation that make sense of a specific artifact" (MacLean et al., 1991). The theoretical underpinning for design rationale is that what matters to designers is not just the specific artifact itself but its other possibilities: why an artifact is designed in a particular way rather than how it might otherwise be.

We aim to evaluate the effectiveness of the option-centric explanation approach on trust, dependence, and team performance. We conducted a human-in-the-loop experiment with 34 participants (age: mean = 23.7 years, SD = 2.88 years). We developed a simulated game, Treasure Hunter, in which participants and an intelligent assistant worked together to uncover a map for treasures. The intelligent assistant's ability, intent, and decision-making rationale were conveyed in an option-centric rationale display. The experiment used a between-subjects design with one independent variable: whether the option-centric rationale explanation was provided. Participants were randomly assigned to one of the two explanation conditions. Participants' trust in the intelligent assistant, their confidence in accomplishing the task without the intelligent assistant, and their workload for the whole session were collected, as well as their scores for each map. The results showed that by conveying the intelligent assistant's ability, intent, and decision-making rationale in the option-centric rationale display, participants achieved higher task performance.
With all the options displayed, participants gained a better understanding and overview of the system. They could therefore utilize the intelligent assistant more appropriately and earned higher scores. Notably, each participant played only 10 maps during the session; the advantages of the option-centric rationale display might be more apparent if more rounds were played. Although the difference was not significant at the .05 level, there appears to be a trend toward lower workload when the rationale explanation was displayed.

Our study contributes to research on human-autonomy teaming by demonstrating the important role of explanation displays, which can help human operators build appropriate trust and improve human-autonomy team performance.