2019
DOI: 10.1177/0018720819887252

Human Performance Benefits of the Automation Transparency Design Principle

Abstract: Objective: Test the automation transparency design principle using a full-scope nuclear power plant simulator. Background: Automation transparency is a long-held human factors design principle espousing that the responsibilities, capabilities, goals, activities, and/or effects of automation should be directly observable in the human–system interface. The anticipated benefits of transparency include more effective reliance, more appropriate trust, better understanding, and greater user satisfaction. Transparenc…

Cited by 45 publications (38 citation statements)
References 29 publications
“…When the interpretations are addressed in a similar way to human logical reasoning processes, a series of data are integrated to deduce the situation, and operators may tend to trust the system more. For example, Skraaning and Jamieson [31] provided feedback (i.e., a program is starting up due to A) and reported the highest positive association. Similarly, three written reasons for a suggestion were presented (i.e., A vehicle will arrive faster than B vehicle because A vehicle follows a direct flight path) in the study by [41], yielding the second-highest association between trust and transparency level.…”
Section: Discussion (mentioning)
confidence: 99%
“…Many researchers have also tried to expand the field of transparency. Skraaning and Jamieson [31] employed verbal and diagnostic feedback in the system behaviour for nuclear system monitoring, focusing on automation observability. Dikmen et al [32] implemented the automation support system for a target identification task that informs its system limitation (i.e., factors not considered in its suggestions).…”
Section: Transparency (mentioning)
confidence: 99%
“…For this research question, we predicted human-automation performance to improve for the specific automated system following the uncertainty communication. This was based on the success previous experiments had when they incorporated transparency that was specific and at a basic functionality level (Bhaskara et al., 2021; Skraaning & Jamieson, 2021). In the present study, the uncertainty communication was focused on a specific automated system that was only completing a basic target discrimination task, so we predict it will improve human-automation performance.…”
Section: Introduction (mentioning)
confidence: 93%
“…Instead, the machine could refer to an automated or autonomous system, an autonomous agent, a robot, an algorithm, or AI. Studies on human-machine collaboration span a variety of fields, including human-machine teaming (Calhoun et al., 2018; Daugherty and Wilson, 2018; Wynne and Lyons, 2018; Ferrari, 2019; Parker and Grote, 2019; Seeber et al., 2020; Laid et al., 2020; Saenz et al., 2020), human-machine relationship (de Visser et al., 2018; Lyons et al., 2018), transparency (Patel et al., 2019; Skraaning and Jamieson, 2019; Kraus et al., 2020), explainability (Gunning, 2016; Degani et al., 2017; DARPA, 2018; Amann et al., 2020; Cadario et al., 2021), task allocation (van Maanen and van Dongen, 2005; Roth et al., 2019; Dubois and Le Ny, 2020), acceptance (Gursoy et al., 2019; Shin, 2020), human trust in machine (Hoff and Bashir, 2015; de Visser et al., 2018; Gutzwiller and Reeder, 2021), (shared) mental models (Cannon-Bowers et al., 1993; Flemisch et al., 2012; Goodrich and Yi, 2013), situation awareness (Salmon et al., 2008; Ososky et al., 2012), measurement (Damacharla et al., 2018), and practice in decision-making (Jarrahi, 2018; Duan et al., 2019; Haesevoets et al., 2021).…”
Section: Literature on Human-Machine Collaboration (mentioning)
confidence: 99%