Are Well-Calibrated Users Effective Users? Associations Between Calibration of Trust and Performance on an Automation-Aided Task
2014. DOI: 10.1177/0018720814561675

Abstract: Users who were better able to perform the task unaided were better able to identify and correct aid failure, suggesting that user task training and expertise may benefit human-automation performance.

Cited by 61 publications (35 citation statements). References 22 publications.
“…For instance, a popular task to examine this trust in automation cycle is the bag-screening task. In this task, participants can screen for dangerous objects themselves and follow an automated agent’s recommendation (Madhavan et al., 2006; Madhavan and Wiegmann, 2007a; Merritt and Ilgen, 2008; Merritt et al., 2013, 2014, 2015; Pop et al., 2015). A complete neural explanatory mechanism of trust calibration in automated agents would need to include the ERP profile of the observation of the automation’s performance, the evaluation of the automated agent’s decision recommendation, as well as the feedback on the consequence of either complying or not complying with the advice of the agent.…”
Section: Discussion
confidence: 99%
“…These measures were selected because of their potential relevance to responses to automation and to ensure that both groups did not differ on personality measures. Research suggests that personality and other individual differences can have strong effects on a number of automation-related outcomes (Merritt, Heimbaugh, LaChapell, & Lee, 2013; Merritt, Lee, Unnerstall, & Huber, 2015; Merritt & Ilgen, 2008; Szalma & Taylor, 2011; Szalma, 2009).…”
Section: Methods
confidence: 99%
“…Trust is difficult to measure and monitor [21], and especially hard to assess in a real-time manner, as it is often too disruptive to interrupt and ask users to report trust ratings during the course of an interaction. Measuring and monitoring trust, however, is paramount to the success of human-agent teaming [31]. When trust in agents is too high, users tend to have a more complacent attitude, whereas when trust is too low, users tend to overlook or ignore agents’ inputs.…”
Section: Trust in Automation
confidence: 99%