2019
DOI: 10.3389/fpsyg.2019.00519

Effects of Trust, Self-Confidence, and Feedback on the Use of Decision Automation

Abstract: Operators often fail to rely sufficiently on alarm systems, resulting in a joint human-machine (JHM) sensitivity below that of the alarm system alone. The ‘confidence vs. trust hypothesis’ assumes that use of the system depends on the weighting of the two values: when confidence is higher, the task is performed manually; when trust is higher, the user relies on the system. Thus, insufficient reliance may be due to operators’ overconfidence in their own abilities and/or insufficient trust in the decision automation. …

Cited by 18 publications (13 citation statements). References 65 publications.

Citation statements (ordered by relevance):
“…Moreover, the present study failed to replicate benefits of time pressure on performance in combination with an automated diagnostic aid (e.g., Rice and Keller 2009). Instead, the results confirm other findings (e.g., Wiczorek and Meyer 2019) that providing only…”
Section: Results (contrasting)
confidence: 91%
“…While disuse can refer to the complete disengagement from an automated system, this behavioral pattern can also include actions like silencing alarms on a system which have high false alarm rates (Parasuraman & Riley, 1997; Tenhundfeld et al., 2020; Tomzcak et al., 2019; L. Wang et al., 2009). This disuse can lead to both safety issues and worse performance for the human-machine team (Phillips et al., 2011; Wiczorek & Meyer, 2019). The irony of distrust is that it can be largely influenced by initial trust and expectations which are miscalibrated to actual system performance (de Visser et al., 2018b; J. D. Lee & See, 2004; Madhavan & Wiegmann, 2007a).…”
Section: The Case For Trust Assessment In Human-machine Interaction (mentioning)
confidence: 99%
“…With poor transparency comes higher levels of trust and greater expectations of the systems (de Visser et al., 2020; Matthews et al., 2019). Once system performance violates the trust and contradicts user expectations, users tend to under-trust the system, which leads to under-reliance and even complete disuse of the system (Dzindolet et al., 2002; J. D. Johnson, 2004; Parasuraman & Riley, 1997; Wiczorek & Meyer, 2019).…”
Section: The Case For Trust Assessment In Human-machine Interaction (mentioning)
confidence: 99%
“…A second iteration of this study will be conducted to look at other forms of feedback on the same task. Previous work looked at the use of automation when the participant was familiar or confident with the task at hand (Wiczorek & Meyer, 2019) and whether or not the participant trusted the automation to do a task without being checked (Pak et al., 2016).…”
Section: Future Research (mentioning)
confidence: 99%