2016
DOI: 10.1177/1541931213601422

Behavioral Measurement of Trust in Automation

Abstract: Stating that one trusts a system is markedly different from demonstrating that trust. To investigate trust in automation, we introduce the trust fall: a two-stage behavioral test of trust. In the trust fall paradigm, one first learns the capabilities of the system, and in the second phase, the ‘fall,’ one’s choices demonstrate trust or distrust. Our first studies using this method suggest the value of measuring behaviors that demonstrate trust, compared with self-reports of one’s trust. Designing interface…

Cited by 66 publications (26 citation statements). References 23 publications (23 reference statements).

“…Based on literature (Rasmussen, 1985; Seppelt and Lee, 2007; Xu et al., 2014; Biassoni, Ruscio and Ciceri, 2016; Feldhütter et al., 2016; Miller et al., 2016; Bennett, 2017), the following classification for knowledge about the capabilities and limitations of automated systems was proposed by Khastgir, Birrell, Dhadyalla and Jennings (2017): Static knowledge: Understanding of the functionality of the automated system (intentions behind the design of the system and functionality) (Larsson, 2012; Eichelberger and McCartt, 2014). Static knowledge is administered prior to the driving task and is akin to an owner's instruction manual, though with information at a higher abstraction level.…”
Section: Types Of Knowledge (mentioning)
confidence: 99%
“…Based on those elements, drivers can be classified as Trustful or Distrustful according to their initial level of trust (Manchon et al., submitted). Dynamic learned trust is expected to fluctuate during actual interaction with automation (Bueno et al., 2016; Feldhütter et al., 2016; Gold et al., 2015), depending on AD's features (Miller et al., 2016; Payre et al., 2017), performance (Abe et al., 2018; Morris et al., 2017), and type of feedback provided (Häuslschmid et al., 2017; Koo et al., 2015; Lu et al., 2019; Wintersberger et al., 2017). These factors are expected to induce periodic trust recalibrations.…”
Section: Introduction (mentioning)
confidence: 99%
“…These issues are not often reflected in research which uses questionnaires with rather general questions about trust. This difference between opinions as measured via questionnaires and signs of trust when experiencing automated driving has also been described as the Trust Fall (Miller et al. 2016). An explanation, as provided in Miller et al. (2016), is that in questionnaires more time is available to come to a decision. A third explanation is that in questionnaires there is no direct risk perceived in case of automation failure (Lee 1991; Muir and Moray 1996).…”
Section: Engender Correct Calibration Of Trust (mentioning)
confidence: 99%