2020
DOI: 10.1609/hcomp.v8i1.7464

Soliciting Human-in-the-Loop User Feedback for Interactive Machine Learning Reduces User Trust and Impressions of Model Accuracy

Abstract: Mixed-initiative systems allow users to interactively provide feedback to potentially improve system performance. Human feedback can correct model errors and update model parameters to dynamically adapt to changing data. Additionally, many users desire a greater level of control and the ability to fix perceived flaws in the systems they rely on. However, how the ability to provide feedback to autonomous systems influences user trust is a largely unexplored area of research. Our research investigates how the a…

Cited by 45 publications (24 citation statements: 0 supporting, 23 mentioning, 1 contrasting)
References 28 publications

Citation statements, ordered by relevance:
“…On the contrary, mistrust and distrust can cause users to underestimate the system and rely on themselves, eventually causing them to stop using the automated system in the future. For example, getting people to provide feedback to an intelligent system to fix errors can amplify user mistrust in the system (Honeycutt, Nourani, and Ragan 2020). More related to this paper, Nourani et al. (2020b) found that after observing the system's weakly justified predictions, users tend to disagree with the system even when it is right.…”
Section: Related Work (mentioning)
Confidence: 76%
“…Understanding also plays a role. Failure to understand an explanation may drive users to immediately and unjustifiably distrust the system (Honeycutt et al., 2020) and, rather surprisingly, in some cases the mere fact of being exposed to the machine's internal reasoning may induce a loss of trust (Honeycutt et al., 2020). More generally, the link between interaction and trust is under-explored.…”
Section: Modulating and Manipulating Trust (mentioning)
Confidence: 99%
“…This way, the human can act as a teacher to Artificial Intelligence models and learn from them through Explainable Artificial Intelligence. This two-way relationship can lead to a trusted collaboration [29,30].…”
Section: Related Work (mentioning)
Confidence: 99%