2006
DOI: 10.21236/ada471484

Toward Harnessing User Feedback For Machine Learning

Abstract: There has been little research into how end users might be able to communicate advice to machine learning systems. If this resource (the users themselves) could somehow work hand-in-hand with machine learning systems, the accuracy of learning systems could be improved and the users' understanding and trust of the system could improve as well. We conducted a think-aloud study to see how willing users were to provide feedback and to understand what kinds of feedback users could give. Users were shown explanations …

Cited by 32 publications (34 citation statements: 1 supporting, 33 mentioning, 0 contrasting). References 7 publications.
“…Work on how people perceive explanations of ML systems is a growing area [4,20,25,32], which aims to inform the choice and design of explanations for particular systems or tasks. Recent work calls for taxonomic organizations of explanations to enable design guidelines [25].…”
Section: Explanation of Machine Learning (mentioning, confidence: 99%)
“…This is not to say that users will blindly trust ML systems. But prior work suggests that evaluations of a system’s reliability are based on subjective perceptions of its output and the perceived “soundness” of its reasoning, rather than on statistical evidence of its accuracy [43]. The implication for system designers is that users’ willingness to trust and ultimately adopt ML systems depends on perceived “legibility”: the degree to which system behavior seems to “make sense” to its users, regardless of its mechanism or statistical accuracy [19].…”
Section: Discussion (mentioning, confidence: 99%)
“…For example, [25] found strong user preference for, and trust in, models that exhibit “sound” (i.e., human-comprehensible) reasoning and “clear communication” about decision making. These models were also perceived as more accurate, which did not necessarily correlate with actual or statistical accuracy.…”
Section: Design Issues (mentioning, confidence: 99%)