Proceedings of the 24th International Conference on Intelligent User Interfaces 2019
DOI: 10.1145/3301275.3302289

The effects of example-based explanations in a machine learning interface

Abstract: The black-box nature of machine learning algorithms can make their predictions difficult to understand and explain to end-users. In this paper, we propose and evaluate two kinds of example-based explanations in the visual domain, normative explanations and comparative explanations (Figure 1), which automatically surface examples from the training set of a deep neural net sketch-recognition algorithm. To investigate their effects, we deployed these explanations to 1150 users on QuickDraw, an online platform whe…
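To make the idea of surfacing training-set examples concrete, the following is a minimal, hypothetical sketch, not the authors' implementation: normative explanations are approximated as training examples most representative of the predicted class, and comparative explanations as the training examples nearest to the user's drawing in an embedding space. The embeddings, function names, and nearest-neighbor retrieval here are illustrative assumptions; random data is used so the snippet is self-contained and runnable.

import numpy as np

# Hypothetical setup: each training sketch has an embedding (e.g. taken from
# an intermediate layer of the recognizer) and a class label. Random data
# stands in for real sketches so the example runs on its own.
rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(1000, 64))   # 1000 training sketches, 64-d embeddings
train_labels = rng.integers(0, 10, size=1000)    # 10 sketch classes

def normative_examples(predicted_class, k=3):
    """Indices of k training sketches that best represent the predicted class
    (here: the sketches closest to the class centroid in embedding space)."""
    idx = np.flatnonzero(train_labels == predicted_class)
    class_embs = train_embeddings[idx]
    centroid = class_embs.mean(axis=0)
    dists = np.linalg.norm(class_embs - centroid, axis=1)
    return idx[np.argsort(dists)[:k]]

def comparative_examples(query_embedding, k=3):
    """Indices of the k training sketches most similar to the user's drawing,
    regardless of class, shown as a point of comparison."""
    dists = np.linalg.norm(train_embeddings - query_embedding, axis=1)
    return np.argsort(dists)[:k]

# Usage: explain a (random) query drawing whose predicted class is 4.
query = rng.normal(size=64)
print("normative:", normative_examples(predicted_class=4))
print("comparative:", comparative_examples(query))

Centroid proximity is only one way to pick representative examples; a medoid or a curated prototype set would serve the same role in this sketch.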

Cited by 150 publications (100 citation statements: 4 supporting, 96 mentioning, 0 contrasting) · References 25 publications

Selected citation statements:
“…Research on how to reduce the over-trust effect suggested different solutions, but it needs more investigation to measure and adjust the relationship between different variables including trust, certainty level, cognitive styles, personality and liability. Existing proposals revolve around comparative explanations [13], argumentation [20], personalised explanation based on user personality [75], uncertainty and error presentation [76]. Research is still needed to investigate how to embed these solutions in the interfaces considering other usability and user experience factors such as the timing, the level of details, the feedback to collect from end-users and the evolution of explanation to reflect it.…”
Section: Reported Risks (mentioning)
confidence: 99%
“…Hence, amplification systems also require high transparency [189, 192]. To support algorithm transparency, amplification systems can show visual activation of features that led to the recommendation [9] or similar cases in the data that serve as evidence for the current recommendation [193]. Summarizing human decisions can involve expressing data transformations as natural language rules [4, 28] and visual node-link diagrams [22].…”
Section: Taxonomy Of Expertise Amplification (mentioning)
confidence: 99%
“…We wanted the discussion on values to be grounded in a broad range of realistic examples based on the current capabilities of AI technologies. AI systems are notoriously opaque, making them difficult to understand and explain independent of examples [12]. The AI cards were created based on findings from a pilot, conducted with 5 journalism students.…”
Section: Value Cards (mentioning)
confidence: 99%