2019
DOI: 10.1109/tvcg.2019.2934595

Visual Interaction with Deep Learning Models through Collaborative Semantic Inference

Abstract: Automation of tasks can have critical consequences when humans lose agency over decision processes. Deep learning models are particularly susceptible since current black-box approaches lack explainable reasoning. We argue that both the visual interface and model structure of deep learning systems need to take interaction design into account. We propose a framework of collaborative semantic inference (CSI) for the co-design of interactions and models to enable visual collaboration between humans and algorithms.…

Cited by 46 publications (42 citation statements) · References 89 publications (115 reference statements)
“…Automation bias is an over-reliance on decision-making technology due to high system complexity and low understanding [21]. Thus, as suggested by [13], we use 'model understanding' techniques to peer into our models' black box to increase awareness of their capabilities and limitations by inspecting learned abstractions. Specifically, we analyze how our best model combines and filters patient features and how important different model components are overall for predicting each task.…”
Section: (which was not certified by peer review)
mentioning, confidence: 99%
“…So far we have only looked at understanding how the model abstracts data and uses its components to model the four tasks. However, to also provide 'decision understanding' [13] of what features are most relevant or impactful for a specific patient…”
[6] https://deepai.org/machine-learning-glossary-and-terms/disentangled-representation-learning
[7] Due to space limitations, we do not show frequency adjusted relevance, though this is easy in practice.
Section: Explainability for Per-patient 'Decision Understanding'
mentioning, confidence: 99%
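
The excerpts above distinguish model-level understanding from per-patient 'decision understanding' based on feature relevance. As a minimal illustration of the latter idea, the Python sketch below computes gradient-based saliency for a single sample. This is a generic attribution technique, not the cited work's actual method, and the model, feature names, and data are hypothetical placeholders.

    import torch

    def feature_relevance(model, x):
        # Gradient-based saliency: |d(score) / d(input)| per feature.
        x = x.clone().detach().requires_grad_(True)
        score = model(x.unsqueeze(0)).squeeze()  # scalar prediction
        score.backward()
        return x.grad.abs()

    # Toy stand-in for a trained risk model over four patient features.
    model = torch.nn.Linear(4, 1)
    patient = torch.randn(4)  # one patient's (hypothetical) feature vector
    for name, r in zip(["age", "bp", "hr", "glucose"],
                       feature_relevance(model, patient).tolist()):
        print(f"{name:>8}: {r:.3f}")

Larger relevance values flag the features that most influence this patient's score, which is the kind of per-decision evidence the excerpt argues end users need.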
“…Accordingly, the problem arises that end users of a system with FDD based on neural networks must rely on models that can make mistakes. In this situation, the end user may lose trust [15].…”
Section: Related Work
mentioning, confidence: 99%