Proceedings of the Web Conference 2021
DOI: 10.1145/3442381.3450069

What do You Mean? Interpreting Image Classification with Crowdsourced Concept Extraction and Analysis

Abstract: Global interpretability is a vital requirement for image classification applications. Existing interpretability methods mainly explain a model's behavior by identifying salient image patches, which require manual effort from users to make sense of, and which do not typically support model validation with questions that investigate multiple visual concepts. In this paper, we introduce a scalable human-in-the-loop approach for global interpretability. Salient image areas identified by local interpretability methods…
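As a concrete point of reference for the "salient image areas identified by local interpretability methods" that the abstract mentions, the sketch below shows a generic occlusion-based saliency map: mask one patch at a time and score it by how much the target-class probability drops. This is an illustrative assumption, not the paper's pipeline; occlusion_saliency, predict_proba, and the toy classifier are hypothetical names introduced only for this example.

import numpy as np

def occlusion_saliency(image, predict_proba, target_class, patch=8, stride=8):
    # Score each patch by how much masking it lowers the target-class probability.
    # `predict_proba` is a hypothetical callable: image -> class probability vector.
    h, w, _ = image.shape
    base = predict_proba(image)[target_class]
    saliency = np.zeros((h, w))
    for y in range(0, h, stride):
        for x in range(0, w, stride):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch, :] = 0.0  # zero out the patch
            drop = base - predict_proba(occluded)[target_class]
            saliency[y:y + patch, x:x + patch] = drop
    return saliency

# Toy usage with a dummy "classifier" that responds to brightness in the top-left corner.
def dummy_predict_proba(img):
    score = float(img[:16, :16].mean())
    return np.array([1.0 - score, score])

img = np.random.rand(64, 64, 3).astype(np.float32)
sal = occlusion_saliency(img, dummy_predict_proba, target_class=1)
print("most salient patch starts at", np.unravel_index(sal.argmax(), sal.shape))

Per the citing work quoted below (Balayn et al.), the paper's approach then attaches semantic concepts to such saliency maps through human computation, rather than leaving users to interpret the raw highlighted regions.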



Cited by 19 publications (11 citation statements). References 27 publications.
“…TCAV [27], ACE [14]), or textual information (e.g. SECA [8]). We use these categorizations to identify the explanations relevant to include in our probe.…”
Section: Machine Learning Explainability
mentioning, confidence: 99%
“…Yet, limited effort has been devoted to investigating the debugging practices of computer vision practitioners. The machine learning community develops various explainability methods, often arguing their usefulness for model bug identification [8,14,27,40,47]. However, few studies investigate their concrete uses in this process.…”
Section: Introduction
mentioning, confidence: 99%
“…The most closely related work, as we discussed, is Lakkaraju et al [27] and Liu et al [29] that use HItL methods for unknown unknowns detection. Recent work that has directly inspired ours is Balayn et al [6] that propose to use human computation to interpret the behavior of image classifiers by attaching semantic concepts to the saliency maps of classification. We employ this method for unknown unknowns characterization in image recognition, and take a step further to show that by including human specified requirements of what a model should know, we can significantly improve unknown unknowns characterization.…”
Section: Related Work
mentioning, confidence: 99%
“…For effective characterization of unknown unknowns, two types of knowledge are needed: knowledge of what a model has learned, that we henceforth refer to as REALLY-KNOWS, and what a model should have learned, referred to as SHOULD-KNOW. Recent work on human-in-the-loop machine learning interpretability [6] has shown the important role of humans as computational agents to describe REALLY-KNOWS, by annotating salient image areas in image recognition with semantic concepts. In this paper, we advocate another view to the role of humans as contributors who can shed light on SHOULD-KNOW.…”
Section: Introduction
mentioning, confidence: 99%
“…Most issues are ultimately questions of ill-defined requirements. Developing methods to better identify the requirements of the systems prior to their development, and to test for such requirements, would allow one to foresee such issues and possibly correct for them [23]. A recent study (not from the OCL domain) refers to adjacent problems as underspecification of machine learning models [76], i.e., models trained on the same dataset with the same architecture but with various seemingly "unimportant" hyperparameters (e.g., initialization seed) provide similar performance on a test set, but diverging performance on the deployment data.…”
Section: Issues
mentioning, confidence: 99%