2020
DOI: 10.1177/1071181320641092

The Unreasonable Ineptitude of Deep Image Classification Networks

Abstract: The success of deep image classification networks has been met with enthusiasm and investment from both the academic community and industry. We hypothesize users will expect these systems to behave similarly to humans, and to succeed and fail in ways humans do. To investigate this, we tested six popular image classifiers on imagery from ten tool categories, examining how 17 visual transforms impacted both human and AI classification. Results showed that (1) none of the visual transforms we examined produced su…
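As a rough sketch of the kind of test the abstract describes, the snippet below applies a single visual transform (a 90° rotation) to an image and compares a pretrained classifier's top-1 prediction before and after the transform. The choice of torchvision's ResNet-50, the rotation transform, and the image filename hammer.jpg are illustrative assumptions; the paper's own classifiers, tool imagery, and 17 transforms are not reproduced here.

# Minimal sketch (assumptions noted above): compare a pretrained
# classifier's top-1 prediction on an original image vs. a rotated copy.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet50(pretrained=True).eval()  # stand-in classifier

# Standard ImageNet preprocessing
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def top1(img):
    """Return the model's top-1 ImageNet class index for a PIL image."""
    with torch.no_grad():
        logits = model(preprocess(img).unsqueeze(0))
    return int(logits.argmax(dim=1))

img = Image.open("hammer.jpg").convert("RGB")  # hypothetical tool image
rotated = img.rotate(90, expand=True)          # one example visual transform

print("original prediction:", top1(img))
print("rotated prediction: ", top1(rotated))

Repeating this comparison across many images and transforms, and collecting the same judgments from human raters, is the basic shape of the human-versus-AI comparison the abstract reports.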

Cited by 2 publications (2 citation statements)
References 20 publications
“…In the two studies presented here, we report tests of comprehension and performance (in Study 1) of users interacting with explanations generated via CXAI, and qualitative assessments of satisfaction by human participant responses to the CXAI system or explanations generated by that tool (in Study 2). In both user studies, we compared the CXAI system (Mamun et al, 2021b) to a Visual Browser (Mueller et al, 2020) of an image classification database [see Figure 2] which enabled users to explore patterns and see the results of the image classifier. This visual browser was the same interface CXAI users had access to when creating CXAI entries.…”
Section: Study Overview
confidence: 99%
“…The CXAI system itself was developed using Laravel. To populate the system, we created an online browser that allowed users to explore how a popular commercial image classifier performed on a set of 50 images of ten hand tools under several image transforms (see Mueller et al, 2020, which examined the performance of the system). The overall system was developed collaboratively with a set of users, including the design team and interested graduate students enrolled in a human factors graduate program as part of their coursework, who were asked to explore the AI system and use the CXAI system to identify errors, patterns, and other issues with the system.…”
Section: Assessment of the CXAI System with Goodness Criteria
confidence: 99%