2021
DOI: 10.1007/s11023-021-09569-4

Two Dimensions of Opacity and the Deep Learning Predicament

Abstract: Deep neural networks (DNNs) have become increasingly successful in applications from biology to cosmology to social science. Trained DNNs, moreover, correspond to models that ideally allow the prediction of new phenomena. Building in part on the literature on ‘eXplainable AI’ (XAI), I here argue that these models are instrumental in a sense that makes them non-explanatory, and that their automated generation is opaque in a unique way. This combination implies the possibility of an unprecedented gap between dis…

Cited by 35 publications (26 citation statements); References 90 publications
“…In a broader sense, the opacity of a method can be understood as its disposition to resist epistemic access, in particular understanding (Beisbart, 2021). This broader sense of opacity can be used to diagnose additional ways in which DNNs are opaque; for instance, we do not know what features they pick up on when they classify images (Boge, 2021; see also Boge & Grünke, forthcoming). To stress the added difficulties of understanding DNNs, Humphreys (forthcoming) argues that they are representationally opaque, that is, they do not represent the target system in a way that allows explicit scrutiny or understanding, since they provide what he calls extensional, implicit and distributed representations.…”
Section: Machine Learning and Its Interpretability (mentioning)
confidence: 99%
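As a concrete illustration of the kind of XAI probe alluded to in the excerpt above (asking which features a DNN "picks up on" when classifying images), the sketch below computes a simple gradient-based saliency map in PyTorch. The toy model, the random input tensor, and all shapes are hypothetical placeholders, not anything used in Boge (2021) or the citing papers; such maps at best hint at, rather than explain, what a trained network is tracking.

```python
import torch
import torch.nn as nn

# Toy classifier standing in for a trained image-classifying DNN (hypothetical).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),
)
model.eval()

# Stand-in for a real input image; requires_grad lets us differentiate w.r.t. pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

logits = model(image)
score = logits[0, logits.argmax()]  # score of the predicted class

# Gradient of the winning class score with respect to the input pixels:
score.backward()
saliency = image.grad.abs().max(dim=1).values  # per-pixel importance map, shape (1, 224, 224)
print(saliency.shape)
```

The per-pixel magnitudes can then be rendered as a heatmap over the input, which is roughly what popular saliency-based XAI tools do under the hood.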
“…Human translators do not exhibit much better BLEU scores, however. In the original paper introducing the BLEU metric (Papineni et al., 2002), the BLEU scores reported for two human translators were 19.3 and 25.7, respectively. It has also been argued that Transformers are uninterpretable (Boge, 2021), or that they tell us nothing about linguistic competencies (Dupre, 2021). Landgrebe and Smith (2021) argue that language understanding is unlearnable for Transformers, even with supervision.…”
Section: Handwritten Grammars (mentioning)
confidence: 99%
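For readers unfamiliar with the metric discussed in the excerpt above, the following minimal sketch shows how a corpus-level BLEU score in the spirit of Papineni et al. (2002) can be computed today. The sentences are invented and the third-party sacrebleu package is an assumed dependency; nothing here reproduces the scores reported in the cited papers.

```python
import sacrebleu  # assumed third-party dependency (pip install sacrebleu)

# Invented system outputs.
hypotheses = [
    "the cat sat on the mat",
    "there is a book on the table",
]

# One list per reference set, each aligned positionally with the hypotheses.
references = [
    ["the cat is sitting on the mat", "a book lies on the table"],
    ["a cat sat on the mat", "there is a book on the table"],
]

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(f"corpus BLEU = {bleu.score:.1f}")  # 0-100 scale; scores in the teens/twenties are common in MT
```

The point of the quoted passage is simply that, on this 0-100 scale, the human-translator scores of 19.3 and 25.7 are not far above what machine systems achieve.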
“…Landgrebe and Smith (Synthese 198(March):2061-2081, 2021) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence, perhaps more widely known as natural language processing: The models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture.…”
mentioning
confidence: 99%
“…Nonetheless, at the moment, there are very few attempts available to carry out a conceptual analysis of opacity. Those proposed by Burrell (2016), Creel (2020) and Boge (2021) are among the most relevant.…”
mentioning
confidence: 99%
“…"Algorithmic opacity" is related to the abstract specification level and concerns users' understanding of the algorithm describing the overall system's behaviour; "structural opacity" is related to the implementation level and concerns the users' understanding of the program (code) implementing the algorithm; "Run opacity" is related to the physical execution level and concerns users' understanding of the physical process executing the program. Both these taxonomies make some progress in characterizing the plural nature of "opacity" but still miss a dimension of opacity, which, instead, is recognized by Boge (2021) and concerns the fundamental distinction between understanding of a model and understanding with a model. In the context of scientific research, models are not generally interesting per-se but only as they allow scientists to understand something about the world.…”
mentioning
confidence: 99%