Scientific Understanding and Representation 2022
DOI: 10.4324/9781003202905-28

Understanding from Deep Learning Models in Context

Abstract: This paper places into context how the term "model" in machine learning (ML) contrasts with traditional uses of scientific models for understanding, and we show how direct analysis of an estimator's learned transformations (specifically, the hidden layers of a deep learning model) can improve understanding of the target phenomenon and reveal how the model organizes relevant information. Specifically, three modes of understanding will be identified, the difference between implementation irrelevance and functiona…
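The abstract's central technique is direct inspection of a trained network's hidden-layer representations. As an illustration only, and not the paper's own code, the minimal sketch below assumes a small PyTorch model and uses forward hooks to capture intermediate activations so they can be examined directly; the architecture, layer names, and random probe inputs are all hypothetical stand-ins.

```python
# Hypothetical sketch: capturing hidden-layer activations with PyTorch forward
# hooks so the learned transformations can be inspected directly. The model,
# layer labels, and probe data are illustrative assumptions, not the paper's setup.
import torch
import torch.nn as nn

# A small feed-forward classifier standing in for a trained deep learning model.
model = nn.Sequential(
    nn.Linear(20, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, 2),
)

activations = {}

def save_activation(name):
    # Returns a hook that stores the layer's output under the given name.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

# Register hooks on the hidden (intermediate) layers we want to examine.
for idx, module in enumerate(model):
    if isinstance(module, nn.ReLU):
        module.register_forward_hook(save_activation(f"hidden_{idx}"))

# Run a batch of (here, random) inputs through the model; the hooks record each
# hidden representation, which could then be analysed, e.g. by clustering or
# dimensionality reduction, to see how the model organizes relevant information.
x = torch.randn(8, 20)
with torch.no_grad():
    model(x)

for name, act in activations.items():
    print(name, tuple(act.shape))
```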

Cited by 2 publications (2 citation statements)
References 20 publications
“…Since ML models engage in idealizations and can find patterns of interest in a way divorced from underlying real-world processes, like toy models, it seems like ML models do not actually represent their targets and that we should accept the ML representation hypothesis regarding their epistemic status. And indeed, in a recent paper, Tamir and Shech (2022) argue that ML models can fail to represent their targets, undermining their epistemic status. One example they highlight is the case of Esteva et al.'s (2017) melanoma classifier that reportedly does better at identifying melanoma compared to dermatologists.…”
Section: Representation and ML
“…This is an example of what I am calling a data processing idealization. Tamir and Shech (2022) suggest that the lack of similarity resulting from data processing idealizations can undermine how well ML models represent phenomena.…”
Section: Representation and ML