2023
DOI: 10.1101/2023.11.15.566968
Preprint

Visual interpretability of image-based classification models by generative latent space disentanglement applied to in vitro fertilization

Oded Rotem,
Tamar Schwartz,
Ron Maor
et al.

Abstract: The success of deep learning in identifying complex patterns exceeding human intuition comes at the cost of interpretability. Non-linear entanglement of image features makes deep learning a “black box” lacking human meaningful explanations for the models’ decision. We present DISCOVER, a generative model designed to discover the underlying visual properties driving image-based classification models. DISCOVER learns disentangled latent representations, where each latent feature encodes a unique classification-d…

Cited by 4 publications (1 citation statement)
References 91 publications
“…Deep learning for feature extraction has been shown to be powerful in the context of cell biology, in particular, for analyzing images in 2D [61][62][63]. Despite its success, feature interpretability and generalizability to unseen image data continues to be a major challenge 64,65. To alleviate some of these problems, it has been shown that imposing additional constraints corresponding to prior biological knowledge to models helps to reduce the space of admissible solutions and improve the likelihood that the learned features can be useful for scientific discovery 66.…”
Section: Model Background
confidence: 99%