2023
DOI: 10.1101/2023.09.13.557636
Preprint

Self-supervised deep learning uncovers the semantic landscape of drug-induced latent mitochondrial phenotypes

Parth Natekar,
Zichen Wang,
Mehul Arora
et al.

Abstract: Imaging-based high-content screening aims to identify substances that modulate cellular phenotypes. Traditional approaches screen compounds for their ability to shift disease phenotypes toward healthy phenotypes, but these end point-based screens lack an atlas-like mapping between phenotype and cell state that covers the full spectrum of possible phenotypic responses. In this study, we present MitoSpace: a novel mitochondrial phenotypic atlas that leverages self-supervised deep learning to create a semanticall…

Cited by 2 publications (1 citation statement)
References 66 publications
“…Alternatively, foundation models potentially offer consistent feature representation extraction across datasets, which would circumvent the alignment step. 10 For example, initiatives like "Mitospace", which focuses on extracting a common feature space of mitochondria 47 , the masked autoencoder Phenom-Beta, which is a vision transformer foundational model for embedding microscopy images 10 , "BioMorph", which links morphology to organelle processes 48 , and the Allen Cell Explorer, which uncovers cell phenotypes organelle-by-organelle 49,50 , illustrate promising future directions for annotating universal cell representations with generalizable single-cell phenotypes. Nevertheless, these foundation models still require phenotype interpretation to analyze cells and perturbations on a uniform, biologically-interpretable basis.…”
Section: Discussion
confidence: 99%