2022
DOI: 10.1101/2022.06.17.496550
Preprint

3D single-cell shape analysis using geometric deep learning

Abstract: Aberrations in cell geometry are linked to cell signalling and disease. For example, metastatic melanoma cells alter their shape to invade tissues and drive disease. Despite this, there is a paucity of methods to quantify cell shape in 3D and little understanding of the shape-space cells explore. Currently, most descriptions of cell shape rely on predefined measurements of cell regions or points along a perimeter. The adoption of 3D tissue culture and imaging systems in medical research has recently created a …

Cited by 8 publications (5 citation statements)
References: 93 publications
“…On the other hand, we argue that the ability to consistently measure the contribution of cell shape to a variety of applications and datasets indicates a strong biological signal that goes beyond the noise levels introduced by the coarse segmentation. Indeed, high resolution and 3D cell shapes provide discriminative information regarding the cell’s microenvironment (Dent et al, 2024; Segal et al, 2022), molecular organization (Mazloom-Farsibaf et al, 2023; Viana et al, 2023) and treatment (De Vries et al, 2023). Also here, future technologies are expected to have improved resolution and maybe even 3D information leading to information-rich segmentations.…”
Section: Discussion (mentioning)
confidence: 99%
“…Previous studies have introduced unsupervised representation learning approaches for cell images using autoencoders with geometric deep learning 44,45. Our work complements these approaches in three ways: first, by incorporating the notion of orientation invariance into our intracellular structure morphology-dependent framework for representation learning; second, by providing a systematic multi-task benchmark to evaluate the utility of each model that goes well beyond traditionally assessed reconstruction quality; third, by focusing our analysis on 3D multi-piece intracellular structures with complex morphology and spatial distribution.…”
Section: Many Current Techniques For Analyzing Single-molecule Locali... (mentioning)
confidence: 99%
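The orientation invariance mentioned in the statement above can be obtained in several ways. The sketch below shows one common preprocessing route, not necessarily the cited work's method: aligning each 3D point cloud to its principal axes so that rotated copies of the same shape land in the same canonical frame. The function name and use of NumPy are illustrative assumptions.

# Minimal sketch, assuming an (N, 3) cell-surface point cloud; PCA alignment is
# one common way to obtain orientation invariance, not the cited pipeline.
import numpy as np

def pca_align(points: np.ndarray) -> np.ndarray:
    """Map a point cloud into a canonical frame defined by its principal axes."""
    centred = points - points.mean(axis=0)              # remove translation
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    return centred @ vt.T                               # rotate into the PCA frame

# Usage: rotated copies of the same shape map to the same frame (up to axis sign)
cloud = np.random.rand(500, 3)
canonical = pca_align(cloud)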
“…Dynamic graphs are computed by constructing k-nearest neighbor graphs on points. We used k=20 based on previous works as a balance between computational complexity and local structure information 45. We concatenated the cross-product of the neighbor features and input points as well as the input points themselves to the hidden representation.…”
Section: Point Cloud Models (mentioning)
confidence: 99%
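The dynamic-graph construction quoted above follows the DGCNN/EdgeConv pattern: the k-nearest-neighbour graph is recomputed from the current point features, and per-edge features combine each centre point with information about its neighbours. The sketch below uses the standard EdgeConv pairing of the centre feature and the neighbour offset with k = 20; the quoted work additionally describes a cross-product term, which is omitted here. The PyTorch implementation and function names are illustrative assumptions, not the authors' code.

# Minimal sketch: dynamic kNN graph and per-edge features, DGCNN/EdgeConv style.
import torch

def knn_indices(x: torch.Tensor, k: int = 20) -> torch.Tensor:
    """x: (B, N, C) point features -> (B, N, k) indices of the k nearest neighbours."""
    dists = torch.cdist(x, x)                                   # pairwise distances, (B, N, N)
    return dists.topk(k + 1, largest=False).indices[..., 1:]    # drop the self-match

def dynamic_edge_features(x: torch.Tensor, k: int = 20) -> torch.Tensor:
    """Build per-edge features [x_i, x_j - x_i] on a kNN graph recomputed from x.
    Returns (B, N, k, 2C); a shared per-edge MLP and max-pool over k would follow."""
    B, N, C = x.shape
    idx = knn_indices(x, k)                                      # (B, N, k)
    batch = torch.arange(B, device=x.device).view(B, 1, 1)
    neighbours = x[batch, idx]                                   # (B, N, k, C)
    centre = x.unsqueeze(2).expand(-1, -1, k, -1)                # (B, N, k, C)
    return torch.cat([centre, neighbours - centre], dim=-1)      # (B, N, k, 2C)

# Usage: edge features for a toy cloud of 128 points in 3D
pts = torch.randn(2, 128, 3)
feat = dynamic_edge_features(pts, k=20)   # shape (2, 128, 20, 6)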
“…Indeed, the advantage of time-lapse imaging is that temporal models of cell behavior can encode (or learn) the transitions of states over time (Held et al, 2010; Bove et al, 2017; Soelistyo et al, 2022; Gallusser et al, 2023). Incorporating features such as local (or collective) motion, neighbourhood embeddings or cell state classification can be used to generate rich representations (Bove et al, 2017; Driscoll et al, 2019; Andrews et al, 2021; Gradeci et al, 2021; De Vries et al, 2022; Ko et al, 2022; Malin-Mayor et al, 2022; Yamamoto et al, 2022; Viana et al, 2023). Increasingly, self-supervised methods (such as variational autoencoders) are being used to learn explainable representations directly from the image data (Zaritsky et al, 2021; Soelistyo et al, 2022; Wu et al, 2022).…”
Section: Learned Representations (mentioning)
confidence: 99%
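As a concrete illustration of the self-supervised (variational autoencoder) representations mentioned in the quote above, the sketch below compresses a single-channel 64×64 cell image into a low-dimensional latent vector that serves as the learned representation. The architecture, sizes, and class name are illustrative assumptions and are not taken from any of the cited papers.

# Minimal sketch, assuming 64x64 single-channel cell crops: a tiny convolutional VAE
# whose latent mean acts as the learned per-cell representation.
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, latent_dim: int = 16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(16, 32, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Flatten(),
        )
        self.to_mu = nn.Linear(32 * 16 * 16, latent_dim)
        self.to_logvar = nn.Linear(32 * 16 * 16, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 32 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (32, 16, 16)),
            nn.ConvTranspose2d(32, 16, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 32
            nn.ConvTranspose2d(16, 1, 4, stride=2, padding=1), nn.Sigmoid(),  # 32 -> 64
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterisation trick
        return self.decoder(z), mu, logvar

# Usage: `mu` is the per-cell representation learned without labels
vae = TinyVAE()
recon, mu, logvar = vae(torch.rand(4, 1, 64, 64))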