2021
DOI: 10.1038/s41592-021-01106-6

Geometric deep learning enables 3D kinematic profiling across species and environments

Abstract: Comprehensive descriptions of animal behavior require precise measurements of 3D whole-body movements. Although 2D approaches can track visible landmarks in restrictive environments, performance drops in freely moving animals due to occlusions and appearance changes. Therefore, we designed DANNCE to robustly track anatomical landmarks in 3D across species and behaviors. DANNCE uses projective geometry to construct inputs to a convolutional neural network that leverages learned 3D geometric reasoning. We train…
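The projective geometry mentioned in the abstract refers to building a volumetric input for the 3D network: voxel centers in a cube around the animal are projected into each calibrated camera, and the pixels found there fill a per-camera volume. The sketch below is a minimal, illustrative reconstruction of that unprojection step under assumed camera conventions, not the published implementation; the function name, grid sizes, and toy camera matrix are all assumptions.

# Minimal sketch (not the authors' code) of DANNCE-style volumetric unprojection.
import numpy as np

def unproject_to_volume(image, P, center, side_len=0.12, n_vox=32):
    """Return an (n_vox, n_vox, n_vox, C) volume filled by projecting voxel
    centers through the 3x4 projection matrix P and sampling `image` (H, W, C)
    with nearest-neighbour lookup; `center` is the cube midpoint in world units."""
    H, W, C = image.shape
    lin = np.linspace(-side_len / 2, side_len / 2, n_vox)
    X, Y, Z = np.meshgrid(lin, lin, lin, indexing="ij")
    pts = np.stack([X, Y, Z], axis=-1).reshape(-1, 3) + center        # (N, 3) world points
    pts_h = np.concatenate([pts, np.ones((len(pts), 1))], axis=1)     # homogeneous coords

    uvw = pts_h @ P.T                                                 # pinhole projection
    uv = uvw[:, :2] / np.clip(uvw[:, 2:3], 1e-9, None)                # divide by depth

    cols = np.round(uv[:, 0]).astype(int)                             # image x -> column
    rows = np.round(uv[:, 1]).astype(int)                             # image y -> row
    valid = (rows >= 0) & (rows < H) & (cols >= 0) & (cols < W)
    vol = np.zeros((len(pts), C), dtype=image.dtype)                  # out-of-view voxels stay 0
    vol[valid] = image[rows[valid], cols[valid]]
    return vol.reshape(n_vox, n_vox, n_vox, C)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frame = rng.random((512, 512, 3)).astype(np.float32)              # stand-in camera frame
    K = np.array([[800.0, 0.0, 256.0],
                  [0.0, 800.0, 256.0],
                  [0.0, 0.0, 1.0]])                                   # toy intrinsics
    P = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at the origin
    vol = unproject_to_volume(frame, P, center=np.array([0.0, 0.0, 1.0]))
    print(vol.shape)                                                  # (32, 32, 32, 3)

Volumes built this way from several cameras would be concatenated along the channel axis and passed to a 3D convolutional network that regresses the landmark coordinates.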

Cited by 150 publications (161 citation statements)
References 60 publications
“…This suggests that behavioral readouts may indicate biological interventions at a level more sensitive than detectable even by the human eye (Wiltschko et al., 2020). Recent advances in deep learning for image processing have boosted the field to new heights, especially with the publication of DeepLabCut (Mathis et al., 2018), SLEAP (Pereira et al., 2019, 2020) and DANNCE (Dunn et al., 2021) for the estimation of body points. Follow-up modules B-SOiD (Hsu and Yttri, 2019) and SimBA (Nilsson et al., 2020) are now available to train behavior classifiers on body-part pose data.…”
Section: Today and The Future
Mentioning, confidence: 99%
“…On one hand, the study of behavioural correlates linked to neural activity is giving previously unimaginable insights. The possibility to look with sub-second resolution at communities and transitions between different behavioral states allows correlating single-cell firing and oscillations with the animals' actions at an unprecedented time resolution (Hong et al., 2015; Wei and Kording, 2018; Luxem et al., 2020; Dunn et al., 2021; Hausmann et al., 2021). Moreover, unsupervised approaches based on machine learning algorithms to score behaviour are replacing manual scoring, which is intrinsically prone to subjective biases and therefore produces results that are hard to compare between studies.…”
Section: Future Directions and Open Questions
Mentioning, confidence: 99%
“…Some machine learning algorithms have shown high precision in object detection, such as deformable parts models (Felzenszwalb et al., 2010; Unger et al., 2017), R-CNN (Girshick et al., 2014), and deep neural networks (Geuther et al., 2019; Yoon et al., 2019). Using those algorithms, several toolboxes were developed for precisely calculating the postures of laboratory animals during movements, such as DeepLabCut (Mathis et al., 2018; Nath et al., 2019), LEAP (Wang et al., 2017; Pereira et al., 2019), DeepPoseKit (Graving et al., 2019), TRex (Walter and Couzin, 2021), and DANNCE (Dunn et al., 2021; Karashchuk et al., 2021), which greatly simplify and speed up the analysis of multiple behaviors. Although these algorithms may be used to track the gross movement of an animal, they are time-consuming and insufficiently accurate for our purposes because the exact centers cannot be obtained in the process of creating a training dataset.…”
Section: Introduction
Mentioning, confidence: 99%