2019
DOI: 10.7554/elife.47994

DeepPoseKit, a software toolkit for fast and robust animal pose estimation using deep learning

Abstract: Quantitative behavioral measurements are important for answering questions across scientific disciplines—from neuroscience to ecology. State-of-the-art deep-learning methods offer major advances in data quality and detail by allowing researchers to automatically estimate locations of an animal’s body parts directly from images or videos. However, currently available animal pose estimation methods have limitations in speed and robustness. Here, we introduce a new easy-to-use software toolkit, DeepPoseKit, that …

Cited by 420 publications (316 citation statements)
References 93 publications (345 reference statements)
“…This is an attractive solution for real-time applications, but it is not quite as accurate. For various datasets, DeepPoseKit reports it is about three times as accurate as LEAP, yet similar to DeepLabCut (60). They also report about twice faster video processing compared to DeepLabCut and LEAP for batch-processing (on small frame sizes).…”
Section: Accuracy and Speed
confidence: 82%
“…So far, the toolboxes have been tested on data from the same distribution (i.e. by splitting frames from videos into test and training data), which is important for assessing the performance (48,55,60), but did not directly test out-of-domain robustness.…”
Section: Robustness
confidence: 99%
“…For instance, DeepLabCut leverages a pre-trained deep learning model (based on ImageNet) to accurately localize body landmarks. These methods work for various organisms like flies, worms, and mice by learning from a larger number of images collected from a single view (17)(18)(19)(20)(21)(22). However, macaques present several problems that make current best markerless motion capture unworkable.…”
Section: Introduction
confidence: 99%
“…Networks trained on either version can be fully integrated into DLStream and used as needed. Additionally, DLStream is theoretically able to support positional information from other pose estimation networks 2,40 . Recent work from Graving et al 40 suggests that with a different network architecture, similar results can be achieved with faster processing speed in offline analysis.…”
Section: Performance Improvements and Compatibility
confidence: 99%
“…Additionally, DLStream is theoretically able to support positional information from other pose estimation networks 2,40 . Recent work from Graving et al 40 suggests that with a different network architecture, similar results can be achieved with faster processing speed in offline analysis. Whether this can be directly translated into closed-loop performance needs to be tested, especially considering the quality of tracking.…”
Section: Performance Improvements and Compatibility
confidence: 99%