2024
DOI: 10.1007/s11263-024-02074-y

3D-MuPPET: 3D Multi-Pigeon Pose Estimation and Tracking

Urs Waldmann,
Alex Hoi Hang Chan,
Hemal Naik
et al.

Abstract: Markerless methods for animal posture tracking have been rapidly developing recently, but frameworks and benchmarks for tracking large animal groups in 3D are still lacking. To overcome this gap in the literature, we present 3D-MuPPET, a framework to estimate and track 3D poses of up to 10 pigeons at interactive speed using multiple camera views. We train a pose estimator to infer 2D keypoints and bounding boxes of multiple pigeons, then triangulate the keypoints to 3D. For identity matching of individuals in …

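The abstract outlines the core 2D-to-3D step: keypoints detected in multiple calibrated camera views are triangulated into 3D. Below is a minimal sketch of such a triangulation via the direct linear transform, assuming pre-computed 3×4 projection matrices and matched 2D keypoints per view; the function and variable names are illustrative and not taken from the 3D-MuPPET code.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Triangulate one keypoint from N calibrated views via the
    direct linear transform (DLT).

    proj_mats : list of (3, 4) camera projection matrices
    points_2d : list of (x, y) pixel coordinates, one per view
    returns   : (3,) point in world coordinates
    """
    A = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the 3D point.
        A.append(x * P[2] - P[0])
        A.append(y * P[2] - P[1])
    A = np.stack(A)
    # The homogeneous solution is the right singular vector with the
    # smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Toy usage: two synthetic views of a point at (0.1, 0.2, 2.0).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                   # reference camera
P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0.0], [0.0]])])   # shifted baseline
X_true = np.array([0.1, 0.2, 2.0, 1.0])
pts = [(P @ X_true)[:2] / (P @ X_true)[2] for P in (P1, P2)]
print(triangulate_point([P1, P2], pts))   # ~ [0.1, 0.2, 2.0]
```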
Cited by 6 publications (5 citation statements). References 81 publications.

Citation statements (ordered by relevance):
“…Our target species was great tits, although a smaller number of tagged blue tits were also present, so we extended the model to detect their keypoints. We adopted a top-down approach similar to [37] by: 1) localizing individual birds in each frame using YOLOv8 [67], 2) estimating 14 unique 2D keypoints (Figure 1C) with DeepLabCut [40,68], and 3) triangulating 2D postures into 3D (Figure 1D). For each unique individual recorded in the study (N = 57 great tits, N = 20 blue tits), we collated all 3D information and calculated the median head and body skeleton, then ensured the detections fit the median head skeletons by computing the estimated rotation and translation of the skeleton in 3D space.…”
Section: 3D-SOCS
confidence: 99%
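The last step quoted above fits each individual's median skeleton to the triangulated detections by estimating a rigid rotation and translation. A minimal sketch of that kind of rigid fit, using the standard Kabsch algorithm on corresponding 3D keypoints (a generic illustration under that assumption, not code from the cited study):

```python
import numpy as np

def fit_rigid(template, detected):
    """Estimate rotation R and translation t mapping a template
    skeleton onto detected 3D keypoints (Kabsch algorithm).

    template, detected : (K, 3) arrays of corresponding keypoints
    returns            : R (3, 3), t (3,) with detected ≈ template @ R.T + t
    """
    mu_t, mu_d = template.mean(axis=0), detected.mean(axis=0)
    A, B = template - mu_t, detected - mu_d
    # Cross-covariance of the centred point sets; its SVD gives the rotation.
    U, _, Vt = np.linalg.svd(A.T @ B)
    d = np.sign(np.linalg.det(Vt.T @ U.T))      # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_t
    return R, t

# Toy usage: recover a known rotation/translation from synthetic keypoints.
rng = np.random.default_rng(0)
template = rng.normal(size=(14, 3))             # e.g. a 14-keypoint skeleton
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
detected = template @ R_true.T + np.array([0.2, -0.1, 0.05])
R, t = fit_rigid(template, detected)
print(np.allclose(R, R_true, atol=1e-6), np.round(t, 3))
```

The determinant check keeps the estimate a proper rotation rather than a mirror reflection, which matters when the keypoint configuration is nearly planar.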
“…Computer vision enables the automatic quantification of fine-scaled behaviors from images, not only increasing the volume of behavioral data but also enhancing the objectivity of behavior coding. Although initial applications were primarily in controlled laboratory settings [34][35][36][37][38][39][40][41][42][43][44][45][46], computer vision is also being applied in the field. It has been used to monitor animals from drones [47][48][49], quantify courtship displays [50], analyze parental care behaviors [51], study underwater interactions [52], identify individuals [53,54], and more.…”
Section: Introduction
confidence: 99%