2019
DOI: 10.1101/571232
Preprint

A low-cost, open-source framework for tracking and behavioural analysis of animals in aquatic ecosystems

Abstract: Although methods for tracking animals underwater exist, they frequently involve costly infrastructure investment, or capture and manipulation of animals to affix or implant tags. These practical concerns limit the taxonomic coverage of aquatic movement ecology studies and implementation in areas where high infrastructure investment is impossible. Here we present a method based on deep-learning and structure-from-motion, with which we can accurately determine the 3D location of animals, the structure of the env…

Cited by 8 publications (10 citation statements)
References 48 publications
“…When combined with unmanned aerial vehicles (UAVs; Schiffman, 2014) or other field-based imaging (Francisco et al, 2019), applying these methods to the study of individuals and groups in the wild can provide high-resolution behavioral data that goes beyond the capabilities of current GPS and accelerometry-based technologies (Nagy et al, 2010; Nagy et al, 2013; Kays et al, 2015; Strandburg-Peshkin et al, 2015; Strandburg-Peshkin et al, 2017; Flack et al, 2018)—especially for species that are impractical to study with tags or collars. Additionally, by applying these methods in conjunction with 3-D habitat reconstruction—using techniques from photogrammetry (Strandburg-Peshkin et al, 2017; Francisco et al, 2019)—field-based studies can begin to integrate fine-scale behavioral measurements with the full 3-D environment in which the behavior evolved. Future advances will likely allow for the calibration and synchronization of imaging devices across multiple UAVs (e.g., Price et al, 2018; Saini et al, 2019).…”
Section: Discussion
confidence: 99%
“…However, because animals exhibit different behaviors depending on their surroundings (Strandburg-Peshkin et al, 2017; Francisco et al, 2019; Akhund-Zade et al, 2019), laboratory environments are often less than ideal for studying many natural behaviors. Most conventional computer vision methods are also limited in their ability to accurately track groups of individuals over time, but nearly all animals are social at some point in their life and exhibit specialized behaviors when in the presence of conspecifics (Strandburg-Peshkin et al, 2013; Rosenthal et al, 2015; Jolles et al, 2017; Klibaite et al, 2017; Klibaite and Shaevitz, 2019; Francisco et al, 2019; Versace et al, 2019). These methods also commonly track only the animal’s center of mass, which reduces the behavioral output of an individual to a two-dimensional or three-dimensional particle-like trajectory.…”
Section: Introduction
confidence: 99%
“…Several technologies are available to separate the foreground from the background (segmentation). Various machine learning algorithms are frequently used to great effect, even for the most complex environments (Hughey et al, 2018; Robie et al, 2017; Francisco et al, 2019). These more advanced approaches are typically beneficial for the analysis of field data or organisms that are very hard to see in video (e.g.…
Section: Methods
confidence: 99%
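To make the segmentation step above concrete, here is a minimal sketch of the classical (non-deep-learning) end of the spectrum: foreground pixels are found by differencing each frame against a per-pixel median background model. This is an illustrative stand-in, not the Mask R-CNN approach used in the cited work; the function names and threshold are assumptions for the example.

```python
import numpy as np

def median_background(frames):
    """Estimate a static background as the per-pixel median over all frames."""
    return np.median(frames, axis=0)

def segment_foreground(frame, background, thresh=30):
    """Binary foreground mask: pixels deviating from the background by > thresh."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    return diff > thresh

# Toy example: a bright "fish" blob appears over a static noisy background.
rng = np.random.default_rng(0)
frames = rng.integers(0, 10, size=(20, 32, 32)).astype(np.uint8)
frames[5, 10:14, 10:14] = 200          # blob present only in frame 5
bg = median_background(frames)
mask = segment_foreground(frames[5], bg)
print(mask.sum())                       # 16 foreground pixels (the 4x4 blob)
```

Median modelling works only when each pixel shows background in most frames; deep-learning segmentation is preferred precisely where that assumption fails (moving water surfaces, caustics, low-contrast animals).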
“…We trained an implementation of a Mask Region-based Convolutional Neural Network (Mask R-CNN) on a subset of manually labeled images to accurately detect and segment individual fish in the videos, resulting in pixel masks for each video frame and individual respectively (32,33). The masks were then skeletonized using morphological image transformations, allowing estimation of fish spine poses as seven equidistantly spaced points along the midline of each mask’s long axis.
Section: Deep-Learning Based Automated Tracking and Analysis of Behavior
confidence: 99%
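The midline-pose idea can be sketched as follows, assuming a binary mask whose long axis runs along the image columns. Instead of full morphological skeletonization, this simplified version approximates the midline as the per-column mean row of foreground pixels and resamples it at seven equidistant columns; the function name and toy mask are illustrative, not the paper's implementation.

```python
import numpy as np

def spine_points(mask, n_points=7):
    """Approximate a fish 'spine' as n_points equidistant midline points.

    Simplified stand-in for skeletonization: for each occupied column along
    the mask's long (x) axis, take the mean row of foreground pixels, then
    sample the midline at n_points equidistant columns.
    """
    cols = np.where(mask.any(axis=0))[0]                       # occupied columns
    mid_rows = np.array([mask[:, c].nonzero()[0].mean() for c in cols])
    idx = np.linspace(0, len(cols) - 1, n_points).round().astype(int)
    return np.column_stack([cols[idx], mid_rows[idx]])          # (x, y) pairs

# Toy horizontal "fish" mask: 3 pixels tall, spanning columns 5..24.
mask = np.zeros((20, 30), dtype=bool)
mask[8:11, 5:25] = True
pts = spine_points(mask)
print(pts)   # 7 points along the midline row y = 9
```

A real pipeline would skeletonize arbitrary (curved) masks, e.g. with morphological thinning, and space the points equidistantly along arc length rather than along one image axis.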
“…The first and second spine points represent head position and orientation, and were used to automatically reconstruct continuous fish trajectories using a simple, distance-based identity assignment approach. Accuracy and high detection frequency were visually verified with a Python-based GUI developed within the lab, which was also used to manually correct false identity assignments and losses (32).
Section: Deep-Learning Based Automated Tracking and Analysis of Behavior
confidence: 99%
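A distance-based identity assignment of this kind can be sketched as greedy nearest-neighbour matching between detections in consecutive frames. This is a hypothetical reconstruction of the idea, not the authors' code; the function name and `max_dist` gate are assumptions, and production trackers often use optimal assignment (e.g., the Hungarian algorithm via `scipy.optimize.linear_sum_assignment`) instead.

```python
import numpy as np

def assign_identities(prev_pts, curr_pts, max_dist=50.0):
    """Greedy distance-based identity assignment between consecutive frames.

    Returns, for each current detection, the index of the matched previous
    detection (-1 for unmatched, i.e. a new or re-entering individual).
    """
    mapping = np.full(len(curr_pts), -1)
    # Pairwise Euclidean distances, shape (n_curr, n_prev).
    dists = np.linalg.norm(curr_pts[:, None, :] - prev_pts[None, :, :], axis=2)
    taken = set()
    # Match the globally closest pairs first; each identity is used once.
    for ci, pi in sorted(np.ndindex(dists.shape), key=lambda ij: dists[ij]):
        if mapping[ci] == -1 and pi not in taken and dists[ci, pi] <= max_dist:
            mapping[ci] = pi
            taken.add(pi)
    return mapping

prev_pts = np.array([[0.0, 0.0], [100.0, 0.0]])   # two fish in the last frame
curr_pts = np.array([[98.0, 3.0], [2.0, 1.0]])    # moved slightly, reordered
mapping = assign_identities(prev_pts, curr_pts)
print(mapping)   # [1 0]: detection 0 is fish 1, detection 1 is fish 0
```

The `max_dist` gate is what makes manual correction necessary: whenever an individual moves farther than the gate between frames (or two individuals cross), the assignment fails or swaps, which is exactly the class of error the GUI described above is used to repair.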