2022
DOI: 10.21203/rs.3.rs-1371027/v1
Preprint

Computer Vision Reveals Fish Behaviour Through Structural Equation Modelling of Movement Patterns

Abstract: Background: Studying and quantifying fish behaviour is important for understanding how fish interact with their environments. Yet much fish behaviour in aquatic ecosystems remains hard to observe and time-consuming to document manually. Automated tracking through computer vision techniques can provide fine-scale movement data for many individuals across spatial and temporal scales. When used alongside statistical methodologies such as structural equation models (SEMs), these data can be used to infer underlying …

Cited by 3 publications (3 citation statements)
References: 42 publications
“…Even for the two better-performing frameworks, accuracy for counts per frame was below 80% (73.8% for Detectron, 69.3% for YOLOv5), as was accuracy for MaxN (73% for Detectron, 65.3% for YOLOv5). Object detection performance on stationary cameras for this species has shown a wide range of outcomes in recent studies: a 91% F1 value in a single-species model (Lopez-Marcano et al., 2021), but 75.0% in a three-species model (Lopez-Marcano et al., 2022). For bream, we are unable to state clearly whether automation algorithms perform differently on mobile than on stationary cameras.…”
Section: Discussion (mentioning)
confidence: 81%
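MaxN, referenced in the statement above, is the standard relative-abundance metric in underwater video surveys: the maximum number of individuals visible in any single frame. The following is a minimal Python sketch of deriving per-frame counts and MaxN from detector output; it is our illustration, not code from the cited studies, and the detection format and confidence threshold are assumptions.

```python
# Minimal sketch (assumed detection format): derive per-frame counts
# and MaxN from object-detector output for one species.
from collections import Counter

def per_frame_counts(detections, conf_threshold=0.5):
    """Count detections per frame, keeping only boxes above a confidence threshold.

    detections: iterable of (frame_index, confidence) pairs (hypothetical format).
    Returns a Counter mapping frame_index -> number of fish detected.
    """
    counts = Counter()
    for frame_idx, conf in detections:
        if conf >= conf_threshold:
            counts[frame_idx] += 1
    return counts

def max_n(counts):
    """MaxN: the maximum number of individuals seen in any single frame."""
    return max(counts.values(), default=0)

# Example: three frames with varying numbers of detections.
dets = [(0, 0.9), (0, 0.6), (1, 0.4), (1, 0.8), (2, 0.95), (2, 0.7), (2, 0.55)]
counts = per_frame_counts(dets)
print(counts)         # Counter({2: 3, 0: 2, 1: 1})
print(max_n(counts))  # 3
```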
“…CNN models in recent multispecies analyses of videos from underwater stationary cameras have produced overall performances (as % of object detections correct, similar to F1) of 86.9% on average across 18 fish species (Villon et al., 2018) and 78.0% across 20 species (with values for individual species ranging from 63% to 99%; Villon et al., 2021). Single-species CNN models for stationary cameras in local waters near the current survey location have produced a range of F1 values in object detection analyses: 87.6% and 92.3% over seagrass habitat (Ditria et al., 2020a; Ditria et al., 2020b); 83.0% and 90.6% over reef (Ditria et al., 2020b; Lopez-Marcano et al., 2022), albeit with much lower values where training did not include the habitat over which test videos were filmed (e.g. 58.4% and 73.3%; Ditria et al., 2020b).…”
Section: Discussion (mentioning)
confidence: 87%
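The F1 values quoted throughout these statements are the harmonic mean of detection precision and recall. Below is a minimal sketch of the standard calculation from matched detections; it is an illustration under assumed counts, not code from any cited paper.

```python
# Minimal sketch: F1 score from true positives (TP), false positives (FP)
# and false negatives (FN) after matching detections to ground-truth boxes.
def f1_score(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp) if tp + fp else 0.0  # correct boxes / all predicted boxes
    recall = tp / (tp + fn) if tp + fn else 0.0     # correct boxes / all true fish
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example (hypothetical counts): 90 correct detections, 10 spurious, 8 missed.
print(round(f1_score(90, 10, 8), 3))  # 0.909, i.e. roughly a 91% F1 value
```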
“…Rugose habitats of reefs may lead to larger numbers of false negatives/positives in fish detections due to cryptic behavior and/or coloring and mottling that resembles complex habitat (e.g., lionfish). Fish species of different size classes and with different swimming or schooling behaviors may be harder to detect or classify than others (Lopez-Marcano et al., 2022), especially at variable distances from a camera at a fixed position (mobile cameras face their own challenges). Regardless of the source of error, the main challenge is that the annotation phase of post-processing is likely impacted by detection and identification differences arising from variable environmental conditions in which video is collected, and therefore great care has to be taken to ensure that time series remain stable relative to changes made in post-processing methods.…”
Section: Introduction (mentioning)
confidence: 99%