2021
DOI: 10.1002/ece3.7344
Automated location invariant animal detection in camera trap images using publicly available data sources

License: This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.

Cited by 24 publications (17 citation statements) · References 44 publications
“…Another approach to enhance generalisation is to infuse camera trap data with imagery from other sources, such as Flickr and iNaturalist. Shepley et al. [20] detail this work, showing that infusing camera trap data with up to 15% imagery from other sources yields mAP increases of 3.66% to 18.20%, although improvements plateaued or decreased when the infusion proportion was increased further.…”
Section: B. Object Detection and Image Classification
confidence: 75%
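A minimal sketch, in Python, of how such a data-infusion step might be scripted; the function name `infuse_dataset`, the ratio arithmetic, and the use of plain Python lists of training items are illustrative assumptions, not the pipeline used by Shepley et al.:

```python
import random

def infuse_dataset(camera_trap_items, external_items, infusion_ratio=0.15, seed=0):
    """Blend a camera trap training set with externally sourced imagery
    (e.g. Flickr or iNaturalist) so that the external images make up
    `infusion_ratio` of the final training set."""
    rng = random.Random(seed)
    n_trap = len(camera_trap_items)
    # Solve n_ext / (n_trap + n_ext) = infusion_ratio for n_ext.
    n_ext = int(round(infusion_ratio * n_trap / (1.0 - infusion_ratio)))
    n_ext = min(n_ext, len(external_items))
    blended = list(camera_trap_items) + rng.sample(list(external_items), n_ext)
    rng.shuffle(blended)
    return blended
```

With infusion_ratio=0.15 this caps the external share at the 15% level beyond which, according to the citing authors, the reported gains plateaued or declined.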
“…In a bid to improve model performance (and indirectly overcome the generalisation problem), research progressed to training object detection models to first localise an animal within an image and then classify that animal [2], [16], [20]. This approach appeared to improve performance over whole-image classifiers, with Tabak et al. [2] showing top-1 classification accuracy (the predicted label with the highest confidence being correct) improving from 79.18% for full images to 91.86% for images cropped to object detection bounding boxes, with testing conducted on the same dataset.…”
Section: B. Object Detection and Image Classification
confidence: 99%
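A hedged sketch of the localise-then-classify pipeline described above, using generic pretrained torchvision models (a COCO-trained Faster R-CNN detector and an ImageNet-trained ResNet-50 classifier) as stand-ins for the fine-tuned, wildlife-specific models in [2], [16], [20]; the score threshold and preprocessing choices are assumptions:

```python
import torch
from PIL import Image
from torchvision import models, transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Generic pretrained models stand in for the wildlife-specific ones.
detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
classifier = models.resnet50(weights="DEFAULT").eval()

to_tensor = transforms.ToTensor()
classifier_preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def detect_then_classify(image_path, score_threshold=0.5):
    """Localise candidate animals, crop each bounding box, classify each crop."""
    image = Image.open(image_path).convert("RGB")
    with torch.no_grad():
        detections = detector([to_tensor(image)])[0]
    results = []
    for box, score in zip(detections["boxes"], detections["scores"]):
        if score < score_threshold:
            continue  # skip low-confidence boxes (likely background)
        crop = image.crop(tuple(box.tolist()))
        with torch.no_grad():
            logits = classifier(classifier_preprocess(crop).unsqueeze(0))
        results.append((box.tolist(), float(score), int(logits.argmax(dim=1))))
    return results
```

The design point the citing papers make is that classifying a tight crop, rather than the whole frame, removes most of the location-specific background that otherwise hurts generalisation.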
“…Gomez et al. [24] used ResNet101 to achieve a binary (birds vs. no birds) accuracy of 97.5% and a multi-class (bird species) accuracy of 90.23%. Others, such as Beery et al. [25] and Shepley et al. [26], used object detection techniques to eliminate non-animal images before classification. An object detection approach proposed by Wei et al. [27] outperformed MLWIC [16].…”
Section: Animal Image Classification Using Convolutional Neural Network
confidence: 99%
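For illustration, a small sketch of how a ResNet101 backbone can be re-headed for either a binary (birds vs. no birds) or a species-level classification task of the kind reported by Gomez et al. [24]; the pretrained-weight choice, layer replacement, and class counts are assumptions rather than their published training setup:

```python
import torch.nn as nn
from torchvision import models

def build_classifier(num_classes=2):
    """ResNet101 with its ImageNet head replaced by a task-specific
    fully connected layer (num_classes=2 for birds vs. no birds)."""
    model = models.resnet101(weights="DEFAULT")
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model

binary_model = build_classifier(num_classes=2)     # birds vs. no birds
species_model = build_classifier(num_classes=200)  # hypothetical species count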
“…In addition, with the wide application of camera trap surveys, dataset sizes increase rapidly, and the data-preprocessing obstacle posed by images containing no wildlife becomes more and more prominent [19, 20]. Cost-effective technologies are urgently needed to aid ecological monitoring [21, 22].…”
Section: Introduction
confidence: 99%
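One common, though here only assumed, way to reduce that preprocessing burden is to flag images with no confident detections as blanks before any downstream analysis; a minimal sketch with a generic pretrained detector, where the threshold value and function name are illustrative assumptions:

```python
import torch
from PIL import Image
from torchvision import transforms
from torchvision.models.detection import fasterrcnn_resnet50_fpn

detector = fasterrcnn_resnet50_fpn(weights="DEFAULT").eval()
to_tensor = transforms.ToTensor()

def is_blank(image_path, score_threshold=0.5):
    """Return True when no detection clears the confidence threshold,
    marking the image as a candidate for removal from the dataset."""
    image = to_tensor(Image.open(image_path).convert("RGB"))
    with torch.no_grad():
        scores = detector([image])[0]["scores"]
    return not bool((scores >= score_threshold).any())
```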