2020
DOI: 10.1007/978-3-030-59830-3_39
CycleGAN-Based Image Translation for Near-Infrared Camera-Trap Image Recognition

Cited by 2 publications (3 citation statements) | References 14 publications
“…Suggested future research would be to explore using different datasets to create these classes, creating a more diverse pool of imagery for the Background class and using camera trap imagery and/or UK species to inform the Animal Other class. Other areas for extended research include: investigating whether the masked boundary around a segmented image has a positive or negative effect on classification, the potential to use style transfer (Generative Adversarial Networks) to map from the captive imagery data domain to that of the wild (to assist generalisation) [40], [41], and the use of Monte Carlo dropout during testing to estimate confidence of the network [42].…”
Section: Discussion
confidence: 99%
“…Additionally, in recent years the combination of camera trapping and artificial intelligence (AI), especially deep learning approaches, has emerged as a breakthrough in the field of wildlife research and conservation [3, 7, 11, 12]. However, many deep learning approaches are trained for and can benefit from colored images [13], as humans do [10]. This raises the question of whether deep learning approaches can also benefit from such artificially colored images.…”
Section: Introduction
confidence: 99%
“…It uses ResNet [24] as the generator network [23]. Gao et al. [13] trained CycleGAN on a wildlife dataset and showed improved recognition results on the generated images compared to the NIR images. Mehri and Sappa [14] proposed a CycleGAN variant designed specifically for colorizing NIR images, which incorporates enhanced loss functions and uses U-Net as the generator.…”
Section: Introduction
confidence: 99%
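The excerpts above describe CycleGAN-style NIR-to-RGB translation, whose training objective pairs an adversarial loss with a cycle-consistency term ||F(G(x)) − x||₁, where G maps NIR to RGB and F maps back. A minimal sketch of that cycle-consistency loss follows; the functions `G` and `F` here are illustrative stand-ins (a channel-replicating "colorizer" and a channel-averaging inverse), not the trained ResNet or U-Net generators discussed in the cited works:

```python
import numpy as np

def G(nir):
    """Toy stand-in for the NIR -> RGB generator: replicate the single channel."""
    return np.repeat(nir, 3, axis=-1)

def F(rgb):
    """Toy stand-in for the RGB -> NIR generator: average the color channels."""
    return rgb.mean(axis=-1, keepdims=True)

def cycle_consistency_loss(x_nir):
    """L1 cycle loss ||F(G(x)) - x||_1, averaged over all pixels."""
    reconstructed = F(G(x_nir))
    return float(np.mean(np.abs(reconstructed - x_nir)))

# A toy 8x8 single-channel "NIR" patch; with these stand-in generators the
# round trip F(G(x)) reproduces x, so the cycle loss is (near) zero.
x = np.random.rand(8, 8, 1)
loss = cycle_consistency_loss(x)
```

In an actual CycleGAN this term is computed in both directions (NIR→RGB→NIR and RGB→NIR→RGB) and weighted against the adversarial losses of the two discriminators; the stand-in functions above only illustrate the shape of the computation.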