2018
DOI: 10.1101/351643
Preprint

Automatic vegetation identification in Google Earth images using a convolutional neural network: A case study for Japanese bamboo forests

Abstract: Classifying and mapping vegetation are very important tasks in environmental science and natural resource management. However, these tasks are not easy because conventional methods such as field surveys are highly labor-intensive. Automatic identification of objects from visual data is one of the most promising ways to reduce the cost of vegetation mapping. Although deep learning has recently become the new solution for image recognition and classification, in general, detection of ambiguous objects such as vegeta…
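The abstract describes using a convolutional neural network to identify vegetation (here, Japanese bamboo forest) in Google Earth imagery. The preprint's exact architecture is not given in this excerpt, so the following is a minimal illustrative sketch in Keras; the tile size, layer widths, and binary bamboo/non-bamboo labeling are all assumptions, not the model from the paper.

```python
# Illustrative sketch only: a minimal binary CNN classifier for RGB image
# tiles (e.g., bamboo vs. non-bamboo). Architecture, tile size, and class
# labels are assumptions, not the model described in the preprint.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_classifier(tile_size=128):
    model = models.Sequential([
        layers.Input(shape=(tile_size, tile_size, 3)),
        layers.Rescaling(1.0 / 255),            # normalize 0-255 pixels to [0, 1]
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.GlobalAveragePooling2D(),         # collapse feature maps to a vector
        layers.Dense(64, activation="relu"),
        layers.Dense(1, activation="sigmoid"),   # P(tile contains target vegetation)
    ])
    model.compile(optimizer="adam",
                  loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```

A model like this would be trained on labeled tiles cut from the imagery (e.g., model.fit(train_ds, validation_data=val_ds, epochs=10)) and then applied across a scene tile by tile to produce a vegetation map.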

Cited by 2 publications (published 2021 and 2022; 1 citation statement)
References: 23 publications

Citation statement:
“…The current revolution in deep learning algorithms for computer vision also provides opportunities to improve analysis of remote sensing data. Numerous studies have been published on the classification of medium-resolution (e.g., Watanabe et al., 2018; Gallwey et al., 2020; Rai et al., 2020; Virnodkar et al., 2020) and high-resolution (e.g., Flood et al., 2019; Schiefer et al., 2020; Zhang et al., 2020) satellite images using deep learning methods (Kattenborn et al., 2021).…”
Section: Introduction
Citation type: mentioning
Confidence: 99%