2022
DOI: 10.3390/rs14030688
Active Fire Mapping on Brazilian Pantanal Based on Deep Learning and CBERS 04A Imagery

Abstract: Fire in the Brazilian Pantanal represents a serious threat to biodiversity. The Brazilian National Institute of Spatial Research (INPE) runs a program named Queimadas, which estimated a burned area in the Pantanal of approximately 40,606 km2 from January to October 2020. This program also provides daily active-fire (fire spot) data from a methodology that uses data from the MODIS sensor aboard the Aqua and Terra reference satellites, which presents limitations mainly when dealing with small active fires. Remote sensing res…

Cited by 31 publications (13 citation statements)
References 41 publications
“…However, ultrahigh temporal resolution data such as GOES-16 ABI is thus far poorly explored in this field. The authors of [50] proposed an approach based on object detection methods to map AF in the Brazilian Pantanal biome. For that, the authors used deep learning (a subset of ML based on neural networks) and CBERS 4A (China Brazil Earth Resources Satellite) imagery.…”
Section: Discussion
confidence: 99%
“…The models that performed best on the validation set were chosen independently to make inferences on the test set. For the inference results, we employed KBS (Figure 4B) to decode them rather than non-maximum suppression (NMS) (32). First, the category corresponding to the prediction result was divided into the key bone and the development grade, and that grade’s confidence level and prediction box were recorded.…”
Section: Methods
confidence: 99%
“…In the first part, both the PP-PicoDet and NanoDet models are anchor-free, while the YOLOv5 model employed the k-means method to obtain anchors such as [[23,24, 27,28, 26,34], [32,33, 31,41, 37,38], [38,48, 54,58, 66,69]]. The images were preprocessed before model training, including resizing them to the input size required by each model (640×640 for YOLOv5, 416×416 for both PP-PicoDet and NanoDet) and normalizing pixel values to the range [0, 1].…”
Section: Training Model
confidence: 99%
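The k-means anchor step mentioned in the excerpt above can be sketched as follows. This is an illustrative reimplementation, not the cited paper's code: the function names and the IoU-based distance (the standard trick used for YOLO-style anchor clustering, comparing only box widths and heights as if boxes shared a center) are assumptions.

```python
import random

def iou_wh(box, cluster):
    # IoU of two boxes assumed centered at the same point,
    # comparing only width/height (standard anchor-clustering trick).
    inter = min(box[0], cluster[0]) * min(box[1], cluster[1])
    union = box[0] * box[1] + cluster[0] * cluster[1] - inter
    return inter / union

def kmeans_anchors(boxes, k, iters=100, seed=0):
    """Cluster (width, height) pairs into k anchors via k-means
    with 1 - IoU as the distance measure (hypothetical sketch)."""
    rng = random.Random(seed)
    centers = [tuple(map(float, b)) for b in rng.sample(boxes, k)]
    for _ in range(iters):
        # Assign each box to the center with the highest IoU
        # (i.e., the smallest 1 - IoU distance).
        clusters = [[] for _ in range(k)]
        for b in boxes:
            idx = max(range(k), key=lambda i: iou_wh(b, centers[i]))
            clusters[idx].append(b)
        # Recompute each center as the mean width/height of its cluster.
        new_centers = []
        for i, cl in enumerate(clusters):
            if not cl:
                new_centers.append(centers[i])
                continue
            w = sum(b[0] for b in cl) / len(cl)
            h = sum(b[1] for b in cl) / len(cl)
            new_centers.append((w, h))
        if new_centers == centers:
            break
        centers = new_centers
    # Sort anchors by area, small to large, as YOLO-style models expect.
    return sorted(centers, key=lambda c: c[0] * c[1])
```

In practice the input boxes would be the ground-truth box dimensions from the training set (rescaled to the model's input size), and the k anchors would then be split across the detection scales.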
“…Coupled with the ability of SAR systems to acquire data regardless of illumination and atmospheric conditions, deep learning methods have shown great potential for extracting patterns of changes in images (Ban et al, 2020; Higa et al, 2022; Zhang et al, 2021), particularly semantic segmentation networks such as the U-Net, proposed by Ronneberger et al (2015). U-Net is a convolutional neural network architecture that performs semantic segmentation of images.…”
Section: Introduction
confidence: 99%