2022
DOI: 10.5194/gmd-2022-195
Preprint

AttentionFire_v1.0: interpretable machine learning fire model for burned area predictions over tropics

Abstract: African and South American (ASA) wildfires account for more than 70 % of global burned areas and have a strong connection to local climate for sub-seasonal to seasonal wildfire dynamics. However, representation of the wildfire-climate relationship remains challenging due to spatiotemporally heterogeneous responses of wildfires to climate variability and human influences. Here, we developed an interpretable Machine Learning (ML) fire model (AttentionFire_v1.0) to resolve the complex spatially heterogeneous…
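The preprint itself details the architecture; purely as an illustration of the idea named in the title, the sketch below shows how an attention layer over lagged climate and human drivers can yield both a prediction and normalized weights that read as driver importances. This is not the AttentionFire_v1.0 code: every name, shape, and parameter here is hypothetical.

```python
# Toy attention over lagged driver vectors -- illustrative only,
# NOT taken from the AttentionFire_v1.0 repository.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_over_drivers(drivers, w_score, w_out):
    """drivers: (n_timesteps, n_drivers) lagged climate/human predictors.
    Returns a scalar burned-area score plus the attention weights,
    which sum to 1 and can be read as per-timestep importances."""
    scores = drivers @ w_score        # one relevance score per timestep
    weights = softmax(scores)         # normalized, hence interpretable
    context = weights @ drivers       # attention-weighted mix of drivers
    return context @ w_out, weights

rng = np.random.default_rng(0)
n_timesteps, n_drivers = 6, 4         # e.g., 6 lagged months, 4 drivers
drivers = rng.normal(size=(n_timesteps, n_drivers))
w_score = rng.normal(size=n_drivers)
w_out = rng.normal(size=n_drivers)

pred, weights = attention_over_drivers(drivers, w_score, w_out)
print("prediction:", pred)
print("attention weights:", weights)
```

Because the weights are normalized, ranking them per prediction gives the kind of driver attribution that makes attention-based models interpretable, which is the property the abstract emphasizes.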


Cited by 2 publications (13 citation statements)
References 45 publications
“…The AttentionFire model is implemented in Python under a Python 3 environment. The model is open access at https://doi.org/10.5281/zenodo.7416437 (Li et al., 2022b) under the Creative Commons Attribution 4.0 International license. Detailed code and descriptions are included in the repository, including loading datasets, model initialization, training, predicting, saving parameters, and loading the trained model (see more details in the "Code availability" section).…”
Section: AttentionFire Model (mentioning)
Confidence: 99%
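The quoted statement lists the repository's end-to-end steps: load data, initialize, train, predict, save parameters, reload the trained model. As a hypothetical sketch of how such steps chain together (the real class and function names live in the Zenodo repository; none of the identifiers below come from it):

```python
# Hypothetical stand-in for the repository's model class, shown only to
# make the listed workflow concrete. All names here are invented.
import pickle
import numpy as np

class ToyFireModel:
    def __init__(self, n_features):
        self.w = np.zeros(n_features)

    def train(self, X, y, lr=0.01, epochs=200):
        # Plain least-squares gradient descent as a placeholder learner.
        for _ in range(epochs):
            grad = X.T @ (X @ self.w - y) / len(y)
            self.w -= lr * grad

    def predict(self, X):
        return X @ self.w

    def save(self, path):                 # "saving parameters"
        with open(path, "wb") as f:
            pickle.dump(self.w, f)

    def load(self, path):                 # "loading the trained model"
        with open(path, "rb") as f:
            self.w = pickle.load(f)

# Load a (synthetic) dataset, initialize, train, predict, save, reload.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([0.5, -0.2, 0.1])
model = ToyFireModel(n_features=3)
model.train(X, y)
model.save("params.pkl")
restored = ToyFireModel(n_features=3)
restored.load("params.pkl")
print(np.allclose(model.predict(X), restored.predict(X)))  # True
```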
“…Code availability. The source code of AttentionFire_v1.0 and all baseline machine learning models is archived in a Zenodo repository: https://doi.org/10.5281/zenodo.7416437 (Li et al., 2022b) under the Creative Commons Attribution 4.0 International license, with four zip files: data, data_preparation, model, and example. The "data" file contains the links to all raw datasets used to drive the model (e.g., burned areas, climate forcing).…”
(mentioning)
Confidence: 99%
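Based solely on the four zip files named in the quote, the archive layout can be summarized as follows (the root name is shorthand for the Zenodo record; annotations paraphrase the quote, and contents not described in the excerpt are marked as such):

```
zenodo.7416437/
├── data/              # links to raw driver datasets (burned areas, climate forcing)
├── data_preparation/  # contents not detailed in the quoted excerpt
├── model/             # contents not detailed in the quoted excerpt
└── example/           # contents not detailed in the quoted excerpt
```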
“…While effective at capturing interannual fire dynamics at a regional scale (Abatzoglou and Williams 2016; Higuera and Abatzoglou 2021), these models often fall short in representing sub-regional heterogeneity and intra-annual variations of fire risk at relatively high spatial resolution (Kondylatos et al 2022; Li et al 2020b; Wang et al 2021). The limitations in accuracy may stem from imperfect parameterization of the emergent climate-fire relationships (Li et al 2023; Littell et al 2016) and inadequate representation of complex interactions between fires and their drivers, such as climate, fuel availability (Parks et al 2014), topography (Alizadeh et al 2023), and human activities (Fusco et al 2016). To address these challenges, recent studies have employed advanced machine learning (ML) models along with critical socio-environmental factors to predict fires in the US (Gray et al 2018; Li et al 2020b; Wang and Wang 2020; Wang et al 2021).…”
Section: Introduction (mentioning)
Confidence: 99%
“…To address these challenges, recent studies have employed advanced machine learning (ML) models along with critical socio-environmental factors to predict fires in the US (Gray et al 2018; Li et al 2020b; Wang and Wang 2020; Wang et al 2021). Despite the improvement in accuracy, most ML models operate as 'black boxes' and lack transparency in their decision-making processes (Arrieta et al 2020; Kondylatos et al 2022; Li et al 2023). For example, the opacity inherent in neural network or deep-learning models diminishes their interpretability (Li et al 2023; Zhu et al 2022).…”
Section: Introduction (mentioning)
Confidence: 99%