2022
DOI: 10.1016/j.jag.2022.102869
Explainable AI for earth observation: A review including societal and regulatory perspectives

Cited by 45 publications (23 citation statements)
References 63 publications
“…For stakeholders, users, application and decision making, digital twins need to be transparent and explainable [13, 31, 32]. Addressing this problem, a deeper understanding of the origin of the anomalies is important.…”
Section: Results (mentioning confidence: 99%)
“…Lezine et al 2021). GANs and other artificial intelligence models used in earth observation also suffer from a lack of explainability (Gevaert 2022), which can make users less likely to evaluate their uncertainties. While the ethical consequences of misplaced Arctic-boreal lakes shown here are innocuous, both type I and type II errors in other applications of SR object detection, such as intelligence gathering (e.g.…”
Section: Ethical Considerations of Super Resolution Object Detection (mentioning confidence: 99%)
“…In many earth system science applications, the computational efficiency of traditional tools is a significant bottleneck and available training data is voluminous. These models have a major drawback, however: models are a black box and it is therefore often not clear how they are generating their predictions (Gevaert, 2022). Using testing and validation data sets can provide some level of confidence in the models by demonstrating their level of accuracy on data not used for training.…”
Section: Explainable AI (mentioning confidence: 99%)
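The last statement contrasts held-out accuracy (confidence without explanation) with explainability (understanding *why* a model predicts). A minimal sketch of both ideas, using entirely hypothetical synthetic data and a toy threshold "model" (none of this comes from the cited works): accuracy on a held-out split measures trust, and permutation importance is one simple post-hoc probe of which input feature the black box actually relies on.

```python
# Sketch (hypothetical data and model): held-out accuracy plus
# permutation importance as a basic post-hoc explainability probe.
import random

random.seed(0)

# Synthetic samples: feature 0 is informative (correlated with the
# class label), feature 1 is pure noise the model never uses.
def make_sample():
    label = random.randint(0, 1)
    informative = label + random.gauss(0, 0.3)
    noise = random.gauss(0, 1.0)
    return [informative, noise], label

data = [make_sample() for _ in range(400)]
train, test = data[:300], data[300:]

# A trivial "black box": threshold the informative feature at the
# training-set mean.
threshold = sum(x[0] for x, _ in train) / len(train)

def predict(x):
    return 1 if x[0] > threshold else 0

def accuracy(samples):
    return sum(predict(x) == y for x, y in samples) / len(samples)

# Confidence from data not used for training, as the quote describes.
base_acc = accuracy(test)

# Permutation importance: shuffle one feature across the test set and
# measure the accuracy drop; a large drop means the model relies on it.
def permutation_importance(samples, feature):
    shuffled = [x[feature] for x, _ in samples]
    random.shuffle(shuffled)
    permuted = [(x[:feature] + [v] + x[feature + 1:], y)
                for (x, y), v in zip(samples, shuffled)]
    return base_acc - accuracy(permuted)

drop_informative = permutation_importance(test, 0)  # large drop
drop_noise = permutation_importance(test, 1)        # no drop
```

Held-out accuracy alone would report a high score without revealing the mechanism; the importance probe additionally shows the prediction is driven by feature 0 and not the noise feature, which is the kind of transparency the review argues for.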