2019 IEEE Applied Imagery Pattern Recognition Workshop (AIPR) 2019
DOI: 10.1109/aipr47015.2019.9174594
Semantic Segmentation of Clouds in Satellite Imagery Using Deep Pre-trained U-Nets

Cited by 15 publications (10 citation statements); references 10 publications.
“…The values in the confusion matrix were labeled as true positive (TP), true negative (TN), false positive (FP), and false negative (FN). Since our study had an imbalanced class distribution, it was necessary to select evaluation metrics which punish false predictions and therefore measure the relevance and completeness of the model predictions [12]. For this reason, we used the metrics precision and recall, called respectively the complement of commission errors and the complement of omission errors by the remote sensing community.…”
Section: Evaluation Metrics (mentioning)
confidence: 99%
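The statement above relates precision and recall to the confusion-matrix counts. As a minimal sketch (the function name and example counts are illustrative, not taken from the cited paper), the two metrics follow directly from TP, FP, and FN:

```python
# Hedged sketch: precision and recall from confusion-matrix counts.
# Precision = TP / (TP + FP)  -- complement of the commission error.
# Recall    = TP / (TP + FN)  -- complement of the omission error.
def precision_recall(tp, fp, fn):
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# Illustrative counts for an imbalanced cloud / non-cloud pixel task:
# many background pixels, few cloud pixels. TN is not needed here,
# which is why these metrics suit imbalanced class distributions.
p, r = precision_recall(tp=80, fp=20, fn=10)
print(p, r)  # 0.8 0.888...
```

Note that true negatives do not enter either formula, which is why precision and recall remain informative when the negative class dominates.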
“…In the field of remote sensing, the use of new artificial intelligence techniques is also an active topic (Yuan et al. 2020; Boukabara et al. 2019). One prominent example is satellite-based cloud property retrieval (Weng et al. 2018; Xie et al. 2017; Gonzales and Sakla 2019; Lee et al. 2020).…”
Section: Introduction (mentioning)
confidence: 99%
“…The algorithm feeds the image directly to a convolutional neural network, which extracts the most important features of the image [10]. The findings indicate that CNN features obtained through deep learning should be considered in most visual recognition tasks [11]. To identify cloud image classes, prior knowledge is needed, which is learned from labeled cloud image types with a similar composition.…”
Section: Introduction (mentioning)
confidence: 99%
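The statement above describes a CNN extracting features directly from raw pixels. A minimal sketch of what one convolutional layer does (a plain NumPy convolution with a Sobel-like kernel; the toy image and kernel are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

# Hedged sketch: a single 2D convolution, the building block by which a
# CNN layer extracts features (here, vertical edges) from an image.
def conv2d(image, kernel):
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the kernel with each image patch.
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# Toy 5x5 image: dark left half, bright right half (e.g. a cloud edge).
image = np.zeros((5, 5))
image[:, 3:] = 1.0
# Sobel-like kernel that responds to vertical intensity transitions.
sobel_x = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float)
features = conv2d(image, sobel_x)  # strongest response at the boundary
```

Stacking many such learned kernels, with nonlinearities and pooling, is what lets the network discover which features matter for recognition rather than relying on hand-crafted descriptors.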
“…To identify cloud image classes, prior knowledge is needed, which is learned from labeled cloud image types with a similar composition. The CCSN (Cirrus Cumulus Stratus Nimbus) dataset is divided into 11…”
Section: Introduction (mentioning)
confidence: 99%