2018
DOI: 10.3390/rs10060822
Multimodal Ground-Based Cloud Classification Using Joint Fusion Convolutional Neural Network

Abstract: Accurate ground-based cloud classification is a challenging task and still under development. Most current methods take only cloud visual features into consideration, which is not robust to environmental factors. In this paper, we present the novel joint fusion convolutional neural network (JFCNN) to integrate multimodal information for ground-based cloud classification. To learn the heterogeneous features (visual features and multimodal features) from the ground-based clou…
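The joint-fusion idea described in the abstract can be illustrated with a minimal sketch. This is not the authors' JFCNN: the branch functions below are hypothetical NumPy stand-ins for the learned CNN and multimodal subnetworks, showing only how heterogeneous features are combined into one representation.

```python
import numpy as np

rng = np.random.default_rng(0)

def visual_branch(image):
    """Stand-in for a CNN feature extractor: per-channel mean/std pooling."""
    # image: (H, W, 3) array -> 6-dim visual feature vector
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def multimodal_branch(weather):
    """Stand-in for the multimodal subnetwork: z-score the sensor readings."""
    # weather: e.g. [temperature, humidity, pressure, wind_speed]
    w = np.asarray(weather, dtype=float)
    return (w - w.mean()) / (w.std() + 1e-8)

def joint_fusion(image, weather):
    """Fuse heterogeneous features by concatenation into a joint representation."""
    return np.concatenate([visual_branch(image), multimodal_branch(weather)])

image = rng.random((64, 64, 3))
weather = [21.5, 0.63, 1012.0, 3.2]  # hypothetical sensor readings
feat = joint_fusion(image, weather)
print(feat.shape)  # (10,) = 6 visual + 4 multimodal dims
```

In the actual JFCNN both branches are trained jointly, so the fused representation is learned end to end rather than hand-crafted as here.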

Cited by 44 publications (39 citation statements)
References 39 publications
“…In addition, since cloud types change over time, appropriate fusion of multi-modal information and cloud visual information could improve the classification performance. The JFCNN [32] achieved excellent performance, with an accuracy of 93.37%, by learning ground-based cloud images and multi-modal information jointly. However, the dataset used in [32] contains only 3711 labeled cloud samples and is randomly split into training and test sets at a ratio of 2:1, which means there may be high dependence between training and test samples.…”
Section: Overall Discussion
confidence: 99%
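The dependence concern raised above comes from random splitting: temporally adjacent cloud images are highly correlated, and a random 2:1 split can place near-duplicates on both sides. A grouped split, which assigns whole capture sessions to one side, avoids this. The sketch below uses hypothetical session IDs, not the actual dataset from [32]:

```python
import random

def grouped_split(samples, key, ratio=2/3, seed=0):
    """Split by group (e.g. capture session) so correlated samples stay together."""
    groups = sorted({key(x) for x in samples})
    random.Random(seed).shuffle(groups)
    cut = int(len(groups) * ratio)
    train_groups = set(groups[:cut])
    train = [x for x in samples if key(x) in train_groups]
    test = [x for x in samples if key(x) not in train_groups]
    return train, test

# hypothetical samples: (session_id, frame_index)
samples = [(s, f) for s in range(9) for f in range(10)]
train, test = grouped_split(samples, key=lambda x: x[0])
shared = {s for s, _ in train} & {s for s, _ in test}
print(len(shared))  # 0: no session appears in both sets
```

A plain random shuffle over `samples` would almost certainly leave every session represented in both sets, which is exactly the train/test dependence the citing authors warn about.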
“…The comparison results between the proposed MMFN and other methods, such as [32,62,63], are summarized in Table 2. First, most results in the right part of the table are more competitive than those in the left part, which indicates that the multi-modal information is useful for ground-based cloud recognition.…”
Section: Comparison With Other Methods
confidence: 99%
“…Cloud height, cloud coverage, and cloud type are three major aspects of cloud observation and have been extensively studied (Davies; Fu et al.; Liu et al.; Zhang et al.; Zhou et al.). However, due to the variability and diversity of cloud appearances, cloud type classification is extremely challenging and still under development.…”
Section: Introduction
confidence: 99%