2022
DOI: 10.1109/tgrs.2022.3210980

Diversity Consistency Learning for Remote-Sensing Object Recognition With Limited Labels

Cited by 4 publications (1 citation statement)
References 46 publications
“…Inception v3 [32], DenseNet121 [33], MobileNet [35], and Xception [37] are general-purpose CNNs that classify by extracting high-level features from images; their main limitation is that only global features are captured, while detailed features are ignored. To address the problem of limited samples, FDN [5], DCL [31], and B-CNN [34] improve the recognition accuracy of remote-sensing targets through multi-feature fusion and pseudo-label training; however, they do not fuse features from different receptive fields, so local information is underused. ME-CNN [6] combines a CNN with a Gabor filter, the LBP operator, and other extractors to obtain multiple features, providing more information than the FDN.…”
Section: Comparisons With Other Methods
confidence: 99%
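The citation statement above credits pseudo-label training as one of the strategies FDN, DCL, and B-CNN use against limited labels. As a rough illustration of that idea (not the papers' actual implementations), here is a minimal self-training sketch: a toy nearest-centroid classifier stands in for a CNN, and confident predictions on unlabeled points are iteratively adopted as pseudo-labels. All names, the margin-based confidence, and the threshold are illustrative assumptions.

```python
# Hypothetical pseudo-label (self-training) sketch; a 1-D
# nearest-centroid model stands in for a remote-sensing CNN.

def fit_centroids(points, labels):
    """Return the per-class mean of the labeled points."""
    sums, counts = {}, {}
    for x, y in zip(points, labels):
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Label of the nearest centroid, plus a crude margin confidence."""
    dists = {y: abs(x - c) for y, c in centroids.items()}
    best = min(dists, key=dists.get)
    others = [d for y, d in dists.items() if y != best]
    margin = (min(others) - dists[best]) if others else float("inf")
    return best, margin

def pseudo_label_train(labeled, unlabeled, threshold=1.0, rounds=3):
    """Iteratively adopt confident predictions on unlabeled data."""
    points = [x for x, _ in labeled]
    labels = [y for _, y in labeled]
    pool = list(unlabeled)
    for _ in range(rounds):
        centroids = fit_centroids(points, labels)
        keep = []
        for x in pool:
            y, margin = predict(centroids, x)
            if margin >= threshold:   # confident -> keep as pseudo-label
                points.append(x)
                labels.append(y)
            else:                     # uncertain -> retry next round
                keep.append(x)
        pool = keep
    return fit_centroids(points, labels)
```

In a real pipeline the classifier would be a deep network and the confidence a softmax score, but the loop structure (train, predict on unlabeled data, keep only confident pseudo-labels, retrain) is the same.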