2022
DOI: 10.1109/tgrs.2020.3048024
A Lightweight and Robust Lie Group-Convolutional Neural Networks Joint Representation for Remote Sensing Scene Classification

Cited by 57 publications
(43 citation statements)
References 97 publications
“…As the results shown in Table 1 indicate, using three feature maps still achieves the best performance. Regarding model complexity, using the proposed multi-head attention layer with three feature maps increases the model footprint from 4.6 M to 9.4 M. To meet the constraint of a maximum of 5 M trainable parameters, we apply the quantization technique which …

Comparison of models (trainable parameters in millions; the two numeric columns report accuracy under two evaluation settings):

Model (params)                       Acc. 1  Acc. 2
… [43]                               89.4    91.7
MG-CAP (Bilinear) (55.99 M) [43]     89.4    93.0
MG-CAP (Sqrt-E) (55.99 M) [43]       90.8    93.0
EfficientNet-B0-aux (≈5.3 M) [2]     90.0    92.9
EfficientNet-B3-aux (≈13 M) [2]      91.1    93.8
VGG-16 + MTL (≈138.4 M) [62]         -       91.5
ResNeXt-50 + MTL (≈25 M) [62]        -       93.8
ResNeXt-101 + MTL (≈88.79 M) [62]    91.9    94.2
SE-MDPMNet (5.17 M) [54]             91.8    94.1
LGRIN (4.63 M) [49]                  91.9    94.4
Transformer (46.3 M) [57]            93.1    95.6
Our systems (2.4 M)                  91.0    93.8…”

Section: Results (mentioning)
confidence: 99%
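The excerpt above mentions applying a quantization technique to fit the 5 M parameter budget but does not show how; the citing paper's actual method is not reproduced here. As a minimal illustration of the general idea, a sketch of symmetric post-training 8-bit weight quantization (function names are mine, not from the paper) could look like:

```python
def quantize_int8(weights):
    """Symmetric 8-bit quantization sketch (illustrative, not the paper's method).

    Maps float weights to integers in [-127, 127] plus a single scale factor,
    so each stored weight shrinks from 32 bits to 8 bits.
    """
    peak = max(abs(w) for w in weights)
    scale = peak / 127.0 if peak > 0 else 1.0
    quantized = [round(w / scale) for w in weights]
    return quantized, scale


def dequantize(quantized, scale):
    """Recover approximate float weights from the quantized representation."""
    return [q * scale for q in quantized]
```

Each weight is recovered to within half a quantization step (scale / 2), an error that is usually small enough to preserve accuracy, especially with post-quantization fine-tuning.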
“…The coefficient vectors A and C are used to calculate Eqs. (10) and (11), where a is a value linearly reduced from 2 to 0 over the iterations and r is a random number in the range [0, 1].…”

Section: Whale Optimization Algorithm (mentioning)
confidence: 99%
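In the standard Whale Optimization Algorithm, A and C are built from the linearly decaying control parameter a and uniform random draws, following the usual definitions A = 2a·r1 − a and C = 2·r2 (the Eqs. (10) and (11) referenced above are the position-update equations that consume A and C). A sketch, with a function name of my own choosing:

```python
import random


def woa_coefficients(t, max_iter):
    """Compute WOA coefficient values A and C at iteration t (illustrative sketch).

    a decays linearly from 2 to 0 over the run; r1 and r2 are uniform in [0, 1],
    so A falls in [-a, a] and C in [0, 2], per the standard WOA formulation.
    """
    a = 2.0 * (1.0 - t / max_iter)  # linearly reduced from 2 to 0
    r1, r2 = random.random(), random.random()
    A = 2.0 * a * r1 - a            # A = 2*a*r1 - a
    C = 2.0 * r2                    # C = 2*r2
    return A, C
```

Because a shrinks to 0, |A| eventually drops below 1, which is what shifts the algorithm from exploration toward exploitation in the later iterations.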
“…LGRIN [56], SCViT [53] and ET-GSNet [57] are single-branch networks. Specifically, the D-CNN [34] network leverages the metric learning scheme to learn discriminative features.…”
Section: Comparison With State Of The Arts (mentioning)
confidence: 99%