2018 24th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2018.8545591
Pre-trained VGGNet Architecture for Remote-Sensing Image Scene Classification

Cited by 75 publications (35 citation statements) · References 17 publications
“…This CNN architecture is made up of three fully connected layers (FC6, FC7, and FC8) in addition to thirteen convolutional layers [26]. Stacking two 3x3 convolutional layers yields a 5x5 receptive field; each layer holds a set of kernels (learnable filters), and each unit receives its input from units in the previous layer.…”
Section: VGGNet (Visual Geometry Group Network); citation type: mentioning
confidence: 99%
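As an illustration of the layout described in the citation above, the following is a minimal sketch assuming the torchvision reference implementation of VGG-16 (not code from the paper under review): it counts the 13 convolutional and 3 fully connected layers and confirms that every convolution uses a 3x3 kernel, so two stacked convolutions cover a 5x5 receptive field.

```python
# Minimal sketch (not the paper's code): inspect the 13-conv / 3-FC layout of
# VGG-16 via the torchvision reference implementation.
import torch.nn as nn
import torchvision.models as models

vgg16 = models.vgg16(weights="IMAGENET1K_V1")  # ImageNet pre-trained weights

conv_layers = [m for m in vgg16.features if isinstance(m, nn.Conv2d)]
fc_layers = [m for m in vgg16.classifier if isinstance(m, nn.Linear)]
print(len(conv_layers), len(fc_layers))        # 13 convolutional, 3 fully connected

# All convolutions use 3x3 kernels; two stacked 3x3 convolutions cover the same
# 5x5 receptive field as a single 5x5 convolution, with fewer parameters.
print({m.kernel_size for m in conv_layers})    # {(3, 3)}
```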
“…These layers are followed by rectified linear units (ReLU) and an average- or max-pooling operation, which are critical for multi-layer networks [27]. Pooling layers reduce the spatial size of the representation, while ReLU, the half-wave rectifier f(x) = max(x, 0), speeds up the training phase and helps prevent overfitting [26]. The final output layers are fully connected, with each neuron connected to all activations in the previous volume.…”
Section: VGGNet (Visual Geometry Group Network); citation type: mentioning
confidence: 99%
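The two operations named in this citation can be written in a few lines; below is a minimal numpy sketch (illustrative only, not from either paper) of the half-wave rectifier f(x) = max(x, 0) and of 2x2 max pooling, which halves the spatial size of a feature map.

```python
# Illustrative sketch of ReLU and 2x2 max pooling on a small feature map.
import numpy as np

def relu(x):
    # Half-wave rectifier: clip negative activations to zero.
    return np.maximum(x, 0.0)

def max_pool_2x2(feature_map):
    # Non-overlapping 2x2 max pooling; halves each spatial dimension.
    h, w = feature_map.shape
    trimmed = feature_map[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

x = np.array([[-1.0,  2.0, -3.0,  4.0],
              [ 5.0, -6.0,  7.0, -8.0],
              [ 1.0,  0.5, -0.5,  2.5],
              [-2.0,  3.0,  1.5, -1.0]])
print(relu(x))                 # negative activations set to zero
print(max_pool_2x2(relu(x)))   # 2x2 output, spatial size halved
```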
“…Deep learning can learn high-level features from data through structures composed of multiple non-linear transformations. In view of this, we test our model on deep features extracted with two CNN architectures, AlexNet [46] and VGGNet [47], chosen for their superior performance in feature learning and classification. The details of these models are tabulated in Table 8.…”
Section: NWPU-RESISC45 Dataset; citation type: mentioning
confidence: 99%
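For readers who want to reproduce this kind of deep-feature extraction, here is a minimal sketch assuming pre-trained torchvision models and a hypothetical image file scene.jpg; it is not the cited authors' pipeline. It takes the activations of the penultimate fully connected layer of AlexNet and VGG-16 as deep features and concatenates them.

```python
# Sketch (assumption: torchvision pre-trained models, hypothetical input image)
# of extracting deep CNN features from a scene image with AlexNet and VGG-16.
import torch
import torchvision.models as models
from torchvision import transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def deep_features(model, image):
    """Return activations of the last hidden fully connected layer."""
    model.eval()
    x = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        x = model.features(x)            # convolutional feature maps
        x = model.avgpool(x)             # AlexNet and VGG both expose features/avgpool/classifier
        x = torch.flatten(x, 1)
        x = model.classifier[:-1](x)     # drop the final 1000-way ImageNet layer
    return x.squeeze(0)

alexnet = models.alexnet(weights="IMAGENET1K_V1")
vgg16 = models.vgg16(weights="IMAGENET1K_V1")
img = Image.open("scene.jpg").convert("RGB")   # hypothetical remote-sensing scene image
feats = torch.cat([deep_features(alexnet, img), deep_features(vgg16, img)])
print(feats.shape)  # 4096-d AlexNet features concatenated with 4096-d VGG-16 features
```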
“…The final classification result was obtained by averaging all model outputs. A four-layer feature representation was constructed from two convolutional layers and two fully connected layers of a VGGNet model [41]. Then, canonical correlation analysis (CCA) was used for feature fusion.…”
Section: Introduction; citation type: mentioning
confidence: 99%
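As a rough illustration of CCA-based feature fusion (not the cited authors' implementation), the sketch below projects two feature sets, standing in for convolutional-layer and fully connected-layer representations, into a shared space with scikit-learn's CCA and concatenates the projected views; the random feature matrices are placeholders.

```python
# Illustrative sketch of feature fusion with canonical correlation analysis.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
conv_feats = rng.standard_normal((200, 512))    # placeholder convolutional-layer features
fc_feats = rng.standard_normal((200, 4096))     # placeholder fully connected-layer features

cca = CCA(n_components=64, max_iter=1000)
z_conv, z_fc = cca.fit_transform(conv_feats, fc_feats)   # projected, correlated views

# One common fusion strategy is to concatenate (or sum) the projected views.
fused = np.concatenate([z_conv, z_fc], axis=1)
print(fused.shape)  # (200, 128)
```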