2016
DOI: 10.3390/rs8050436

Hierarchical Coding Vectors for Scene Level Land-Use Classification

Abstract: Land-use classification from remote sensing images has become an important but challenging task. This paper proposes Hierarchical Coding Vectors (HCV), a novel representation based on hierarchically coding structures, for scene level land-use classification. We stack multiple Bag of Visual Words (BOVW) coding layers and one Fisher coding layer to develop the hierarchical feature learning structure. In BOVW coding layers, we extract local descriptors from a geographical image with densely sampled interest point…
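The stacked BOVW-plus-Fisher structure the abstract describes can be pictured with a short sketch. The following Python snippet is a minimal illustration of that kind of two-stage coding pipeline, not the authors' implementation: the descriptor data are random stand-ins, and `bovw_layer` and `fisher_layer` are hypothetical helper names.

```python
# Minimal sketch (not the paper's code) of stacked coding layers:
# a BoVW layer over local descriptors, then a Fisher-vector layer
# pooling the mid-level BoVW codes into one scene-level vector.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

def bovw_layer(descriptors, codebook):
    """Hard-assign descriptors to their nearest visual words and pool
    them into an L1-normalized word histogram."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=codebook.n_clusters).astype(float)
    return hist / max(hist.sum(), 1.0)

def fisher_layer(features, gmm):
    """First-order Fisher encoding under a diagonal-covariance GMM
    (second-order terms omitted for brevity)."""
    gamma = gmm.predict_proba(features)               # (n, K) posteriors
    diffs = features[:, None, :] - gmm.means_[None]   # (n, K, d)
    fv = (gamma[..., None] * diffs / np.sqrt(gmm.covariances_)[None]).sum(0)
    fv /= features.shape[0] * np.sqrt(gmm.weights_)[:, None]
    return fv.ravel()                                 # (K*d,) Fisher vector

# Toy usage: random arrays stand in for densely sampled local descriptors.
rng = np.random.default_rng(0)
regions = [rng.normal(size=(200, 32)) for _ in range(50)]  # 50 image regions

codebook = KMeans(n_clusters=64, n_init=4, random_state=0).fit(np.vstack(regions))
mid_level = np.array([bovw_layer(r, codebook) for r in regions])  # BoVW codes

gmm = GaussianMixture(n_components=8, covariance_type="diag",
                      random_state=0).fit(mid_level)
scene_vector = fisher_layer(mid_level, gmm)  # final scene-level representation
```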

Cited by 47 publications (33 citation statements)
References 30 publications
“…In contrast, the SIFT descriptor and the HOG feature are local features used to represent local structure [108] and shape information [109]. To represent an entire scene image, they generally serve as building blocks for constructing global image features, such as the well-known bag-of-visual-words (BoVW) models [6,8,9,14,19,29,36,38,39,55,93,101,122,123] and HOG feature-based part models [22,23,27,103]. In addition, a number of improved feature encoding/pooling methods have been proposed in recent years, such as Fisher vector coding [10,14,84,86], spatial pyramid matching (SPM) [124], and the probabilistic topic model (PTM) [11,40,42,43,92,123].…”
Section: Handcrafted Feature-Based Methods
confidence: 99%
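Of the encoding/pooling schemes listed in this statement, spatial pyramid matching is simple enough to sketch directly. The snippet below is an illustrative Python sketch of SPM-style pooling, assuming a grid of precomputed visual-word assignments; `spm_pool` is a hypothetical helper, not an API from the cited works.

```python
# Illustrative SPM pooling: per-cell BoVW histograms at several pyramid
# levels, concatenated into one vector.
import numpy as np

def spm_pool(word_map, n_words, levels=(1, 2, 4)):
    """word_map: 2-D array of visual-word indices at dense sample sites.
    Returns concatenated, per-cell L1-normalized histograms of all levels."""
    h, w = word_map.shape
    feats = []
    for g in levels:                     # g x g grid at this pyramid level
        for i in range(g):
            for j in range(g):
                cell = word_map[i*h//g:(i+1)*h//g, j*w//g:(j+1)*w//g]
                hist = np.bincount(cell.ravel(), minlength=n_words).astype(float)
                feats.append(hist / max(hist.sum(), 1.0))
    return np.concatenate(feats)

# Toy usage: a 32x32 grid of word indices from a 64-word codebook.
rng = np.random.default_rng(0)
word_map = rng.integers(0, 64, size=(32, 32))
spm_vector = spm_pool(word_map, n_words=64)  # length 64*(1+4+16) = 1344
```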
“…Owing to its simplicity, the k-means clustering method is widely used for unsupervised feature learning-based scene image classification. The most representative examples are BoVW-based methods [8,9,14,19,29,86,87,89,91,122,123,132,140], where the visual dictionaries (codebooks) are generated by performing k-means clustering on the set of local features.…”
Section: Unsupervised Feature Learning-Based Methods
confidence: 99%
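The codebook-generation step this statement describes can be sketched in a few lines. The snippet below is a hypothetical illustration, using scikit-learn's MiniBatchKMeans as a scalable stand-in and random arrays in place of real local descriptors.

```python
# Hypothetical codebook generation by k-means, as in BoVW pipelines:
# cluster local descriptors pooled across training images; the cluster
# centers become the visual words.
import numpy as np
from sklearn.cluster import MiniBatchKMeans

rng = np.random.default_rng(0)
# Stand-in for dense local descriptors (e.g., 128-D SIFT) from 100 images.
all_descriptors = np.vstack([rng.normal(size=(500, 128)) for _ in range(100)])

kmeans = MiniBatchKMeans(n_clusters=1024, batch_size=4096, n_init=3,
                         random_state=0).fit(all_descriptors)
visual_dictionary = kmeans.cluster_centers_   # (1024, 128) codebook

# Encoding a new image: nearest-word assignment, then a word histogram.
new_descriptors = rng.normal(size=(300, 128))
assignments = kmeans.predict(new_descriptors)
bovw_histogram = np.bincount(assignments, minlength=1024) / 300.0
```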
“…It is worth mentioning that our approach is complementary to the (Scenario II: Conv features) method [27], and combining the two approaches can be expected to provide a further gain in classification performance. On the RSSCN7 dataset, the deep learning-based feature selection approach (DBN) [83] achieves a mean recognition rate of 77.0%. The hierarchical coding vectors-based classification approach [90] achieves a classification result of 86.4%. The deep filter banks approach [93] provides a classification performance of 90.4%.…”
confidence: 99%
“…However, it is computationally expensive to directly train effective DNNs for visual terrain classification. For a good trade-off between effectiveness and efficiency, the BOVW framework [17,24,25] is used to generate a compact semantic representation from low-level descriptors for visual terrain classification, achieving good accuracy and robustness. This visual terrain classification algorithm has been successfully applied in the small quadruped robot LittleDog as a necessary functional module [24].…”
Section: Journal of Sensors
confidence: 99%