2018
DOI: 10.1016/j.neunet.2017.10.001
Deep neural networks for texture classification—A theoretical analysis

Cited by 85 publications (31 citation statements)
References 10 publications
“…We experimentally show that our framework surpasses all existing state-of-the-art algorithms for high-resolution satellite imagery classification on both the SAT-4 and SAT-6 datasets, including the original DeepSAT (Basu et al. 2015a), MLP (Z-score), SatCNN (both Z-score and linear), TradCNN (Z-score), triplet networks (Liu and Huang 2018), D-DSML-Caffenet, and contrastive loss (Simo-Serra et al. 2015). It has been shown theoretically in (Basu et al. 2018, 2016) that CNNs, by themselves, are not able to learn representations of Haralick features from data. By augmenting CNNs with the handcrafted features, we are enhancing the discriminative power of CNNs for satellite imagery.…”
Section: Introduction (mentioning)
confidence: 69%
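The claim in the excerpt above is that a CNN gains discriminative power when handcrafted Haralick features, which it cannot learn on its own, are supplied alongside its learned features. A minimal sketch of that fusion idea follows, assuming a PyTorch setup and SAT-6-style 28×28 four-band patches; the class name HybridNet and the layer sizes are hypothetical illustrations, not the cited authors' architecture.

```python
# Minimal sketch (not the cited authors' exact model): learned CNN features are
# concatenated with a vector of precomputed handcrafted (e.g. Haralick) features
# before the final classifier, so the network does not have to learn them.
import torch
import torch.nn as nn

class HybridNet(nn.Module):  # hypothetical name
    def __init__(self, num_classes=6, num_handcrafted=13):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),                      # -> (N, 64, 1, 1)
        )
        self.classifier = nn.Linear(64 + num_handcrafted, num_classes)

    def forward(self, image, handcrafted):
        learned = self.cnn(image).flatten(1)              # (N, 64)
        fused = torch.cat([learned, handcrafted], dim=1)  # (N, 64 + 13)
        return self.classifier(fused)

# Example: 28x28 four-band patches (as in SAT-6) plus 13 Haralick features each.
x = torch.randn(8, 4, 28, 28)
h = torch.randn(8, 13)
logits = HybridNet()(x, h)   # (8, 6)
```

The handcrafted vector is computed outside the network and simply concatenated with the learned features before the classifier, which is the sense in which the quote describes "augmenting" the CNN.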
“…Other authors provided either a theoretical analysis or a visualizing analysis in the context of an application. For example, Basu et al. [12] published a theoretical analysis for texture classification, whilst Minematsu et al. [134,135] provided a visualizing analysis for background subtraction. Despite these first valuable investigations, the understanding of DNNs still remains shallow.…”
Section: Theoretical Aspects (mentioning)
confidence: 99%
“…The common methods used for texture classification are parametric statistical model-based methods, structural methods, empirical second-order statistical methods, and various other transform methods. Deep learning-based techniques for the classification of texture have been proposed in [3][4][5][6].…”
Section: Texture Classification (mentioning)
confidence: 99%
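Among the method families listed in the excerpt above, the "empirical second-order statistical" methods typically mean grey-level co-occurrence matrix (GLCM, Haralick-style) features. A minimal sketch is given below, assuming scikit-image is installed (its graycomatrix/graycoprops functions; older releases spell them greycomatrix/greycoprops) and using a random patch as a stand-in texture.

```python
# Sketch of second-order statistical texture features: build a grey-level
# co-occurrence matrix and derive a few Haralick-style statistics from it.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

patch = np.random.randint(0, 256, size=(64, 64), dtype=np.uint8)  # stand-in texture

# Co-occurrence counts for a one-pixel offset in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# Average each statistic over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")}
print(features)
```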