2016
DOI: 10.1016/j.patrec.2016.08.016
Using filter banks in Convolutional Neural Networks for texture classification

Abstract: Deep learning has established many new state-of-the-art solutions in the last decade in areas such as object, scene and speech recognition. In particular, the Convolutional Neural Network (CNN) is a category of deep learning which obtains excellent results in object detection and recognition tasks. Its architecture is indeed well suited to object analysis, learning and classifying complex (deep) features that represent parts of an object or the object itself. However, some of its features are very similar to text…

Cited by 244 publications (168 citation statements); references 24 publications.
“…Our images were acquired at high-resolution, partially using an unmanned aerial vehicle (UAV) to gain close-range access, and feature varying scale and context. We evaluate a variety of best-practice CNN architectures [21,32,1,38,16] in the literature on the CODEBRIM's multi-target defect recognition task. We show that meta-learned neural architectures achieve equivalent or better accuracies, while being more parameter efficient, by investigating and comparing two reinforcement learning neural architecture search approaches: MetaQNN [2] and "efficient neural architecture search" (ENAS) [27].…”
Section: Introduction
confidence: 99%
“…The architecture of the Texture CNN is based on the T-CNN proposed by Andrearczyk and Whelan [8] which includes an energy layer that pools the feature maps of the last convolutional layer by calculating the average of its rectified activation output. This results in one single value per feature map, similar to an energy response to a filter bank.…”
Section: Texture CNN Architecture
confidence: 99%
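The energy layer described above can be sketched in a few lines: each feature map of the last convolutional layer is rectified (ReLU) and then averaged over its spatial dimensions, yielding one scalar per map, analogous to an energy response to a filter bank. This is a minimal numpy illustration, not the authors' implementation; the array layout (channels, H, W) is an assumption.

```python
import numpy as np

def energy_layer(feature_maps):
    """Pool each feature map to a single value: the mean of its
    rectified (ReLU) activations, as in the T-CNN energy layer.
    feature_maps is assumed to have shape (channels, H, W)."""
    rectified = np.maximum(feature_maps, 0.0)   # ReLU
    # average over the spatial dimensions -> one value per feature map
    return rectified.mean(axis=(1, 2))

# Toy input: 2 feature maps of size 2x2
maps = np.array([[[1.0, -2.0], [3.0, 0.0]],
                 [[-1.0, -1.0], [-1.0, 4.0]]])
print(energy_layer(maps))  # -> [1. 1.]
```

The averaging discards spatial layout entirely, which is what makes the output behave like a texture descriptor rather than a shape descriptor.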
“…That means that the flattened output of the energy layer is redirected directly after the last pooling layer, into the concatenation layer. This concatenation generates a new flattened vector containing information from both the shape of the image and its texture, which is then propagated through the fully connected layers [8].…”
Section: Texture CNN Architecture
confidence: 99%
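The fusion step above amounts to concatenating two flattened vectors, the last pooling layer's output (shape information) and the energy layer's output (texture information), before the fully connected layers. A minimal sketch, with hypothetical tensor sizes chosen only for illustration:

```python
import numpy as np

def fuse_shape_and_texture(pooled, energy):
    """Concatenate the flattened last-pooling-layer output with the
    energy vector, producing the input to the fully connected layers."""
    return np.concatenate([pooled.ravel(), energy.ravel()])

pooled = np.ones((4, 2, 2))      # hypothetical pooled feature maps: 4 x 2 x 2
energy = np.array([0.5, 1.5])    # hypothetical energy-layer output: 2 values
fused = fuse_shape_and_texture(pooled, energy)
print(fused.shape)  # (18,)
```

The fully connected layers then see both descriptors jointly, so the network can weigh shape and texture cues during classification.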
“…Therefore we propose to match the recovered semantic regions on the basis of texture rather than standard features obtained from ResNet [17] or VGGNet [18]. Texture has been a well-studied problem in computer vision where both traditional hand tuned [19,20] as well as deep learned features [21,22] have been proposed. We use the texture encoding layer proposed by Zhang et.…”
Section: Related Work
confidence: 99%