2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298685
The application of two-level attention models in deep convolutional neural network for fine-grained image classification

Cited by 284 publications (62 citation statements)
References 11 publications
“…AlexNet [68] and VGG-16 [73]. The experimental results and conclusions are basically consistent with other studies on hierarchical models [8], [16], [55], [72], [74]. Parallelization and grading of neural networks is one of the developmental trends for deep learning.…”
Section: Discussion (supporting)
confidence: 86%
“…Recently, a number of studies have been conducted on fine-grained classification methods, and most of them provide promising performance in certain fields. Inspired by the design conceptions of parallel networks (e.g., Part-based CNN [8], Two-level Attention CNN [16], MCNN [55], GoogLeNet [72], ResNet [74], and Hypercolumn CNN [90]), we proposed a novel hybrid CNN structure codenamed M-bCNN, which leverages convolutional kernel matrices to effectively increase the data streams, neurons, and link channels. The matrix-based architecture played an important role, and the expected accuracy gains from it were delivered in the fine-grained image classification of wheat leaf diseases.…”
Section: Discussion (mentioning)
confidence: 99%
“…In order to apply fine-grained classification methods to practical applications, many researchers turn to studying how to accurately locate discriminative regions under weakly supervised conditions, and then use a CNN to extract features from these regions. Xiao et al. designed the first two-level attention model [16] as a weakly supervised classification algorithm, where object-level attention was adopted to select a relevant bounding box for an object, while part-level attention was used to locate discriminative components of the object; this achieved 69.7% accuracy on the CUB-200-2011 dataset [17]. Mei et al. proposed the recurrent attention convolutional neural network (RA-CNN) [18], which recursively learns discriminative-region attention and the feature representation of that region at multiple scales in a mutually reinforcing way, but this method adds computational overhead.…”
Section: Related Work (mentioning)
confidence: 99%
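The citation statement above summarizes the two-level pipeline: object-level attention picks a relevant bounding box, part-level attention picks discriminative parts, and their classification scores are combined. A minimal NumPy sketch of that flow is below; the function names, score shapes, and the simple averaging of softmax outputs are illustrative assumptions, not the paper's exact FilterNet/DomainNet implementation.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over class logits.
    e = np.exp(x - x.max())
    return e / e.sum()

def object_level_select(region_scores):
    # Object-level attention (assumption: keep the single candidate
    # region with the highest objectness score).
    return int(np.argmax(region_scores))

def part_level_select(part_activations, k=2):
    # Part-level attention (assumption: keep the k parts with the
    # strongest total activation, standing in for clustered filters).
    strength = part_activations.sum(axis=1)
    return np.argsort(strength)[::-1][:k]

def two_level_predict(region_scores, region_logits,
                      part_activations, part_logits):
    # Merge object-level and part-level predictions by averaging
    # their softmax class distributions (a common, simple fusion).
    obj = object_level_select(region_scores)
    parts = part_level_select(part_activations)
    obj_prob = softmax(region_logits[obj])
    part_prob = np.mean([softmax(part_logits[p]) for p in parts], axis=0)
    return int(np.argmax((obj_prob + part_prob) / 2.0))
```

The point of the sketch is the division of labor: one attention stage filters whole-object candidates cheaply, the other zooms into parts, and neither stage needs part-level ground-truth labels, which is what makes the setup weakly supervised.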
“…The attention model is able to process candidate regions for classification at different resolutions and to reduce processing cost by focusing on a restricted set of regions. With the help of attention models, discriminative power can be focused on specific parts of the input data, which helps to classify it [23].…”
Section: Introduction (mentioning)
confidence: 99%