2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2018.00105
Towards Faster Training of Global Covariance Pooling Networks by Iterative Matrix Square Root Normalization

Abstract: Global covariance pooling in convolutional neural networks has achieved impressive improvement over the classical first-order pooling. Recent works have shown that matrix square root normalization plays a central role in achieving state-of-the-art performance. However, existing methods depend heavily on eigendecomposition (EIG) or singular value decomposition (SVD), suffering from inefficient training due to limited support of EIG and SVD on GPU. Towards addressing this problem, we propose an iterative matrix square root…
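The approach the abstract refers to replaces EIG/SVD with a few coupled Newton-Schulz iterations, which need only matrix multiplications and therefore map well onto GPUs. The following is a minimal single-matrix NumPy sketch of such an iteration; the function name, the default iteration count, and the trace-based pre-normalization/post-compensation are illustrative assumptions rather than the authors' released code.

```python
import numpy as np

def newton_schulz_sqrt(A, num_iters=5):
    """Approximate the square root of an SPD matrix A with coupled
    Newton-Schulz iterations, using only matrix multiplications
    (no EIG/SVD). Illustrative sketch, not the authors' implementation."""
    n = A.shape[0]
    I = np.eye(n, dtype=A.dtype)
    # Pre-normalize so the iteration starts inside its convergence region.
    norm = np.trace(A)
    Y = A / norm
    Z = I.copy()
    for _ in range(num_iters):
        T = 0.5 * (3.0 * I - Z @ Y)
        Y = Y @ T          # Y -> sqrt of the normalized matrix
        Z = T @ Z          # Z -> inverse sqrt of the normalized matrix
    # Post-compensate for the pre-normalization.
    return Y * np.sqrt(norm)

# Quick check on a random covariance-like matrix.
X = np.random.randn(64, 8)
A = X.T @ X / 64 + 1e-3 * np.eye(8)
S = newton_schulz_sqrt(A)
err = np.linalg.norm(S @ S - A) / np.linalg.norm(A)
print(f"relative reconstruction error: {err:.4f}")
```

With a handful of iterations the result is an approximate square root; the accuracy/speed trade-off is controlled by `num_iters`.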

Cited by 272 publications (313 citation statements)
References 33 publications
“…which is a novel self-supervision mechanism to effectively localize informative regions without the need of bounding-box/part annotations. • iSQRT-COV [18]: Towards faster training of global covariance pooling networks by iterative matrix square root normalization. Implementation: We used open-sourced MXNet [4] as our code-base, and trained all the models on 8 Tesla P-100 GPUs.…”
Section: Experiments Setup
Citation type: mentioning (confidence: 99%)
“…However, due to the limited representative capacity of hand-crafted features, the performance of this type of method is moderate. In recent years, deep neural networks have developed advanced abilities in feature extraction and function approximation [28]-[34], bringing significant progress in the fine-grained image classification task [6]-[10], [24], [35]-[46].…”
Section: A Fine-grained Object Classification
Citation type: mentioning (confidence: 99%)
“…Bilinear CNN model (BCNN) [8] is the first work that adopts the matrix outer product operation on the initial embedded features to generate a second-order representation for fine-grained classification. Li et al. [10] (iSQRT-COV) further improve the naive bilinear model by using covariance matrices over the last convolutional features as fine-grained features. iSQRT-COV obtains state-of-the-art performance on both generic and fine-grained datasets.…”
Section: A Fine-grained Object Classification
Citation type: mentioning (confidence: 99%)
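For context on what "covariance matrices over the last convolutional features" means in practice: the last convolutional feature map is treated as H*W local descriptors of dimension C, and their C x C covariance (optionally square-root normalized) replaces global average pooling as the image representation. A rough NumPy sketch under those assumptions, reusing the hypothetical newton_schulz_sqrt helper sketched above:

```python
import numpy as np

def covariance_pooling(feat, eps=1e-5):
    """Global covariance pooling of a conv feature map of shape (C, H, W):
    the C x C covariance of the H*W spatial descriptors, followed by
    matrix square root normalization. Illustrative sketch only."""
    C, H, W = feat.shape
    X = feat.reshape(C, H * W)                 # C channels, H*W samples
    X = X - X.mean(axis=1, keepdims=True)      # center per channel
    cov = X @ X.T / (H * W) + eps * np.eye(C)  # second-order statistic
    cov = newton_schulz_sqrt(cov)              # square root normalization
    # Vectorize the upper triangle as the image representation.
    iu = np.triu_indices(C)
    return cov[iu]

feat = np.random.randn(256, 7, 7)              # e.g. last conv features
rep = covariance_pooling(feat)
print(rep.shape)                               # (256 * 257 / 2,) = (32896,)
```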
“…Bilinear pooling. Bilinear pooling (or second-order pooling) is widely used in fine-grained visual classification [7][8][9][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33], visual question answering [34][35][36], feature fusion and disentangling [22,23,[37][38][39][40], action recognition [39][40][41][42][43][44] and other tasks. In deep neural nets, bilinear pooling is mostly used only once before the classification layer, e.g.…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
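As a concrete illustration of bilinear (second-order) pooling applied once before the classification layer, the sketch below sums outer products of local descriptors from two feature maps over all spatial positions, then applies the signed square root and L2 normalization commonly paired with such representations; the shapes and normalization choices are assumptions for illustration, not any particular paper's implementation.

```python
import numpy as np

def bilinear_pooling(fa, fb):
    """Bilinear (second-order) pooling of two conv feature maps with the
    same spatial size, shapes (Ca, H, W) and (Cb, H, W): average the
    outer products of local descriptors over all positions, then apply
    signed square root and L2 normalization before the classifier."""
    Ca, H, W = fa.shape
    Cb = fb.shape[0]
    Xa = fa.reshape(Ca, H * W)
    Xb = fb.reshape(Cb, H * W)
    B = Xa @ Xb.T / (H * W)                    # Ca x Cb bilinear matrix
    z = B.reshape(-1)
    z = np.sign(z) * np.sqrt(np.abs(z))        # signed square root
    return z / (np.linalg.norm(z) + 1e-12)     # L2 normalization

# Self-bilinear pooling: both streams share one backbone's features.
feat = np.random.randn(128, 14, 14)
vec = bilinear_pooling(feat, feat)
print(vec.shape)                               # (128 * 128,) = (16384,)
```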