Proceedings of the British Machine Vision Conference 2016
DOI: 10.5244/c.30.24

Boosted Convolutional Neural Networks

Cited by 113 publications (91 citation statements). References: 0 publications.

“…Compared with the baselines of the pooling-based models B-CNN [17], CBP [6] and LRBP [12], the superior result that we achieve mainly benefits from …

Method                              Anno.   Accuracy
SPDA-CNN [35]                       √       85.1
B-CNN [17]                          √       85.1
PN-CNN [2]                          √       85.4
STN [9]                                     84.1
RA-CNN [5]                                  85.3
MA-CNN [37]                                 86.5
B-CNN [17]                                  84.0
CBP [6]                                     84.0
LRBP [12]                                   84.2
HIHCA [3]                                   85.3
Improved B-CNN [16]                         85.8
BoostCNN [22]                               86.2
KP [4]                                      86.2
FBP (relu5_3)                               85.7
CBP (relu5_3 + relu5_2)                     86.7
HBP (relu5_3 + relu5_2 + relu5_1)           87.1 …”
Section: Configurations of Hierarchical Bilinear Pooling
confidence: 99%
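The FBP/CBP/HBP rows in the quoted table combine bilinear features pooled from one or more convolutional layers (relu5_1 to relu5_3). A minimal NumPy sketch of cross-layer bilinear pooling follows; it is a simplification, since the published HBP first projects each feature map to a shared embedding dimension, and the channel sizes and random inputs here are purely illustrative assumptions:

```python
import numpy as np

def bilinear_pool(fa, fb):
    """Bilinear pooling of two feature maps of shape (C, H, W):
    outer product of channel vectors at each spatial location,
    sum-pooled over locations, then normalized (B-CNN-style)."""
    c1, h, w = fa.shape
    c2 = fb.shape[0]
    a = fa.reshape(c1, h * w)
    b = fb.reshape(c2, h * w)
    x = (a @ b.T).reshape(-1)               # sum of outer products over locations
    x = np.sign(x) * np.sqrt(np.abs(x))     # signed square-root normalization
    return x / (np.linalg.norm(x) + 1e-12)  # l2 normalization

# Hierarchical variant (sketch): pool layer pairs and concatenate.
# Toy 8-channel 7x7 maps stand in for VGG relu5_* activations.
rng = np.random.default_rng(0)
relu5_1 = rng.random((8, 7, 7))
relu5_2 = rng.random((8, 7, 7))
relu5_3 = rng.random((8, 7, 7))
hbp = np.concatenate([
    bilinear_pool(relu5_3, relu5_2),
    bilinear_pool(relu5_3, relu5_1),
    bilinear_pool(relu5_2, relu5_1),
])
print(hbp.shape)  # (192,): three 8x8 bilinear vectors concatenated
```

The concatenated descriptor would then feed a linear classifier; the quoted accuracies suggest that adding each extra layer pair (relu5_2, then relu5_1) contributes a measurable gain.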
“…The classification accuracy on FGVC-Aircraft is summarized in Table 6. Still, our model achieves the highest accuracy:

Method                  Anno.   Accuracy
…                       √       92.6
FCAN [18]                       89.1
RA-CNN [5]                      92.5
MA-CNN [37]                     92.8
B-CNN [17]                      90.6
LRBP [12]                       90.9
HIHCA [3]                       91.7
Improved B-CNN [16]             92.0
BoostCNN [22]                   92.1
KP [4]                          92.4
HBP                             93.7 …”
Section: Configurations of Hierarchical Bilinear Pooling
confidence: 99%
“…Our method achieves better performance than methods [4,18,30,31,32] that use ground-truth bounding boxes or part annotations during training or testing on all three datasets. Compared with B-CNN-based methods [8,20,25], our method obtains comparable or even better performance, owing to accurate attention localization and part-sequence modeling. To further enhance the capacity of REAPS, as RA-CNN [9] does, we incorporate one more PSN into our framework; the 2nd PSN is based on the attended region of the 1st PSN.…”
Section: Experiments and Analysis
confidence: 97%
“…Several works have proposed ensemble methods for deep neural networks. Remarkable results are reported in [26], where the authors adopted a boosting strategy for image classification, but also in CoopNet [27], which combines models of multiple precisions to improve accuracy and inference latency. Even more interestingly, the concept of ensemble learning can be found in the internal architecture of the most recent CNN models.…”
Section: Ensemble Learning
confidence: 99%