2020
DOI: 10.1007/978-3-030-58565-5_10

Fine-Grained Visual Classification via Progressive Multi-granularity Training of Jigsaw Patches

Cited by 286 publications (150 citation statements)
References 34 publications
“…2) Comparison of different fine-grained algorithms. We select ResNet-50 [25], BCNN [4], RA-CNN [5], MA-CNN [26], WS-DAN [8], and PMG [27] to compare with the proposed method, and test them on three public fine-grained datasets. The experimental results are shown in Tables 5 and 6.…”
Section: Methods (mentioning)
confidence: 99%
“…They report state-of-the-art results on BoxCars and competitive results on CompCars. In [23], Du et al. proposed a novel method that adds new layers at each training step, exploiting information from the previous step, together with a jigsaw puzzle generator that enhances the network input by forming images containing information from different granularity levels. They report results on several fine-grained classification datasets, obtaining state-of-the-art results on Cars-196.…”
Section: B. Fine-Grained Vehicle Classification (mentioning)
confidence: 99%
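The jigsaw puzzle generator referenced in this statement can be sketched as follows: the input image is cut into an n x n grid of patches and the patch positions are randomly permuted, so the shuffled image only preserves visual detail at the granularity of a single patch. This is a minimal sketch assuming square inputs whose side length is divisible by n; the function name and the batch-wide permutation are illustrative choices, not the authors' exact implementation.

```python
import torch

def jigsaw_generator(images, n):
    """Shuffle a batch of images into n x n jigsaw patches.

    images: tensor of shape (B, C, H, W) with H and W divisible by n.
    One random permutation of patch positions is applied to the whole
    batch, so each output image keeps detail only at the granularity
    of a single patch.
    """
    b, c, h, w = images.shape
    ph, pw = h // n, w // n
    # Cut each image into an n*n grid of patches: (B, C, n, n, ph, pw).
    patches = images.unfold(2, ph, ph).unfold(3, pw, pw)
    patches = patches.contiguous().view(b, c, n * n, ph, pw)
    # Randomly permute the patch positions.
    perm = torch.randperm(n * n)
    patches = patches[:, :, perm]
    # Reassemble the shuffled patches into full-size images.
    patches = patches.view(b, c, n, n, ph, pw)
    return patches.permute(0, 1, 2, 4, 3, 5).contiguous().view(b, c, h, w)

# Example: shuffle a batch of 448x448 images into an 8x8 jigsaw.
shuffled = jigsaw_generator(torch.randn(4, 3, 448, 448), n=8)
```

Smaller values of n keep larger, coarser regions intact, which is how the generator provides inputs at different granularity levels across training steps.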
“…Du et al. [21] approached the problem of fine-grained visual classification from a rather unconventional perspective: they neither explicitly nor implicitly mine for object parts; instead, they show that fine-grained features can be extracted by learning across granularities and effectively fusing multi-granularity features. The method can be trained end-to-end without manual annotations other than category labels, and needs only one network with one feed-forward pass during testing.…”
Section: Related Work (mentioning)
confidence: 99%
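The "learning across granularities and fusing multi-granularity features" described above can be illustrated with a simplified model: predictions are produced from several depths of a shared backbone, and their pooled features are concatenated for a fused prediction. This is a rough sketch assuming a ResNet-50 backbone; the head layout, stage choice, and names are illustrative and omit the additional convolutional blocks used in the original method.

```python
import torch
import torch.nn as nn
import torchvision

class MultiGranularityNet(nn.Module):
    """Illustrative multi-granularity classifier on a ResNet-50 backbone.

    Three intermediate stages are classified separately, and their pooled
    features are concatenated for a fused prediction (sketch only).
    """
    def __init__(self, num_classes):
        super().__init__()
        backbone = torchvision.models.resnet50()
        self.stem = nn.Sequential(backbone.conv1, backbone.bn1,
                                  backbone.relu, backbone.maxpool,
                                  backbone.layer1)
        self.stage3 = backbone.layer2   # shallower stage, finer granularity
        self.stage4 = backbone.layer3   # middle stage
        self.stage5 = backbone.layer4   # deepest stage, coarsest granularity
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.head3 = nn.Linear(512, num_classes)
        self.head4 = nn.Linear(1024, num_classes)
        self.head5 = nn.Linear(2048, num_classes)
        self.head_concat = nn.Linear(512 + 1024 + 2048, num_classes)

    def forward(self, x):
        f3 = self.stage3(self.stem(x))
        f4 = self.stage4(f3)
        f5 = self.stage5(f4)
        v3, v4, v5 = (self.pool(f).flatten(1) for f in (f3, f4, f5))
        # One prediction per granularity plus a fused prediction.
        return (self.head3(v3), self.head4(v4), self.head5(v5),
                self.head_concat(torch.cat([v3, v4, v5], dim=1)))
```

Under a progressive training scheme, each step would feed a jigsaw-shuffled input at a matching granularity through the network and optimize only the corresponding head, ending with the fused head on the original image; at test time, one forward pass on the original image yields all predictions, consistent with the single-network, single-pass inference noted in the citation statement.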