2015
DOI: 10.1007/978-3-319-24947-6_39
Fast Approximate GMM Soft-Assign for Fine-Grained Image Classification with Large Fisher Vectors

Cited by 2 publications (4 citation statements)
References 15 publications
“…This discrepancy is due to the soft-assign stage being 33 times slower in the case of traffic signs. Our detectors may be accelerated further by using the fast approximation of the GMM soft-assignment, as proposed in [36]. We leave this, however, for future work.…”
Section: Discussion
confidence: 99%
“…The complexity of patch scoring can be subdivided into the following three stages with similar computational complexity: i) computing the soft-assign p(k|x), ii) computing the FV, and iii) determining the patch contribution by (6), (7), or (9). In this paper, we consider efficient implementation of the latter two stages for CPU architectures (efficient soft-assign is addressed in [36]). In the naive implementation, both of these two stages are O(NKD) where N is the number of patches, K is the number of components, while D is the raw feature dimensionality.…”
Section: Efficient Patch Scoring With a Sparse Model
confidence: 99%
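The quote above identifies the soft-assign p(k|x) as one of three O(NKD) stages of patch scoring. A minimal sketch of that naive soft-assign (not the paper's fast approximation), assuming a diagonal-covariance GMM and illustrative names throughout:

```python
# Hedged sketch: naive GMM soft-assign p(k|x) for N patches, K components,
# D-dimensional features. Illustrates the O(NKD) cost the quote refers to;
# diagonal covariances and all variable names are assumptions.
import numpy as np

def gmm_soft_assign(X, means, variances, weights):
    """Posterior p(k|x) for each row of X under a diagonal-covariance GMM."""
    N, D = X.shape
    K = means.shape[0]
    log_p = np.empty((N, K))
    for k in range(K):  # K components x N patches x D dims -> O(NKD)
        diff = X - means[k]                                   # (N, D)
        log_p[:, k] = (np.log(weights[k])
                       - 0.5 * np.sum(np.log(2 * np.pi * variances[k]))
                       - 0.5 * np.sum(diff**2 / variances[k], axis=1))
    # Normalize posteriors with the log-sum-exp trick for stability.
    log_p -= log_p.max(axis=1, keepdims=True)
    p = np.exp(log_p)
    p /= p.sum(axis=1, keepdims=True)
    return p

rng = np.random.default_rng(0)
N, K, D = 5, 3, 4
X = rng.standard_normal((N, D))
means = rng.standard_normal((K, D))
variances = np.ones((K, D))
weights = np.full(K, 1.0 / K)
P = gmm_soft_assign(X, means, variances, weights)
print(np.allclose(P.sum(axis=1), 1.0))  # each row of posteriors sums to one
```

The inner loop is exactly the per-component likelihood evaluation whose cost the fast approximation in [36] is designed to cut.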
“…We do not match the cascading approach of [36] or [12] for a single class scenario, but our approach might scale better in the multi-class case because different classes may share features through a shared GMM. We could further speed-up the process by using fast soft assign as proposed in [20] or by using the random decision forests as a generative model for FV [3]. An additional speed-up could be achieved by using the C implementation instead of Python.…”
Section: Methods
confidence: 99%
“…Due to the generative front-end, we have better sharing potential than the purely discriminative approaches used in [27]. In contrast with [21], we use block-sparsity [20], the normalized score gradient, and the spatial model of the pairwise layout.…”
Section: Related Work
confidence: 99%