2017
DOI: 10.48550/arXiv.1702.02549
Preprint

Backpropagation Training for Fisher Vectors within Neural Networks

Abstract: Fisher Vectors (FV) encode higher-order statistics of a set of local descriptors, such as SIFT features. They already show good performance in combination with shallow learning architectures on visual recognition tasks. Current methods using FV as a feature descriptor in deep architectures assume that all original input features are static. We propose a framework to jointly learn the representation of original features, FV parameters, and parameters of the classifier in the style of traditional neural networks…
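The joint learning the abstract describes, where backpropagation updates the input descriptors' representation, the GMM underlying the FV encoding, and the classifier together, can be made concrete with a small sketch. The PyTorch module below is a minimal illustration under assumed names and parameterizations (the class name FisherVectorLayer and the diagonal-covariance GMM stored as logits/means/log-sigmas are inventions for this sketch, not the authors' implementation): writing the standard FV statistics with differentiable tensor ops is enough for autograd to carry gradients into the mixture parameters.

```python
import torch
import torch.nn as nn


class FisherVectorLayer(nn.Module):
    """Sketch of a Fisher Vector encoding with learnable GMM parameters.

    Soft assignments and FV statistics are expressed with differentiable
    torch ops, so gradients flow into the GMM parameters (and into
    whatever network produces the input descriptors).
    """

    def __init__(self, n_components: int, feat_dim: int):
        super().__init__()
        self.K, self.D = n_components, feat_dim
        # Unconstrained parameters; softmax/exp enforce pi > 0, sigma > 0.
        self.logits = nn.Parameter(torch.zeros(n_components))
        self.mu = nn.Parameter(torch.randn(n_components, feat_dim) * 0.1)
        self.log_sigma = nn.Parameter(torch.zeros(n_components, feat_dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, D) local descriptors of one image.
        pi = torch.softmax(self.logits, dim=0)               # (K,)
        sigma = torch.exp(self.log_sigma)                    # (K, D)
        diff = (x.unsqueeze(1) - self.mu) / sigma            # (N, K, D)
        # Per-component Gaussian log-likelihood (constants cancel in softmax).
        log_p = (-0.5 * (diff ** 2).sum(-1)
                 - self.log_sigma.sum(-1)
                 + torch.log(pi))                            # (N, K)
        gamma = torch.softmax(log_p, dim=1)                  # soft assignments
        n = x.shape[0]
        # Standard FV gradients w.r.t. means and standard deviations.
        g_mu = (gamma.unsqueeze(-1) * diff).sum(0) / (n * pi.sqrt().unsqueeze(-1))
        g_sigma = ((gamma.unsqueeze(-1) * (diff ** 2 - 1)).sum(0)
                   / (n * (2 * pi).sqrt().unsqueeze(-1)))
        return torch.cat([g_mu.flatten(), g_sigma.flatten()])  # (2*K*D,)
```

Reparameterizing the mixture weights with a softmax and the scales with an exponential keeps the GMM valid throughout training without any constrained optimization.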

Cited by 1 publication (3 citation statements) | References 15 publications
“…Further details for the derivation of these gradients can be found in the supplementary material. Note that similar formulas are also provided by [36].…”
Section: New Fisher Vector Encoding Layer (FVE-Layer) | Citation type: mentioning
confidence: 92%
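For orientation, the gradients this statement refers to are derivatives of the standard Fisher Vector components with respect to the GMM parameters. The encoding itself (the cited supplementary derivation is not reproduced here) uses the textbook formulas, e.g. as in Sánchez et al.:

$$\gamma_i(k) = \frac{\pi_k\,u_k(x_i)}{\sum_{j=1}^{K}\pi_j\,u_j(x_i)},\qquad
\mathcal{G}_{\mu,k} = \frac{1}{N\sqrt{\pi_k}}\sum_{i=1}^{N}\gamma_i(k)\,\frac{x_i-\mu_k}{\sigma_k},\qquad
\mathcal{G}_{\sigma,k} = \frac{1}{N\sqrt{2\pi_k}}\sum_{i=1}^{N}\gamma_i(k)\left[\frac{(x_i-\mu_k)^2}{\sigma_k^2}-1\right]$$

where $u_k$ is the $k$-th Gaussian with weight $\pi_k$, divisions and squares act elementwise over descriptor dimensions, and the full FV concatenates $\mathcal{G}_{\mu,k}$ and $\mathcal{G}_{\sigma,k}$ over all $K$ components.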
“…In contrast to the aforementioned methods that compute an FVE of local features extracted separately from the image, Wieschollek et al. [36] and Tang et al. [31] deploy the FVE directly in a neural network. As a result, the features are learned jointly with the parameters for both the classification and the mixture model.…”
Section: Variants of Deep Fisher Vector Encoding (Deep FVE) | Citation type: mentioning
confidence: 99%
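As a concrete reading of "learned jointly", here is a hedged sketch reusing the hypothetical FisherVectorLayer from the abstract section (all layer sizes and the toy loss are made up), in which a single backward pass reaches the descriptor network, the mixture model, and the classifier at once:

```python
import torch
import torch.nn as nn

# Hypothetical end-to-end pipeline: descriptors -> FVE -> classifier.
backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # per-descriptor embedding
fve = FisherVectorLayer(n_components=16, feat_dim=64)
classifier = nn.Linear(2 * 16 * 64, 10)

x = torch.randn(200, 128)                # 200 local descriptors of one image
logits = classifier(fve(backbone(x)))    # (10,) class scores
loss = logits.logsumexp(0) - logits[3]   # cross-entropy for target class 3
loss.backward()                          # gradients reach backbone, GMM, classifier
```

The point of the sketch: once the encoding is differentiable in both its inputs and the GMM parameters, no separate dictionary-learning or EM stage is required.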