2018 IEEE Winter Conference on Applications of Computer Vision (WACV)
DOI: 10.1109/wacv.2018.00047

A Generative Approach to Zero-Shot and Few-Shot Action Recognition

Abstract: We present a generative framework for zero-shot action recognition where some of the possible action classes do not occur in the training data. Our approach is based on modeling each action class using a probability distribution whose parameters are functions of the attribute vector representing that action class. In particular, we assume that the distribution parameters for any action class in the visual space can be expressed as a linear combination of a set of basis vectors where the combination weights are…
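The abstract's key idea, a class-conditional distribution in visual-feature space whose parameters are a linear combination of basis vectors with weights derived from the class attribute vector, can be illustrated with a minimal sketch. Everything below is an assumption made for illustration (Gaussian class-conditional distributions with shared isotropic covariance, a linear attribute-to-weight map `W`, random placeholder values for the basis matrix `B`, and toy dimensions); it is not the authors' implementation.

```python
# Minimal sketch of the abstract's idea: class-conditional Gaussians in visual-feature
# space whose means are linear combinations of basis vectors, with combination weights
# computed from the class attribute vector. All values below are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)

d_vis, d_attr, n_basis = 4096, 115, 50   # e.g. C3D-like features, UCF101-style manual attributes
B = rng.normal(size=(d_vis, n_basis))    # basis vectors (learned in the actual method)
W = rng.normal(size=(n_basis, d_attr))   # maps an attribute vector to combination weights

def class_mean(attr):
    """Mean of the visual-feature distribution for a class with attribute vector `attr`."""
    weights = W @ attr                   # weights are a function of the attributes
    return B @ weights                   # linear combination of basis vectors

def zero_shot_predict(x, unseen_attrs):
    """Assign a test feature x to the unseen class with the highest Gaussian likelihood."""
    means = np.stack([class_mean(a) for a in unseen_attrs])
    # With a shared isotropic covariance, maximum likelihood reduces to nearest class mean.
    return int(np.argmin(((means - x) ** 2).sum(axis=1)))

# Toy usage: two unseen classes described only by their attribute vectors.
unseen_attrs = rng.random((2, d_attr))
x_test = class_mean(unseen_attrs[1]) + rng.normal(scale=0.1, size=d_vis)
print(zero_shot_predict(x_test, unseen_attrs))   # expected: 1
```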

Cited by 134 publications (94 citation statements)
References 34 publications
“…The best existing approach for GZSL action recognition, GGM [24], employs a generative approach to synthesize unseen class data and utilizes unlabelled real features (C3D) from the unseen classes to rectify the bias of the learned parameters towards seen classes. Particularly, for the UCF101 dataset and manual attributes combination, the proposed approach, CEWGAN-OD, achieves gains of 5.1% and 25.8% (in terms of accuracy) over the CLSWGAN [33] and GGM [24], respectively. Further, for the word2vec embedding, the proposed CEWGAN-OD achieves gains of 16% and 19.8% over the best existing approach, GGM [24], for the HMDB51 and UCF101 datasets, respectively.…”
Section: State-of-the-art Comparison
confidence: 99%
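The strategy described in the citation above (synthesize visual features for unseen classes from their semantic embeddings, train an ordinary classifier over real seen-class and synthetic unseen-class features, then counteract the bias towards seen classes) can be sketched as follows. This is an illustrative outline only: the `synthesize` function stands in for the conditional generative models of GGM/CLSWGAN, the `seen_bias` calibration is a simple stand-in for GGM's rectification with unlabelled unseen-class features, and all data are toy.

```python
# Sketch (assumptions, not the cited papers' code) of generative GZSL:
# 1) generate unseen-class features from class embeddings,
# 2) train a standard classifier on real seen + synthetic unseen features,
# 3) dampen the bias towards seen classes at prediction time.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d_vis, d_sem, n_seen, n_unseen = 512, 300, 3, 2      # toy sizes (e.g. C3D + word2vec)

# Toy "real" seen-class features and semantic embeddings for every class.
seen_means = rng.normal(size=(n_seen, d_vis))
X_seen = np.vstack([m + 0.3 * rng.normal(size=(50, d_vis)) for m in seen_means])
y_seen = np.repeat(np.arange(n_seen), 50)
class_emb = rng.normal(size=(n_seen + n_unseen, d_sem))

def synthesize(emb, n=50):
    """Placeholder generator: in the cited works this is a conditional generative model."""
    proj = rng.normal(size=(d_vis, d_sem)) / np.sqrt(d_sem)
    return (proj @ emb) + 0.3 * rng.normal(size=(n, d_vis))

X_fake = np.vstack([synthesize(class_emb[c]) for c in range(n_seen, n_seen + n_unseen)])
y_fake = np.repeat(np.arange(n_seen, n_seen + n_unseen), 50)

clf = LogisticRegression(max_iter=1000).fit(np.vstack([X_seen, X_fake]),
                                            np.concatenate([y_seen, y_fake]))

def gzsl_predict(X, seen_bias=0.0):
    """Subtract a constant from seen-class scores; GGM instead rectifies its parameters
    with unlabelled real unseen-class features to the same end."""
    scores = clf.decision_function(X)
    scores[:, :n_seen] -= seen_bias
    return scores.argmax(axis=1)

# Toy GZSL query near an unseen class; a positive seen_bias pushes predictions
# away from the seen classes.
x_query = synthesize(class_emb[4], n=1)
print(gzsl_predict(x_query, seen_bias=0.0), gzsl_predict(x_query, seen_bias=2.0))
```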
“…Particularly, for the UCF101 dataset and manual attributes combination, the proposed approach, CEWGAN-OD, achieves gains of 5.1% and 25.8% (in terms of accuracy) over the CLSWGAN [33] and GGM [24], respectively. Further, for the word2vec embedding, the proposed CEWGAN-OD achieves gains of 16% and 19.8% over the best existing approach, GGM [24], for the HMDB51 and UCF101 datasets, respectively. ZSL performance comparison: In Tab.…”
Section: State-of-the-art Comparison
confidence: 99%
“…In the standard GZSL setting, we improve by 9.3 and 4.9 over the non-generative model SADLE in [28], for HMDB51 and UCF101, respectively. Generative-model-driven methods GGM [20], f-CLSWGAN [35], and CEWGAN [16] use unseen class prototypes during training to generate unseen-class visual samples. We still outperform GGM and f-CLSWGAN, and deliver performance comparable to CEWGAN.…”
Section: Methods
confidence: 99%
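For context on numbers like those above: generalized zero-shot results are commonly reported as per-class accuracy on seen classes (S), per-class accuracy on unseen classes (U), and their harmonic mean (H). The helper below computes that metric; whether the cited gains refer to H specifically is an assumption.

```python
# Per-class seen/unseen accuracy and their harmonic mean, the usual GZSL summary metric.
import numpy as np

def per_class_accuracy(y_true, y_pred, classes):
    accs = [np.mean(y_pred[y_true == c] == c) for c in classes if np.any(y_true == c)]
    return float(np.mean(accs)) if accs else 0.0

def gzsl_harmonic_mean(y_true, y_pred, seen_classes, unseen_classes):
    s = per_class_accuracy(y_true, y_pred, seen_classes)
    u = per_class_accuracy(y_true, y_pred, unseen_classes)
    return 2 * s * u / (s + u) if (s + u) > 0 else 0.0
```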
“…We focus on inductive ZSL in which test data is fully unknown at training time. There exists a body of literature on transductive ZSL [1,33,54,55,59,58,60], where test images or videos are available during training but test labels are not. We do not discuss the transductive approach in this work.…”
Section: Related Work
confidence: 99%
“…To our knowledge, all current ZSL methods for video recognition use pretrained visual embeddings [1,4,18,33,35,54,55,58,59,60,61,64]. This provides a good tradeoff between training efficiency and using prior knowledge.…”
Section: Introduction
confidence: 99%