2013
DOI: 10.1109/tip.2013.2277825

Learning Component-Level Sparse Representation for Image and Video Categorization

Abstract: A novel component-level dictionary learning framework that exploits image/video group characteristics based on sparse representation is introduced in this paper. Unlike the previous methods that select the dictionaries to best reconstruct the data, we present an energy minimization formulation that jointly optimizes the learning of both sparse dictionary and component-level importance within one unified framework to provide a discriminative and sparse representation for image/video groups. The importance measu…
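The abstract only sketches the formulation, so the following is a rough, hedged illustration of the general idea of jointly learning a sparse dictionary and per-component importance weights for one image/video group. The alternating scheme, the residual-based weighting rule, the resampling step, and every hyperparameter below are assumptions made for illustration, not the paper's actual energy-minimization formulation.

```python
# Illustrative sketch only (not the paper's actual method): a toy alternation
# between sparse dictionary learning and component-level importance weighting
# for one image/video group. The weighting rule and hyperparameters are assumed.
import numpy as np
from sklearn.decomposition import DictionaryLearning, SparseCoder

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 64))        # 200 components (e.g. region descriptors), 64-D each

# Initial sparse dictionary learned from all components of the group.
dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                        transform_algorithm="lasso_lars", random_state=0)
codes = dl.fit_transform(X)               # (200, 32) sparse codes
D = dl.components_                        # (32, 64) dictionary atoms

for _ in range(3):                        # a few alternating refinement rounds
    # Component importance from reconstruction quality (assumed soft rule):
    # components the current dictionary explains well get larger weights.
    residual = np.linalg.norm(X - codes @ D, axis=1)
    w = np.exp(-residual / residual.mean())
    w /= w.sum()

    # Re-learn the dictionary with components resampled by importance, so
    # low-importance components contribute less to the next dictionary.
    idx = rng.choice(len(X), size=len(X), p=w)
    dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=20,
                            transform_algorithm="lasso_lars", random_state=0)
    dl.fit(X[idx])
    D = dl.components_
    codes = SparseCoder(dictionary=D, transform_algorithm="lasso_lars",
                        transform_alpha=1.0).transform(X)
```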

Cited by 8 publications (4 citation statements)
References 38 publications (59 reference statements)
“…This latter method is more biologically-plausible, but less accurate [71]. It has been found that the performance of sparse dictionary-based classifiers is improved by the supervised learning of more discriminative dictionaries [75, 82, 92, 94–96]. Such learning might potentially also improve the performance of the proposed algorithm.…”
Section: Results
confidence: 99%
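The dictionary-based classification this passage refers to can be made concrete with a simple baseline (distinct from the supervised, discriminative-dictionary methods it cites): learn one sparse dictionary per class and assign a test sample to the class whose dictionary reconstructs it with the smallest residual. The function names and hyperparameters below are illustrative assumptions.

```python
# Minimal sketch of a sparse dictionary-based classifier: one dictionary per
# class, classification by smallest sparse-reconstruction residual. This is a
# generic baseline, not the algorithm of any specific cited work.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

def fit_class_dictionaries(X, y, n_atoms=16, alpha=1.0):
    """Learn one sparse dictionary per class label."""
    dicts = {}
    for c in np.unique(y):
        dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                         random_state=0)
        dl.fit(X[y == c])
        dicts[c] = dl.components_          # (n_atoms, dim) atoms for class c
    return dicts

def predict(X, dicts, alpha=1.0):
    """Assign each sample to the class whose dictionary reconstructs it best."""
    labels = sorted(dicts)
    residuals = []
    for c in labels:
        coder = SparseCoder(dictionary=dicts[c],
                            transform_algorithm="lasso_lars",
                            transform_alpha=alpha)
        codes = coder.transform(X)
        residuals.append(np.linalg.norm(X - codes @ dicts[c], axis=1))
    return np.array(labels)[np.argmin(np.stack(residuals, axis=1), axis=1)]
```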
“…Therefore, efforts have been made to encode discriminative features. Yang et al. [59] used sparse representations to encode the features, leading to improvements in dictionary learning for multi-class classification with large numbers of classes [8], [16], [18], [50], [62]. More advanced methods have emerged over recent years such as locality-constrained linear coding [55], super-vector coding [63], and Fisher vectors [45].…”
Section: B. Image Representation
confidence: 99%
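The encoding step this passage attributes to Yang et al. [59] can be sketched as sparse coding of local descriptors over a learned codebook followed by pooling into an image-level vector. The codebook size, solver, and max-pooling choice below are assumptions, not the cited paper's exact settings.

```python
# Rough sketch of sparse-coding feature encoding: local descriptors are sparsely
# coded over a learned codebook and max-pooled into one image-level feature.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning, SparseCoder

def learn_codebook(descriptors, n_atoms=256, alpha=1.0):
    """descriptors: (n_descriptors, dim) local features sampled from training images."""
    dl = MiniBatchDictionaryLearning(n_components=n_atoms, alpha=alpha,
                                     random_state=0)
    return dl.fit(descriptors).components_      # (n_atoms, dim) codebook

def encode_image(descriptors, codebook, alpha=1.0):
    """Sparse-code an image's descriptors and max-pool them into one vector."""
    coder = SparseCoder(dictionary=codebook,
                        transform_algorithm="lasso_lars",
                        transform_alpha=alpha)
    codes = coder.transform(descriptors)        # (n_descriptors, n_atoms)
    return np.abs(codes).max(axis=0)            # (n_atoms,) image-level feature
```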
“…This is obviously time consuming. However, the N-best path algorithm in (8) does not compute the whole edge-scoring function, thereby reducing the computational complexity to K + (L − 1)QK ≈ LQK classifiers and (L − 1)Q multiplications for the path probability, which is much lower than the traditional best path algorithm. The proposed algorithm always carries out the search over the Q best candidate branches.…”
Section: B. N-best Path for Hierarchical Learning
confidence: 99%
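The complexity argument in this passage corresponds to a beam-style search over a class hierarchy: the root evaluates its K node classifiers once, and at each of the remaining L − 1 levels only the Q best branches are expanded, giving roughly K + (L − 1)QK classifier evaluations and (L − 1)Q path-probability multiplications. The sketch below illustrates that search pattern; the node layout and scoring callable are assumptions, not the cited paper's implementation.

```python
# Hedged sketch of an N-best (beam) path search over a hierarchy: keep only the
# Q highest-probability partial paths at each level instead of scoring all edges.
import heapq

def n_best_path(root, score_children, Q, L):
    """
    root:           root node of the class hierarchy
    score_children: callable(node) -> list of (child, probability) pairs,
                    standing in for the K per-node classifiers
    Q:              number of candidate branches kept at each level
    L:              number of levels below the root
    Returns the Q best root-to-leaf paths as (path_probability, [nodes]).
    """
    beam = [(1.0, [root])]
    for _ in range(L):
        candidates = []
        for prob, path in beam:
            for child, p in score_children(path[-1]):
                # multiply in the branch probability to get the path probability
                candidates.append((prob * p, path + [child]))
        beam = heapq.nlargest(Q, candidates, key=lambda c: c[0])
    return beam
```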
“…wavelet decomposition in image processing; Christopoulos et al., 2000), these bases can also be learned from exemplar data via dictionary learning algorithms (Olshausen and Field, 1996; Aharon et al.). This sparsity model can be a useful model for signals of interest, such as video signals, where similar sparse decompositions have been used for action recognition (Guha and Ward, 2012) and video categorization (Chiang et al., 2013). With this model, we will use a generalized notion of the coherence parameter used in (Charles et al., 2014):…”
Section: Sparse Multiple Inputs
confidence: 99%
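The coherence parameter this passage generalizes reduces, in its classical form, to the mutual coherence of a dictionary: the largest absolute inner product between distinct, unit-norm atoms. The snippet below computes only that classical quantity as a point of reference; the generalized version of Charles et al. (2014) is not reproduced here.

```python
# Classical mutual coherence of a dictionary: the largest absolute inner product
# between distinct, unit-normalized atoms (rows of D).
import numpy as np

def mutual_coherence(D):
    """D: (n_atoms, dim) dictionary with atoms as rows."""
    Dn = D / np.linalg.norm(D, axis=1, keepdims=True)   # unit-normalize atoms
    G = np.abs(Dn @ Dn.T)                                # |Gram matrix| entries
    np.fill_diagonal(G, 0.0)                             # ignore self-products
    return G.max()

rng = np.random.default_rng(0)
print(mutual_coherence(rng.standard_normal((32, 64))))   # random-dictionary example
```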