2015
DOI: 10.1016/j.neucom.2014.11.067
Multiple graph regularized sparse coding and multiple hypergraph regularized sparse coding for image representation

Cited by 28 publications (19 citation statements)
References 38 publications
“…In the first step, Euclidean distance is often used to compute the pairwise similarity. The second step uses the feature-sign search algorithm [25] for sparse decomposition and the Lagrange dual algorithm [5], [8], [9], [21] for dictionary learning. However, this avenue of GSC has the following drawbacks.…”
Section: A. Our Motivations
confidence: 99%
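The two-step GSC pipeline this excerpt criticizes starts by building a pairwise similarity graph from Euclidean distances. A minimal sketch of that first step is below; the function name, the Gaussian weighting, and the k-NN sparsification rule are illustrative assumptions, not the exact construction used in the cited papers:

```python
import numpy as np

def knn_similarity_graph(X, k=5, sigma=1.0):
    """Symmetric k-NN affinity matrix from Euclidean distances.

    X: (n_samples, n_features) data matrix. Returns W of shape (n, n)
    with Gaussian weights exp(-d^2 / 2*sigma^2) on k-NN edges.
    """
    n = X.shape[0]
    # Pairwise squared Euclidean distances via the expansion
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    np.maximum(D2, 0.0, out=D2)  # guard against tiny negative round-off

    W = np.zeros((n, n))
    for i in range(n):
        # k nearest neighbors, skipping position 0 (the point itself)
        idx = np.argsort(D2[i])[1:k + 1]
        W[i, idx] = np.exp(-D2[i, idx] / (2.0 * sigma ** 2))
    return np.maximum(W, W.T)  # symmetrize the directed k-NN edges
```

In practice the resulting W feeds a graph Laplacian L = D - W that regularizes the sparse codes in the second step.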
“…Hence, a hypergraph could capture high-order relationships among the samples and has been developed for image classification [19] and dimensionality reduction [20]. In [21], ensemble manifold regularization [22] integrates multiple graphs to avoid hyperparameter selection. In [23], a hypergraph incidence consistency term was introduced into multi-hypergraph sparse coding.…”
Section: Introduction
confidence: 99%
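For readers unfamiliar with the hypergraph machinery this excerpt refers to, one standard construction is the normalized hypergraph Laplacian of Zhou et al. The sketch below assumes that formulation with a dense incidence matrix and uniform hyperedge weights by default; the cited papers may use different weightings:

```python
import numpy as np

def hypergraph_laplacian(H, w=None):
    """Normalized hypergraph Laplacian (Zhou et al.'s formulation):

        L = I - Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2}

    H: (n_vertices, n_edges) binary incidence matrix,
    w: hyperedge weights (defaults to all ones).
    """
    n, m = H.shape
    if w is None:
        w = np.ones(m)
    dv = H @ w                         # vertex degrees
    de = H.sum(axis=0)                 # hyperedge degrees
    Dv_inv_sqrt = np.diag(1.0 / np.sqrt(dv))
    De_inv = np.diag(1.0 / de)
    Theta = Dv_inv_sqrt @ H @ np.diag(w) @ De_inv @ H.T @ Dv_inv_sqrt
    return np.eye(n) - Theta
```

Unlike a pairwise graph, each column of H can connect more than two vertices, which is how a hypergraph encodes high-order relationships among samples.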
“…Using a similar approach where we substitute C with GZ^T, and then replace G with the right-hand side of (13) in (7), we can find the optimal selection for the step-size c_G as c^{κ+1}…”
Section: Gradient Descent Based Algorithm
confidence: 99%
“…Many research efforts tried to use sparse representation in visual content analysis and promising results are reported [25], [32], [54], [60]. For example, in [60], Yuan et al addressed the problem of visual classification with multiple features and proposed a multitask joint sparse representation model to combine the strengths of multiple features for recognition.…”
Section: Related Work
confidence: 99%
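Multitask joint sparse models of the kind [60] describes typically couple the per-feature tasks through a row-wise ℓ2,1 penalty, so that all tasks select the same few atoms. As an illustrative building block (not the exact solver of [60]), the proximal operator of that penalty is a row-wise group soft-threshold:

```python
import numpy as np

def prox_l21(W, tau):
    """Proximal operator of tau * ||W||_{2,1} = tau * sum_i ||W[i, :]||_2.

    Shrinks every row of W toward zero by tau in Euclidean norm and
    zeroes out rows whose norm is at most tau, producing row sparsity.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(0.0, 1.0 - tau / np.maximum(norms, 1e-12))
    return scale * W
```

Inside a proximal-gradient loop, applying this operator after each gradient step yields codes whose nonzero rows are shared across all feature modalities.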
“…Motivated by the fact that the kernel trick can capture the nonlinear similarity of features, Gao et al. [15] proposed a kernel sparse representation method that represents high-dimensional features by mapping them with an implicit kernel function. In [25], to learn both the optimal intrinsic manifold and the sparse codes jointly, Jin et al. represented images by sparse coding with a graph regularizer. In [14], to address the problem of automatically uncovering the underlying group structure of images, Feng et al. proposed a novel auto-grouped sparse representation method that groups semantically correlated feature elements together by optimally fusing their multiple sparse representations.…”
Section: Related Work
confidence: 99%
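The graph-regularized sparse coding mentioned for [25] is usually written as a reconstruction term plus an ℓ1 sparsity term plus a Laplacian smoothness term on the codes. A sketch that simply evaluates that objective is below; the symbol names and the weights `lam`/`gamma` are assumptions, not the paper's notation:

```python
import numpy as np

def gsc_objective(X, D, Z, L, lam=0.1, gamma=0.1):
    """Value of the graph-regularized sparse coding objective

        ||X - D Z||_F^2 + lam * ||Z||_1 + gamma * tr(Z L Z^T),

    where columns of X are samples, D is the dictionary, Z holds the
    sparse codes, and L is a graph Laplacian over the samples.
    """
    recon = np.linalg.norm(X - D @ Z, 'fro') ** 2   # reconstruction error
    sparsity = lam * np.abs(Z).sum()                # l1 sparsity penalty
    smooth = gamma * np.trace(Z @ L @ Z.T)          # manifold smoothness
    return recon + sparsity + smooth
```

The trace term penalizes codes that differ between samples the graph connects, which is what ties the learned representation to the data manifold.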