2013
DOI: 10.1007/978-3-642-40994-3_16

Learning Exemplar-Represented Manifolds in Latent Space for Classification

Abstract: The intrinsic manifold structure of a data collection is valuable information for classification tasks. Working within the sparse coding framework and taking the data set's manifold structure into account, we propose an algorithm that: (1) finds exemplars from each class to represent the class-specific manifold structure, thereby reducing the object-space dimensionality; (2) simultaneously learns a latent feature space that makes the mapped data more discriminative according to the class-specific…
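The truncated abstract outlines a two-part objective: choose per-class exemplars that capture each class's manifold structure, and jointly learn a latent feature space in which the mapped data are more discriminative. As a rough illustration only, the Python sketch below pairs a greedy exemplar selector with an LDA-style latent projection. It is a hypothetical reading of the idea, not the authors' algorithm; every function name, variable, and update rule here is our own assumption.

```python
import numpy as np

def select_exemplars(X, k):
    """Greedily pick k columns of X (d x n) whose span best reconstructs
    the whole class -- a crude proxy for sparse-coding exemplar selection."""
    chosen = []
    residual = X.copy()
    for _ in range(k):
        scores = np.linalg.norm(X.T @ residual, axis=1)   # candidate vs. residual
        scores[chosen] = -np.inf                          # never pick twice
        chosen.append(int(np.argmax(scores)))
        E = X[:, chosen]
        coef, *_ = np.linalg.lstsq(E, X, rcond=None)      # best fit on chosen span
        residual = X - E @ coef
    return chosen

def fit_latent_map(X, y, dim):
    """Learn a linear map W (dim x d) that spreads class means apart
    (an LDA-style stand-in for the learned latent feature space)."""
    d = X.shape[0]
    mu = X.mean(axis=1, keepdims=True)
    Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
    for c in np.unique(y):
        Xc = X[:, y == c]
        mc = Xc.mean(axis=1, keepdims=True)
        Sb += Xc.shape[1] * (mc - mu) @ (mc - mu).T       # between-class scatter
        Sw += (Xc - mc) @ (Xc - mc).T                     # within-class scatter
    vals, vecs = np.linalg.eig(np.linalg.pinv(Sw) @ Sb)
    order = np.argsort(-vals.real)[:dim]
    return vecs[:, order].real.T

# Toy usage: per-class exemplars, then one shared latent map.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(20, 120)), rng.integers(0, 3, size=120)
exemplars = {c: select_exemplars(X[:, y == c], k=5) for c in np.unique(y)}
W = fit_latent_map(X, y, dim=2)
Z = W @ X                                                 # latent features for a classifier
```

Note the simplification: the paper couples both steps in a single objective, whereas this sketch runs them independently and only conveys the overall shape of the approach.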

Cited by 3 publications (3 citation statements)
References 19 publications
“…In this section we introduce and analyze the loss we use for learning pixel embeddings. This problem is broadly related to supervised distance metric learning [86,48,50] and clustering [49] but adapted to the specifics of instance labeling where the embedding vectors are treated as labels for a variable number of objects in each image.…”
Section: Pairwise Loss For Pixel Embeddings
confidence: 99%
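The passage above relates the embedding loss to supervised distance metric learning. A generic member of that pairwise-loss family looks like the sketch below: same-instance pixel pairs are pulled together and different-instance pairs are pushed beyond a margin. This is a minimal illustrative formulation, not the exact loss of the citing paper; the function name, arguments, and margin value are assumptions.

```python
import numpy as np

def pairwise_embedding_loss(emb, inst, margin=1.0):
    """Generic contrastive pairwise loss over pixel embeddings.

    emb  : (n, d) array, one embedding vector per pixel
    inst : (n,)   array, instance id per pixel
    Same-instance pairs are pulled together; different-instance
    pairs are pushed at least `margin` apart.
    """
    diff = emb[:, None, :] - emb[None, :, :]      # all pairwise differences
    dist = np.linalg.norm(diff, axis=-1)          # (n, n) distance matrix
    same = inst[:, None] == inst[None, :]
    pull = np.where(same, dist ** 2, 0.0)         # attract same-instance pairs
    push = np.where(same, 0.0, np.maximum(0.0, margin - dist) ** 2)
    return (pull + push).sum() / same.size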
“…Their method, known as sparse embedding (SE), can be extended to a non-linear version via kernel tricks and also adopts a novel classification schema leading to great performance. However, it fails to consider the discrimination power among the separately learned classspecific dictionaries, such that it is not guaranteed to produce improved classification performance [24]. Ptucha et al [13] integrated manifold-based DR and sparse representation within a single framework and presented a variant of the K-SVD algorithm by exploiting a linear extension of graph embedding (LGE).…”
Section: Related Work
confidence: 99%
“…Their method, known as sparse embedding (SE) can be extended to a non-linear version via kernel tricks and also adopts a novel classification schema leading to great performance. Nevertheless, it fails to consider the discrimination power among the separately learned class-specific dictionaries, such that it is not guaranteed to produce improved classification performance [11]. Ptucha et al [4] integrated manifold-based DR and sparse representation within a single framework and presented a variant of the K-SVD algorithm by exploiting a linear extension of graph embedding (LGE).…”
Section: Introduction
confidence: 99%
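Both quoted passages discuss sparse embedding (SE) and class-specific dictionaries learned under sparse coding. As context, the sketch below shows the generic schema they refer to: one dictionary per class, trained by alternating sparse coding (here, ISTA) with a least-squares dictionary update, and residual-based classification. It is a simplified stand-in, not SE, its kernel extension, or the K-SVD variant of Ptucha et al.; all names and update rules are our assumptions.

```python
import numpy as np

def ista_codes(D, X, lam=0.1, iters=100):
    """Sparse codes A minimizing ||X - D A||_F^2 + lam*||A||_1 via ISTA."""
    L = np.linalg.norm(D, 2) ** 2                 # step size from the Lipschitz constant
    A = np.zeros((D.shape[1], X.shape[1]))
    for _ in range(iters):
        A -= D.T @ (D @ A - X) / L                # gradient step on the quadratic term
        A = np.sign(A) * np.maximum(np.abs(A) - lam / L, 0.0)   # soft threshold
    return A

def learn_class_dicts(X, y, atoms=10, rounds=5, seed=0):
    """One dictionary per class (assumes each class has >= `atoms` samples):
    alternate sparse coding with a least-squares dictionary update."""
    rng = np.random.default_rng(seed)
    dicts = {}
    for c in np.unique(y):
        Xc = X[:, y == c]
        D = Xc[:, rng.choice(Xc.shape[1], atoms, replace=False)].astype(float)
        for _ in range(rounds):
            D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
            A = ista_codes(D, Xc)
            D = Xc @ np.linalg.pinv(A)            # least-squares dictionary update
        dicts[c] = D / (np.linalg.norm(D, axis=0, keepdims=True) + 1e-12)
    return dicts

def classify(x, dicts, lam=0.1):
    """Assign x to the class whose dictionary reconstructs it best."""
    resid = {c: np.linalg.norm(x - D @ ista_codes(D, x[:, None], lam)[:, 0])
             for c, D in dicts.items()}
    return min(resid, key=resid.get)
```

The residual rule classifies a sample by whichever class dictionary reconstructs it with the smallest error, which is exactly the step the quoted critiques target: dictionaries trained independently per class need not be mutually discriminative.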