2015
DOI: 10.1109/tnnls.2014.2329240
Graph Embedded Nonparametric Mutual Information for Supervised Dimensionality Reduction

Abstract: In this paper, we propose a novel algorithm for dimensionality reduction that uses as a criterion the mutual information (MI) between the transformed data and their corresponding class labels. MI is a powerful criterion that can serve as a proxy for the Bayes error rate. Furthermore, recent quadratic nonparametric implementations of MI are computationally efficient and do not require any prior assumptions about the class densities. We show that the quadratic nonparametric MI can be formulated as …
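The abstract is truncated before the estimator is stated, but quadratic nonparametric MI is conventionally computed with Parzen windows over the projected samples. Below is a minimal sketch of a Torkkola-style quadratic MI estimate with Gaussian kernels; the function name, the bandwidth `sigma`, and the dropped kernel normalization constant are illustrative choices, not details taken from this paper.

```python
import numpy as np
from scipy.spatial.distance import cdist

def quadratic_mi(Y, labels, sigma=1.0):
    """Parzen-window estimate of the quadratic MI between projected
    samples Y (N x d) and their class labels."""
    N = Y.shape[0]
    # Gaussian affinities with covariance 2*sigma^2*I: the convolution
    # of two width-sigma kernels. A common normalization constant is
    # dropped; it only rescales the three terms below uniformly.
    G = np.exp(-cdist(Y, Y, "sqeuclidean") / (4.0 * sigma ** 2))
    classes, counts = np.unique(labels, return_counts=True)
    priors = counts / N

    v_in = 0.0   # pairwise interactions within each class
    v_btw = 0.0  # interactions between each class and all samples
    for c, p in zip(classes, priors):
        idx = labels == c
        v_in += G[np.ix_(idx, idx)].sum()
        v_btw += p * G[idx, :].sum()
    v_all = (priors ** 2).sum() * G.sum()  # all-pairs term
    return (v_in + v_all - 2.0 * v_btw) / N ** 2
```

All three terms are weighted sums over the same pairwise kernel matrix, which is what makes a graph embedding formulation of this criterion, as the paper's title suggests, a natural fit.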

Cited by 30 publications (11 citation statements)
References 29 publications
“…Here, we should note that several other methods which employ pairwise similarity/distance measures, e.g., [8], [13], [15]–[20], can be formulated using the Graph Embedding framework.…”
Section: A. Graph Embedding
Mentioning confidence: 99%
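For context on the framework the quoted statement refers to: in linear graph embedding (Yan et al., 2007), a method is specified by an intrinsic-graph Laplacian L and a penalty/constraint matrix B, and the projection is obtained from a generalized eigenproblem. The sketch below illustrates that formulation; the matrix layout (features in rows) and the ridge term are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import eigh

def graph_embedding_projection(X, L, B, dim):
    """Linear graph embedding: minimize tr(W^T X L X^T W) subject to
    W^T X B X^T W = I. X is D x N (one sample per column), L the
    intrinsic-graph Laplacian, B the penalty/constraint matrix."""
    S_l = X @ L @ X.T  # intrinsic scatter (D x D)
    S_b = X @ B @ X.T  # penalty scatter  (D x D)
    # Generalized eigenproblem S_l w = lambda * S_b w; the small ridge
    # keeps S_b positive definite for the solver.
    lam, W = eigh(S_l, S_b + 1e-8 * np.eye(X.shape[0]))
    # Eigenvectors with the smallest eigenvalues best preserve the
    # intrinsic graph while respecting the penalty constraint.
    return W[:, :dim]
```

Choosing L and B recovers many classical methods (e.g., LDA or LPP) as special cases, which is why pairwise similarity/distance methods can be cast in this framework.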
“…For instance, in the picture-sharing community Flickr, there are billions of images, and each can be annotated with textual labels selected from millions of candidates. In the community of neural networks and related learning systems, to handle these challenges, some works like [22]–[26] focus on feature dimension reduction or model simplification, while others like LI-MLC [21] focus on shrinking the label space. Here we follow the latter.…”
Section: Responses To Reviews
Mentioning confidence: 99%
“…The effect of the curse of dimensionality on proximity-based algorithms, such as k-means clustering or k-NN classification, has been extensively studied in the past (Aggarwal et al. 2001; Beyer et al. 1999). The observation that distances become increasingly meaningless as the dimensionality of the data increases has led researchers to develop more robust proximity measures, such as metric-learning approaches (Xing et al. 2003), as well as dimensionality reduction methods (Bouzas et al. 2015; Nikitidis et al. 2014; Passalis and Tefas 2017).…”
Section: Related Work
Mentioning confidence: 99%
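The distance-concentration effect this statement cites is easy to reproduce. The snippet below is a small illustrative experiment (the sample size and the Gaussian data are arbitrary choices): the relative contrast between the farthest and nearest neighbour of a query shrinks as the dimensionality grows.

```python
import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.standard_normal((500, d))  # 500 random points in R^d
    q = rng.standard_normal(d)         # a random query point
    dist = np.linalg.norm(X - q, axis=1)
    # Relative contrast (Dmax - Dmin) / Dmin drops toward 0 with d,
    # i.e., nearest and farthest neighbours become indistinguishable.
    print(f"d={d:5d}  contrast={(dist.max() - dist.min()) / dist.min():.3f}")
```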