2019
DOI: 10.1016/j.eswa.2018.12.051
A kernel semi-supervised distance metric learning with relative distance: Integration with a MOO approach

Cited by 13 publications (8 citation statements)
References 26 publications
“…The genetic algorithm developed is a modified version of the elitist non-dominated sorting genetic algorithm II (NSGA-II) (Deb et al 2002). NSGA-II is efficient and widely used by researchers across many disciplines (Alirezaei et al 2019; Avci and Selim 2017; Lahsasna and Seng 2017; Sanodiya et al 2019; Soui et al 2019; Wang et al 2018). It maintains diversity by seeking an even distribution of the non-dominated solutions in the objective space using the crowding distance (Deb et al 2002), a measure of the spatial density of solutions in the objective space based on the average distance between a solution and its nearest neighbours.…”
Section: Penalty-free Multi-objective Genetic Algorithm
confidence: 99%
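The crowding distance described in the statement above can be sketched as follows. This is a minimal illustration of the standard NSGA-II computation (Deb et al 2002), not code from the cited paper: for each objective, solutions are sorted, boundary solutions receive infinite distance, and interior solutions accumulate the normalised gap between their two sorted neighbours.

```python
import numpy as np

def crowding_distance(objectives):
    """Crowding distance for a set of non-dominated solutions.

    objectives: (n_solutions, n_objectives) array.
    Boundary solutions get infinite distance so they are always preferred,
    which preserves the extremes of the Pareto front.
    """
    n, m = objectives.shape
    distance = np.zeros(n)
    for k in range(m):
        order = np.argsort(objectives[:, k])   # sort along objective k
        f = objectives[order, k]
        distance[order[0]] = distance[order[-1]] = np.inf
        span = f[-1] - f[0]
        if span == 0:
            continue  # all solutions equal on this objective
        # Interior solutions: normalised gap between the two neighbours.
        distance[order[1:-1]] += (f[2:] - f[:-2]) / span
    return distance
```

Solutions with larger crowding distance lie in sparser regions of the objective space, so selecting by this measure spreads the retained front evenly.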
“…Machine learning algorithms can be used to further enhance a knowledge-intensive system [18]. This is because machine learning has been applied successfully to a multitude of tasks [19][20][21][22][23].…”
Section: Introduction
confidence: 99%
“…The distances between data points in ML (must-link) constraints should be minimal, while the distances between those in CL (cannot-link) constraints must be large. However, according to [25,26], relative-distance constraints, such as the inequality-constraint set (C_neq) and the equality-constraint set (C_eq), are particularly effective in expressing structure at a finer level of detail than ML and CL constraints. Therefore, in our proposed work, we consider relative-distance constraints as side information, in addition to the feature vector, for preserving the discriminative information.…”
Section: Introduction
confidence: 99%
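The relative-distance constraints mentioned above can be sketched under a learned Mahalanobis metric. This is a hypothetical illustration, not the cited paper's implementation: each triplet (i, j, k) encodes the inequality constraint that x_j should be closer to x_i than x_k is, and the function names (`mahalanobis_dist`, `relative_constraint_violations`) and the `margin` parameter are assumptions introduced here for clarity.

```python
import numpy as np

def mahalanobis_dist(x, y, M):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y)."""
    diff = x - y
    return float(diff @ M @ diff)

def relative_constraint_violations(X, M, triplets, margin=0.0):
    """Count violated relative-distance (inequality) constraints.

    Each triplet (i, j, k) encodes d_M(x_i, x_j) + margin <= d_M(x_i, x_k),
    i.e. x_j should be closer to x_i than x_k is under the metric M.
    A metric-learning objective would minimise this violation count
    (or a smooth surrogate of it) over positive semi-definite M.
    """
    violations = 0
    for i, j, k in triplets:
        if mahalanobis_dist(X[i], X[j], M) + margin > mahalanobis_dist(X[i], X[k], M):
            violations += 1
    return violations
```

Unlike ML/CL pairs, which impose absolute "small" or "large" distances, such triplets only order distances relative to one another, which is what lets them capture finer-grained structure.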