2016
DOI: 10.1016/j.eswa.2016.01.021
Unsupervised probabilistic feature selection using ant colony optimization

Cited by 81 publications (24 citation statements)
References 50 publications
“…While choosing features, it considers the classification performance. In contrast, UPFS [Dadaneh, Markid and Zakerolhosseini (2016)] and ANFS [Luo, Nie, Chang et al (2018)] belong to the unsupervised category: the labels of the dataset are masked while these two methods are used.…”
Section: Related Work
confidence: 99%
“…Hou et al. first combined embedding learning and feature ranking together, proposing joint embedding learning and sparse regression (JELSR) to perform feature selection [Hou, Nie, Li et al (2014)]. Dadaneh et al. proposed unsupervised probabilistic feature selection using ant colony optimization (UPFS) [Dadaneh, Markid and Zakerolhosseini (2016)]. They utilized inter-feature information capturing the similarity between features, which led the algorithm to reduced redundancy in the final set.…”
Section: UPFS
confidence: 99%
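The UPFS mechanism the statement above describes — using inter-feature similarity to steer an ant colony away from redundant features — can be sketched as follows. This is a minimal illustrative sketch, not the authors' published algorithm: the function name `aco_feature_select`, the variance-based relevance proxy, and the subset scoring rule are all assumptions made for illustration.

```python
import numpy as np

def aco_feature_select(X, n_select, n_ants=10, n_iters=20, rho=0.2, seed=0):
    """Illustrative ACO-style unsupervised feature selection (hypothetical
    sketch, not the exact UPFS algorithm).

    Pheromone tracks how often a feature appears in good subsets; the
    heuristic penalizes features highly similar to those already chosen,
    mirroring the use of inter-feature similarity to reduce redundancy.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Absolute pairwise correlation as the inter-feature similarity matrix.
    sim = np.abs(np.corrcoef(X, rowvar=False))
    relevance = X.var(axis=0)  # unsupervised relevance proxy: variance
    tau = np.ones(n_features)  # pheromone trail per feature

    best_subset, best_score = None, -np.inf
    for _ in range(n_iters):
        for _ in range(n_ants):
            chosen = []
            for _ in range(n_select):
                # Redundancy of each candidate w.r.t. already-chosen features.
                if chosen:
                    red = sim[:, chosen].mean(axis=1)
                else:
                    red = np.zeros(n_features)
                eta = relevance / (1.0 + red)   # heuristic desirability
                p = tau * eta
                p[chosen] = 0.0                 # no repeated features
                p = p / p.sum()
                chosen.append(int(rng.choice(n_features, p=p)))
            # Score a subset: total relevance minus intra-subset similarity.
            s = np.array(chosen)
            score = relevance[s].sum() - sim[np.ix_(s, s)].mean()
            if score > best_score:
                best_score, best_subset = score, sorted(chosen)
        # Evaporate pheromone, then reinforce the best subset found so far.
        tau *= (1.0 - rho)
        tau[best_subset] += 1.0
    return best_subset
```

The redundancy penalty in `eta` is what distinguishes this family of methods from ranking features independently: a feature strongly correlated with an already-selected one becomes less attractive to subsequent ants.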
“…A number of works can be traced in recent years addressing the problem of text classification through feature selection. Although feature selection algorithms such as chi-square, information gain, and mutual information (Yang and Pedersen, 1997) seem to be powerful techniques for text data, a number of novel feature selection algorithms have been proposed based on genetic algorithms (Bharti and Singh, 2016; Ghareb et al., 2016), ant colony optimization (Dadaneh et al., 2016; Moradi and Gholampour, 2016; Uysal, 2016; Meena et al., 2012), the Bayesian principle (Zhang et al., 2016; Feng et al., 2012; Fenga et al., 2015; Sarkar et al., 2014), clustering of features (Bharti and Singh, 2015), global information gain (Shang et al., 2013), adaptive keywords (Tasci and Gungor, 2013), and global ranking (Pinheiro et al., 2012; Pinheiro et al., 2015).…”
Section: Related Work
confidence: 99%