2018
DOI: 10.1016/j.neucom.2017.11.077

Feature selection in machine learning: A new perspective

Cited by 1,542 publications
(711 citation statements)
References 91 publications
“…Reliability of hashtags and pragmatic features had to be ensured, and the accuracy was about 72.36%. Cai et al. [19] developed a framework based on an ensemble text feature selection method for detecting sarcasm, irony, and satire in reviews and news articles. The logistic regression method achieved the highest accuracy values, 91.46% and 88.86%, in the detection of satiric and ironic reviews, respectively.…”
Section: Related Work
confidence: 99%
“…In general, grouping-based methods select features in three main stages [11,12], as described in Fig. 1 [13]: designing the structure of a feature space derived from the pairwise distances between features; grouping the features with a clustering method; and selecting a representative feature from each cluster to produce the final selection.…”
Section: Related Work
confidence: 99%
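The three stages quoted above can be sketched in a few lines of NumPy. The function name `cluster_select`, the farthest-point seeding, the k-means-style grouping, and the nearest-to-centroid representative rule are illustrative assumptions, not the specific method of [11-13]:

```python
import numpy as np

def cluster_select(X, n_clusters=2, n_iter=50):
    """Hypothetical sketch of grouping-based feature selection:
    (1) treat each feature as a point (its column vector), so distances
        between features define the feature space,
    (2) group the features with a plain k-means loop, and
    (3) keep one representative per cluster: the feature nearest its centroid.
    """
    F = X.T  # one row per feature
    # Deterministic farthest-point seeding for the cluster centres.
    centers = [F[0]]
    for _ in range(1, n_clusters):
        d = np.min([np.linalg.norm(F - c, axis=1) for c in centers], axis=0)
        centers.append(F[np.argmax(d)])
    centers = np.array(centers)
    labels = np.zeros(len(F), dtype=int)
    for _ in range(n_iter):
        d = np.linalg.norm(F[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in range(n_clusters):
            members = F[labels == k]
            if len(members):
                centers[k] = members.mean(axis=0)
    # Representative of each cluster = the feature closest to its centre.
    selected = []
    for k in range(n_clusters):
        idx = np.flatnonzero(labels == k)
        if idx.size:
            dist = np.linalg.norm(F[idx] - centers[k], axis=1)
            selected.append(int(idx[np.argmin(dist)]))
    return sorted(selected)
```

On a matrix whose columns 0/1 and 2/3 are near-duplicates, `cluster_select(X, n_clusters=2)` keeps one index from each redundant pair, which is the redundancy-removal effect the excerpt describes.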
“…where κ_z is defined in (7) and T_1 is defined in (4). κ* adopts an entropy estimator with an exponentially decaying bias to improve the performance in estimating κ* and in capturing the associations when the sample size is not sufficiently large. Furthermore, we expect that involving the sample coverage would separate and drop situation-2 and situation-3 features under small samples.…”
Section: Estimation
confidence: 99%
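The excerpt does not spell out the estimator behind κ*; as a hedged illustration of the general idea it invokes (correcting a small-sample entropy estimate with the sample coverage), here is a classical Chao-Shen-style coverage-adjusted estimator. The function name and the details are assumptions for illustration, not the authors' κ*:

```python
import numpy as np
from collections import Counter

def chao_shen_entropy(samples):
    """Coverage-adjusted plug-in entropy (Chao-Shen style), in nats.

    The naive plug-in estimator is biased low for small samples; scaling
    the empirical probabilities by the estimated sample coverage and
    applying a Horvitz-Thompson correction reduces that bias.
    """
    counts = np.array(list(Counter(samples).values()), dtype=float)
    n = counts.sum()
    f1 = np.count_nonzero(counts == 1)                   # singleton symbols
    C = 1.0 - f1 / n if f1 < n else 1.0 - (f1 - 1) / n   # estimated coverage
    p = C * counts / n                                   # adjusted probabilities
    return float(-np.sum(p * np.log(p) / (1.0 - (1.0 - p) ** n)))
```

With 400 draws from a uniform distribution over 4 symbols, the estimate is close to the true value log 4 ≈ 1.386 nats; with many samples of a single symbol it is 0.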