Selection of the most relevant terms based on a max-min ratio metric for text classification (2018)
DOI: 10.1016/j.eswa.2018.07.028

Cited by 43 publications (35 citation statements); References 30 publications.
“…Uysal proposes an improved global feature selection scheme (IGFSS) that marks features based on their ability to distinguish between classes and uses these marks when generating feature sets [29]. In 2018, Rehman proposed the MMR (Max-Min Ratio) algorithm, which performs feature selection based on the product of the max-min ratio of a term's true positives and false positives and their difference [30].…”
Section: A. Overview Of Feature Selection Methods For Text Classification
confidence: 99%
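The MMR criterion described in the statement above can be sketched as follows. This is a minimal interpretation of the cited description (the max-min ratio of a term's true-positive and false-positive rates multiplied by their difference), not the paper's exact formulation; the function names, the smoothing constant `eps`, and the example rates are all illustrative assumptions.

```python
# Sketch of an MMR-style (Max-Min Ratio) term-scoring rule:
# score = (max(tp_rate, fp_rate) / min(tp_rate, fp_rate)) * |tp_rate - fp_rate|
# Higher scores mark terms that occur often in one class and rarely in the other.

def mmr_score(tp_rate: float, fp_rate: float, eps: float = 1e-6) -> float:
    """Score a term for one class; higher means more discriminative."""
    hi, lo = max(tp_rate, fp_rate), min(tp_rate, fp_rate)
    # eps guards against division by zero when a term never appears
    # in the negative class.
    return (hi / (lo + eps)) * abs(tp_rate - fp_rate)

def select_terms(term_rates: dict, k: int) -> list:
    """Keep the k terms with the highest MMR score.

    term_rates maps term -> (tp_rate, fp_rate)."""
    ranked = sorted(term_rates,
                    key=lambda t: mmr_score(*term_rates[t]),
                    reverse=True)
    return ranked[:k]

rates = {"goal": (0.9, 0.1), "the": (0.8, 0.8), "striker": (0.6, 0.05)}
print(select_terms(rates, 2))  # → ['goal', 'striker']
```

Note how the common word "the" is discarded: its true-positive and false-positive rates are equal, so the difference term drives its score to zero.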
“…In the micro F-measure, precision and recall are first calculated globally, and then the F-measure is calculated from these new values [48]: $\mathrm{Precision}_{\mu} = \frac{\sum_{k} TP_k}{\sum_{k} (TP_k + FP_k)}$, $\mathrm{Recall}_{\mu} = \frac{\sum_{k} TP_k}{\sum_{k} (TP_k + FN_k)}$, where $\mu$ indicates micro-averaging.…”
Section: Empirical Evaluation
confidence: 99%
“…Equation (8) provides the expression for the micro F-measure [48]: $F_{\mu} = \frac{2 \times \mathrm{Precision}_{\mu} \times \mathrm{Recall}_{\mu}}{\mathrm{Precision}_{\mu} + \mathrm{Recall}_{\mu}}$. The results of these two methods may be quite different.…”
Section: Empirical Evaluation
confidence: 99%
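The micro-averaged F-measure described above pools the per-class counts before taking any ratio, which is what "calculated globally" means. A minimal sketch, with invented example counts:

```python
# Micro-averaged precision, recall, and F-measure: per-class TP/FP/FN
# counts are summed globally first, then the ratios are computed once.

def micro_f_measure(counts):
    """counts: list of (tp, fp, fn) tuples, one per class."""
    tp = sum(c[0] for c in counts)
    fp = sum(c[1] for c in counts)
    fn = sum(c[2] for c in counts)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Two classes with counts (tp, fp, fn):
print(round(micro_f_measure([(8, 2, 1), (5, 1, 4)]), 4))  # → 0.7647
```

Because every prediction is weighted equally, micro-averaging is dominated by the large classes, whereas macro-averaging (averaging per-class F-measures) weights each class equally; this is why the two methods "may be quite different" on imbalanced corpora.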
“…Broadly, feature selection approaches can be divided into three classes, namely wrapper, embedded, and filter [8], [9]. In recent years, researchers have proposed various filter-based feature selection methods to improve the performance of text document classification [34].…”
Section: Related Work
confidence: 99%
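The defining property of the filter class mentioned above is that terms are scored independently of any classifier. A minimal sketch of that pattern, where the scoring function (a simple document-frequency difference between two classes) is a placeholder standing in for filter metrics such as chi-square or information gain, and all names and counts are illustrative:

```python
# Filter-style feature selection: rank terms by a classifier-independent
# score and keep the top k. Here the score is the absolute difference in
# per-class document frequency, a stand-in for metrics like chi-square.

def filter_select(pos_df, neg_df, k):
    """pos_df / neg_df: term -> document frequency in each class."""
    terms = set(pos_df) | set(neg_df)
    scored = sorted(terms,
                    key=lambda t: abs(pos_df.get(t, 0) - neg_df.get(t, 0)),
                    reverse=True)
    return scored[:k]

print(filter_select({"win": 9, "match": 5}, {"vote": 8, "match": 4}, 2))
# → ['win', 'vote']
```

Unlike wrapper methods, no classifier is trained during selection, which is what makes filter methods cheap enough for the high-dimensional vocabularies typical of text classification.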