Global Filter–Wrapper method based on class-dependent correlation for text classification (2019)
DOI: 10.1016/j.engappai.2019.07.003

Cited by 14 publications (3 citation statements) · References 49 publications
“…Basiri et al [12] focused on bias detection, while Li et al [21] excelled in text classification. Kermani et al [20] conducted mood analysis, suggesting potential for advanced methodologies. Ceron Andrea et al [7] demonstrated supervised sentiment analysis effectiveness in electoral monitoring.…”
Section: 2 Literature Review (mentioning)
confidence: 99%
“…Moreover, feature selection is the selection of a subset of relevant features that are highly related to the criterion measure. Feature selection techniques can be classified as filter (Bommert et al., 2020), wrapper (Gokalp et al., 2020) and hybrid (Zarisfi Kermani et al., 2019). Filter methods rely on measures such as correlation (Karegowda et al., 2010), Chi-Square (Alshaer et al., 2021) and Information Gain.…”
Section: Text Feature Manipulation (mentioning)
confidence: 99%
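
To make the filter approach quoted above concrete, here is a minimal sketch that scores bag-of-words terms against the class labels with the Chi-Square statistic and keeps only the top-k terms, without training any classifier (the defining property of a filter method). The toy corpus, labels and the value of k are illustrative assumptions, not taken from the cited works.

from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_selection import SelectKBest, chi2

# Toy corpus and binary labels (1 = spam, 0 = ham), purely illustrative.
docs = [
    "cheap meds online now",
    "meeting moved to noon",
    "win a free prize now",
    "project status update attached",
]
labels = [1, 0, 1, 0]

# Build a bag-of-words term-document matrix.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(docs)

# Filter step: rank every term by its Chi-Square score against the labels
# and keep the k best terms, independently of any downstream classifier.
selector = SelectKBest(chi2, k=3)
X_reduced = selector.fit_transform(X, labels)

kept_terms = vectorizer.get_feature_names_out()[selector.get_support()]
print(kept_terms)

A wrapper method would instead search over candidate feature subsets by repeatedly training and evaluating a classifier, while a hybrid filter–wrapper method, such as the one indexed here, combines both stages.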
“…The mutual information feature selection measures the common information that is found between the terms and the labels (Kermani et al., 2019; Lim et al., 2017). The common information MI(t, c) between a class c and a term t is estimated from the level of co-occurrence between a feature f_j and a class c_i (Li et al., 2017; Lim et al., 2017).…”
Section: Normalized Pointwise Mutual Information Features Selection (mentioning)
confidence: 99%
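
As a worked illustration of the definition quoted above, the sketch below estimates normalized pointwise mutual information between a term t and a class c from document-level co-occurrence counts, using the standard normalization NPMI(t, c) = PMI(t, c) / (-log P(t, c)). The helper name npmi, the document-level counting convention and the example counts are assumptions for illustration, not the exact formulation of the cited papers.

import math

def npmi(n_tc: int, n_t: int, n_c: int, n_docs: int) -> float:
    """Normalized pointwise mutual information in [-1, 1].

    n_tc   -- documents of class c that contain term t (assumes n_tc < n_docs)
    n_t    -- documents that contain term t
    n_c    -- documents belonging to class c
    n_docs -- total number of documents
    """
    if n_tc == 0:
        return -1.0  # term and class never co-occur
    p_tc = n_tc / n_docs
    p_t = n_t / n_docs
    p_c = n_c / n_docs
    pmi = math.log(p_tc / (p_t * p_c))   # PMI(t, c) = log P(t, c) / (P(t) P(c))
    return pmi / (-math.log(p_tc))       # normalize by -log P(t, c)

# Example: a term appearing in 40 of the 50 documents of one class, and in 45
# documents overall, in a 200-document collection.
print(round(npmi(n_tc=40, n_t=45, n_c=50, n_docs=200), 3))

For these example counts the script prints roughly 0.79, indicating a strong positive association between the term and the class; values near 0 indicate independence and negative values indicate that the term tends to appear outside the class.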