2021
DOI: 10.1109/access.2021.3058428

Empirical Comparison of the Feature Evaluation Methods Based on Statistical Measures

Abstract: One of the most important classification problems is selecting proper features, i.e. features that describe the classified object as simply as possible. One of the biggest challenges in feature selection is evaluating the quality of the features. The literature offers a plethora of feature evaluation methods. This paper presents the results of a comparison between nine selected feature evaluation methods, both existing in the literature and newly defined. To make the comparison, fe…
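Because the abstract is truncated before it names the nine methods, the following is only a minimal sketch of what "feature evaluation based on statistical measures" typically looks like: each feature is scored independently with a statistical measure and features are ranked by that score. The ANOVA F-score and absolute Pearson correlation used here are illustrative assumptions, not the paper's actual methods.

```python
# Minimal sketch (not the paper's nine methods): score each feature with a
# statistical measure and rank features by that score.
import numpy as np
from scipy.stats import f_oneway, pearsonr

def anova_f_score(feature, labels):
    """F-statistic of the feature values grouped by class label."""
    groups = [feature[labels == c] for c in np.unique(labels)]
    return f_oneway(*groups).statistic

def abs_pearson(feature, labels):
    """Absolute Pearson correlation between a feature and the class label."""
    return abs(pearsonr(feature, labels)[0])

def rank_features(X, y, score_fn):
    """Return feature indices ordered best-to-worst, plus the raw scores."""
    scores = np.array([score_fn(X[:, j], y) for j in range(X.shape[1])])
    return np.argsort(scores)[::-1], scores

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = rng.integers(0, 2, size=200)
    X = rng.normal(size=(200, 5))
    X[:, 0] += 2.0 * y                      # make feature 0 informative
    order, scores = rank_features(X, y, anova_f_score)
    print("feature ranking:", order)
    print("F-scores:", np.round(scores, 2))
```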

Cited by 12 publications (4 citation statements)
References: 46 publications
“…This will be carried out in future studies. Frequency range maps [ 49 ] could also be obtained using independent feature evaluation coefficients of the classification algorithm [ 50 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…, $q_K^{*}) = \xi_{w}\big(W^{*}_{M,X}\big) \in \operatorname{int}(\boldsymbol{\Omega})$ and $W^{*} = \frac{1}{M}\sum_{j=1}^{M} W^{*}_{j}$. The probability distribution $S$ defined on $\mathbb{R}^{n}$ is the Kullback-Leibler projection of $\zeta^{\otimes n}$ on the class of all probability distributions on $\mathbb{R}^{n}$ which satisfy (235). We prove that $\liminf_{n\to\infty} S\big[\xi_{w}\big(V_{n,X}\big) \in \boldsymbol{\Omega}\big] > 0$.…”
Section: Appendix G, Further Details and Proofs for Subsection VI-A (mentioning)
confidence: 95%
“…There is a vast literature on very recent applications of the Bhattacharyya coefficient, for instance it appears exemplarily in Peng & Li [289] for object tracking from successive video frames, Ayed et al [26] for efficient graph cut algorithms, Patra et al [287] for collaborative filtering in sparse data, El Merabet et al [119] for region classification in intelligent transport systems in order to compensate for the lack of performance of Global Navigation Satellite Systems, Chiu et al [86] for the design of interactive mobile augmented reality systems, Noh et al [274] for dimension reduction in interacting fluid flow models, Bai et al [29] for material defect detection through ultrasonic array imaging, Dixit & Jain [115] for the design of recommender systems on highly sparse context aware datasets, Guan et al [143] for visible light positioning methods based on image sensors, Lin et al [220] for probabilistic representation of color image pixels, Chen et al [80] for distributed compressive video sensing, Jain et al [162] for the enhancement of multistage user-based collaborative filtering in recommendation systems, Pascuzzo et al [285] for brain-diffusion-MRI based early diagnosis of the sporadic Creutzfeldt-Jakob disease, Sun et al [351] for the design of automatic detection methods for multitemporal (e.g. landslide) point clouds, Valpione et al [377] for the investigation of T cell dynamics in immunotherapy, Wang et al [387] for the tracking and prediction of downbursts from meteorological data, Xu et al [403] for adaptive distributed compressed video sensing for coal mine monitoring, Zhao et al [424] for the shared sparse machine learning of the affective content of images, Chen et al [82] for image segmentation and domain partitioning, De Oliveira et al [105] for the prediction of cell-penetrating peptides, Eshaghi et al [122] for the identification of multiple sclerosis subtypes through machine learning of brain MRI scans, Feng et al [125] for improvements of MRI-based detection of epilepsy-causing cortical malformations, Hanli et al [153] for designing pilot protection schemes for transmission lines, Jiang et al [170] for flow-assisted visual tracking through event cameras, Lysiak & Szmajda …”
Section: Construction Principle for the Estimation of the Minimum D… (mentioning)
confidence: 99%
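For readers unfamiliar with the quantity named in the excerpt above, here is a minimal sketch of the textbook Bhattacharyya coefficient for two discrete distributions; it is not code from the cited survey or from the compared paper.

```python
# Minimal sketch of the textbook Bhattacharyya coefficient
# BC(p, q) = sum_i sqrt(p_i * q_i) for discrete distributions.
import numpy as np

def bhattacharyya_coefficient(p, q):
    """BC lies in [0, 1] and equals 1 exactly when p and q coincide."""
    p = np.asarray(p, dtype=float); p = p / p.sum()
    q = np.asarray(q, dtype=float); q = q / q.sum()
    return float(np.sum(np.sqrt(p * q)))

if __name__ == "__main__":
    print(bhattacharyya_coefficient([0.2, 0.3, 0.5], [0.25, 0.25, 0.5]))  # ~0.997
```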
“…The semination mainly depends on statistical pattern recognition. For this purpose, the leave-one-out resampling technique is used [14,15].…”
Section: Classification Neural Network (mentioning)
confidence: 99%
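The excerpt above mentions leave-one-out resampling; the sketch below shows the standard procedure, in which every sample is held out once, the classifier is fit on the remaining samples, and the held-out prediction is scored. The 1-nearest-neighbour classifier and the synthetic data are illustrative assumptions, not the citing paper's setup.

```python
# Minimal sketch of leave-one-out resampling: each sample is held out once,
# the classifier is trained on the rest, and the held-out prediction is scored.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(X, y):
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(X):
        clf = KNeighborsClassifier(n_neighbors=1).fit(X[train_idx], y[train_idx])
        correct += int(clf.predict(X[test_idx])[0] == y[test_idx][0])
    return correct / len(y)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, size=60)
    X = rng.normal(size=(60, 3)) + y[:, None]   # class-1 samples shifted by +1
    print("leave-one-out accuracy:", round(loo_accuracy(X, y), 3))
```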