2016
DOI: 10.1016/j.patcog.2016.06.028
Incremental relevance sample-feature machine: A fast marginal likelihood maximization approach for joint feature selection and classification

Cited by 21 publications (17 citation statements; citing works published 2017–2024). References 34 publications.
“…The presence of 19 noise features in the Waveform dataset increases the difficulty of the classification problem. An ideal feature selection method should select the relevant features (features 1–21) and simultaneously remove the irrelevant noise features (features 22–40). To evaluate the stability and robustness of the feature selection of PFCVM_LP in the presence of noise features, we choose wave 1 vs. wave 2 from Waveform as the experimental data, which includes 3,345 samples.…”
Section: Waveform Dataset: Stability and Robustness Against Noise (mentioning)
confidence: 99%
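The experiment in this excerpt can be mimicked with a short sketch: fit a sparse classifier on data whose first 21 features are informative and whose last 19 are pure noise, then check which features survive. PFCVM_LP itself is not a library routine, so an L1-penalized logistic regression stands in as the embedded selector; the synthetic data and the regularization strength C are illustrative assumptions, not the paper's setup.

    # Minimal sketch: does a sparse classifier keep features 1-21 and drop 22-40?
    # L1 logistic regression is a stand-in for PFCVM_LP (an assumption).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n = 3345                                  # sample count quoted for wave 1 vs. wave 2
    X_rel = rng.normal(size=(n, 21))          # stand-in for the 21 relevant features
    w_true = rng.normal(size=21)
    y = (X_rel @ w_true + 0.5 * rng.normal(size=n) > 0).astype(int)
    X = np.hstack([X_rel, rng.normal(size=(n, 19))])   # append 19 pure-noise features

    clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
    clf.fit(StandardScaler().fit_transform(X), y)

    selected = np.flatnonzero(np.abs(clf.coef_.ravel()) > 1e-6) + 1  # 1-based feature ids
    print("selected features:", selected)              # ideally a subset of 1..21
    print("noise features kept:", [f for f in selected if f > 21])   # ideally empty

Repeating this over resampled training sets and counting how often each feature is kept gives the kind of stability measurement the excerpt describes.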
“…Feature selection, as a dimensionality reduction technique, has been extensively studied in machine learning and data mining, and various feature selection methods have been proposed [5, 26, 29, 34, 36–38, 40, 47, 48, 50, 52–57]. Feature selection methods can be divided into three groups: filter methods [20, 37, 38, 40], wrapper methods [50], and embedded methods [5, 26, 29, 33, 34, 36]. Filter methods select the feature subset independently of classifier learning.…”
Section: Introduction (mentioning)
confidence: 99%
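A concrete instance of the filter approach mentioned above: score each feature without consulting any downstream classifier, then keep the top-k. The sketch below uses scikit-learn's mutual-information filter as one such scoring function; the dataset and k=10 are illustrative choices, not taken from the cited works.

    # Filter-style feature selection: rank features by mutual information
    # with the labels, independently of any classifier.
    from sklearn.datasets import load_breast_cancer
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    X, y = load_breast_cancer(return_X_y=True)
    filt = SelectKBest(score_func=mutual_info_classif, k=10)
    X_reduced = filt.fit_transform(X, y)               # shape: (n_samples, 10)
    print("kept feature indices:", filt.get_support(indices=True))

Wrapper methods would instead score candidate subsets by training a classifier on each, and embedded methods (like the L1 sketch earlier) fold selection into the classifier's own training objective.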
“…SVM is one of the most popular classifiers used for decoding analysis [7, 9, 15, 22, 23]. Despite its use of kernel methods to increase data separability, and its reasonable performance in neural decoding, it remains susceptible to the curse of dimensionality [24–27]. Researchers have applied both supervised and unsupervised dimensionality reduction methods to map neuroimaging data into a lower-dimensional feature space prior to the classification phase.…”
Section: Introduction (mentioning)
confidence: 99%
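The pipeline this excerpt describes, an unsupervised dimensionality-reduction step before a kernel SVM, can be sketched as follows. PCA and the component count of 20 are illustrative assumptions, and the digits dataset stands in for neuroimaging data.

    # Dimensionality reduction (PCA) before a kernel SVM, to mitigate the
    # curse of dimensionality the excerpt refers to.
    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    X, y = load_digits(return_X_y=True)       # 64-dimensional stand-in data
    pipe = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="rbf"))
    print("CV accuracy:", cross_val_score(pipe, X, y, cv=5).mean())

Keeping the reduction inside the pipeline ensures it is refit on each training fold, avoiding information leakage into the held-out data.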