A survey of feature selection methods for Gaussian mixture models and hidden Markov models (2017)
DOI: 10.1007/s10462-017-9581-3

Cited by 48 publications (23 citation statements)
References 106 publications
“…Feature selection is a concept in machine learning where a subset of the collected features is selected as input to an algorithm [1,14,17,43,50]. While feature selection's primary benefit is to eliminate noisy features and improve the performance of a model, it can also reduce the cost of a predictive model by limiting the data that must be collected, stored, and processed.…”
Section: Introduction
Confidence: 99%
“…However, many irrelevant and redundant features will reduce the accuracy of classification and increase the dimensional complexity. Therefore, feature selection is a very effective solution [15]. Feature selection and classification methods are widely used in high-dimensional and multiclass data sets [16,17], which can improve the accuracy of model prediction by removing irrelevant and redundant features.…”
Section: Introduction
Confidence: 99%
“…However, the noisy data and small sample size pose a great challenge for many modelling problems in bioinformatics, making it necessary to use adequate evaluation criteria or stable and robust FS models [7]. In general, FS techniques can be classified into three main categories: filters, wrappers, and embedded methods [7,8]. Filters take as input all the features and reduce them to a relevant subset independently of the model parameters.…”
Confidence: 99%
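The filter category quoted above scores each feature independently of any downstream model. A minimal sketch of that idea in NumPy, using absolute Pearson correlation with the target as a hypothetical scoring criterion (the function name and synthetic data are illustrative, not from the survey):

```python
import numpy as np

def filter_select(X, y, k):
    """Filter-style feature selection: score each feature
    independently of any downstream model, then keep the
    top-k scoring features."""
    # Absolute Pearson correlation of each column with the target.
    scores = np.array([abs(np.corrcoef(X[:, j], y)[0, 1])
                       for j in range(X.shape[1])])
    # Indices of the k best-scoring features, best first.
    return np.argsort(scores)[::-1][:k]

# Synthetic example: only feature 2 carries signal about y.
rng = np.random.default_rng(0)
y = rng.normal(size=200)
X = rng.normal(size=(200, 5))
X[:, 2] = y + 0.1 * rng.normal(size=200)   # relevant feature
selected = filter_select(X, y, k=1)
print(selected)  # feature 2 should rank first
```

Because the score ignores the model entirely, filters are cheap and reusable across classifiers, which is exactly the trade-off the quoted passage contrasts with wrapper and embedded methods.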
“…The most common parametric latent variable models are the Gaussian mixture model (GMM) and the hidden Markov model (HMM). The mixture model is often used to model multimodal data, while the HMM is often used for modeling time series data [8].…”
Confidence: 99%
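To make the GMM mentioned in the excerpt concrete, here is a minimal, illustrative EM fit of a two-component one-dimensional mixture in NumPy. This is not the survey's method; in practice a library implementation such as scikit-learn's `GaussianMixture` would be used, and the data here are synthetic:

```python
import numpy as np

def fit_gmm_1d(x, n_iter=100):
    """Minimal EM for a two-component 1-D Gaussian mixture
    (illustrative sketch only)."""
    # Crude initialization from the data quantiles.
    mu = np.array([np.quantile(x, 0.25), np.quantile(x, 0.75)])
    sigma = np.array([x.std(), x.std()])
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: responsibilities of each component for each point
        # (the 1/sqrt(2*pi) constant cancels in the normalization).
        dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sigma) ** 2) / sigma
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate weights, means, and standard deviations.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sigma = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return pi, mu, sigma

# Bimodal data: two well-separated Gaussian clusters.
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(-3, 1, 500), rng.normal(3, 1, 500)])
pi, mu, sigma = fit_gmm_1d(x)
print(np.sort(mu))  # component means should land near -3 and 3
```

The two recovered means capture the two modes of the data, which is the multimodal-modeling role the excerpt attributes to mixture models.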