2020
DOI: 10.1016/j.ress.2019.106706

A machine learning approach to circumventing the curse of dimensionality in discontinuous time series machine data

Cited by 52 publications (26 citation statements)
Citation types: 0 supporting, 26 mentioning, 0 contrasting
References 34 publications
“…A well-known problem with genetics data is that the number of attributes is much larger than the number of samples, introducing a challenge not only for ML algorithms but also for DL and for general statistical approaches [78]. To avoid the curse of dimensionality, feature selection and feature extraction are often used [79,80]. Some researchers [81][82][83][84] try using multiple datasets to provide a larger number of samples and so balance the number of features against the number of samples.…”
Section: A. The Curse of Dimensionality
Citation type: mentioning
Confidence: 99%
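The statement above names feature selection and feature extraction as the usual remedies when attributes far outnumber samples. As a concrete illustration, here is a minimal sketch of both with scikit-learn; the tools, data shapes, and parameters are illustrative assumptions, not taken from the cited paper.

```python
# Illustrative sketch only (not the cited paper's method): two standard
# remedies for p >> n data, using scikit-learn. Shapes are hypothetical.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5000))   # 100 samples, 5000 features (p >> n)
y = rng.integers(0, 2, size=100)   # binary class labels

# Feature selection: keep the 50 features most informative about y.
X_sel = SelectKBest(mutual_info_classif, k=50).fit_transform(X, y)

# Feature extraction: project onto the top 20 principal components.
X_ext = PCA(n_components=20).fit_transform(X)

print(X_sel.shape, X_ext.shape)  # (100, 50) (100, 20)
```

Selection keeps a subset of the original, interpretable features, while extraction builds new composite features; which is preferable depends on whether interpretability or variance retention matters more for the task.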
“…With this approach, the AL aims to learn as much information as possible while querying as little unlabelled data as possible, particularly in the presence of sparse data. There are many other approaches developed in the SL (or ML) literature that specifically address the data sparsity phenomenon (Aremu, Hyland-Wood, & McAree, 2020; Brajard, Carrassi, Bocquet, & Bertino, 2020; Liu & Bellet, 2019; Nasiri, Minaei, & Sharifi, 2017; Suresh, Saraswathi, & Sundararajan, 2010; Wu, Zhang, Liu, Cai, & Cai, 2018).…”
Section: MCDA and Statistical Learning
Citation type: mentioning
Confidence: 99%
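The active-learning loop described above, extracting as much information as possible from as few label queries as possible, is often realized as pool-based uncertainty sampling. A minimal sketch under that assumption follows; the model, query budget, and data are hypothetical, not drawn from the cited works.

```python
# Minimal pool-based active learning via uncertainty sampling (an assumed
# strategy for illustration, not the cited works' exact method).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_pool = rng.normal(size=(1000, 10))
y_pool = (X_pool[:, 0] + X_pool[:, 1] > 0).astype(int)  # oracle labels, hidden from the learner

# Seed set: a few labelled points from each class.
labeled = list(np.where(y_pool == 1)[0][:5]) + list(np.where(y_pool == 0)[0][:5])
model = LogisticRegression()

for _ in range(20):  # query budget: 20 labels
    model.fit(X_pool[labeled], y_pool[labeled])
    proba = model.predict_proba(X_pool)[:, 1]
    uncertainty = -np.abs(proba - 0.5)           # closest to 0.5 = least confident
    uncertainty[labeled] = -np.inf               # never re-query a labelled point
    labeled.append(int(np.argmax(uncertainty)))  # ask the oracle for this label

print(f"labelled {len(labeled)} of {len(X_pool)} pool points")
```

Each round spends one label on the pool point the current model is least sure about, which is how active learning concentrates the labelling budget where the data are sparse or ambiguous.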
“…Many algorithms, techniques, models, and tools have been proposed to deal with such data. Research challenges related to feature selection are summarized in Table 1 [33][34][35].…”
Section: Related Work
Citation type: mentioning
Confidence: 99%