2016
DOI: 10.7717/peerj.2721

A methodology for the design of experiments in computational intelligence with multiple regression models

Abstract: The design of experiments and the validation of the results achieved with them are vital in any research study. This paper focuses on the use of different Machine Learning approaches for regression tasks in the field of Computational Intelligence, and especially on the correct comparison of the results provided by different methods, as these techniques are complex systems that require further study to be fully understood. A methodology commonly accepted in Computational Intelligence is implemented…

Cited by 35 publications (38 citation statements)
References 38 publications
“…The design of our experiments was based on a novel methodology for the development of experimental designs in regression problems with multiple machine learning regression algorithms [32]. For each model described above, the optimal set of parameters was sought using hyperparameter optimisation.…”
Section: Computational Models
confidence: 99%
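For readers unfamiliar with the step quoted above, the sketch below illustrates per-model hyperparameter optimisation with a grid search. The models, parameter grids, and synthetic data are illustrative assumptions, not the configuration used in the citing study (which follows the methodology of reference [32]).

```python
# Minimal sketch of per-model hyperparameter optimisation.
# Models, grids, and data are placeholders, not the citing study's setup.
from sklearn.datasets import make_regression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=200, n_features=10, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# One search grid per candidate model.
candidates = {
    "lasso": (Lasso(max_iter=10000), {"alpha": [0.01, 0.1, 1.0]}),
    "rf": (RandomForestRegressor(random_state=0), {"n_estimators": [100, 300]}),
}

for name, (model, grid) in candidates.items():
    search = GridSearchCV(model, grid, cv=5, scoring="r2")
    search.fit(X_train, y_train)
    print(name, search.best_params_, round(search.score(X_test, y_test), 3))
```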
“…The Pool dataset was used with the RRegrs methodology 33,37, taking into account all the SWCNT nanodescriptors (V00–V17) to find the best regression model that predicts FEB values for the SWCNT–VDAC interactions. The initial dataset was normalized using R scripts.…”
Section: Model Construction
confidence: 99%
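The normalisation step mentioned in this quote was carried out with R scripts in the citing work; the Python sketch below shows an analogous zero-mean, unit-variance scaling of descriptor columns. The V00–V17 column names mirror the quoted descriptor labels, and the random data is purely a placeholder.

```python
# Illustrative analogue of the dataset normalisation step (the citing work
# used R scripts); the data here is synthetic, not the actual Pool dataset.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
pool = pd.DataFrame(rng.normal(size=(50, 18)),
                    columns=[f"V{i:02d}" for i in range(18)])  # hypothetical descriptors

# Zero-mean, unit-variance scaling of each descriptor column.
pool_scaled = pd.DataFrame(StandardScaler().fit_transform(pool),
                           columns=pool.columns)
print(pool_scaled.describe().loc[["mean", "std"]].round(2))
```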
“…The following regression methods were tested: Multiple Linear Regression (LM), Generalized Linear Model with Stepwise Feature Selection (GLM), Lasso regression (Lasso), Partial Least Squares Regression (PLS), Elastic Net regression (ENET), Neural Networks regression (NN), Random Forest (RF), and Random Forest Recursive Feature Elimination (RF-RFE) 33 . Standard RRegrs parameters and methodology were applied: the dataset was automatically divided by RRegrs using 10 splits (train and test subsets) 33 . The selection of the best regression models used R²test (regression coefficient for the test subset) and RMSEtest (root-mean-square error for the test subset) values Figshare 48 (Fig.…”
Section: Model Construction
confidence: 99%
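The protocol quoted above, several regression methods evaluated over 10 train/test splits and ranked by test-set R² and RMSE, can be sketched as follows. This is an illustrative scikit-learn analogue on synthetic data, not the RRegrs R package itself, and it omits some of the listed methods (GLM with stepwise selection, NN, RF-RFE) as well as RRegrs' exact defaults.

```python
# Sketch of the comparison protocol described above: several regressors,
# repeated train/test splits, selection by test-set R² and RMSE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.model_selection import ShuffleSplit
from sklearn.metrics import r2_score, mean_squared_error
from sklearn.linear_model import LinearRegression, Lasso, ElasticNet
from sklearn.cross_decomposition import PLSRegression
from sklearn.ensemble import RandomForestRegressor

X, y = make_regression(n_samples=300, n_features=18, noise=0.5, random_state=1)

models = {
    "LM": LinearRegression(),
    "Lasso": Lasso(alpha=0.1),
    "PLS": PLSRegression(n_components=5),
    "ENET": ElasticNet(alpha=0.1),
    "RF": RandomForestRegressor(n_estimators=200, random_state=1),
}

# 10 random train/test splits, mirroring the "10 splits" setting quoted above.
splitter = ShuffleSplit(n_splits=10, test_size=0.25, random_state=1)
for name, model in models.items():
    r2s, rmses = [], []
    for train_idx, test_idx in splitter.split(X):
        model.fit(X[train_idx], y[train_idx])
        pred = np.ravel(model.predict(X[test_idx]))
        r2s.append(r2_score(y[test_idx], pred))
        rmses.append(mean_squared_error(y[test_idx], pred) ** 0.5)
    print(f"{name}: R2_test={np.mean(r2s):.3f}  RMSE_test={np.mean(rmses):.3f}")
```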