2022
DOI: 10.1016/j.compbiolchem.2021.107619
Tuning hyperparameters of machine learning algorithms and deep neural networks using metaheuristics: A bioinformatics study on biomedical and biological cases

Cited by 122 publications (41 citation statements)
References 45 publications
“…In the follow-up work, we will consider constructing a corpus with topical information and dig deeper into the important emotional cause of topic events. Aiming at solving the problems of feature selection and hyperparameter optimization involved in emotion cause extraction, swarm intelligence approaches will be considered [24][25][26][27][28].…”
Section: Discussion (mentioning)
confidence: 99%
“…Moreover, as the number of hyperparameters and the range of values increase, it becomes quite difficult to manage [21]. To overcome the drawbacks of manual search, automatic search algorithms have been proposed, such as grid search [22][23][24][25]. Mainly, grid search trains machine learning models with different values of hyperparameters in the training set and compares the performance according to evaluation metrics.…”
Section: Hyperparameter Optimization for Machine Learning Models (mentioning)
confidence: 99%
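The grid-search procedure the excerpt above describes — train a model for every combination of hyperparameter values and compare the results on an evaluation metric — can be sketched in plain Python. This is a minimal illustration, not code from the cited works: the `train` and `evaluate` callables and the toy scoring function are hypothetical stand-ins.

```python
from itertools import product

def grid_search(train, evaluate, grid):
    """Exhaustively try every hyperparameter combination and keep the best.

    train(params)    -> fitted model for one hyperparameter setting
    evaluate(model)  -> validation score (higher is better)
    grid             -> dict mapping hyperparameter name to candidate values
    """
    best_score, best_params = float("-inf"), None
    keys = list(grid)
    # Cartesian product enumerates every combination in the grid.
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        model = train(params)
        score = evaluate(model)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Toy example: the "model" is just its params, and the (hypothetical)
# metric peaks at lr=0.1, depth=3, so grid search should recover that point.
best, score = grid_search(
    train=lambda p: p,
    evaluate=lambda m: -((m["lr"] - 0.1) ** 2) - ((m["depth"] - 3) ** 2),
    grid={"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]},
)
print(best)  # → {'lr': 0.1, 'depth': 3}
```

The exhaustive enumeration is also why grid search scales poorly: the number of trained models grows multiplicatively with each added hyperparameter, which is the drawback that motivates metaheuristic alternatives such as those in the cited paper.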
“…Nematzadeh, Kiani, Torkamanian-Afshar, and Aydin (2022) [25] proposed a method to tune hyperparameters of machine learning algorithms using the Grey Wolf Optimization (GWO) and Genetic Algorithm (GA) metaheuristics.…”
Section: Scheme Achievement Limitations (mentioning)
confidence: 99%
“…It is well-known that every ML model has to be tuned for the specific dataset [2]. This also follows from the no free lunch (NFL) theorem, which states that there is no universal approach, nor a single set of parameter values, that can render satisfying results for all practical problems.…”
Section: Introduction (mentioning)
confidence: 99%