2021 3rd International Conference on Control Systems, Mathematical Modeling, Automation and Energy Efficiency (SUMMA)
DOI: 10.1109/summa53307.2021.9632070
A Study of Biology-inspired Algorithms Applied to Long Short-Term Memory Network Training for Time Series Forecasting

Cited by 2 publications (3 citation statements)
References 22 publications
“…Demidova [12] discussed a biology-inspired approach to Long Short-Term Memory (LSTM) loss function optimization, comparing LSTM networks trained with backpropagation against networks trained with biology-inspired algorithms, including the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), and Fish School Search (FSS). She showed that FSS and its variant with exponential step decay (ETFSS) outperformed GA and PSO on most of the optimization problems considered.…”
Section: Related Work (mentioning)
Confidence: 99%
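The population-based training idea in this statement can be illustrated with a short sketch. Below is a minimal Particle Swarm Optimization loop minimizing a loss over a flat weight vector; the quadratic `lstm_loss` is a hypothetical stand-in for the forecasting error of an LSTM whose weights are packed into `theta`, and all names and hyperparameters here are illustrative assumptions, not the cited paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_loss(theta):
    # Stand-in for an LSTM forecasting loss (hypothetical). A real
    # implementation would unpack `theta` into the gate weight matrices,
    # run the network over the training series, and return e.g. the MSE.
    return float(np.sum((theta - 1.5) ** 2))

def pso(loss, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    x = rng.uniform(-2.0, 2.0, (n_particles, dim))  # particle positions
    v = np.zeros_like(x)                            # particle velocities
    pbest = x.copy()                                # per-particle best positions
    pbest_f = np.array([loss(p) for p in x])        # per-particle best losses
    gbest = pbest[np.argmin(pbest_f)].copy()        # global best position
    for _ in range(iters):
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        # Standard PSO update: inertia + cognitive + social terms.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        f = np.array([loss(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(loss(gbest))

best_theta, best_loss = pso(lstm_loss, dim=10)
print(f"best loss after PSO: {best_loss:.6f}")
```

GA, FSS, and ETFSS plug into the same scheme by swapping the velocity/position update for selection-crossover-mutation or fish movement operators; the loss function being minimized stays the same.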
“…Neural networks are typically trained iteratively, using either backpropagation with gradient-based methods [17] or population-based algorithms [18,19], allowing high accuracy in both classification and regression problems. However, the iterative training process is time-consuming.…”
Section: Introduction (mentioning)
Confidence: 99%
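For contrast with the population-based sketch above, a minimal gradient-based loop on the same stand-in loss shows what iterative gradient training reduces to once gradients are available; in a real network the gradient would come from backpropagation rather than a closed-form derivative.

```python
import numpy as np

def loss(theta):
    # Same hypothetical stand-in loss as in the PSO sketch above.
    return float(np.sum((theta - 1.5) ** 2))

def grad(theta):
    # Analytic gradient of the stand-in loss; in a real neural network
    # this is the quantity backpropagation computes.
    return 2.0 * (theta - 1.5)

theta = np.zeros(10)
lr = 0.1
for step in range(100):  # the iterative loop that makes training slow
    theta -= lr * grad(theta)
print(f"loss after gradient descent: {loss(theta):.6f}")
```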
“…In addition, we compare the time spent by each algorithm during the training and prediction phases. Fish School Search showed superior performance in neural network structure optimization [19,32], as did differential evolution [31,34], so we employed these algorithms to further improve the best ELM model obtained with the grid search. The experimental results show that ELM is the most computationally efficient of the considered algorithms, both in training and in making predictions.…”
Section: Introduction (mentioning)
Confidence: 99%
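The efficiency claim for the Extreme Learning Machine (ELM) follows from its training procedure: hidden-layer weights are drawn at random and only the output weights are fit, in closed form, by least squares, so there is no iterative loop at all. The sketch below shows a minimal ELM on synthetic data with a grid search over the hidden-layer size; the data, sizes, and grid are assumptions for illustration, and the FSS/DE refinement mentioned in the statement would further tune the best model found here.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit_predict(X_tr, y_tr, X_te, n_hidden):
    # ELM: random, untrained hidden layer; output weights solved in one
    # shot via the pseudoinverse (least squares), no iterative training.
    W = rng.normal(size=(X_tr.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    beta = np.linalg.pinv(np.tanh(X_tr @ W + b)) @ y_tr
    return np.tanh(X_te @ W + b) @ beta

# Synthetic regression data standing in for a time series task (assumed).
X = rng.normal(size=(200, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=200)
X_tr, y_tr, X_te, y_te = X[:150], y[:150], X[150:], y[150:]

# Grid search over the hidden-layer size, as in the cited setup.
results = [(np.mean((elm_fit_predict(X_tr, y_tr, X_te, h) - y_te) ** 2), h)
           for h in (10, 25, 50, 100)]
mse, h = min(results)
print(f"best grid-search ELM: {h} hidden units, test MSE {mse:.4f}")
```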