2019 16th International Conference on Electrical Engineering/Electronics, Computer, Telecommunications and Information Technology (ECTI-CON)
DOI: 10.1109/ecti-con47248.2019.8955176
Combinatorial Artificial Bee Colony Optimization with Reinforcement Learning Updating for Travelling Salesman Problem

Cited by 9 publications (2 citation statements)
References 7 publications
“…K-means clustering was either used to add diversity to the population or to detect the clusters in the optima [18], [31], [32]. Reinforcement learning has also been used in several studies to improve the searching process of the ABC [33], [34]. This method is mostly used in the onlooker and employed phases of the ABC algorithm.…”
Section: Studies Employing Learning Techniques On Abcmentioning
confidence: 99%
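The statement above notes that reinforcement learning is mostly applied in the onlooker and employed phases of ABC. As a minimal illustrative sketch (not the paper's exact method), the idea can be shown on a small TSP where a Q-learning-style rule adapts which neighbourhood move (a swap or a 2-opt reversal) the employed bees apply; the city coordinates, operator set, and learning parameters below are all assumptions chosen for the example.

```python
import random

# Hypothetical toy instance: six city coordinates.
CITIES = [(0, 0), (1, 5), (3, 2), (6, 6), (7, 1), (4, 9)]

def tour_length(tour):
    """Euclidean length of a closed tour over CITIES."""
    return sum(((CITIES[a][0] - CITIES[b][0]) ** 2 +
                (CITIES[a][1] - CITIES[b][1]) ** 2) ** 0.5
               for a, b in zip(tour, tour[1:] + tour[:1]))

def swap_move(tour):
    """Exchange two randomly chosen cities."""
    i, j = random.sample(range(len(tour)), 2)
    t = tour[:]
    t[i], t[j] = t[j], t[i]
    return t

def reverse_move(tour):
    """2-opt style reversal of a random segment."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

OPS = [swap_move, reverse_move]

def abc_rl(n_bees=8, iters=300, alpha=0.2, eps=0.1, seed=1):
    random.seed(seed)
    q = [0.0, 0.0]  # learned value of each neighbourhood operator
    foods = [random.sample(range(len(CITIES)), len(CITIES))
             for _ in range(n_bees)]
    best = min(foods, key=tour_length)
    for _ in range(iters):
        for k in range(n_bees):  # employed phase (onlooker phase simplified away)
            # epsilon-greedy operator selection driven by the learned values
            a = random.randrange(2) if random.random() < eps else q.index(max(q))
            cand = OPS[a](foods[k])
            # reward is the improvement in tour length (positive if shorter)
            reward = tour_length(foods[k]) - tour_length(cand)
            q[a] += alpha * (reward - q[a])  # Q-style incremental update
            if reward > 0:                   # greedy acceptance, as in basic ABC
                foods[k] = cand
        cur = min(foods, key=tour_length)
        if tour_length(cur) < tour_length(best):
            best = cur
    return best

best = abc_rl()
print(round(tour_length(best), 2))
```

The sketch keeps only the employed phase; a fuller ABC would also weight onlooker bees by food-source fitness and add scout-bee resets, with the same learned operator values reused in those phases.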
“…are optimized. In today's complex and varied production processes, dynamic events such as machine breakdowns or changes in the processing times and machine orders of jobs are inevitable and must be considered. Reinforcement learning has achieved remarkable results on various combinatorial optimization problems such as the traveling salesman problem (TSP) [12], the vehicle routing problem (VRP) [13] and the JSP [14]. Existing research has shown that using reinforcement learning to solve DJSP has at least four advantages: 1) RL doesn't require a complete mathematical model or large labeled datasets of the scheduling environment, but can learn from interaction with the environment and store the learned knowledge to achieve "offline learning and online application" [15].…”
Section: Introductionmentioning
confidence: 99%