2018
DOI: 10.1371/journal.pone.0200738
Reinforcement learning for solution updating in Artificial Bee Colony

Abstract: In the Artificial Bee Colony (ABC) algorithm, the employed-bee and onlooker-bee phases update candidate solutions by changing the value in a single dimension, dubbed the one-dimension update process. For problems in which the number of dimensions is very high, the one-dimension update process can cause solution quality and convergence speed to drop. This paper proposes a new algorithm, called R-ABC, which uses reinforcement learning for solution updating in the ABC algorithm. After updating a solution by an em…
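The one-dimension update that the abstract refers to can be sketched as follows. This is a minimal illustrative version of the standard ABC neighbour-search step, not the authors' exact implementation; the function and variable names are assumptions.

```python
import random

def one_dimension_update(solution, partner, lower, upper):
    """Standard ABC employed/onlooker-bee step (illustrative sketch):
    perturb exactly one randomly chosen dimension of `solution`
    using a randomly selected `partner` solution."""
    candidate = solution[:]
    j = random.randrange(len(solution))       # only one dimension is changed
    phi = random.uniform(-1.0, 1.0)           # random step factor in [-1, 1]
    candidate[j] = solution[j] + phi * (solution[j] - partner[j])
    # keep the perturbed value inside the search bounds
    candidate[j] = min(max(candidate[j], lower), upper)
    return candidate
```

Because only one of D dimensions changes per update, high-dimensional problems need many updates before every dimension is touched, which is the slowdown the paper targets.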

Cited by 13 publications (9 citation statements)
References 18 publications
“…Hence, there is no existing benchmark problem in the literature. To evaluate the performance of the proposed method, we compare the quality of the solutions obtained from the R-ABC algorithm [29]-[30] with those from the ABC algorithm [35] and two state-of-the-art ABC algorithms, aABC [17] and TMABC [18], as well as randomly generated solutions (referred to as Random).…”
Section: Experiments and Results
confidence: 99%
“…Specifically, universal sizes close to the dimensions that frequently give smaller trim loss have higher chances to be sampled through the reinforcement learning mechanism of the algorithm. This mechanism tends to perform well in high-dimensional problems [29]. Every newly generated solution must be checked for its feasibility.…”
Section: Proposed Algorithm
confidence: 99%
“…In the experiments, H-MOABC is used to solve the two scalable real-world RNP instances in the hierarchical decoupling manner. Fairee et al. (2018) proposed a new ABC algorithm that uses reinforcement learning in solution updating. Depending on whether the new solution produced by an employed bee is better or worse, a positive or negative reinforcement is applied to the solution to be used by the onlooker bees.…”
Section: Using Reinforcement in ABC
confidence: 99%
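The positive/negative reinforcement described in the citation statements above can be sketched as a simple probability update: a choice that led to an improvement is sampled more often afterwards, and one that led to a worse solution less often. The update rule and the `reward` step size here are illustrative assumptions, not the exact R-ABC formula.

```python
def reinforce(probs, j, improved, reward=0.1):
    """Illustrative reinforcement step: raise the sampling probability
    of choice j after an improvement (positive reinforcement), lower it
    otherwise (negative reinforcement), then renormalise so the
    probabilities still sum to 1."""
    probs = probs[:]
    if improved:
        probs[j] += reward                        # positive reinforcement
    else:
        probs[j] = max(probs[j] - reward, 1e-6)   # negative reinforcement
    total = sum(probs)
    return [p / total for p in probs]
```

Applied per solution (or, as in the trim-loss application quoted above, per candidate size), this biases later onlooker-bee sampling toward choices that have historically produced better solutions.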