Multi Agent Systems - Strategies and Applications 2020
DOI: 10.5772/intechopen.88484
A Q-Learning-Based Approach for Simple and Multi-Agent Systems

Abstract: This study proposes machine learning-based solutions for both single- and multi-agent systems on a 2-D simulation platform, namely Robocode. This dynamic and programmable platform allows agents to interact with the environment and with each other using a variety of battling strategies. Q-learning is one of the leading and most popular machine learning-based solutions for such a problem. However, especially for continuous spaces, the control problem becomes harder. Essentially, one of…
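The tabular setting that the abstract contrasts with continuous spaces can be sketched with a minimal Q-learning loop. This is a generic illustration, not the paper's implementation: the action names, learning rate, discount factor, and exploration rate below are illustrative assumptions.

```python
import random
from collections import defaultdict

# Q-table mapping (state, action) pairs to estimated returns.
Q = defaultdict(float)

ALPHA = 0.1    # learning rate (illustrative)
GAMMA = 0.99   # discount factor (illustrative)
EPSILON = 0.1  # exploration rate (illustrative)

def choose_action(state, actions):
    """Epsilon-greedy selection over a discrete action set."""
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(state, action, reward, next_state, actions):
    """One off-policy Q-learning step:
    Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
```

The table works only while the state set stays small and discrete; a continuous battlefield position, as the abstract notes, makes this enumeration infeasible.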

Cited by 2 publications
(5 citation statements)
References 15 publications
“…Strategies are the most critical guidelines for how to act in a particular situation. Effective strategies to be implemented in the arena are of great importance for international tournaments that are organized annually for the RoboCode war simulator [13]. It should be noted that, while a robot can perform well in a one-on-one battle, other adaptive strategies may be required for close combat.…”
Section: Battling (mentioning)
confidence: 99%
“…Q-learning is one of the leading off-policy RL algorithms, preferred in another recent study for its efficiency and popularity. However, in this study, an artificial neural network is designed to approximate Q-values instead of keeping them in a "Q-table," which is essentially impossible for such a continuous-space problem [8]. The neural network has a very modest structure, involving only two layers.…”
Section: Battling (mentioning)
confidence: 99%
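The idea quoted above — a small neural network standing in for the Q-table — can be sketched as a two-layer function approximator trained with TD(0) updates. This is a generic sketch, not the cited study's actual architecture: the state dimension, hidden width, action count, and learning rate are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM, HIDDEN, N_ACTIONS = 4, 16, 3  # illustrative sizes
GAMMA, LR = 0.99, 0.01                   # illustrative hyperparameters

# Two-layer network: state features -> tanh hidden layer -> one Q-value per action.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, HIDDEN))
W2 = rng.normal(scale=0.1, size=(HIDDEN, N_ACTIONS))

def q_values(state):
    """Forward pass: hidden activations and the Q-value estimate for every action."""
    h = np.tanh(state @ W1)
    return h, h @ W2

def td_step(state, action, reward, next_state, done):
    """One TD(0) update: nudge Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W1, W2
    h, q = q_values(state)
    _, q_next = q_values(next_state)
    target = reward if done else reward + GAMMA * np.max(q_next)
    err = q[action] - target  # TD error for the taken action only
    # Manual gradients of 0.5 * err^2 through the two layers.
    grad_W2 = np.outer(h, np.eye(N_ACTIONS)[action] * err)
    grad_h = W2[:, action] * err
    grad_W1 = np.outer(state, grad_h * (1.0 - h ** 2))
    W1 -= LR * grad_W1
    W2 -= LR * grad_W2
```

Because the network generalizes across nearby states, it sidesteps the table's need to enumerate a continuous state space, at the cost of the usual function-approximation stability caveats.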