2006
DOI: 10.1007/11780519_61

Performance Evaluation of an Evolutionary Method for RoboCup Soccer Strategies

Abstract: This paper proposes an evolutionary method for acquiring team strategies of RoboCup soccer agents. The action of an agent in a subspace is specified by a set of action rules. The antecedent part of action rules includes the position of the agent and the distance to the nearest opponent. The consequent part indicates the action that the agent takes when the antecedent part of the action rule is satisfied. The action of each agent is encoded into an integer string that represents the action rules. A chromosome i…
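The abstract only outlines the encoding, so the following is a minimal sketch of how such a rule table might be flattened into an integer string. The number of position subspaces, the distance bins, and the action set are all assumptions for illustration, not taken from the paper:

```python
# Minimal sketch of the rule-table encoding described in the abstract.
# The number of position subspaces, the distance bins, and the action
# set below are assumptions; the paper only states that each agent's
# action rules are encoded as an integer string.
from dataclasses import dataclass
from typing import List

N_POSITIONS = 48                                 # assumed grid of position subspaces
N_DIST_BINS = 2                                  # assumed: nearest opponent "near"/"far"
ACTIONS = ["dribble", "pass", "shoot", "clear"]  # assumed action set

@dataclass
class ActionRule:
    position: int  # antecedent: index of the agent's position subspace
    dist_bin: int  # antecedent: binned distance to the nearest opponent
    action: int    # consequent: index into ACTIONS

def encode(rules: List[ActionRule]) -> List[int]:
    """Flatten a rule table into an integer string: one gene per
    antecedent combination, holding the consequent action index."""
    genes = [0] * (N_POSITIONS * N_DIST_BINS)
    for rule in rules:
        genes[rule.position * N_DIST_BINS + rule.dist_bin] = rule.action
    return genes

def decode_action(genes: List[int], position: int, dist_bin: int) -> str:
    """Look up the action the integer string prescribes for a state."""
    return ACTIONS[genes[position * N_DIST_BINS + dist_bin]]
```

Because every antecedent combination maps to exactly one gene, the string length is fixed, which is what makes the plain string-crossover operators discussed in the excerpts below applicable to whole-team chromosomes.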

Cited by 14 publications (6 citation statements). References 4 publications.

“…First, there is team learning, where a single learning algorithm is used to optimize the behaviors of an entire team. Some of this work has involved evolutionary computation methods to develop joint team behaviors (such as [15,10]); reinforcement learning papers have instead usually developed a single homogeneous behavior (for example [7,22]). In contrast, the concurrent learning literature, where separate learners are applied per agent, has largely applied multiagent reinforcement learning (such as [12,21]).…”
Section: Machine Learning at RoboCup (mentioning)
confidence: 99%
“…Miconi [14] and Nakashima et al. [53] used 1-point and 2-point crossover to recombine the teams' genotypes. The n-point crossover might be considered a competitive way to implement RAS.…”
Section: Appendix A: 1-Point and 2-Point Crossover (mentioning)
confidence: 99%
“…The family of uniform crossovers was introduced by Syswerda [50] and analytically studied by Eshelman et al. [51] and De Jong and Spears [52], but not in the context of team evolution. In addition, Miconi [14] and Nakashima et al. [53] used a simple 1-point or 2-point crossover to recombine the teams' genotypes. This approach lends itself to the category of restricted crossovers.…”
Section: Introduction (mentioning)
confidence: 99%
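Since both excerpts above turn on 1-point and 2-point crossover over whole-team genotypes, a compact illustration may help. This is standard genetic-algorithm machinery, not code from either cited paper, and the assumption that a team genotype is the concatenation of per-agent integer strings carries over from the sketch above:

```python
# Illustrative n-point crossover on integer strings; n = 1 and n = 2
# give the 1-point and 2-point operators mentioned above. This is
# standard GA machinery, not code from either cited paper, and the
# whole-team genotype is assumed to be the concatenation of the
# per-agent integer strings.
import random
from typing import List, Tuple

def n_point_crossover(parent_a: List[int], parent_b: List[int],
                      n: int, rng: random.Random) -> Tuple[List[int], List[int]]:
    """Cut both parents at the same n random loci and swap
    alternating segments to produce two offspring."""
    assert len(parent_a) == len(parent_b) and 0 < n < len(parent_a)
    cuts = sorted(rng.sample(range(1, len(parent_a)), n))
    child_a: List[int] = []
    child_b: List[int] = []
    src_a, src_b = parent_a, parent_b
    prev = 0
    for cut in cuts + [len(parent_a)]:
        child_a.extend(src_a[prev:cut])
        child_b.extend(src_b[prev:cut])
        src_a, src_b = src_b, src_a  # alternate sources after each cut
        prev = cut
    return child_a, child_b

rng = random.Random(0)
team_a = [0] * 8  # e.g. two agents x four genes, all action 0
team_b = [1] * 8
print(n_point_crossover(team_a, team_b, 2, rng))  # 2-point crossover
```

Restricting the cut loci to agent boundaries would keep each agent's rule string intact, which is one plausible reading of the "restricted crossovers" category the last excerpt mentions.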
“…Research groups have applied a variety of different machine learning methods to many aspects of autonomous soccer-playing multirobot systems. Examples include evolutionary algorithms for gait optimization (Chernova and Veloso 2004; Röfer et al. 2004) or optimization of team tactics (Nakashima et al. 2005), unsupervised and supervised learning in computer vision tasks (Kaufmann et al. 2004; Li et al. 2003; Treptow and Zell 2004) and lower-level control tasks (Oubbati et al. 2005). RL methods have been used to learn cooperative behaviors in the simulation league (Ma et al. 2008) as well as for real robots (Asada et al. 1999) and to learn walking patterns on humanoid robots (Ogino et al. 2004).…”
Section: Related Work (mentioning)
confidence: 99%