2006
DOI: 10.1007/s10458-006-7449-z
Coach planning with opponent models for distributed execution

Cited by 7 publications (4 citation statements) · References 37 publications
“…In terms of guiding agents with high-level commands, our work is similar to the coaching system in RoboCup soccer simulation, where a coach with an overview of the whole game sends commands to the players of its own team. However, the coach is also a software agent (there is no human in the loop), and its decisions mainly concern recognizing and selecting an opponent model [12,17]. For planning with human guidance, MAPGEN [1], the planning system for Mars rover missions, allows operators to define constraints and rules for a plan, which automated planners then enforce to produce the plan.…”
Section: Related Work
confidence: 99%
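The opponent-model selection mentioned above can be pictured as the coach scoring each candidate model against observed opponent behaviour and keeping the best match. The sketch below is a hypothetical illustration; the model names, state labels, and scoring rule are assumptions, not the cited systems' actual method.

```python
# Hypothetical sketch: the coach selects the opponent model whose
# predictions best agree with the opponent actions observed so far.

def select_opponent_model(models, observations):
    """Return the name of the model scoring the most correct predictions."""
    def score(model):
        # Count observations where the model's predicted action matches
        # the action the opponent actually took in that state.
        return sum(1 for obs in observations
                   if model["predict"](obs["state"]) == obs["action"])
    return max(models, key=lambda name: score(models[name]))

# Two toy candidate models: one expects defensive play, one offensive.
models = {
    "defensive": {"predict": lambda s: "clear" if s == "own_half" else "hold"},
    "offensive": {"predict": lambda s: "pass" if s == "own_half" else "shoot"},
}
observed = [
    {"state": "own_half", "action": "pass"},
    {"state": "their_half", "action": "shoot"},
]
```

Here `select_opponent_model(models, observed)` picks `"offensive"`, since that model predicts both observed actions correctly; a real coach would use richer state features and a probabilistic match score.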
“…Visual information is sent six or seven times per second. During a standard 10-minute game, this gives 6,000 action chances and 4,000 receipts of visual information [12]. Based on this scene information, each agent selects an action in three steps: 1) evaluating the subjective state described in its world model; 2) making a decision using the strategy algorithm; and 3) generating and refining basic commands (such as dashing in a given direction with certain power, turning the body or neck by an angle, kicking the ball at an angle with specified power, or slide-tackling the ball).…”
Section: Introduction
confidence: 99%
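The timing arithmetic and the three-step decision loop above can be sketched as follows. This is a minimal illustration, assuming a 100 ms simulation cycle and a roughly 150 ms visual period; the function names and the toy strategy are hypothetical, not the RoboCup soccer server's actual API.

```python
# Timing of a standard 10-minute RoboCup simulation game (assumed constants).
SIM_CYCLE_MS = 100        # one action opportunity per 100 ms cycle
GAME_LENGTH_MS = 600_000  # 10 minutes
VISUAL_PERIOD_MS = 150    # visual info six to seven times per second

def action_chances() -> int:
    """Action opportunities per game: 600,000 / 100 = 6,000."""
    return GAME_LENGTH_MS // SIM_CYCLE_MS

def visual_receipts() -> int:
    """Visual messages per game: 600,000 / 150 = 4,000."""
    return GAME_LENGTH_MS // VISUAL_PERIOD_MS

def decide(world_state: dict) -> str:
    """One decision cycle following the three steps described above."""
    # 1) Evaluate the subjective state held in the world model.
    ball_dist = world_state.get("ball_distance", float("inf"))
    # 2) Make a decision with a (here deliberately trivial) strategy.
    intent = "intercept" if ball_dist < 10.0 else "reposition"
    # 3) Generate and refine a basic command (dash/turn/kick/tackle).
    if intent == "intercept":
        return f"(dash 100 {world_state.get('ball_angle', 0.0)})"
    return "(turn 30)"
```

The command strings only resemble the server's s-expression messages for flavour; the point is the evaluate-decide-emit structure repeated once per cycle, 6,000 times per game.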
“…General replanning algorithms are popular within case-based learning [8], [9], although they do not scale well in control problems with multiple actors and increasing numbers of environmental features. Replanning within spatial domains is frequently considered in the RoboCup domain [10], [11]. Plan adaptation can be seen as an 'off-line' version of cooperative control [12], where research efforts concentrate on maintaining a specific aspect of team behavior (like the team formation, see e.g.…”
Section: Introduction
confidence: 99%