This paper proposes an evolutionary method for acquiring team strategies of RoboCup soccer agents. The action of an agent in a subarea is specified by a set of action rules. The antecedent part of action rules includes the position of the agent and the relation to the nearest opponent. The consequent part indicates the action the agent has to take when the antecedent part of the action rule is satisfied. The action of each agent is encoded by an integer string that represents the action rules. A chromosome is the concatenated string of integer strings for all the agents. The main genetic operator in our evolutionary method is mutation, where the value of each bit is changed with a prespecified probability. Through computer simulations, we show the effectiveness of the proposed method as well as future research directions.
This paper proposes an evolutionary method for acquiring team strategies of RoboCup soccer agents. The action of an agent in a subspace is specified by a set of action rules. The antecedent part of action rules includes the position of the agent and the distance to the nearest opponent. The consequent part indicates the action that the agent takes when the antecedent part of the action rule is satisfied. The action of each agent is encoded into an integer string that represents the action rules. A chromosome is the concatenated string of integer strings for all agents. We employ an ES-type generation update scheme after producing new integer strings by using crossover and mutation. Through computer simulations, we show the effectiveness of the proposed method.
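The encoding and update scheme described above can be sketched in a few lines of Python. This is an illustrative sketch, not the paper's implementation: the constants (`N_AGENTS`, `RULES_PER_AGENT`, `N_ACTIONS`, `MUTATION_RATE`), the one-point crossover, and the (mu + lambda)-style survivor selection are assumed details chosen to match the description of integer-string chromosomes, crossover, mutation, and an ES-type generation update.

```python
import random

# Assumed, illustrative parameters (not taken from the paper).
N_AGENTS = 11          # players on a soccer team
RULES_PER_AGENT = 8    # action rules per agent
N_ACTIONS = 5          # possible consequent actions per rule
MUTATION_RATE = 0.05   # prespecified per-gene mutation probability

def random_chromosome():
    """Concatenated integer string of action rules for all agents."""
    return [random.randrange(N_ACTIONS)
            for _ in range(N_AGENTS * RULES_PER_AGENT)]

def mutate(chrom):
    """Replace each gene with a random action with probability MUTATION_RATE."""
    return [random.randrange(N_ACTIONS) if random.random() < MUTATION_RATE
            else g for g in chrom]

def crossover(p1, p2):
    """One-point crossover between two parent chromosomes."""
    cut = random.randrange(1, len(p1))
    return p1[:cut] + p2[cut:]

def next_generation(population, fitness, n_offspring):
    """ES-type (mu + lambda) update: offspring are produced by crossover
    and mutation, then parents and offspring compete for the mu slots."""
    mu = len(population)
    offspring = [mutate(crossover(*random.sample(population, 2)))
                 for _ in range(n_offspring)]
    pool = population + offspring
    pool.sort(key=fitness, reverse=True)
    return pool[:mu]
```

The fitness function is left abstract here; in the papers it would come from simulated matches, while any stand-in (e.g. a scoring heuristic) can be plugged in for testing.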
Abstract. In this paper, we propose a reinforcement learning method called fuzzy Q-learning, where an agent determines its action based on the inference result of a fuzzy rule-based system. We apply the proposed method to a soccer agent that intercepts a ball passed by another agent. In the proposed method, the state space is represented by internal information that the learning agent maintains, such as the relative velocity and the relative position of the ball with respect to the learning agent. We divide the state space into several fuzzy subspaces. A fuzzy if-then rule in the proposed method represents a fuzzy subspace in the state space. The consequent part of the fuzzy if-then rules is a motion vector that suggests the moving direction and velocity of the learning agent. A reward is given to the learning agent if the distance between the ball and the agent becomes smaller or if the agent catches up with the ball. It is expected that the learning agent finally acquires an efficient positioning skill.
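The fuzzy Q-learning scheme in this abstract can be sketched as follows. This is a minimal illustration under assumed details: triangular membership functions over each state dimension, a small discrete set of candidate motion vectors, and a Q-value per (fuzzy rule, motion vector) pair, with the TD error distributed across rules by their normalized firing strength. The constants (`ALPHA`, `GAMMA`, `CENTERS`, `ACTIONS`) are assumptions, not values from the paper.

```python
import itertools

# Assumed, illustrative parameters.
ALPHA, GAMMA = 0.1, 0.9
CENTERS = [-1.0, 0.0, 1.0]  # membership function centers per state dimension
ACTIONS = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)]  # motion vectors

def membership(x, c, width=1.0):
    """Triangular membership of x in the fuzzy set centered at c."""
    return max(0.0, 1.0 - abs(x - c) / width)

def firing_strengths(state):
    """Normalized firing strength of each fuzzy rule; each rule covers one
    fuzzy subspace (one combination of centers across the dimensions)."""
    strengths = []
    for combo in itertools.product(CENTERS, repeat=len(state)):
        w = 1.0
        for x, c in zip(state, combo):
            w *= membership(x, c)
        strengths.append(w)
    total = sum(strengths) or 1.0
    return [w / total for w in strengths]

def q_value(q_table, state, action_idx):
    """Fuzzy inference: firing-strength-weighted sum of rule Q-values."""
    return sum(w * q_table[r][action_idx]
               for r, w in enumerate(firing_strengths(state)))

def update(q_table, state, action_idx, reward, next_state):
    """Q-learning step: distribute the TD error over the fuzzy rules
    in proportion to how strongly each rule fired for this state."""
    best_next = max(q_value(q_table, next_state, a)
                    for a in range(len(ACTIONS)))
    td_error = reward + GAMMA * best_next - q_value(q_table, state, action_idx)
    for r, w in enumerate(firing_strengths(state)):
        q_table[r][action_idx] += ALPHA * w * td_error
```

In the interception task, the state would hold the ball's relative position and velocity (four dimensions rather than the two used in the test below), and the reward would follow the rule stated in the abstract: positive when the agent closes the distance to the ball or catches up with it.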