IEEE Congress on Evolutionary Computation 2010
DOI: 10.1109/cec.2010.5586191
Learning to overtake in TORCS using simple reinforcement learning

Cited by 71 publications (51 citation statements); references 17 publications. Citing publications span 2011–2022.
“…Nonetheless, it is worth mentioning that in [10] a simple TDL method was implemented to learn only the overtaking behavior. In that system, all other tasks were adopted from an off-the-shelf controller.…”
Section: Introduction (mentioning)
confidence: 99%
“…In Ref. 8, reinforcement learning techniques are used to achieve two complex racing behaviors: overtaking a fast opponent on a straight and overtaking on a tight bend.…”
Section: Introduction (mentioning)
confidence: 99%
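The “simple TDL method” the citing papers refer to is temporal difference learning. As a purely illustrative sketch of that family of techniques, and not the implementation from the cited paper, a tabular Q-learning update for a discretized overtaking decision might look like the following; the state abstraction, action set, reward, and hyperparameters are all assumptions made for illustration.

import random
from collections import defaultdict

# Illustrative only: tabular Q-learning (a TD method) for a discretized
# overtaking decision. States, actions, and rewards are hypothetical,
# not those of the cited TORCS controller.
ACTIONS = ["stay", "steer_left", "steer_right"]  # hypothetical lane actions
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1           # assumed hyperparameters

Q = defaultdict(float)  # maps (state, action) -> estimated value

def choose_action(state):
    """Epsilon-greedy selection over the discrete action set."""
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def td_update(state, action, reward, next_state):
    """One Q-learning step: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])

# Hypothetical usage with a coarse state of (relative position, gap size):
state = ("behind_opponent", "gap_small")
action = choose_action(state)
td_update(state, action, reward=-0.1, next_state=("alongside", "gap_small"))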
“…More importantly, it also includes several pre-programmed bot drivers, and additional bots can be created essentially by defining a custom C function (Wymann, 2006). This makes it a convenient platform for research on autonomous vehicle control that has recently become quite popular (Munoz et al., 2009; Cardamone et al., 2009b; 2009a; 2010; Loiacono et al., 2010).…”
Section: TORCS Environment (mentioning)
confidence: 99%
“…Of those, particularly relevant to this article are all studies on learning by imitation (Chambers and Michie, 1969; Urbancic and Bratko, 1994; Atkeson and Schaal, 1997; Bratko et al., 1998; D'Este et al., 2003; Sammut et al., 1992), especially those addressing the vehicle control task, either in the TORCS environment (Munoz et al., 2009; Cardamone et al., 2009b; 2009a; 2010) or another simulated or real environment (Pomerleau, 1988; Togelius et al., 1996; Baluja, 1996). Other approaches to this task that do not follow the imitation learning scenario, including those based on reinforcement learning (Krödel and Kuhnert, 2002; Forbes, 2002; Loiacono et al., 2010), even if they adopt substantially different assumptions about the available training information and use different learning algorithms, need to face the same crucial issues of state information and control action representation. In these respects, this work borrows substantially from many of those prior solutions.…”
Section: Related Work (mentioning)
confidence: 99%