“…Of those, particularly relevant to this article are the studies on learning by imitation (Chambers and Michie, 1969; Sammut et al., 1992; Urbancic and Bratko, 1994; Atkeson and Schaal, 1997; Bratko et al., 1998; D'Este et al., 2003), especially those addressing the vehicle control task, either in the TORCS environment (Munoz et al., 2009; Cardamone et al., 2009a; 2009b; 2010) or in other simulated or real environments (Pomerleau, 1988; Baluja, 1996; Togelius et al., 1996). Other approaches to this task that do not follow the imitation learning scenario, including those based on reinforcement learning (Krödel and Kuhnert, 2002; Forbes, 2002; Loiacono et al., 2010), even if they adopt substantially different assumptions about the available training information and use different learning algorithms, must face the same crucial issues of state information and control action representation. In these respects, this work borrows substantially from many of those prior solutions.…”