<p class="Abstract">The present study focused on vision-based
end-to-end reinforcement learning for vehicle control problems such as lane following and collision
avoidance. The controller policy presented in this paper is able to control
a small-scale robot to follow the right-hand lane of a real two-lane road,
although its training was carried out only in simulation. This model,
realised by a simple convolutional network, relies on images from a
forward-facing monocular camera and generates continuous actions that
directly control the vehicle. The policy was trained with proximal policy
optimization, and domain randomisation was applied to achieve the
generalisation capability required for real-world performance. A thorough analysis of
the trained policy was conducted by measuring multiple performance metrics
and comparing them with baselines that rely on other methods. To assess the
quality of the simulation-to-reality transfer learning process and the
performance of the controller in the real world, simple metrics were
measured on a real track and compared with results from a matching
simulation. Further analysis was carried out by visualising salient object
maps.</p>