Advanced Driver Assistance Systems (ADAS) are currently applied only to relatively simple scenarios, such as highway driving. If an emergency occurs, the driver must be ready to take control of the vehicle at any time, which introduces uncertainty about safety. Recently, several studies in the literature have addressed this issue with Artificial Intelligence (AI), working toward the goal we ultimately aim for, i.e., the autonomous vehicle. In this paper, we realize autonomous driving control via Deep Reinforcement Learning (DRL) based on the CARLA (Car Learning to Act) simulator. Specifically, we use an ordinary Red-Green-Blue (RGB) camera and a semantic segmentation camera to observe the view in front of the vehicle while driving. The captured information is then used as the input to different DRL models, namely DDPG (Deep Deterministic Policy Gradient) and RDPG (Recurrent Deterministic Policy Gradient), in order to evaluate their performance. Moreover, we design an appropriate reward mechanism for these DRL models to realize efficient autonomous driving control. According to the results, only the RDPG strategies can complete the driving mission in a scenario that does not appear in the training scenarios, and with the help of the semantic segmentation camera, the RDPG control strategy further improves its efficiency.
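To make the sensing setup concrete, the following is a minimal sketch of how the two cameras described above can be attached to an ego vehicle with the CARLA Python API. It is illustrative only: the vehicle model, image resolution, and camera mounting position are assumptions for the example and are not taken from the paper, and the callbacks simply stand in for whatever preprocessing feeds the DRL models.

```python
import carla

# Connect to a running CARLA server (host/port are the simulator defaults).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()
blueprints = world.get_blueprint_library()

# Spawn an ego vehicle (the specific model is an arbitrary choice here).
vehicle_bp = blueprints.filter('vehicle.tesla.model3')[0]
spawn_point = world.get_map().get_spawn_points()[0]
vehicle = world.spawn_actor(vehicle_bp, spawn_point)

# Configure a front-facing RGB camera and a semantic segmentation camera.
# Resolution and mounting position are illustrative, not taken from the paper.
rgb_bp = blueprints.find('sensor.camera.rgb')
seg_bp = blueprints.find('sensor.camera.semantic_segmentation')
for bp in (rgb_bp, seg_bp):
    bp.set_attribute('image_size_x', '320')
    bp.set_attribute('image_size_y', '240')

mount = carla.Transform(carla.Location(x=1.5, z=2.4))  # roughly above the hood
rgb_cam = world.spawn_actor(rgb_bp, mount, attach_to=vehicle)
seg_cam = world.spawn_actor(seg_bp, mount, attach_to=vehicle)

# Each camera delivers frames through a callback; a DRL agent would convert
# these frames into its observation (e.g., an image tensor fed to the policy).
rgb_cam.listen(lambda image: print('RGB frame', image.frame))
seg_cam.listen(lambda image: image.convert(carla.ColorConverter.CityScapesPalette))
```

The semantic segmentation camera returns per-pixel class labels (visualized here with the CityScapes palette), which gives the policy a more abstract view of the road scene than raw RGB pixels.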