One of the main paths towards reducing traffic accidents is increasing vehicle safety through driver assistance systems or even fully autonomous systems. In such systems, tasks such as obstacle detection and segmentation, especially Deep Learning-based ones, play a fundamental role in scene understanding for correct and safe navigation. Moreover, the wide variety of sensors available in modern vehicles provides a rich set of alternatives for improving the robustness of perception in challenging situations, such as navigation under adverse lighting and weather conditions. Despite the current attention given to the subject, the literature still lacks studies on radar-based and radar-camera fusion-based perception. Hence, this work presents a study of the current scenario of camera- and radar-based perception for ADAS and autonomous vehicles. Concepts and characteristics related to both sensors, as well as to their fusion, are presented. Additionally, we give an overview of Deep Learning-based detection and segmentation tasks, and of the main datasets, metrics, challenges, and open questions in vehicle perception.