Interest in fisheye cameras has recently risen in the autonomous-vehicle field, as they can reduce the complexity of perception systems while improving the handling of dangerous driving situations. However, the strong distortion inherent to these cameras makes the use of conventional computer vision algorithms difficult and has hindered the adoption of these devices. This paper presents a methodology that provides real-time semantic segmentation on fisheye cameras leveraging only synthetic images. Furthermore, we propose several Convolutional Neural Network (CNN) architectures based on the Efficient Residual Factorized Network (ERFNet) that demonstrate a notable ability to handle distortion, together with a new training strategy that improves segmentation at the image borders. Our proposals are compared to similar state-of-the-art works, showing outstanding performance, and are tested in an unknown real-world scenario using a fisheye camera integrated into an open-source autonomous electric car, showing a high domain-adaptation capability.
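The abstract describes training on synthetic images for fisheye segmentation. One common way to obtain such data is to warp ordinary (pinhole) renderings with a fisheye projection model. The sketch below applies the equidistant model (r = f·θ) to an image via inverse mapping; the focal length `f` and the model choice are illustrative assumptions, not details from the paper.

```python
import numpy as np

def fisheye_remap(img, f=200.0):
    """Warp a pinhole image to mimic equidistant fisheye distortion (r = f * theta).

    A minimal sketch of synthetic-distortion augmentation; the focal length
    `f` (pixels) and the equidistant model are assumptions for illustration.
    """
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    dx, dy = xs - cx, ys - cy
    r_fish = np.hypot(dx, dy)                    # radius in the target fisheye image
    theta = r_fish / f                           # incidence angle under the model
    r_pin = f * np.tan(np.clip(theta, 0, 1.4))   # corresponding pinhole radius
    scale = np.where(r_fish > 0, r_pin / np.maximum(r_fish, 1e-9), 1.0)
    src_x = np.clip(cx + dx * scale, 0, w - 1).astype(int)
    src_y = np.clip(cy + dy * scale, 0, h - 1).astype(int)
    return img[src_y, src_x]                     # nearest-neighbor inverse lookup
```

Because the same remap applies to an image and to its per-pixel ground-truth labels, a segmentation dataset rendered with a pinhole camera can be converted into a distorted one without re-annotation.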
Automated Driving Systems (ADSs) require robust and scalable control systems in order to achieve a safe, efficient and comfortable driving experience. Most global planners for autonomous vehicles provide as output a sequence of waypoints to be followed. This paper proposes a modular and scalable waypoint tracking controller for Robot Operating System (ROS)-based autonomous guided vehicles. The proposed controller performs a smooth interpolation of the waypoints and uses optimal control techniques to ensure robust trajectory tracking even at high speeds in urban environments (up to 50 km/h). Delays in the localization system and actuators are compensated in the control loop to stabilize the system. Forward velocity is adapted to the path characteristics using a velocity profiler. The controller has been implemented as a ROS package, providing the system with scalability and exportability so that it can be used with a wide variety of simulators and real vehicles. We show the results of this controller using the hyper-realistic CARLA simulator and carry out a comparison with other standard and state-of-the-art trajectory tracking controllers.
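The abstract mentions smooth interpolation of sparse planner waypoints before tracking. The exact scheme is not given there; as one illustrative choice, a Catmull-Rom spline passes through every waypoint while producing a dense, continuous path:

```python
import numpy as np

def catmull_rom(points, samples_per_seg=10):
    """Densify a sparse 2D waypoint list with Catmull-Rom spline interpolation.

    The paper's interpolation scheme is not specified in the abstract; a
    Catmull-Rom spline is one common option that passes through each waypoint.
    """
    pts = np.asarray(points, dtype=float)
    padded = np.vstack([pts[0], pts, pts[-1]])     # duplicate endpoints as tangents
    out = []
    for i in range(len(pts) - 1):
        p0, p1, p2, p3 = padded[i], padded[i + 1], padded[i + 2], padded[i + 3]
        for t in np.linspace(0, 1, samples_per_seg, endpoint=False):
            t2, t3 = t * t, t * t * t
            out.append(0.5 * ((2 * p1) + (-p0 + p2) * t
                              + (2 * p0 - 5 * p1 + 4 * p2 - p3) * t2
                              + (-p0 + 3 * p1 - 3 * p2 + p3) * t3))
    out.append(pts[-1])
    return np.array(out)
```

The densified path can then be fed to the tracking controller, which needs closely spaced reference points to compute lateral error and curvature at each control step.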
Autonomous Driving (AD) promises an efficient, comfortable and safe driving experience. Nevertheless, fatalities involving vehicles equipped with Automated Driving Systems (ADSs) are on the rise, especially those related to the perception module of the vehicle. This paper presents a real-time and power-efficient 3D Multi-Object Detection and Tracking (DAMOT) method proposed for Intelligent Vehicles (IV) applications, allowing the vehicle to track 360° surrounding objects as a preliminary stage for trajectory forecasting, in order to prevent collisions and prepare the ego-vehicle for future traffic scenarios. First, we present our DAMOT pipeline based on Fast Encoders for object detection and a combination of a 3D Kalman Filter and the Hungarian Algorithm, used for state estimation and data association respectively. We extend our previous work by elaborating a preliminary version of sensor-fusion-based DAMOT, merging features extracted by a Convolutional Neural Network (CNN) from camera information for long-term re-identification with obstacles retrieved by the 3D object detector. Both pipelines exploit lightweight Linux containers using the Docker approach to provide the system with isolation, flexibility and portability, and use the Robot Operating System (ROS) for standard communication in robotics. Second, both pipelines are validated using the recently proposed KITTI-3DMOT evaluation tool, which demonstrates the full strength of 3D localization and tracking of a MOT system. Finally, the most efficient architecture is validated in several traffic scenarios implemented in the CARLA (Car Learning to Act) open-source driving simulator and in our real-world autonomous electric car using the NVIDIA AGX Xavier, an AI embedded system for autonomous machines, studying its performance in a controlled but realistic urban environment with real-time execution.
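The tracking stage described above pairs a Kalman filter (state prediction) with the Hungarian algorithm (data association): predicted track centroids are matched to new detections by minimizing total 3D distance. The sketch below shows the association step only, using brute-force assignment for clarity; a real pipeline would use an O(n³) Hungarian solver such as `scipy.optimize.linear_sum_assignment`. The gating threshold is an assumed parameter.

```python
from itertools import permutations

import numpy as np

def associate(tracks, detections, gate=2.0):
    """Match predicted track centroids (N x 3) to detections (M x 3).

    Minimizes the total 3D centroid distance, the role the Hungarian
    algorithm plays in the pipeline; brute force is used here for clarity.
    Pairs farther apart than `gate` (an assumed threshold, meters) are rejected.
    """
    # Pairwise Euclidean distance matrix between tracks and detections.
    cost = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=2)
    n = min(len(tracks), len(detections))
    best, best_cost = None, np.inf
    for perm in permutations(range(len(detections)), n):
        c = sum(cost[i, j] for i, j in enumerate(perm))
        if c < best_cost:
            best, best_cost = perm, c
    # Gating: discard assignments whose distance exceeds the threshold.
    return [(i, j) for i, j in enumerate(best) if cost[i, j] <= gate]
```

Matched pairs update the corresponding Kalman filters; unmatched detections spawn new tracks, and tracks unmatched for several frames are deleted.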