Assisted and Automated Driving functions are increasingly deployed to improve safety and efficiency and to enhance the driver experience. However, key technical challenges remain to be overcome, such as the degradation of perception sensor data due to noise factors. The quality of the data generated by sensors directly impacts the planning and control of the vehicle, which in turn can affect vehicle safety. A framework to analyse noise factor effects on automotive environmental perception sensors has recently been proposed and applied to study the effects of noise factors on LiDAR sensors. This work builds on that framework and deploys it to camera sensors, focusing on the specific disturbed sensor outputs via a detailed analysis and classification of camera-specific noise sources. Moreover, the noise factor analysis has been used to identify two omnipresent and independent noise factors (i.e. obstruction and windshield distortion). These noise factors have been modelled to generate noisy camera data, and their impact on the perception step, based on deep neural networks, has been evaluated when the noise factors are applied independently and simultaneously. It is demonstrated that the performance degradation from the combination of noise factors is not simply the accumulation of the performance degradations from each single factor, which underlines the importance of including combined noise factor modelling and testing in performance analysis. Thus, through the findings presented here, the framework can enhance the use of simulation for the development and testing of automated vehicles through careful consideration of the noise factors affecting camera data.
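Purely as an illustration of the kind of noise injection the abstract describes, the following Python sketch applies two independent, hypothetical stand-ins for the two noise factors (a soft circular obstruction mask and a smooth windshield-style geometric warp) both separately and combined. The function names, parameters, and noise models are assumptions for illustration, not the paper's actual models.

```python
# Hypothetical sketch: two independent camera noise models applied
# individually and in combination. Not the paper's implementation.
import numpy as np
import cv2

def add_obstruction(img, centre, radius, alpha=0.8):
    """Darken a circular region with soft edges to mimic an obstruction."""
    mask = np.zeros(img.shape[:2], dtype=np.float32)
    cv2.circle(mask, centre, radius, 1.0, thickness=-1)
    mask = cv2.GaussianBlur(mask, (31, 31), 0)            # soften the edge
    return (img * (1.0 - alpha * mask[..., None])).astype(img.dtype)

def add_windshield_distortion(img, amplitude=4.0, wavelength=80.0):
    """Warp the image with a low-frequency sinusoidal displacement field."""
    h, w = img.shape[:2]
    xs, ys = np.meshgrid(np.arange(w, dtype=np.float32),
                         np.arange(h, dtype=np.float32))
    map_x = (xs + amplitude * np.sin(2 * np.pi * ys / wavelength)).astype(np.float32)
    map_y = (ys + amplitude * np.sin(2 * np.pi * xs / wavelength)).astype(np.float32)
    return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

img = np.full((480, 640, 3), 128, np.uint8)               # placeholder frame
only_obstruction = add_obstruction(img, centre=(320, 240), radius=60)
only_distortion = add_windshield_distortion(img)
combined = add_windshield_distortion(add_obstruction(img, (320, 240), 60))
```

Feeding the single-factor and combined images through the same detector is what would reveal the non-additive degradation the abstract reports.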
Multirotors are versatile systems that can be employed in several scenarios, where their increasing autonomy allows them to accomplish complex missions without human intervention. This paper presents a framework for autonomous missions with low-cost Unmanned Aerial Vehicles (UAVs) in Global Navigation Satellite System-denied (GNSS-denied) environments. It describes the hardware choices and the software modules for localization, perception, global planning, and local re-planning for obstacle avoidance, together with a state machine that dictates the overall mission sequence. The entire software stack has been designed on top of the Robot Operating System (ROS) middleware and has been extensively validated in both simulation and real-environment tests. The proposed solution runs in simulation and in real-world scenarios without modification, thanks to the small sim-to-real gap afforded by the PX4 software-in-the-loop functionality. The overall system competed successfully in the Leonardo Drone Contest, an annual competition between Italian universities focused on low-level, resilient, and fully autonomous tasks for vision-based UAVs, proving the robustness of the entire system design.
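To make the "state machine dictating the mission sequence" concrete, here is a minimal, ROS-independent Python sketch of such a machine. The state names, events, and transitions are illustrative guesses, not taken from the paper; in the actual stack this logic would live in a ROS node subscribing to the other modules.

```python
# Hypothetical mission state machine for an autonomous UAV mission.
# States, events, and transitions are assumptions for illustration only.
from enum import Enum, auto

class State(Enum):
    TAKEOFF = auto()
    EXPLORE = auto()
    GOTO_TARGET = auto()
    AVOID = auto()        # local re-planning around a detected obstacle
    LAND = auto()

def step(state, events):
    """Return the next state given the current state and sensed events."""
    if events.get("battery_low"):
        return State.LAND                       # safety overrides everything
    if state is State.TAKEOFF and events.get("altitude_reached"):
        return State.EXPLORE
    if state in (State.EXPLORE, State.GOTO_TARGET) and events.get("obstacle"):
        return State.AVOID
    if state is State.AVOID and not events.get("obstacle"):
        return State.GOTO_TARGET
    if state is State.EXPLORE and events.get("target_found"):
        return State.GOTO_TARGET
    if state is State.GOTO_TARGET and events.get("target_reached"):
        return State.LAND
    return state

state = State.TAKEOFF
for ev in [{"altitude_reached": True}, {"target_found": True},
           {"obstacle": True}, {}, {"target_reached": True}]:
    state = step(state, ev)
    print(state)
```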
Autonomous exploration of unknown environments usually focuses on maximizing the volumetric exploration of the surroundings. Object-oriented exploration, on the other hand, tries to minimize the time spent localizing some given objects of interest. While the former problem considers map growth in any free direction equally, the latter fosters exploration towards objects of interest that have been partially seen but not yet accurately identified. The proposed work presents a novel algorithm for object-oriented exploration of unknown environments with aerial robots, able to generate volumetric representations of the surroundings, semantically enhanced with labels for each object of interest. As a case study, this method is applied both in a simulated environment and in real-life experiments on a small aerial platform.
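One way to picture the bias toward partially seen objects is a viewpoint utility that adds an object-of-interest bonus to the usual volumetric gain. The Python sketch below is a hypothetical formulation for illustration; the weights, field names, and discount are assumptions and not the paper's actual scoring function.

```python
# Hypothetical viewpoint-scoring for object-oriented exploration:
# volumetric gain plus a bonus for nearby, not-yet-identified objects,
# discounted by travel cost. Illustrative only.
import math

def viewpoint_utility(info_gain, distance, objects, pos, w_obj=5.0, lam=0.25):
    bonus = sum(
        w_obj * (1.0 - obj["confidence"])           # uncertain objects attract
        / (1.0 + math.dist(pos, obj["position"]))   # nearer objects attract more
        for obj in objects
    )
    return (info_gain + bonus) * math.exp(-lam * distance)

candidates = [
    {"pos": (2.0, 0.0), "gain": 12.0, "dist": 2.0},   # near a seen object
    {"pos": (0.0, 5.0), "gain": 20.0, "dist": 5.0},   # more unknown volume
]
objects = [{"position": (2.5, 0.5), "confidence": 0.4}]
best = max(candidates,
           key=lambda c: viewpoint_utility(c["gain"], c["dist"], objects, c["pos"]))
print(best)  # the object bonus can outweigh raw volumetric gain
```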
Given the promising advances in the field of Assisted and Automated Driving, it is expected that the roads of the future will be populated by vehicles driven by computers, partially or fully replacing human drivers. In this scenario, the first stage of the perception-decision-actuation pipeline will likely rely on Deep Neural Networks to understand the scene around the vehicle. Typical tasks for Deep Neural Networks are object detection and instance segmentation, both of which rely on supervised learning and annotated datasets. The quality of the labelled dataset strongly affects the performance of the network, and this aspect is investigated in this paper. Annotation quality should be a primary concern in safety-critical tasks such as Assisted and Automated Driving. This work identifies and classifies some of the mistakes found in a popular automotive dataset. Moreover, experiments with a Deep Neural Network model were performed to test the effect of these mistakes on network predictions. A set of criteria was established to support the relabelling of the testing dataset, which was then compared to the original dataset.
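A toy Python example makes the effect of label mistakes on measured performance tangible: the same detections are scored against an "original" and a "relabelled" ground truth. The box values and the simple recall metric are illustrative placeholders, not the paper's data or evaluation protocol.

```python
# Toy illustration: identical detections, different ground truths.
# Boxes are (x1, y1, x2, y2); IoU threshold 0.5 is a conventional choice.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter) if inter else 0.0

def recall(detections, ground_truth, thr=0.5):
    hits = sum(any(iou(d, g) >= thr for d in detections) for g in ground_truth)
    return hits / len(ground_truth)

detections = [(10, 10, 50, 50), (60, 60, 100, 100)]
original   = [(12, 12, 48, 48), (0, 0, 8, 8)]        # second box: label mistake
relabelled = [(12, 12, 48, 48), (58, 62, 98, 102)]   # mistake corrected
print(recall(detections, original))    # 0.5 -- the detector looks worse
print(recall(detections, relabelled))  # 1.0 -- same detector, fixed labels
```

The point, consistent with the abstract, is that a network can be penalised for disagreeing with labels that are themselves wrong.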
Calibrating intrinsic and extrinsic camera parameters is a fundamental problem and a preliminary task for a wide variety of applications, from robotics and computer vision to surveillance and industrial tasks. With the advent of Internet of Things (IoT) technology and edge computing capabilities, tracking motion activities over large outdoor areas has become feasible. The proposed work presents a network of IoT camera nodes and a discussion of two possible approaches for automatically estimating their poses. One approach follows the Structure from Motion (SfM) pipeline, while the other is marker-based. Both methods exploit the correspondence of features detected by the cameras on synchronized frames. A preliminary indoor experiment was conducted to assess the performance of the two methods against ground truth measurements obtained from a commercial tracking system of millimetre-level precision. Outdoor experiments directly compared the two approaches on a larger setup. The results show that the proposed SfM pipeline estimates the camera poses more accurately. In addition, in the indoor setup, the same methods were used in a tracking application to demonstrate a practical use case.
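As a hedged sketch of the marker-based flavour of extrinsic estimation: given known 3D marker corner positions and their detected pixel locations, a Perspective-n-Point solve recovers a camera's pose. All numbers below are made-up placeholders, and this standard OpenCV recipe is shown for intuition only, not as the paper's pipeline.

```python
# Marker-based extrinsic estimation via PnP. Placeholder values throughout.
import numpy as np
import cv2

# Known marker corners in the world frame (metres) and their detections (px).
object_pts = np.array([[0, 0, 0], [0.2, 0, 0], [0.2, 0.2, 0], [0, 0.2, 0]],
                      dtype=np.float32)
image_pts = np.array([[310, 250], [420, 255], [415, 365], [305, 360]],
                     dtype=np.float32)

K = np.array([[800, 0, 320], [0, 800, 240], [0, 0, 1]], dtype=np.float32)
dist = np.zeros(5, dtype=np.float32)   # assume intrinsics already calibrated

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)             # rotation vector -> rotation matrix
cam_pos_world = -R.T @ tvec            # camera centre in the world frame
print(ok, cam_pos_world.ravel())
```

The SfM alternative would instead estimate relative poses from feature correspondences across the synchronized camera pairs, without requiring a physical marker.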