Perception of the surroundings is a crucial task in most autonomous driving scenarios. For this reason, most vehicles are equipped with a broad range of sensors, such as lidar, radar, cameras, and ultrasound, to sense the space around the car. On the other hand, planning algorithms need a simple and usable representation of the surrounding obstacles. One of the biggest drawbacks of such a wide range of sensors is the need to resolve conflicting information and identify false positives. In this paper we propose an effective framework for sensor fusion and occupancy grid creation, capable of producing a uniform representation of the environment around the vehicle and of handling conflicting information from different sensors.
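As an illustration of the kind of fusion such a framework performs, the following is a minimal sketch, in Python, of a Bayesian occupancy-grid update in log-odds form; the grid size, cell region, and per-sensor occupancy probabilities are illustrative assumptions, not the framework described above.

# Minimal sketch of Bayesian occupancy-grid fusion in log-odds form.
# All names and values (grid size, per-sensor probabilities) are
# illustrative assumptions, not the framework described in the paper.
import numpy as np

GRID_SHAPE = (100, 100)          # cells
log_odds = np.zeros(GRID_SHAPE)  # 0 = unknown (p = 0.5)

def to_log_odds(p):
    """Convert an occupancy probability to log-odds."""
    return np.log(p / (1.0 - p))

def fuse(log_odds, p_occupied, cells):
    """Fold one sensor's per-cell occupancy estimate into the grid.

    p_occupied : probability the sensor's inverse model assigns to 'occupied'
    cells      : boolean mask of the cells the measurement touches
    """
    log_odds[cells] += to_log_odds(p_occupied)
    return log_odds

# Two sensors disagree about the same region: lidar reports 'occupied',
# radar weakly reports 'free'. Additive log-odds updates resolve the
# conflict by weighting each report instead of letting either overwrite
# the grid.
region = np.zeros(GRID_SHAPE, dtype=bool)
region[40:45, 40:45] = True
log_odds = fuse(log_odds, 0.9, region)   # lidar: strong 'occupied'
log_odds = fuse(log_odds, 0.3, region)   # radar: weak 'free'

occupancy = 1.0 / (1.0 + np.exp(-log_odds))  # back to probabilities

Because the update is additive in log-odds, each new measurement simply shifts the evidence for a cell up or down, which is one standard way to reconcile contradictory sensor reports in a single grid.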
In this paper we present a simple stand-alone system that performs the autonomous acquisition of multiple pictures all around large objects, i.e., objects too big to be photographed from every side with a hand-held camera. In this approach, a camera carried by a drone (an off-the-shelf quadcopter) acquires an image sequence that constitutes a valid dataset for the 3D reconstruction of the captured scene. Both the drone flight and the choice of the viewpoints from which pictures are shot are automatically controlled by the developed application, which runs on a tablet wirelessly connected to the drone and controls the entire process in real time. The system and the acquisition workflow have been conceived to keep user intervention minimal and as simple as possible, requiring no particular skill from the user. The system has been experimentally tested on several subjects of different shapes and sizes, showing the ability to follow the requested trajectory with good robustness against flight perturbations. The collected images are fed to a scene reconstruction software package, which generates a 3D model of the acquired subject. The quality of the obtained reconstructions, in terms of accuracy and richness of detail, has proved the reliability and efficacy of the proposed system.
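As a rough illustration of how viewpoints all around a subject might be planned, the sketch below generates evenly spaced camera waypoints on a circle centred on the object, each yawed to face it; the radius, altitude, and number of views are assumptions, and the actual planner used by the application is not specified here.

# Illustrative sketch: camera viewpoints on a circle around a subject,
# each waypoint yawed to face the centre. Radius, altitude and view
# count are assumptions; this is not the paper's planner.
import math

def orbit_waypoints(center_xy, radius_m, altitude_m, n_views):
    """Return (x, y, z, yaw) tuples evenly spaced around the subject."""
    waypoints = []
    for k in range(n_views):
        theta = 2.0 * math.pi * k / n_views
        x = center_xy[0] + radius_m * math.cos(theta)
        y = center_xy[1] + radius_m * math.sin(theta)
        yaw = theta + math.pi  # face inward, toward the subject
        waypoints.append((x, y, altitude_m, yaw))
    return waypoints

# e.g. 24 shots on a 5 m circle at 3 m altitude, giving the image
# overlap typically needed by structure-from-motion reconstruction
for wp in orbit_waypoints((0.0, 0.0), 5.0, 3.0, 24):
    print("move to x=%.2f y=%.2f z=%.2f yaw=%.2f rad" % wp)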
Vehicle state estimation is a prerequisite for ADAS (Advanced Driver-Assistance Systems) and, more generally, for autonomous driving. In particular, algorithms designed for path or trajectory planning require continuous knowledge of quantities such as the lateral velocity and heading angle of the vehicle, together with its lateral position with respect to the road boundaries. Vehicle state estimation can be carried out by means of extended and unscented Kalman filters (EKF and UKF, respectively), which are well treated in the literature. Referring to an experimental case study, the presented work deals with the design and real-time implementation of two different adaptive Kalman filters for vehicle sideslip and positioning estimation. Accuracy has been assessed by means of an automotive optical sensor.
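For orientation, the following minimal sketch shows one predict/update cycle of a Kalman filter with a simple innovation-based adaptation of the measurement noise; the two-state model (sideslip angle and yaw rate), the matrices, and the adaptation rule are illustrative assumptions and do not reproduce the filters designed in the paper.

# Minimal sketch of one Kalman predict/update cycle with a crude
# innovation-based adaptation of the measurement noise R. The two-state
# model (sideslip angle, yaw rate) and all matrices are illustrative
# assumptions, not the adaptive filters designed in the paper.
import numpy as np

dt = 0.01                       # sample time [s]
F = np.array([[1.0, dt],        # transition for x = [sideslip, yaw rate]
              [0.0, 1.0]])
H = np.array([[0.0, 1.0]])      # only yaw rate is measured (e.g. gyro)
Q = np.eye(2) * 1e-4            # process noise covariance
R = np.array([[1e-2]])          # measurement noise, adapted online

x = np.zeros((2, 1))            # state estimate
P = np.eye(2)                   # estimate covariance

def step(x, P, R, z, alpha=0.05):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Innovation; smooth its outer product into R as a simple
    # (heuristic) adaptation of the measurement noise
    y = z - H @ x
    R = (1 - alpha) * R + alpha * (y @ y.T)
    # Update
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P, R

z = np.array([[0.12]])          # one yaw-rate measurement [rad/s]
x, P, R = step(x, P, R, z)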