2019 IEEE Intelligent Vehicles Symposium (IV)
DOI: 10.1109/ivs.2019.8813899
SafeVRU: A Research Platform for the Interaction of Self-Driving Vehicles with Vulnerable Road Users

Abstract: This paper presents our research platform SafeVRU for the interaction of self-driving vehicles with Vulnerable Road Users (VRUs, i.e., pedestrians and cyclists). The paper details the design (implemented with a modular structure within ROS) of the full stack of vehicle localization, environment perception, motion planning, and control, with emphasis on the environment perception and planning modules. The environment perception detects the VRUs using a stereo camera and predicts their paths with Dynamic Bayesian…
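The abstract describes path prediction for detected VRUs. As a minimal illustrative sketch only (all names are ours, not from the paper), the snippet below propagates a constant-velocity motion model, the simplest linear special case of the kind of dynamic models used for such prediction; the paper's actual Dynamic Bayesian models are more elaborate.

```python
import numpy as np

def predict_cv(state, dt, horizon):
    """Propagate a constant-velocity state [x, y, vx, vy] over a horizon.

    Hypothetical sketch: only the simplest linear motion model, not the
    switching dynamic Bayesian models actually used in SafeVRU.
    """
    # State-transition matrix of the constant-velocity model.
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    s = np.asarray(state, dtype=float)
    path = []
    for _ in range(horizon):
        s = F @ s                 # one prediction step
        path.append(s[:2].copy()) # keep the predicted (x, y) position
    return np.array(path)

# A pedestrian at (0, 0) walking 1.4 m/s along x, predicted 1 s ahead at 10 Hz:
path = predict_cv([0.0, 0.0, 1.4, 0.0], dt=0.1, horizon=10)
print(path[-1])  # final predicted position, approximately [1.4, 0.0]
```

In a full tracker this prediction step would be paired with a measurement update (e.g. a Kalman filter) fed by the stereo-camera detections.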

Cited by 36 publications (28 citation statements); references 21 publications (30 reference statements).
“…Our real-world dataset contains ~1 hour of driving in an urban environment with our demonstrator vehicle [26]. We recorded both the target-level and low-level output of our radar, a Continental 400 series mounted behind the front bumper.…”
Section: Datasetmentioning
confidence: 99%
“…The accompanying video shows the results where the autonomous vehicle successfully avoids the moving obstacles, while staying within the road limits. We refer the reader to [32] for more details and results.…”
Section: E Applicability To An Autonomous Carmentioning
confidence: 99%
“…We can say that throughout this period the detection problem has been addressed with convolutional neural networks, concatenating the output data with the inputs from the original image by parsing the locations of the features. The detection frames of those faces therefore depend on the received coordinates and outline the last prediction frame for all frames in the created grid, so the coordinate values of the anchor box are randomly reset and the network is fully examined to obtain a precise analysis of the relation between detection distance and predictability area, as opposed to the standard defined by Equation (6), where IoU is characterized by the ratio between the intersection of the predicted and real areas and their union.…”
Section: Description Emotion Drivers Setup and Practical Scenariosmentioning
confidence: 99%
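The passage above contrasts its detection criterion with the standard intersection-over-union (IoU) metric it cites as Equation (6). For reference, a minimal sketch of the standard IoU for axis-aligned boxes (function and box names are ours, not from the cited paper):

```python
def iou(box_a, box_b):
    """Standard intersection-over-union for axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Two 2x2 boxes overlapping in a 1x1 square: intersection 1, union 4 + 4 - 1 = 7.
print(iou((0, 0, 2, 2), (1, 1, 3, 3)))  # -> 0.14285714285714285 (1/7)
```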
“…The literature includes studies and analyses in this field since the 1970s, especially the Histogram of Oriented Gradients (HOG), an efficient way to extract important features from an image and obtain a model that can be classified and later recognized as a series of objects; here we can refer to some of the works of Badler and Smoliar [4]. Later approaches and future directions appeared in Badler's article [5] and those of Gavrila [6,7]. In addition, analyses and practical demonstrations by Mikolajczyk et al [8] later adapted other elements of detection from Dalal and Triggs [9,10].…”
Section: Introductionmentioning
confidence: 99%
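The passage above names HOG as a feature extractor. The core idea, histogramming gradient orientations weighted by gradient magnitude, can be sketched as follows (a simplified illustration of one cell only; a full HOG descriptor in the sense of Dalal and Triggs also tiles the image into cells and block-normalizes, which this sketch omits):

```python
import numpy as np

def orientation_histogram(patch, n_bins=9):
    """Magnitude-weighted gradient-orientation histogram over one image patch.

    Simplified sketch of the per-cell step of HOG, using unsigned
    orientations in [0, 180) degrees; no tiling or block normalization.
    """
    patch = patch.astype(float)
    gy, gx = np.gradient(patch)                    # gradients along rows, cols
    mag = np.hypot(gx, gy)                         # gradient magnitude
    ang = np.degrees(np.arctan2(gy, gx)) % 180.0   # unsigned orientation
    bins = (ang / (180.0 / n_bins)).astype(int) % n_bins
    hist = np.zeros(n_bins)
    np.add.at(hist, bins.ravel(), mag.ravel())     # magnitude-weighted voting
    return hist

# A vertical step edge produces purely horizontal gradients,
# so all the histogram mass lands in the 0-degree bin:
edge = np.repeat(np.array([[0.0] * 4 + [1.0] * 4]), 8, axis=0)
hist = orientation_histogram(edge)
print(hist.argmax())  # -> 0
```

Concatenating such histograms over a grid of cells, with block normalization, yields the descriptor that is then fed to a classifier (e.g. a linear SVM in the original pedestrian-detection work).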