2021
DOI: 10.1016/j.robot.2020.103687

Learning to see through the haze: Multi-sensor learning-fusion System for Vulnerable Traffic Participant Detection in Fog

Cited by 16 publications (8 citation statements) · References 20 publications

“…Finally, while the experiments indicate that the technology is ready to be deployed in buildings or small residential clusters, complex urban scenarios require more advanced, socially-aware navigation (Kucner et al., 2017; Vintr et al., 2020), capable of dealing with low visibility (Broughton et al., 2020). These capabilities have not been addressed in the proposed approach and will require further investigation.…”
Section: Lesson Learned (mentioning)
confidence: 97%
“…During adverse weather conditions, a learning-based sensor fusion scheme was designed to make the system rely more on the radar, achieving higher accuracy in pedestrian detection and localization. George et al. [57] further incorporated PointNet [75] into the online learning framework for sensor fusion, benefiting from recent progress in deep-learning methods for point cloud processing. PointNet was used to assign a class label to each radar point; the labeled points were then grouped into clusters and finally sent to the sensor fusion module.…”
Section: E. Sensor Fusion (mentioning)
confidence: 99%
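The pipeline this statement describes (per-point classification of radar returns, clustering of the labeled points, and a downstream fusion stage) can be sketched as below. This is a minimal, hypothetical illustration: `classify_points` is a naive stand-in for a trained PointNet, the fusion step is reduced to centroid extraction, and all names and thresholds are assumptions, not taken from the cited papers.

```python
# Hypothetical sketch: per-point labeling -> clustering -> fusion.
import numpy as np
from sklearn.cluster import DBSCAN

def classify_points(points: np.ndarray) -> np.ndarray:
    """Placeholder for a PointNet-style per-point classifier.

    Points closer than 30 m are naively labeled 'pedestrian' (1), the rest
    'background' (0); a real system would use a trained network here.
    """
    ranges = np.linalg.norm(points[:, :2], axis=1)
    return (ranges < 30.0).astype(int)

def cluster_pedestrian_points(points: np.ndarray, labels: np.ndarray):
    """Group points labeled as pedestrian into spatial clusters."""
    ped = points[labels == 1]
    if len(ped) == 0:
        return []
    assignments = DBSCAN(eps=0.8, min_samples=3).fit_predict(ped[:, :2])
    return [ped[assignments == c] for c in set(assignments) if c != -1]

def fuse(clusters) -> list:
    """Stand-in fusion step: reduce each cluster to a centroid detection."""
    return [c[:, :2].mean(axis=0) for c in clusters]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.uniform(-50, 50, size=(150, 3))          # x, y, doppler
    clump = rng.normal(loc=[5.0, 2.0, 0.0], scale=0.3, size=(20, 3))
    radar_points = np.vstack([background, clump])
    detections = fuse(cluster_pedestrian_points(radar_points,
                                                classify_points(radar_points)))
    print(f"{len(detections)} pedestrian detection(s)")
```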
“…It can undoubtedly save humans from tedious offline training tasks, and in mobile robotics this learning paradigm further enables robots to learn on-site, at their deployment location, to adapt to changes in the environment. This paradigm has been used not only for LiDAR [36], [21] but also for millimeter-wave radar with a similar data representation [128], [57], and not only for indoor service robots [36] but also for autonomous driving perception in urban environments [21].…”
Section: E. Sensor Fusion (mentioning)
confidence: 99%
“…Pioneering work in this field can be traced back more than two decades [19], in which a lifelong learning perspective for mobile robot control was presented. With the rapid development of related technologies, including hardware and algorithms, research on robotic online learning has become increasingly extensive in recent years [13], [20], [14], [15], [21], [22]. In particular, Teichman and Thrun [14] presented a semi-supervised learning approach to track classification in 3D LiDAR data based on the Expectation-Maximization (EM) algorithm, which illustrated that the learning of dynamic objects can benefit from a tracking system.…”
Section: Related Work (mentioning)
confidence: 99%
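As a rough illustration of the EM-based semi-supervised track classification credited to Teichman and Thrun [14] above, the sketch below alternates between soft-labeling unlabeled tracks (E-step) and refitting class-conditional Gaussians over per-frame features (M-step). The feature model, the data, and every function name are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical sketch: semi-supervised EM over tracks of per-frame features.
import numpy as np

def em_track_classifier(tracks, labels, n_classes=2, n_iter=20):
    """tracks: list of (n_frames, d) arrays; labels: class id or None per track."""
    d = tracks[0].shape[1]
    mu = np.random.default_rng(1).normal(size=(n_classes, d))
    var = np.ones((n_classes, d))
    prior = np.full(n_classes, 1.0 / n_classes)
    for _ in range(n_iter):
        # E-step: per-track class responsibilities from frame log-likelihoods.
        resp = np.zeros((len(tracks), n_classes))
        for i, (t, y) in enumerate(zip(tracks, labels)):
            if y is not None:              # labeled track: responsibility fixed
                resp[i, y] = 1.0
                continue
            ll = np.log(prior).copy()
            for k in range(n_classes):
                ll[k] += -0.5 * np.sum((t - mu[k]) ** 2 / var[k]
                                       + np.log(2 * np.pi * var[k]))
            ll -= ll.max()                 # stabilize before exponentiating
            resp[i] = np.exp(ll) / np.exp(ll).sum()
        # M-step: refit Gaussians with responsibility-weighted frame statistics.
        x = np.concatenate(tracks)
        for k in range(n_classes):
            w = np.concatenate([np.full(len(t), resp[i, k])
                                for i, t in enumerate(tracks)]) + 1e-12
            mu[k] = np.average(x, axis=0, weights=w)
            var[k] = np.average((x - mu[k]) ** 2, axis=0, weights=w) + 1e-6
        prior = resp.mean(axis=0)
    return resp, mu, var

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    tracks = [rng.normal(loc=c, size=(15, 2)) for c in (0, 0, 3, 3, 0, 3)]
    labels = [0, None, 1, None, None, None]   # two labeled, four unlabeled
    resp, _, _ = em_track_classifier(tracks, labels)
    print(resp.round(2))
```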