2022
DOI: 10.1109/tiv.2021.3103695
Vision-Cloud Data Fusion for ADAS: A Lane Change Prediction Case Study

Abstract: With the rapid development of intelligent vehicles and Advanced Driver-Assistance Systems (ADAS), a new trend is that mixed levels of human driver engagement will be involved in the transportation system. Therefore, necessary visual guidance for drivers is vitally important under this situation to prevent potential risks. To advance the development of visual guidance systems, we introduce a novel vision-cloud data fusion methodology, integrating camera image and Digital Twin information from the cloud to help…

Cited by 39 publications (9 citation statements) · References 41 publications
“…Visualization of the Digital Twin information from the cloud remains a challenging issue, where Liu et al. developed a data-fusion methodology to overlay the Digital Twin information onto the driver's field of view with the help of camera (RGB and depth) images, assisting the driver in predicting lane changes of neighboring vehicles [39]. On top of this study, Wang et al. [40] designed a cooperative driving system for connected vehicles, where non-line-of-sight vehicles are visualized as "Digital Twin slots" on the augmented reality-based head-up display of the ego vehicle, guiding it to cross non-signalized intersections without any collision or unnecessary full stop.…”
Section: B. Digital Twins for Connected Vehicles
Mentioning confidence: 99%
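The overlay step described in this statement amounts to projecting vehicle positions reported by the cloud Digital Twin into the ego camera's image plane so they can be drawn over the driver's view. The following is a minimal sketch of that idea under a standard pinhole camera model; the function name, the identity extrinsic, and the intrinsic values are illustrative assumptions, not details taken from [39].

```python
import numpy as np

def project_twin_to_image(p_world, T_world_to_cam, K):
    """Project a 3D point (e.g., a neighboring vehicle's position reported
    by the cloud Digital Twin) into the ego camera image.

    p_world        : (3,) position in a shared world frame [m]
    T_world_to_cam : (4, 4) homogeneous world-to-camera extrinsic
    K              : (3, 3) pinhole intrinsic matrix
    Returns (u, v) pixel coordinates, or None if the point is behind the camera.
    """
    p_h = np.append(p_world, 1.0)        # homogeneous coordinates
    p_cam = T_world_to_cam @ p_h         # transform into the camera frame
    if p_cam[2] <= 0:                    # behind the image plane
        return None
    uv = K @ p_cam[:3]                   # pinhole projection
    return uv[0] / uv[2], uv[1] / uv[2]

# Illustrative values only: identity extrinsic, a generic 1080p intrinsic,
# and a twin-reported vehicle 20 m ahead and 3 m to the left of the camera.
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 540.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)
print(project_twin_to_image(np.array([-3.0, 0.0, 20.0]), T, K))
```

In a full system the resulting pixel coordinates would drive where the Digital Twin annotation is rendered on the camera image or head-up display; the depth channel can then be used to check whether the annotated vehicle is actually occluded.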
“…A well-established DDT system is expected to involve multi-modal fusion of massive volumes of data on various drivers, in addition to real-time, historical, virtual, and physical data. This requires multiple techniques, including data cleaning, conversion, calibration, and mining, among others [314]-[318]. The related intelligent algorithms and methods should be improved to handle the iteration and optimization of such massive data [319].…”
Section: Multi-modal Sensor Fusion
Mentioning confidence: 99%
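As a concrete, if simplified, illustration of one fusion step implied by the statement above, the sketch below pairs camera frames with cloud/Digital Twin messages by nearest timestamp before any downstream cleaning or calibration. The function name, data layout, and 50 ms tolerance are assumptions for illustration, not details from the cited survey.

```python
import bisect

def fuse_by_timestamp(camera_frames, cloud_msgs, max_dt=0.05):
    """Pair each camera frame with the nearest cloud/Digital Twin message.

    camera_frames : list of (t, frame_data), sorted by timestamp t [s]
    cloud_msgs    : list of (t, msg_data), sorted by timestamp t [s]
    max_dt        : maximum allowed time offset for a valid pairing [s]
    Returns a list of (frame_data, msg_data) pairs.
    """
    msg_times = [t for t, _ in cloud_msgs]
    fused = []
    for t, frame in camera_frames:
        i = bisect.bisect_left(msg_times, t)
        # Candidates: the message just before and just after the frame time.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(cloud_msgs)]
        if not candidates:
            continue
        j = min(candidates, key=lambda k: abs(msg_times[k] - t))
        if abs(msg_times[j] - t) <= max_dt:
            fused.append((frame, cloud_msgs[j][1]))
    return fused
```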
“…A series of studies have laid the groundwork for applying the DT concept and paradigm to vehicle security and safety. Barosan et al. proposed a DT model of autonomously driving trucks for a distributed auto-driving system, which offers excellent performance in testing and validation tasks across various automotive scenarios [184]. An advanced driver-assistance system featuring lane change prediction was realized by adopting both camera images and DT-based auxiliary information from the cloud.…”
Section: Transportation Applications
Mentioning confidence: 99%