2018 IEEE International Conference on Multimedia & Expo Workshops (ICMEW)
DOI: 10.1109/icmew.2018.8551565
Multimedia Fusion at Semantic Level in Vehicle Cooperative Perception

Cited by 25 publications (13 citation statements); references 6 publications.
“…Eckelmann et al. [98] use high-resolution three-dimensional point-cloud data from a Velodyne VLP-16 lidar and then locate traffic objects through V2X communication for target recognition and tracking. Xiao et al. [99] proposed a vehicle perception fusion framework based on V2V communication, vision, and GPS. By fusing multimedia information from vision, GPS, and a digital map, deep learning is used to understand and extract the key information in the vision system, which significantly improves the vehicle's perception ability and perception range and eliminates most perception blind areas.…”
Section: Research On Traffic Conflict Based On Intelligent Vehicles
confidence: 99%
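The V2V-plus-GPS fusion idea described above can be illustrated with a minimal sketch: V2V-reported GPS positions are converted into the ego vehicle's local frame and merged with the camera's tracks, so that vehicles the camera cannot see (e.g. occluded ones) still appear in the fused picture. The function names, track dictionary layout, and 3 m matching radius below are illustrative assumptions, not details from the cited papers.

```python
import math

def gps_to_local_enu(ego_lat, ego_lon, lat, lon):
    """Approximate east/north offset (metres) of a remote GPS fix relative
    to the ego vehicle, using an equirectangular small-distance model."""
    R = 6371000.0  # mean Earth radius, m
    d_east = math.radians(lon - ego_lon) * R * math.cos(math.radians(ego_lat))
    d_north = math.radians(lat - ego_lat) * R
    return d_east, d_north

def fuse_v2v_positions(vision_tracks, v2v_reports, ego_lat, ego_lon,
                       match_radius=3.0):
    """Merge V2V-reported vehicle positions with the ego camera's tracks.
    A report that lies within match_radius of an existing vision track
    refines it; an unmatched report fills a blind spot the camera misses."""
    fused = [dict(t) for t in vision_tracks]
    for rep in v2v_reports:
        e, n = gps_to_local_enu(ego_lat, ego_lon, rep["lat"], rep["lon"])
        match = next((t for t in fused
                      if math.hypot(t["x"] - e, t["y"] - n) < match_radius),
                     None)
        if match:  # seen by both camera and V2V: average the estimates
            match["x"], match["y"] = (match["x"] + e) / 2, (match["y"] + n) / 2
        else:      # occluded vehicle: known only through V2V
            fused.append({"x": e, "y": n, "source": "v2v"})
    return fused
```

With an empty camera track list, every V2V report becomes a new fused object, which is exactly the blind-area-elimination effect the statement describes.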
“…In [35], the perception results of other vehicles are integrated into the ego vehicle's perception system as virtual sensors to achieve perception enhancement. In [36], a multi-vehicle perception framework combining image and semantic features is proposed, and experiments show that it can resolve front-vehicle occlusion. In these studies, self-positioning and the localization of other targets were considered separately, which makes the fusion result very sensitive to the vehicles' relative positioning.…”
Section: Introduction
confidence: 99%
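The sensitivity to relative positioning noted above can be seen in a small numerical sketch: a cooperating vehicle's detections are projected into the ego frame through an estimated relative pose, so any error in that pose shifts every shared object by a comparable amount. The pose values and detection coordinates below are made-up illustrative numbers.

```python
import math

def transform_detections(detections, rel_pose):
    """Project a cooperating vehicle's 2-D detections (in its own frame)
    into the ego frame via the relative pose (dx, dy, dtheta)."""
    dx, dy, dth = rel_pose
    c, s = math.cos(dth), math.sin(dth)
    return [(dx + c * x - s * y, dy + s * x + c * y) for x, y in detections]

# A 0.5 m translation error plus a 2-degree heading error in the relative
# pose estimate displaces a detection 10 m ahead by roughly 0.6 m: the
# fused map can be no more accurate than the relative localisation.
true_pose = (20.0, 0.0, 0.0)
noisy_pose = (20.5, 0.0, math.radians(2.0))
remote_det = [(10.0, 0.0)]
(true_x, true_y), = transform_detections(remote_det, true_pose)
(est_x, est_y), = transform_detections(remote_det, noisy_pose)
shift = math.hypot(est_x - true_x, est_y - true_y)
```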
“…Current dynamic map fusion techniques can be classified as data-level [4], [9], feature-level [5]–[8], and object-level [9]–[15]. Data-level and feature-level methods tend to incur high communication overheads, and most object-level methods do not take uncertainties into account.…”
mentioning
confidence: 99%
“…Those data-level and feature-level methods likely incur high communication overheads, and most object-level methods do not take uncertainties into account. On the other hand, various deep and federated learning schemes have been used to improve the quality of feature models [8], [14], [15], but few of them use point-cloud data. Besides, as most cloud-based model-training schemes assume that datasets have been properly labeled [4]–[9], [11], [14], [15], knowledge distillation (KD) methods [16]–[18] need further investigation for label generation among networked vehicles.…”
mentioning
confidence: 99%
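The communication-overhead contrast between the three fusion levels mentioned above can be made concrete with back-of-the-envelope payload sizes. The point count, feature-map shape, and object count below are illustrative assumptions chosen to be in a realistic range for lidar-based perception, not figures from the cited works.

```python
def payload_bytes(level, n_points=120_000, feat_shape=(64, 200, 200),
                  n_objects=12):
    """Rough per-frame V2X payload for each fusion level, in bytes,
    assuming float32 values throughout.
    data:    raw lidar points, 4 floats each (x, y, z, intensity)
    feature: one intermediate BEV feature tensor of shape (C, H, W)
    object:  per-object 7-DoF box plus confidence score (8 floats)"""
    if level == "data":
        return n_points * 4 * 4
    if level == "feature":
        c, h, w = feat_shape
        return c * h * w * 4
    if level == "object":
        return n_objects * 8 * 4
    raise ValueError(f"unknown fusion level: {level}")
```

Under these assumptions an object-level frame is a few hundred bytes, while data-level and feature-level frames run to megabytes — which is why the statement flags the latter two as communication-heavy.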