2022
DOI: 10.1109/tits.2020.3028424

Cooperative Perception for 3D Object Detection in Driving Scenarios Using Infrastructure Sensors

Abstract: Please refer to the published version for the most recent bibliographic citation information. If a published version is known, the repository item page linked above will contain details on accessing it.

Cited by 171 publications (102 citation statements)
References 34 publications
“…Some researchers also evaluated the realization of feature-level sensor data sharing in real vehicles [15]. Furthermore, object (track)-level cooperative perception has been realized using infrastructure involving multiple sensors and multiple transmitters [16], [17]. In [18], the researchers attempted to examine the trustworthiness of cooperative perception by quantifying the confidence in the correctness of data using Bayes' theorem.…”
Section: Related Work
confidence: 99%
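The trust-quantification idea mentioned in the excerpt above can be illustrated with a single Bayesian update: the ego vehicle revises its belief that a cooperating sender's shared detection is correct based on whether it agrees with the ego's own observation. This is a minimal sketch in the spirit of the cited approach; the function name and the likelihood values are illustrative assumptions, not taken from [18].

```python
# Minimal sketch: one Bayesian update of the probability that shared
# cooperative-perception data is correct. All parameter values below are
# illustrative assumptions, not values from the cited paper.

def bayes_update(prior, p_agree_if_correct, p_agree_if_wrong, agrees):
    """Return the posterior probability that the shared data is correct,
    given whether it agrees with the ego vehicle's own observation."""
    if agrees:
        num = prior * p_agree_if_correct
        den = num + (1.0 - prior) * p_agree_if_wrong
    else:
        num = prior * (1.0 - p_agree_if_correct)
        den = num + (1.0 - prior) * (1.0 - p_agree_if_wrong)
    return num / den

# Example: start from an uninformative prior and observe agreement.
posterior = bayes_update(prior=0.5, p_agree_if_correct=0.9,
                         p_agree_if_wrong=0.3, agrees=True)
# posterior ≈ 0.75
```

Repeating the update over successive observations accumulates evidence for or against a sender, which is the basic mechanism behind confidence-based trust in shared data.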
“…This is a key problem in many downstream applications including 3D scene reconstruction [1], localisation [2] and SLAM [3]. Recent applications such as Augmented Reality (AR) [4], cooperative (multi-agent) perception for autonomous vehicles [5] and multi-agent SLAM [6] introduce new challenges to this problem. Specifically, these applications require registration methods that are robust to point clouds with low overlap, e.g.…”
Section: Introduction
confidence: 99%
“…SLAM [3] and lidar odometry [7], where pairs of point clouds are obtained sequentially in adjacent time steps by a single vehicle navigating in a driving environment. On the other hand, applications such as cooperative perception [5] and multi-agent SLAM [6] require registering point clouds obtained simultaneously from a pair of sensors on two different vehicles that are potentially far apart, and thus, may have low field-of-view overlap, e.g. Figure 1.…”
Section: Introduction
confidence: 99%
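The registration problem discussed in the excerpts above can be grounded with a standard building block: given corresponded points from two point clouds, the Kabsch/SVD method recovers the rigid transform between them in closed form. This is a hedged sketch under simplifying assumptions (known correspondences, no outliers); the cited works address the far harder low-overlap, correspondence-free setting.

```python
# Sketch: rigid alignment of two corresponded (N, 3) point sets via the
# Kabsch/SVD method. Assumes correspondences are known and outlier-free,
# unlike the low-overlap scenarios discussed in the cited works.
import numpy as np

def kabsch(P, Q):
    """Find rotation R and translation t minimising ||(P @ R.T + t) - Q||
    for corresponding (N, 3) point sets P and Q."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t
```

With low field-of-view overlap between two vehicles' sensors, reliable correspondences are scarce, which is exactly why closed-form alignment alone is insufficient and robust registration methods are needed.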
“…However, the algorithm remains coarse in target detection and classification [10]–[12]. Building on this, the present research optimizes and improves the approach, aiming to propose a target recognition algorithm for autonomous driving with strong performance.…”
Section: Introduction
confidence: 99%