2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)
DOI: 10.1109/apsipa.2017.8282301
LiDAR/camera sensor fusion technology for pedestrian detection

Cited by 39 publications (27 citation statements)
References 3 publications
“…In our experiment, this was achieved by applying a modulo operator to the state of a 32-bit LFSR. The modulo was implemented as a bit-mask on the last W_m LSBs of the LFSR, and the width of the bit-mask was selectable in W_m ∈ {1, 2, 3, 4, 5, 6}. The periodicity of the resulting sequence is determined by the length L of the LFSR, while the maximum applicable delay T_d,max is defined by the bit-mask width W_m and the system clock frequency F_clk. Figure 12 reports the suppression ratio for a victim using a long LFSR (L = 32) as a function of the bit-mask width W_m.…”
Section: Results (mentioning, confidence: 99%)
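The delay-randomization scheme quoted above can be sketched in a few lines: step a 32-bit LFSR and bit-mask its last W_m LSBs to obtain a pseudo-random delay in clock cycles. The tap mask, seed, and clock figures below are illustrative assumptions, not values from the cited paper.

```python
def lfsr32_step(state, taps=0x80200003):
    """One Galois-LFSR step on a 32-bit state (tap mask is an assumed
    maximal-length polynomial, not the cited paper's choice)."""
    lsb = state & 1
    state >>= 1
    if lsb:
        state ^= taps
    return state & 0xFFFFFFFF

def random_delay_cycles(state, w_m):
    """Bit-mask the last w_m LSBs of the LFSR state: delay in [0, 2**w_m - 1]."""
    return state & ((1 << w_m) - 1)

# Example: W_m = 4 limits the delay to 0..15 clock cycles; with an assumed
# F_clk = 100 MHz, the maximum extra delay would be 15 * 10 ns = 150 ns.
state = 0xACE1          # arbitrary non-zero seed
delays = []
for _ in range(8):
    state = lfsr32_step(state)
    delays.append(random_delay_cycles(state, 4))
print(delays)           # eight pseudo-random delays, each in [0, 15]
```

Widening W_m doubles the delay range per extra bit, which is consistent with the excerpt's statement that T_d,max depends on both W_m and F_clk.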
“…The use of LiDAR as a complement to existing technologies, such as radar, ultrasonic range finding, thermal imaging, and image processing, is rapidly changing the landscape of advanced automotive sensor systems. The joint use of heterogeneous sensors, known as sensor fusion, promises to facilitate the safe implementation of level 3 and level 4 autonomy [2][3][4], and to be an enabling factor of fully autonomous driving [5]. Even outside the automotive field, where the reliable and safe navigation of autonomous or semi-autonomous machines is needed (e.g., advanced robotics, automated guided vehicles, factory automation, simultaneous localization and mapping, and drones), LiDAR is proving itself to be a valid complement to existing navigation sensors [6,7].…”
Section: Introduction (mentioning, confidence: 99%)
“…Feature-fusion methods generally involve projecting the LiDAR point cloud into a 2D space and then processing both the image and the projection with some form of CNN, so that the learned features extracted from the two mediums complement one another [27][28][29][30][31]. The other group of fusion algorithms, decision-level fusion, performs independent detections in both mediums and then combines the two sets of detections to output a single, superior set [32,33]. The idea behind fusion methods is to exploit the advantages of each type of data so that they augment one another, providing better detection quality than either medium alone.…”
Section: Related Work (mentioning, confidence: 99%)
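The projection step that feature-level fusion relies on can be sketched with a pinhole camera model: each 3-D LiDAR point (already expressed in the camera frame, Z forward) is mapped to pixel coordinates. The intrinsic parameters below are illustrative assumptions, not calibration values from any cited work.

```python
import numpy as np

def project_points(points_xyz, fx=700.0, fy=700.0, cx=320.0, cy=240.0):
    """Project 3-D points (camera frame, Z forward) onto the image plane
    with a pinhole model. Intrinsics fx, fy, cx, cy are assumed values."""
    pts = np.asarray(points_xyz, dtype=float)
    pts = pts[pts[:, 2] > 0]            # keep only points in front of the camera
    u = fx * pts[:, 0] / pts[:, 2] + cx
    v = fy * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

# A point 10 m straight ahead lands at the principal point (cx, cy).
uv = project_points([[0.0, 0.0, 10.0], [1.0, 0.5, 10.0]])
print(uv)   # [[320. 240.] [390. 275.]]
```

The resulting 2-D projection (often rasterized into a depth or intensity map) is what a CNN can consume alongside the camera image in feature-level fusion.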
“…Development of sensor fusion techniques for automotive purposes has been on the rise in recent years, because these techniques provide a higher level of detection accuracy [35]. Multispectral pedestrian and cyclist detection falls into one of three categories: pixel-level (early fusion) (see Fig.…”
Section: Sensor Fusion (mentioning, confidence: 99%)
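A minimal sketch of the pixel-level (early) fusion category mentioned above: the modalities are merged before any detector runs, here by stacking an RGB frame with a spatially pre-aligned single-channel thermal frame. The frame sizes are illustrative assumptions.

```python
import numpy as np

# Early fusion: concatenate aligned modalities along the channel axis so a
# single detector sees one multi-channel input. Shapes are assumed (VGA).
rgb     = np.zeros((480, 640, 3), dtype=np.float32)   # RGB camera frame
thermal = np.zeros((480, 640, 1), dtype=np.float32)   # pre-aligned thermal frame
fused   = np.concatenate([rgb, thermal], axis=-1)     # 4-channel fused input
print(fused.shape)  # (480, 640, 4)
```

Later-stage categories differ only in where this merge happens: at the feature maps (feature-level) or at the final detections (decision-level).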