2022
DOI: 10.5194/isprs-archives-xliii-b1-2022-45-2022

Real-Time Foreground Segmentation for Surveillance Applications in NRCS Lidar Sequences

Abstract: In this paper, we propose a point-level foreground-background separation technique for the segmentation of measurement sequences of a Non-repetitive Circular Scanning (NRCS) Lidar sensor, which is used as a 3D surveillance camera mounted in a fixed position. We show that by applying the NRCS Lidar technology, we can overcome various limitations of rotating multi-beam Lidar sensors, such as low vertical measurement resolution, which is disadvantageous in surveillance applications. As the main challeng…

Cited by 2 publications (7 citation statements)
References 11 publications
“…Human pose estimation can be applied in surveillance applications, which demand real-time solutions. To address this need, our approach transforms the representation of the NRCS lidar point cloud from 3D Cartesian coordinates to a spherical polar coordinate system, similar to our previous works in [39, 40]. We generate a 2D pixel grid by discretizing the horizontal and vertical FoVs, where each 3D point's distance from the sensor is mapped to the pixel determined by its azimuth and elevation values.…”
Section: Proposed Methods
confidence: 99%
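To make the projection described in the quote above concrete, here is a minimal Python sketch; the FoV bounds, grid size, and NumPy-based implementation are our own illustrative assumptions, not the cited authors' code.

```python
import numpy as np

# Illustrative sketch only: the FoV bounds and grid size below are assumptions,
# not parameters taken from the cited work.
H_FOV = (-35.0, 35.0)     # assumed horizontal field of view [deg]
V_FOV = (-35.0, 35.0)     # assumed vertical field of view [deg]
WIDTH, HEIGHT = 400, 400  # output range-image resolution

def point_cloud_to_range_image(points):
    """Project an (N, 3) Cartesian point cloud onto a 2D range image.

    Each point is converted to spherical polar coordinates; its azimuth and
    elevation select a pixel, and its distance from the sensor is stored there.
    """
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    dist = np.linalg.norm(points, axis=1)
    azimuth = np.degrees(np.arctan2(y, x))
    elevation = np.degrees(np.arcsin(z / np.maximum(dist, 1e-9)))

    # Discretize the horizontal and vertical FoVs into the pixel lattice.
    col = ((azimuth - H_FOV[0]) / (H_FOV[1] - H_FOV[0]) * (WIDTH - 1)).astype(int)
    row = ((elevation - V_FOV[0]) / (V_FOV[1] - V_FOV[0]) * (HEIGHT - 1)).astype(int)

    # Keep only points that fall inside the field of view.
    valid = (col >= 0) & (col < WIDTH) & (row >= 0) & (row < HEIGHT)

    image = np.zeros((HEIGHT, WIDTH), dtype=np.float32)
    image[row[valid], col[valid]] = dist[valid]  # later points overwrite earlier ones
    return image

cloud = np.random.uniform(-10, 10, size=(1000, 3))  # random points for a quick test
print(point_cloud_to_range_image(cloud).shape)      # (400, 400)
```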
“…More specifically, while the center of the field of view is scanned in every rotation of the pattern, outer regions are sampled less frequently, as demonstrated in Figure 2. This particular, inhomogeneous point density distribution makes it difficult to apply existing lidar point cloud processing approaches to NRCS lidar measurement sequences [39]. Note that apart from depth data, the sensor also records the reflection intensity of the laser beams in the range 0–100%, according to the Lambertian reflection model [33].…”
Section: Introduction
confidence: 99%
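As a rough illustration of why the density is inhomogeneous, the toy script below traces a rosette-like trajectory produced by two counter-rotating circular motions; the rotation rates are invented and do not reproduce the real Livox pattern, but the per-area sample density it reports comes out clearly highest at the centre of the circular FoV.

```python
import numpy as np

# Toy illustration only: the prism rotation rates are assumptions, not the real
# NRCS parameters; they merely generate a rosette-like beam trajectory.
f1, f2 = 50.0, 51.5            # assumed rotation rates of the two prisms [Hz]
t = np.arange(0.0, 2.0, 1e-5)  # two seconds of simulated beam positions

# Superposition of two counter-rotating circular motions (Risley-prism model).
x = np.cos(2 * np.pi * f1 * t) + np.cos(-2 * np.pi * f2 * t)
y = np.sin(2 * np.pi * f1 * t) + np.sin(-2 * np.pi * f2 * t)

# Per-area sample density in concentric rings of the circular FoV: the innermost
# ring is sampled far more densely than the outer ones.
r = np.hypot(x, y)
counts, edges = np.histogram(r, bins=5, range=(0.0, 2.0))
ring_area = np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
for lo, hi, d in zip(edges[:-1], edges[1:], counts / ring_area):
    print(f"r in [{lo:.1f}, {hi:.1f}): {d:.0f} samples per unit area")
```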
“…Range images are widely used, compact representations of Lidar-based depth measurements [1], [3], [16], which enable the adoption of 2D convolution operations and effective image-based neural network architectures [4], [6] during processing.…”
Section: Range Image Generation
confidence: 99%
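A minimal sketch of what this enables in practice: once the sweep is rendered as a range image, an ordinary 2D CNN can produce per-pixel scores. The layer sizes and the PyTorch framework below are our choice for illustration, not the architecture used in the cited works.

```python
import torch
import torch.nn as nn

# Minimal sketch: arbitrary layer sizes, not taken from the cited papers; the
# point is that a range image is an ordinary single-channel image.
range_image = torch.rand(1, 1, 400, 400)   # (batch, depth channel, H, W)

backbone = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.Conv2d(16, 2, kernel_size=1),       # e.g. two per-pixel classes
)

logits = backbone(range_image)
print(logits.shape)                        # torch.Size([1, 2, 400, 400])
```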
“…In our experiments, we exploit the parameters of the Livox AVIA, a state-of-the-art NRCS Lidar sensor [3]. The sensor's FoV is mapped onto a 400×400 pixel lattice, whose resolution (5.6 px/°) yields both high spatial accuracy and reasonable computational requirements.…”
Section: Range Image Generation
confidence: 99%
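A small worked check of the quoted parameters: a 400-pixel axis at 5.6 px/° covers roughly 71.4° per axis. The centring convention in the hypothetical angles_to_pixel helper below is an assumption for illustration, not necessarily the authors' mapping.

```python
# Worked check of the quoted numbers: 400 px / 5.6 px per degree ≈ 71.4 degrees.
width_px = 400
res_px_per_deg = 5.6
fov_deg = width_px / res_px_per_deg
print(f"Covered FoV per axis: {fov_deg:.1f} deg")   # 71.4 deg

# Hypothetical mapping from (azimuth, elevation), measured from the FoV centre,
# to a pixel; the centring convention is our assumption, not the authors'.
def angles_to_pixel(azimuth_deg, elevation_deg):
    col = int((azimuth_deg + fov_deg / 2.0) * res_px_per_deg)
    row = int((elevation_deg + fov_deg / 2.0) * res_px_per_deg)
    return row, col

print(angles_to_pixel(0.0, 0.0))    # centre of the lattice: (200, 200)
```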