2022
DOI: 10.1109/lra.2022.3166544
Automatic Labeling to Generate Training Data for Online LiDAR-Based Moving Object Segmentation

Cited by 62 publications (26 citation statements)
References 37 publications
“…Our method can outperform all baselines with an IoU MOS of 65.2%, which demonstrates the effectiveness of our approach. Our performance is also better than LMNet+AutoMOS+Extra [6], which additionally uses automatically generated moving object labels for training. This emphasizes the strength of our result.…”
Section: B. Moving Object Segmentation Performance
Confidence: 94%
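The statement above reports moving-object segmentation quality as an IoU over the "moving" class. As a minimal sketch (the array names and toy labels here are illustrative, not from the paper), per-point binary IoU can be computed as TP / (TP + FP + FN):

```python
import numpy as np

def iou_mos(pred: np.ndarray, gt: np.ndarray) -> float:
    """IoU over the 'moving' class: TP / (TP + FP + FN)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.sum(pred & gt)    # predicted moving, actually moving
    fp = np.sum(pred & ~gt)   # predicted moving, actually static
    fn = np.sum(~pred & gt)   # missed moving points
    denom = tp + fp + fn
    return float(tp / denom) if denom > 0 else 0.0

# Toy example: 2 TP, 1 FP, 1 FN -> IoU = 2 / 4 = 0.5
pred = np.array([1, 1, 0, 0, 1], dtype=bool)
gt   = np.array([1, 0, 0, 1, 1], dtype=bool)
print(iou_mos(pred, gt))  # 0.5
```

The reported 65.2% corresponds to this ratio computed over all points of the evaluation sequences.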
“…The back-projection from these 2D representations to the 3D space often requires post-processing like k-nearest neighbor (kNN) clustering [5], [9], [12], [25] to avoid labels bleeding into points that are close in the image space but distant in 3D. Other approaches can identify objects that have moved in 3D space directly during mapping [1] or with a clustering and tracking approach [6]. Nevertheless, these offline methods often rely on having access to all LiDAR observations in the sequence.…”
Section: Non-moving / Moving
Confidence: 99%
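The statement above describes kNN post-processing when back-projecting labels from a 2D range image to the full 3D cloud: each point takes a majority vote among its nearest labeled neighbors measured in 3D, so points adjacent in image space but distant in 3D do not inherit each other's labels. A hedged sketch (brute-force distances and all names here are illustrative assumptions, not the cited papers' implementations):

```python
import numpy as np

def knn_refine_labels(points, labeled_points, labels, k=3, max_dist=1.0):
    """Majority-vote label for each point from its k nearest labeled
    neighbors in 3D, rejecting neighbors farther than max_dist."""
    # Pairwise 3D distances (brute force; a KD-tree would be used at scale).
    d = np.linalg.norm(points[:, None, :] - labeled_points[None, :, :], axis=-1)
    refined = np.zeros(len(points), dtype=labels.dtype)
    for i in range(len(points)):
        nn = np.argsort(d[i])[:k]
        valid = nn[d[i, nn] <= max_dist]  # drop neighbors far away in 3D
        if valid.size:
            refined[i] = np.bincount(labels[valid]).argmax()
    return refined

# Toy example: two labeled clusters far apart in 3D; each query point
# inherits the label of its own cluster only.
labeled = np.array([[0, 0, 0], [0.1, 0, 0], [5, 5, 5], [5.1, 5, 5]], float)
lab = np.array([0, 0, 1, 1])
query = np.array([[0.05, 0, 0], [5.05, 5, 5]], float)
print(knn_refine_labels(query, labeled, lab, k=2, max_dist=0.5))  # [0 1]
```

The `max_dist` gate is what prevents the "bleeding" the quote refers to: a neighbor that is close in the range image but far in Euclidean space simply casts no vote.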