The 23rd IEEE International Symposium on Robot and Human Interactive Communication 2014
DOI: 10.1109/roman.2014.6926346
People detection and distinction of their walking aids in 2D laser range data based on generic distance-invariant features

Cited by 38 publications (39 citation statements) · References 12 publications
“…Another solution is similar to the approach widely used in computer vision: extracting hand-crafted features and training classifiers. Image-based object detection is very popular in current autonomous driving research, such as road obstacle detection [6], mobility aid detection [7], and vehicle detection [8]. However, the sparse point data from a 2D LiDAR are usually insufficient for reliable object identification using this kind of method within a single scan.…”
Section: Introduction
confidence: 99%
“…Although Weinrich et al [34] provide their datasets, their recordings are limited to scenes with a single wheelchair or walker. While this might suffice for learning the few parameters of fairly constrained features and classifiers, we want to learn features from scratch and thus require more general and varied data.…”
Section: A Dataset
confidence: 99%
“…We trained our model on our training set and computed hyperparameters on our validation set as described in the previous section. In order to evaluate real-world performance of DROW, we now look at the precision and recall curves on our own test set and on the publicly available Reha test set of [34], recorded with a similar robot. Recall that our test set was recorded in a never before seen part of the care facility, thus a method which learns location priors or background models will fail.…”
Section: Approach Evaluation
confidence: 99%