2017 European Conference on Mobile Robots (ECMR)
DOI: 10.1109/ecmr.2017.8098711

Semantic Monte-Carlo localization in changing environments using RGB-D cameras

Cited by 8 publications (5 citation statements) · References 16 publications
“…A VPR system is generally modeled as a ranking function, which can work with arbitrary VPR systems (e.g., Bayes filter [1], image retrieval [6], deep neural network [21]). It evaluates the likelihood of the robot being located at each predefined place class, given a query scene.…”
Section: Approach (mentioning)
confidence: 99%
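The quoted passage models a VPR system abstractly as a ranking function that scores each predefined place class given a query scene. Below is a purely illustrative sketch of such an interface; the function name, the descriptor representation, and the cosine-similarity scorer are assumptions, not taken from the cited work, which notes that any back-end (Bayes filter, image retrieval, deep network) could supply the scores.

```python
import numpy as np

def rank_place_classes(query_descriptor, class_descriptors):
    """Rank predefined place classes by their likelihood given a query scene.

    query_descriptor:  1-D feature vector of the query image.
    class_descriptors: dict mapping place-class id -> representative feature vector.
    Returns a list of (class_id, score) sorted from most to least likely.
    """
    q = query_descriptor / (np.linalg.norm(query_descriptor) + 1e-12)
    scores = {}
    for cid, d in class_descriptors.items():
        d = d / (np.linalg.norm(d) + 1e-12)
        # Placeholder scorer: cosine similarity between query and class descriptor.
        scores[cid] = float(np.dot(q, d))
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```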
“…In this paper, we aim to train a next-best-view (NBV) planner for active cross-domain self-localization. Given a landmark map built in a past domain (e.g., weather, season, time of day), the goal of self-localization is to localize the robot itself using relative measurements from an on-board landmark sensor and odometry [1]-[3]. This cross-domain self-localization problem is challenging due to the appearance/removal of landmarks as well as perceptual aliasing.…”
Section: Introduction (mentioning)
confidence: 99%
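The introduction quoted above frames self-localization as fusing odometry with relative landmark measurements against a map built in a past domain. The following is a minimal particle-filter sketch of that setup; the noise parameters, the range-bearing measurement model, and all function names are illustrative assumptions rather than the cited paper's method.

```python
import numpy as np

def predict(particles, odom, noise=(0.02, 0.02, 0.01)):
    """Propagate particles (N x 3 array of x, y, theta) with a noisy odometry increment."""
    dx, dy, dth = odom
    n = len(particles)
    particles[:, 0] += dx + np.random.normal(0, noise[0], n)
    particles[:, 1] += dy + np.random.normal(0, noise[1], n)
    particles[:, 2] += dth + np.random.normal(0, noise[2], n)
    return particles

def weight(particles, measurements, landmark_map, sigma_r=0.3, sigma_b=0.1):
    """Weight particles by how well the predicted range/bearing to mapped landmarks
    matches the observed relative measurements [(landmark_id, range, bearing), ...]."""
    w = np.ones(len(particles))
    for lid, r_obs, b_obs in measurements:
        lx, ly = landmark_map[lid]              # landmark position from the past-domain map
        dx = lx - particles[:, 0]
        dy = ly - particles[:, 1]
        r_pred = np.hypot(dx, dy)
        b_pred = np.arctan2(dy, dx) - particles[:, 2]
        b_err = (b_obs - b_pred + np.pi) % (2 * np.pi) - np.pi   # wrap to [-pi, pi)
        w *= np.exp(-0.5 * ((r_obs - r_pred) / sigma_r) ** 2)
        w *= np.exp(-0.5 * (b_err / sigma_b) ** 2)
    return w / (w.sum() + 1e-300)

def resample(particles, w):
    """Low-variance (systematic) resampling."""
    n = len(particles)
    positions = (np.arange(n) + np.random.uniform()) / n
    idx = np.searchsorted(np.cumsum(w), positions)
    return particles[np.minimum(idx, n - 1)].copy()
```

Under landmark appearance/removal, the weighting step is exactly where cross-domain changes hurt, which is what motivates the semantic and learned extensions discussed in the other citing papers.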
“…MCL approaches have also begun to use semantic information, not only as single landmarks to localize against but also as an input to the sensor model. A system with an RGB-D camera was localized in a changing warehouse environment by combining a beam model based on distance and bearing-angle measurements with an a priori correlation table for objects [19]. The measurements were matched against an annotated grid map.…”
Section: Related Work (mentioning)
confidence: 99%
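This citation summarizes [19], the paper indexed on this page: a beam model over distance and bearing combined with an a priori object correlation table on an annotated grid map. The snippet below is only a schematic reading of that idea, not the authors' implementation; the table values, the mixture weights, and the map interfaces `expected_range` and `expected_class_at` are assumptions introduced for illustration.

```python
import numpy as np

# Hypothetical a priori correlation table: P(observed class | class annotated in the map).
# Values are illustrative only.
CORRELATION = {
    ("pallet", "pallet"): 0.7, ("pallet", "shelf"): 0.2, ("pallet", "free"): 0.1,
    ("shelf",  "shelf"):  0.8, ("shelf",  "pallet"): 0.1, ("shelf",  "free"): 0.1,
}

def semantic_weight(particle, detections, grid_map,
                    sigma_r=0.3, w_geom=0.5, w_sem=0.5):
    """Score one particle against RGB-D object detections.

    detections: list of (observed_class, range, bearing) in the sensor frame.
    grid_map:   annotated grid map exposing expected_range(particle, bearing) and
                expected_class_at(x, y) -- assumed interfaces, not a real API.
    """
    x, y, th = particle
    w = 1.0
    for obs_class, r, b in detections:
        # Geometric part: beam model on the measured distance along the bearing.
        r_exp = grid_map.expected_range(particle, b)
        w_g = np.exp(-0.5 * ((r - r_exp) / sigma_r) ** 2)
        # Semantic part: compare the detected class with the annotation
        # at the grid cell the measurement endpoint falls into.
        ex = x + r * np.cos(th + b)
        ey = y + r * np.sin(th + b)
        map_class = grid_map.expected_class_at(ex, ey)
        w_s = CORRELATION.get((obs_class, map_class), 0.05)
        w *= w_geom * w_g + w_sem * w_s
    return w
```

The correlation table is what lets the filter tolerate a changing environment: an object detected where the map annotates a related class is penalized less than one with no plausible correspondence.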
“…Visual robot self-localization is one of the most important issues in mobile robotics and has been studied in many different contexts, including multi-hypothesis pose tracking [9], map matching [10], image retrieval [11], and view-sequence matching [3]. Our self-localization scenario is most closely related to the view-sequence matching scenario, which takes a short-term live view-sequence as the query and searches for the corresponding part of the map view-sequence.…”
Section: Related Work (mentioning)
confidence: 99%
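The view-sequence matching scenario described here slides a short live sequence over the map sequence and keeps the best-aligned segment. A compact sketch of that idea follows; the descriptor choice and the sum-of-distances score are generic assumptions in the spirit of sequence-based place recognition, not the cited method.

```python
import numpy as np

def match_view_sequence(live_seq, map_seq):
    """Find the map index where a short live view-sequence aligns best.

    live_seq: (L, D) array of image descriptors from the last L frames.
    map_seq:  (M, D) array of descriptors along the mapped route, M >= L.
    Returns (best_start_index, best_score); lower score means better alignment.
    """
    L, M = len(live_seq), len(map_seq)
    best_start, best_score = -1, np.inf
    for s in range(M - L + 1):
        # Sum of per-frame descriptor distances over the aligned window.
        score = float(np.linalg.norm(map_seq[s:s + L] - live_seq, axis=1).sum())
        if score < best_score:
            best_start, best_score = s, score
    return best_start, best_score
```

Matching a whole window rather than a single frame gives a place hypothesis that is more robust to the perceptual aliasing mentioned in the other citing papers.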