2019 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
DOI: 10.1109/iros40897.2019.8967855

Data Association Aware Semantic Mapping and Localization via a Viewpoint-Dependent Classifier Model

Cited by 15 publications (22 citation statements)
References 13 publications
“…is the prior over object poses newly observed at time k. As opposed to [7], this formulation also supports an increasing number of objects known at each time step, with both $X_k^{o,r}$ and $C_k^r$ increasing in dimension. Note that in general $b_k^r$ is different for each class realization, as models (1) are different for each class.…”
Section: Local Hybrid Belief Maintenance
confidence: 94%
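To make the quoted notation concrete: the hybrid belief $b_k^r$ carries a weight per class realization $C_k^r$, and both the object poses $X_k^{o,r}$ and the class vector grow as new objects are observed. The sketch below is a minimal illustration of that bookkeeping; the names (`NUM_CLASSES`, `class_prior`, `expand_with_new_object`, `update`) and the simple per-object likelihood update are assumptions for illustration, not the paper's formulation.

```python
import numpy as np

# Illustrative sketch only: a hybrid belief stored as weights over discrete
# class realizations c^r = (c_1, ..., c_n), one weight per realization.
NUM_CLASSES = 3
class_prior = np.full(NUM_CLASSES, 1.0 / NUM_CLASSES)  # prior over a new object's class

def expand_with_new_object(belief):
    """When a new object is first observed, every existing realization
    branches into NUM_CLASSES children weighted by the class prior."""
    expanded = {}
    for realization, w in belief.items():
        for c in range(NUM_CLASSES):
            expanded[realization + (c,)] = w * class_prior[c]
    return expanded

def update(belief, obj_idx, class_likelihoods):
    """Bayesian update of realization weights given a classifier
    likelihood vector for object obj_idx."""
    updated = {r: w * class_likelihoods[r[obj_idx]] for r, w in belief.items()}
    total = sum(updated.values())
    return {r: w / total for r, w in updated.items()}

# Usage: start with no objects, observe two objects, fuse two classifier outputs.
belief = {(): 1.0}
belief = expand_with_new_object(belief)          # object 0 enters the belief
belief = expand_with_new_object(belief)          # object 1 enters the belief
belief = update(belief, 0, np.array([0.7, 0.2, 0.1]))
belief = update(belief, 1, np.array([0.1, 0.1, 0.8]))
print(max(belief, key=belief.get))               # most likely class realization
```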
“…Feldman and Indelman [6] proposed a sequential object classification that utilizes a viewpoint dependent classifier with known relative poses a-priori. Tchuiev et al [7] maintained a hybrid belief with a viewpoint dependent classifier to disambiguate between data association realizations. These works, [7], address only sequential classification and do not consider the coupled problem with SLAM.…”
Section: Related Work
confidence: 99%
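The quoted passage refers to a viewpoint-dependent classifier, i.e., a model in which the expected classifier output depends on the relative pose between camera and object. The toy sketch below (all functional forms and parameters are assumptions, not taken from [6] or [7]) shows one way such a model can score how well a candidate viewpoint explains an observed class-score vector.

```python
import numpy as np

# Illustrative sketch: a toy viewpoint-dependent classifier model in which the
# expected classifier score for each class varies with relative viewing angle theta.
def expected_scores(theta, num_classes=3):
    """Expected classifier output as a function of relative viewpoint theta (rad);
    here each class peaks at a different canonical viewing direction."""
    peaks = np.linspace(0.0, 2.0 * np.pi, num_classes, endpoint=False)
    raw = np.exp(np.cos(theta - peaks))          # higher when theta is near a class peak
    return raw / raw.sum()

def score_likelihood(observed_scores, theta, sigma=0.1):
    """Likelihood of an observed score vector given viewpoint theta,
    modeled as an isotropic Gaussian around the expected scores."""
    diff = observed_scores - expected_scores(theta, len(observed_scores))
    return np.exp(-0.5 * np.dot(diff, diff) / sigma**2)

# Usage: which of two candidate relative poses better explains the observed scores?
obs = np.array([0.6, 0.3, 0.1])
print(score_likelihood(obs, theta=0.2), score_likelihood(obs, theta=np.pi))
```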
“…Graphs are a natural representation for these types of representations. The nodes of the graph are usually given by feature or landmarks detected in the observation, from which a kernel can be applied to extract a descriptor to be matched against other observations [10], [11]. GLARE [12], its rotation invariant extension GLAROT [13], and its 3D extension [14] first compute a descriptor for each observation based on the relative distances and angles between landmarks.…”
Section: Introduction
confidence: 99%
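For context on the GLARE-style descriptors mentioned in the quote, the sketch below builds a simplified histogram over pairwise landmark distances and segment orientations; bin counts and ranges are arbitrary illustrative choices, and this is not the actual GLARE/GLAROT implementation from [12]-[14].

```python
import numpy as np

# Illustrative sketch of a GLARE-style descriptor: a 2D histogram over the
# pairwise distances and relative angles between detected landmarks.
def glare_like_descriptor(landmarks, num_dist_bins=8, num_angle_bins=8, max_dist=20.0):
    """landmarks: (N, 2) array of 2D landmark positions in the sensor frame."""
    hist = np.zeros((num_dist_bins, num_angle_bins))
    n = len(landmarks)
    for i in range(n):
        for j in range(i + 1, n):
            d = landmarks[i] - landmarks[j]
            dist = np.linalg.norm(d)
            angle = np.arctan2(d[1], d[0]) % np.pi      # orientation of the segment
            di = min(int(dist / max_dist * num_dist_bins), num_dist_bins - 1)
            ai = min(int(angle / np.pi * num_angle_bins), num_angle_bins - 1)
            hist[di, ai] += 1.0
    return hist / max(hist.sum(), 1.0)                  # normalize for matching

# Usage: compare two observations by descriptor distance.
obs_a = np.random.default_rng(0).uniform(-10, 10, size=(15, 2))
obs_b = obs_a + 0.05                                     # nearly identical scene
d_a, d_b = glare_like_descriptor(obs_a), glare_like_descriptor(obs_b)
print(np.abs(d_a - d_b).sum())                           # small value -> likely a match
```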