2018 Latin American Robotic Symposium, 2018 Brazilian Symposium on Robotics (SBR) and 2018 Workshop on Robotics in Education (WRE)
DOI: 10.1109/lars/sbr/wre.2018.00018

Semantic Map Augmentation for Robot Navigation: A Learning Approach Based on Visual and Depth Data

Cited by 17 publications (16 citation statements)
References 15 publications
“…For the door transition points, the pixels are searched on the depth map obtained from the image using deep learning approaches. Bersan, Martins, Campos, and Nascimento (2018) used the 2D CNN-based YOLO object detector (Redmon, Divvala, Girshick, and Farhadi, 2016) and a 3D model-based segmentation algorithm for the door object during semantic metric mapping of the environment. The point cloud was obtained from the depth information of the pixels inside the YOLO bounding box, and only the points belonging to the door were extracted using RANSAC.…”
Section: Learning-based Approaches (mentioning)
confidence: 99%
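The pipeline this excerpt describes (2D detection, back-projection of the depth pixels inside the bounding box, RANSAC extraction of the door plane) can be sketched as below. This is not the authors' code: the camera intrinsics, the bounding-box format, and the use of Open3D's segment_plane are assumptions made only for illustration.

```python
import numpy as np
import open3d as o3d


def door_points_from_bbox(depth_m, bbox, fx, fy, cx, cy):
    """Back-project the depth pixels inside a detector bounding box into 3D points.

    depth_m: HxW depth image in meters; bbox: (u_min, v_min, u_max, v_max) in pixels.
    """
    u_min, v_min, u_max, v_max = bbox
    vs, us = np.meshgrid(np.arange(v_min, v_max), np.arange(u_min, u_max), indexing="ij")
    z = depth_m[v_min:v_max, u_min:u_max]
    valid = z > 0                                # discard pixels with no depth reading
    z, us, vs = z[valid], us[valid], vs[valid]
    x = (us - cx) * z / fx                       # pinhole back-projection
    y = (vs - cy) * z / fy
    return np.stack([x, y, z], axis=1)


def extract_door_plane(points, dist_thresh=0.03):
    """Keep only the dominant RANSAC plane, taken here as the door surface."""
    pcd = o3d.geometry.PointCloud(o3d.utility.Vector3dVector(points))
    plane_model, inliers = pcd.segment_plane(
        distance_threshold=dist_thresh, ransac_n=3, num_iterations=1000)
    return np.asarray(pcd.points)[inliers], plane_model
```

Called with the bounding box of a detected door and the camera intrinsics, the two helpers return the subset of 3D points lying on the fitted plane, mirroring the RANSAC step mentioned in the excerpt.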
“…All resulting associations whose distances are smaller than a threshold (D(x_i, y_j) < δ) are assumed to correspond to previously seen objects; otherwise, new object instances representing the remaining observations are added to the dictionary. To track the detected instances and increase their accuracy, each stored semantic object is modeled with a constant-state Kalman filter [6], since mostly static classes are stored in the final augmented map; the filter keeps each object's state up to date and combines its different observations. Each filter fuses the information of the different observations temporally, as shown in Figure 8.…”
Section: Object Tracking and Final Augmented Representation (mentioning)
confidence: 99%
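A minimal sketch of the association-and-tracking scheme described above, under assumed names and values (the threshold and the noise covariances below do not come from the paper): each detection is matched to the nearest stored object if the distance falls below δ, and every stored object keeps a constant-state Kalman filter, i.e. an identity motion model over its position.

```python
import numpy as np


class ConstantStateKF:
    """Kalman filter with an identity transition model: the object is assumed static."""

    def __init__(self, z0, p0=1.0, r=0.05):
        self.x = np.asarray(z0, dtype=float)   # estimated position
        self.P = np.eye(len(z0)) * p0          # state covariance
        self.R = np.eye(len(z0)) * r           # measurement noise covariance

    def update(self, z):
        # Prediction is trivial (x and P unchanged) because the state is constant.
        K = self.P @ np.linalg.inv(self.P + self.R)            # Kalman gain
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.x)
        self.P = (np.eye(len(self.x)) - K) @ self.P


def associate(detection, objects, delta=0.5):
    """Return the index of the stored object matching `detection`, or None if too far."""
    if not objects:
        return None
    dists = [np.linalg.norm(detection - obj.x) for obj in objects]
    best = int(np.argmin(dists))
    return best if dists[best] < delta else None


# Usage: update the matched filter, or start a new instance for an unseen object.
objects = []
for z in [np.array([1.0, 2.0, 0.9]), np.array([1.05, 1.95, 0.92])]:
    idx = associate(z, objects)
    if idx is None:
        objects.append(ConstantStateKF(z))
    else:
        objects[idx].update(z)
```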
“…A preliminary conference version of this paper was introduced in our previous work [6]. In this manuscript, we have made a number of major modifications, which we summarize as follows: the localization and object tracking of the classes are improved to handle multiple objects per image and to support online pose updates during loop closing of the localization and metric mapping back-end.…”
Section: Introduction (mentioning)
confidence: 99%
“…where Steering represents the worst-case value in px, and the place variable holds the "g" value for which the maximum value is detected. This steering value is transformed into a percentage by considering the limits indicated by Equations (17) and (18):…”
Section: Path Calculation and Motion (mentioning)
confidence: 99%
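Equations (17) and (18) are not reproduced in this excerpt, so the following is only a hypothetical illustration of the general idea of mapping a worst-case steering offset in pixels to a bounded percentage; the max_offset_px parameter and the clamping behaviour are assumptions, not the cited equations.

```python
def steering_to_percent(steering_px: float, max_offset_px: float = 320.0) -> float:
    """Map a pixel steering offset to a signed percentage in [-100, 100]."""
    clipped = max(-max_offset_px, min(max_offset_px, steering_px))  # clamp to the limit
    return 100.0 * clipped / max_offset_px


print(steering_to_percent(160.0))    # 50.0
print(steering_to_percent(-500.0))   # -100.0 (clamped at the assumed limit)
```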