2020
DOI: 10.1109/access.2020.3034537

RoomSLAM: Simultaneous Localization and Mapping With Objects and Indoor Layout Structure

Abstract: This paper presents RoomSLAM, a Simultaneous Localization and Mapping (SLAM) method for mobile robots in indoor environments, where the environment is modeled by points and quadrilaterals in 2D space. Points represent the positions of semantic objects, whereas quadrilaterals approximate the structural layout of the environment, namely rooms. The benefit of such modeling is threefold. Firstly, rooms are a logical way to partition a graph in large-scale SLAM. Secondly, rooms and objects reduce the search space in data assoc…
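The abstract names the two map primitives (2D points for objects, quadrilaterals for rooms) without showing their implementation. As a minimal illustrative sketch, not the paper's code, the representation and the room-based narrowing of the data-association search space might look like the following; all class names and the point-in-quadrilateral test are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class SemanticObject:
    """A landmark stored as a 2D point with a semantic label (e.g. 'chair')."""
    label: str
    x: float
    y: float

@dataclass
class Room:
    """A room approximated by a convex quadrilateral (corners in CCW order)."""
    corners: list                        # [(x0, y0), ..., (x3, y3)]
    objects: list = field(default_factory=list)

    def contains(self, x: float, y: float) -> bool:
        """True if (x, y) lies inside the quadrilateral.

        Cross-product sign test: the point must be on the left side of
        every edge when the corners are ordered counter-clockwise.
        """
        for i in range(4):
            x0, y0 = self.corners[i]
            x1, y1 = self.corners[(i + 1) % 4]
            if (x1 - x0) * (y - y0) - (y1 - y0) * (x - x0) < 0:
                return False
        return True

# Data association can then be restricted to objects in the current room
# instead of scanning every object in the global map.
room = Room(corners=[(0, 0), (5, 0), (5, 4), (0, 4)])
room.objects.append(SemanticObject("chair", 1.0, 2.0))
candidates = room.objects if room.contains(1.2, 2.1) else []
print([o.label for o in candidates])  # ['chair']
```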

Cited by 10 publications (6 citation statements) | References 24 publications

Citation statements (ordered by relevance):
“…Recently, three ways to develop deep learning-based VSLAM software components have been identified, with different degrees of implementation: auxiliary modules, original deep learning modules, and end-to-end deep neural networks. The auxiliary-module path covers most of the published studies, including feature extraction [48-50], semantic segmentation [51-60], pose estimation [8,45,46,61-63], map construction [3,64-66], and loop closure [67-70]. It should be noted that deep neural networks extract low-level features from images and convert them into high-level features layer by layer.…”
Section: Discussion and Future Trends (mentioning)
confidence: 99%
“…Thus, deep learning "changes" the term "feature extraction" from conventional keypoint extraction to complex tasks, such as matching keypoints of a 2D image to 3D LiDAR points [48], keypoint extraction from optical flow [49], extraction of image patches using the well-known ORB-SLAM algorithm [50], etc. Semantic segmentation appears to be a more explored area, with semantic filtering [51,52], object detection followed by semantic segmentation in static and dynamic environments [55-57], and scene representation [58,59] being the main approaches. Deep learning-based pose estimation is a wide area of study in many scientific fields, but only a few approaches have been implemented in VSLAM systems, relating to VO tasks [8,45,62], camera ego-motion [46,61], and low-illumination conditions [63].…”
Section: Discussion and Future Trends (mentioning)
confidence: 99%
“…Many configurations were tested, and an SVM with a combination of Harris3D (as keypoint detector) and PFHRGB (as feature extractor) was reported to score the highest overall location accuracy. The most recent article found, by Rusli et al. [56], proposed a full Simultaneous Localization and Mapping (SLAM) method. Their implementation processed two separate yet synchronized data samples for each analyzed timestamp: one from the RGBD sensor and one from the robot's odometry (position and orientation).…”
Section: Indoor Navigation (mentioning)
confidence: 99%
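The detail quoted above (one RGBD sample and one odometry sample per analyzed timestamp) amounts to pairing two sensor streams by time. Below is a minimal sketch of nearest-timestamp pairing, assuming sorted timestamp lists and an illustrative tolerance; the function name and parameters are assumptions, not taken from the cited paper:

```python
import bisect

def pair_by_timestamp(rgbd_stamps, odom_stamps, tol=0.02):
    """Pair each RGBD timestamp with the nearest odometry timestamp.

    Both lists are assumed sorted, in seconds. Pairs farther apart
    than `tol` are dropped. Returns (rgbd_index, odom_index) tuples.
    """
    pairs = []
    for i, t in enumerate(rgbd_stamps):
        j = bisect.bisect_left(odom_stamps, t)
        # The nearest stamp is either just before or just after t.
        best = min(
            (k for k in (j - 1, j) if 0 <= k < len(odom_stamps)),
            key=lambda k: abs(odom_stamps[k] - t),
        )
        if abs(odom_stamps[best] - t) <= tol:
            pairs.append((i, best))
    return pairs

# Example: a 10 Hz RGBD stream against 50 Hz odometry.
rgbd = [0.00, 0.10, 0.20]
odom = [round(0.02 * k, 2) for k in range(15)]
print(pair_by_timestamp(rgbd, odom))  # [(0, 0), (1, 5), (2, 10)]
```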
“…There are also other multi-modal datasets acquired by devices mounted on robots, such as the MIT Stata Center Dataset [112], the TUMindoor Dataset [113], the Fribourg Dataset [114], and the KITTI Dataset [115]. These datasets also provide abundant multi-modal data and are widely used in navigation research [116,117,118].…”
Section: Multi-modal Datasets (mentioning)
confidence: 99%