Visual simultaneous localization and mapping (visual-SLAM) is a prominent technology for autonomous navigation of mobile robots. A significant requirement for visual-SLAM is loop closure detection (LCD), which involves recognizing a revisited place. This paper presents a novel line-based loop closure detection method for vision-based SLAM that enables reliable loop closure detection, especially in structured environments, and copes with perceptual aliasing more effectively than point-based methods. The bag-of-words model is extended in this work to use only line features. A variant of the TF-IDF (term frequency-inverse document frequency) scoring scheme is proposed that adds a discrimination coefficient to improve the discrimination of image similarity scores and thereby strengthen the similarity evaluation of two images. LBD (Line Band Descriptor) and binary LBD features are extracted to build visual vocabularies, and temporal consistency and spatial continuity checks enhance detection reliability. Compared with the original TF-IDF scheme, the proposed scoring scheme shows competitive discrimination ability. We also compared the query performance of our vocabularies with ORB-based, MSLD (mean-standard deviation line descriptor)-based, and PL (Point-and-Line)-based vocabularies; ours achieves the highest successful retrieval rate. Finally, the complete loop closure detection algorithm was evaluated in terms of precision, recall, and efficiency against ORB-based, MSLD-based, PL-based, and CNN-based methods; the results demonstrate that our method outperforms the others while maintaining satisfactory precision and efficiency.
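The bag-of-words scoring described above can be sketched with standard TF-IDF weights and the L1 similarity commonly used in vocabulary-tree retrieval. The abstract does not give the form of the paper's discrimination coefficient, so the `disc` parameter below is a hypothetical placeholder that simply scales the base score; this is a minimal sketch, not the paper's exact scheme.

```python
import math
from collections import Counter

def tfidf_vector(word_ids, doc_freq, n_docs):
    """TF-IDF weights for one image's visual words.
    word_ids: list of vocabulary word ids observed in the image.
    doc_freq: dict word_id -> number of database images containing it.
    n_docs:   total number of database images."""
    counts = Counter(word_ids)
    n_words = len(word_ids)
    vec = {}
    for w, c in counts.items():
        tf = c / n_words
        idf = math.log(n_docs / doc_freq.get(w, 1))
        vec[w] = tf * idf
    return vec

def l1_score(v1, v2):
    """Similarity in [0, 1] between two TF-IDF vectors after L1
    normalization: s = 1 - 0.5 * |v1/|v1| - v2/|v2||_1."""
    n1 = sum(abs(x) for x in v1.values()) or 1.0
    n2 = sum(abs(x) for x in v2.values()) or 1.0
    keys = set(v1) | set(v2)
    diff = sum(abs(v1.get(k, 0.0) / n1 - v2.get(k, 0.0) / n2) for k in keys)
    return 1.0 - 0.5 * diff

def score(v1, v2, disc=1.0):
    """`disc` stands in for the paper's discrimination coefficient
    (its exact form is not given in the abstract); here it merely
    scales the base similarity."""
    return disc * l1_score(v1, v2)
```

Identical images score 1.0 and images with no shared visual words score 0.0, so the value can be thresholded directly as a loop-closure candidate test.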
We address the problem of depth estimation from a single monocular image, an ill-posed and inherently ambiguous problem. We propose an encoder-decoder structure with a feature pyramid to predict the depth map from a single RGB image. More specifically, the feature pyramid is used to detect objects of different scales in the image. The encoder extracts the most representative information from the original image through a series of convolution operations while reducing the resolution of the input; we adopt Res2-50 as the encoder to extract important features. The decoder uses a novel upsampling structure to improve output resolution. We also propose a novel loss function that adds gradient loss and surface-normal loss to the depth loss, enabling the network to predict not only global depth but also the depth of blurred edges and small objects. Additionally, we use the Adam optimizer to train our network and speed up convergence. Our extensive experimental evaluation demonstrates the efficiency and effectiveness of the method, which is competitive with previous methods on the Make3D dataset and outperforms state-of-the-art methods on the NYU Depth v2 dataset.
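The three-term loss described above can be sketched as follows. The abstract does not specify the exact formulations or weights, so this is an assumed composition: an L1 depth term, an L1 term on forward-difference gradients, and a cosine term on surface normals derived from those gradients, with hypothetical weights `w_grad` and `w_normal`.

```python
import numpy as np

def gradients(d):
    """Forward differences of a (H, W) depth map along x and y."""
    gx = d[:, 1:] - d[:, :-1]   # shape (H, W-1)
    gy = d[1:, :] - d[:-1, :]   # shape (H-1, W)
    return gx, gy

def combined_depth_loss(pred, gt, w_grad=1.0, w_normal=1.0):
    """Sketch of depth + gradient + surface-normal loss on (H, W) maps.
    Exact terms and weights are assumptions, not the paper's definition."""
    l_depth = np.mean(np.abs(pred - gt))

    pgx, pgy = gradients(pred)
    ggx, ggy = gradients(gt)
    l_grad = np.mean(np.abs(pgx - ggx)) + np.mean(np.abs(pgy - ggy))

    def normals(gx, gy):
        # Crop both gradient maps to a common (H-1, W-1) grid, then
        # form unit normals n = (-gx, -gy, 1) / |(-gx, -gy, 1)|.
        gx, gy = gx[:-1, :], gy[:, :-1]
        n = np.stack([-gx, -gy, np.ones_like(gx)], axis=-1)
        return n / np.linalg.norm(n, axis=-1, keepdims=True)

    n_pred, n_gt = normals(pgx, pgy), normals(ggx, ggy)
    l_normal = np.mean(1.0 - np.sum(n_pred * n_gt, axis=-1))

    return l_depth + w_grad * l_grad + w_normal * l_normal
```

The gradient term penalizes blurred depth edges and the normal term penalizes locally inconsistent surface orientation, which matches the abstract's motivation of recovering fuzzy edges and small objects.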
Because of the tight coupling between the control layer and the data layer in traditional networks, path allocation algorithms lack a global view and real-time responsiveness, and network traffic is distributed unevenly, which leads to network congestion. To address this problem, we exploit the centralized control and transparency of SDN and propose a Top-K routing algorithm based on bandwidth utilization (Top-KRA-BU). The algorithm computes K available paths from the source nodes to the destination nodes in real time, evaluates the K paths according to their bandwidth utilization, and selects the optimal forwarding path. Experimental results show that the proposed routing algorithm outperforms the shortest-path-first (SPF) routing algorithm in network bandwidth utilization.
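The selection step can be sketched as follows. The abstract does not state how the K paths are ranked, so this sketch assumes a common choice: take the K shortest simple paths, then pick the one whose most-loaded link (bottleneck) has the lowest bandwidth utilization. The graph representation and the `util` map are illustrative assumptions.

```python
def all_simple_paths(adj, src, dst, path=None):
    """Enumerate all simple paths src -> dst in a small directed graph.
    adj: dict node -> list of neighbor nodes."""
    path = (path or []) + [src]
    if src == dst:
        yield path
        return
    for nxt in adj[src]:
        if nxt not in path:
            yield from all_simple_paths(adj, nxt, dst, path)

def top_k_by_utilization(adj, util, src, dst, k=3):
    """Keep the K shortest simple paths, then return the path whose
    bottleneck link has the lowest utilization.
    util: dict (u, v) -> current link utilization in [0, 1]."""
    paths = sorted(all_simple_paths(adj, src, dst), key=len)[:k]
    def bottleneck(p):
        return max(util[(u, v)] for u, v in zip(p, p[1:]))
    return min(paths, key=bottleneck)
```

In an SDN controller this would run over the topology and per-link statistics gathered from the switches, replacing the hop-count-only decision that SPF makes.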