This paper presents the design and implementation of a flexible manipulator composed of connected continuum kinematic modules (CKMs), which simplifies the fabrication of a continuum robot with multiple degrees of freedom. Each CKM consists of five sequentially arranged circular plates, four universal joints connecting adjacent plates, three individually actuated tension cables, and compression springs surrounding the tension cables. The base and movable circular plates connect the module to the robot platform or to a neighboring CKM. All tension cables are driven by linear actuators at a distal site. To demonstrate the function and feasibility of the proposed CKM, the kinematics of the continuum manipulator were verified through kinematic simulations at different end-effector velocities, which confirmed the correctness of the manipulator posture. A continuum robot composed of three CKMs was then fabricated to perform Jacobian-based image servo tracking tasks. In the eye-to-hand (ETH) experiment, a heart-shaped trajectory was tracked to verify the precision of the kinematics, achieving an endpoint root-mean-square error (RMSE) of 4.03. In the eye-in-hand (EIH) plugging-in/unplugging experiment, the accuracy of the image servo tracking system was demonstrated under a range of tolerance conditions, with processing times as low as 58±2.12 s and 83±6.87 s at the 90% confidence level for the unplugging and plugging-in tasks, respectively. Finally, quantitative tracking-error analyses are provided to evaluate the overall performance.
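The abstract refers to Jacobian-based image servo tracking. As a rough illustration of that idea, the sketch below shows a single resolved-rate visual-servo step that maps an image-space feature error to actuator velocities through a damped pseudoinverse of an image Jacobian. The function name, dimensions, gain, and damping value are illustrative assumptions, not the paper's actual formulation or cable mapping.

```python
# Minimal sketch of a Jacobian-based image-servo update (assumed structure).
import numpy as np

def image_servo_step(J_image, feature_current, feature_target, gain=0.5):
    """One resolved-rate visual-servo step.

    J_image  : (2m x n) image Jacobian mapping actuator velocities to
               image-feature velocities (m tracked features, n actuators).
    feature_current, feature_target : (2m,) stacked pixel coordinates.
    Returns an (n,) actuator-velocity command.
    """
    error = feature_target - feature_current  # image-space error
    # Damped least-squares pseudoinverse for robustness near singularities.
    lam = 1e-3
    J_pinv = J_image.T @ np.linalg.inv(
        J_image @ J_image.T + lam * np.eye(J_image.shape[0]))
    return gain * (J_pinv @ error)

# Example: 2 tracked features (4 rows), 9 cables (3 CKMs x 3 cables each).
J = np.random.randn(4, 9)
q_dot = image_servo_step(J, np.zeros(4), np.array([5.0, -3.0, 2.0, 1.0]))
print(q_dot.shape)  # (9,)
```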
Accurate segmentation of drivable areas and road obstacles is critical for autonomous mobile robots to navigate safely in indoor and outdoor environments. With the rapid advancement of deep learning, mobile robots can now navigate autonomously using models learned during training. However, existing techniques often perform poorly in complex scenes because unfamiliar objects are not included in the training dataset. Additionally, training deep neural networks to good performance generally requires large amounts of labeled data, which are time-consuming and labor-intensive to obtain. This paper addresses these issues with a self-supervised learning method for drivable area and road anomaly segmentation. First, we propose the Automatic Generating Segmentation Label (AGSL) framework, an efficient system that automatically generates segmentation labels for drivable areas and road anomalies by finding dissimilarities between the input image and its resynthesized counterpart and by localizing obstacles in the disparity map. We then train a semantic segmentation network on RGB-D datasets using the self-generated ground-truth labels produced by our method (AGSL labels) to obtain a pre-trained model. The results show that AGSL achieves high performance in labeling evaluation, and the pre-trained model performs reliably in real-time segmentation on mobile robots.
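The abstract describes label generation from two cues: appearance dissimilarity between the input and a resynthesized image, and obstacle localization in the disparity map. The sketch below illustrates one plausible way to fuse such cues into per-pixel labels; the thresholds, class encoding, and the assumption that the resynthesized image and disparity map are already available are placeholders, not the paper's actual AGSL procedure.

```python
# Hedged sketch of AGSL-style label generation: fuse an appearance-dissimilarity
# map (input vs. resynthesized image) with a disparity-based obstacle cue.
import numpy as np

def generate_labels(rgb, resynth, disparity, diss_thresh=0.3, disp_thresh=0.6):
    """rgb, resynth: (H, W, 3) floats in [0, 1]; disparity: (H, W), normalized.

    Returns a (H, W) uint8 label map: 1 = drivable area, 2 = road anomaly/obstacle.
    """
    # Per-pixel appearance mismatch; large values suggest unfamiliar objects
    # that the resynthesis model failed to reconstruct.
    dissimilarity = np.abs(rgb - resynth).mean(axis=2)
    # Pixels that protrude from the ground plane appear with high disparity here.
    obstacle = disparity > disp_thresh
    labels = np.ones(rgb.shape[:2], dtype=np.uint8)        # drivable by default
    labels[(dissimilarity > diss_thresh) | obstacle] = 2    # anomaly / obstacle
    return labels

# Example with random inputs, just to show the expected shapes.
H, W = 480, 640
out = generate_labels(np.random.rand(H, W, 3), np.random.rand(H, W, 3), np.random.rand(H, W))
print(out.shape, np.unique(out))
```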