New robotic systems are expected to play an essential role in future dismantling services for renewing office interiors in buildings. In dismantling tasks, robots must be able to find and remove very small parts such as screws and bolts, and recognizing such small parts is difficult for robots. This article describes a vision-based hierarchical recognition approach for dismantling tasks in which large structures are detected first, so that the small parts attached to them can be detected more easily. On the ceiling side, after the ceiling panels have been dismantled, the screws that once held these panels to the light gauge steel (LGS) must be removed carefully so that the LGS can be reused. Once the pose of the large structure (the LGS) is detected, and given a robot arm with a stereo camera on its tip, a trajectory near that structure can be computed for detecting the small parts, in this case the screws. The large structure is detected with a 2D line-detection process, and its 3D pose is measured with the stereo camera. During the motion along the structure, screws are detected by applying a multi-template matching process to every captured image, followed by a support vector machine (SVM) that recognizes the screw candidates with a high true-positive rate and a low false-positive rate. These rates are further improved by a temporal multi-image integration that tracks the screw candidates. In the experiment, 10 actual screws distributed over 1.1 m along a linear segment of the LGS are successfully recognized with few false positives and with a final 3D position error of 2 mm on average. The feasibility of the methodology is evaluated experimentally under different lighting conditions in a realistic environment.
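As a rough illustration of the per-frame detection pipeline described above (multi-template matching followed by SVM classification), the following Python sketch uses OpenCV and an assumed pre-trained scikit-learn SVM. The template images, matching threshold, HOG features, and class labels are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch: screw-candidate detection per frame, assuming OpenCV template
# matching and a pre-trained SVM classifier. Thresholds and features are assumptions.
import cv2
import numpy as np

def detect_screw_candidates(gray_frame, templates, match_threshold=0.7):
    """Multi-template matching: return candidate (x, y, w, h) boxes in one frame."""
    candidates = []
    for tmpl in templates:
        h, w = tmpl.shape
        scores = cv2.matchTemplate(gray_frame, tmpl, cv2.TM_CCOEFF_NORMED)
        ys, xs = np.where(scores >= match_threshold)
        candidates.extend((x, y, w, h) for x, y in zip(xs, ys))
    return candidates

def classify_candidates(gray_frame, candidates, svm):
    """SVM stage: keep only candidates the classifier accepts as screws."""
    hog = cv2.HOGDescriptor()  # default 64x128 window (assumed feature choice)
    accepted = []
    for (x, y, w, h) in candidates:
        patch = cv2.resize(gray_frame[y:y + h, x:x + w], (64, 128))
        feat = hog.compute(patch).reshape(1, -1)
        if svm.predict(feat)[0] == 1:  # label 1 = screw class (assumption)
            accepted.append((x, y, w, h))
    return accepted
```

The temporal multi-image integration mentioned in the abstract would then track accepted candidates across consecutive frames along the trajectory and reject detections that do not persist.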
Mobile robots must be able to obtain an accurate map of their surroundings in order to move within them. Detecting materials that are invisible to one sensor but not to others requires a fusion scheme with at least two sensors; with such a scheme, it is possible to generate a 2D occupancy map in which glass obstacles are identified. An artificial neural network is used to fuse data from a tri-sensor setup (a RealSense stereo camera, a 2D 360° LiDAR, and ultrasonic sensors) capable of detecting glass and other materials typically found in indoor environments that may or may not be visible to a traditional 2D LiDAR, hence the expression "improved LiDAR." A preprocessing scheme filters outliers, projects the 3D point cloud onto a 2D plane, and adjusts the distance data. Using a neural network as the data fusion algorithm, all the information is integrated into a single, more accurate distance-to-obstacle reading, which is then used to generate a 2D occupancy grid map (OGM) that incorporates the information from all sensors. The Robotis Turtlebot3 Waffle Pi robot is used as the experimental platform to compare the different fusion strategies. Test results show that with such a fusion algorithm it is possible to detect glass and other obstacles with an estimated root-mean-square error (RMSE) of 3 cm across multiple fusion strategies.
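A minimal sketch of the idea, assuming per-bearing range readings from the three sensors are fused by a small network into one corrected range and then written into an occupancy grid. The network shape, grid resolution, and training data are assumptions for illustration only.

```python
# Sketch: neural-network fusion of [LiDAR, stereo depth, ultrasonic] ranges per
# bearing into a single distance estimate, then a simple 2D occupancy-grid update.
import numpy as np
import tensorflow as tf

fusion_net = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(3,)),        # three sensor ranges per bearing
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1),                 # fused distance-to-obstacle (m)
])
fusion_net.compile(optimizer="adam", loss="mse")
# fusion_net.fit(X_train, y_train, ...)       # trained against ground-truth ranges

def update_ogm(ogm, robot_xy, angles, fused_ranges, resolution=0.05):
    """Mark fused range endpoints as occupied cells in a 2D occupancy grid."""
    for theta, r in zip(angles, fused_ranges):
        x = robot_xy[0] + r * np.cos(theta)
        y = robot_xy[1] + r * np.sin(theta)
        i, j = int(y / resolution), int(x / resolution)
        if 0 <= i < ogm.shape[0] and 0 <= j < ogm.shape[1]:
            ogm[i, j] = 1  # occupied (free-space ray casting omitted for brevity)
    return ogm
```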
Autonomous mobile robots are an important focus of current research because of the advantages they bring to industry, such as performing dangerous tasks with greater precision than humans. An autonomous mobile robot must be able to generate a collision-free trajectory from a specified start location to a target location while avoiding static and dynamic obstacles. Machine learning, a sub-field of artificial intelligence, is applied to create a Long Short-Term Memory (LSTM) neural network that allows a mobile robot to find a trajectory between two points and navigate while avoiding a dynamic obstacle. The inputs of the network are the distances between the mobile robot and the obstacles returned by the LiDAR sensor, the desired target location, and the mobile robot's location with respect to the odometry reference frame. By learning the mapping between inputs and outputs in the sample data, the model produces the linear and angular velocities of the mobile robot. The mobile robot and its dynamic environment are simulated in Gazebo, an open-source 3D robotics simulator that can be synchronized with ROS (Robot Operating System). The computational experiments show that the network model can plan a safe navigation path in a dynamic environment. The best test accuracy obtained was 99.24%, and the model generalizes to trajectories for which it was not specifically trained, reaching destinations within a 15 cm radius of a trained destination position.
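A possible shape for such a network, sketched under assumptions: a short history of LiDAR ranges, goal coordinates, and odometry pose is fed to an LSTM that outputs linear and angular velocity commands. The sequence length, beam count, and layer sizes below are illustrative, not the values used in the study.

```python
# Sketch: LSTM mapping (LiDAR ranges, goal, pose) history to velocity commands.
import tensorflow as tf

SEQ_LEN = 10                      # time steps of sensor history (assumption)
N_BEAMS = 360                     # LiDAR beams (assumption)
N_FEATURES = N_BEAMS + 2 + 3      # ranges + goal (x, y) + pose (x, y, yaw)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(SEQ_LEN, N_FEATURES)),
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(2),     # [linear velocity, angular velocity]
])
model.compile(optimizer="adam", loss="mse")
# model.fit(X_seq, y_vel, ...)    # trained on trajectories recorded in Gazebo/ROS
```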
In the renewal of office interiors, a robotic system is needed to assist the human workers engaged in this kind of work. On the ceiling side, after the ceiling panels have been dismantled, the screws that once held these boards to the Light Gauge Steel (LGS) must be removed carefully so that the LGS can be reused. The proposed methodology begins with detection of the LGS. This detection is a useful first step toward detecting the screws, because a scanning trajectory under and near that metal ceiling structure can then be generated. Both tasks require a stereo camera configured as an eye-in-hand system. During the motion under the structure, the screws are detected by applying a multi-template matching process to every captured image, and a multi-frame integration increases the robustness of the screw detection. The results of all the images processed along the trajectory are analyzed to measure both the true-positive and false-positive detection rates of the screws attached to the LGS and to measure the 3D position of each screw.
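The first step, detecting the LGS in the image so a scanning trajectory can be generated along it, could be approximated as below with edge detection and a probabilistic Hough transform; the thresholds and the choice of the longest segment are assumptions, and the stereo triangulation of the detected line into a 3D pose is omitted.

```python
# Sketch: detect the LGS as the dominant 2D line segment in a camera image.
import cv2
import numpy as np

def detect_lgs_line(gray_frame):
    """Return the longest detected line segment (x1, y1, x2, y2), or None."""
    edges = cv2.Canny(gray_frame, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=100, maxLineGap=10)
    if segments is None:
        return None
    # Take the longest segment as the LGS edge candidate (assumption).
    return max((s[0] for s in segments),
               key=lambda s: np.hypot(s[2] - s[0], s[3] - s[1]))
```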