System identification is a key discipline within the field of automation that deals with inferring mathematical models of dynamic systems from input-output measurements. Conventional identification methods require extensive data generation and are thus not suitable for real-time applications. In this paper, a novel real-time approach to the parametric identification of linear systems using Deep Learning (DL) and the Modified Relay Feedback Test (MRFT) is proposed. The proposed approach requires only a single steady-state cycle of MRFT and guarantees stability and performance in both the identification and control phases. The MRFT output is passed to a trained DL model that identifies the underlying process parameters in milliseconds. A novel modification of the Softmax function is derived to better adapt the DL model to the process identification task. Quadrotor Unmanned Aerial Vehicle (UAV) attitude and altitude dynamics were used in simulation and experimentation to verify the presented approach. Results show the effectiveness and real-time capability of the proposed approach, which outperforms the conventional Prediction Error Method in terms of accuracy, robustness to biases, computational efficiency, and data requirements.
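To make the identification pipeline concrete, the following minimal sketch maps a single resampled MRFT steady-state cycle to the most likely entry of a discretized process-parameter grid through a classifier head with a standard softmax. Everything here is an assumption for illustration: the single linear layer stands in for the trained DL model, the parameter grid, weights, and function names are placeholders, and the paper's derived Softmax modification is not reproduced.

```python
import numpy as np

def softmax(z):
    # Standard softmax; the paper derives a modified variant (not shown here).
    z = z - z.max()                      # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def identify_process(cycle, W, b, parameter_grid):
    # Map one fixed-length MRFT cycle to the most likely (gain, time-constant)
    # candidate from a discretized grid of hypothetical process parameters.
    logits = W @ cycle + b               # linear layer as a stand-in for the DL model
    probs = softmax(logits)
    return parameter_grid[int(np.argmax(probs))], probs

# Toy usage with random placeholders
rng = np.random.default_rng(0)
cycle = rng.standard_normal(128)         # one steady-state cycle, resampled to 128 points
parameter_grid = [(K, tau) for K in (0.5, 1.0, 2.0) for tau in (0.1, 0.5, 1.0)]
W = rng.standard_normal((len(parameter_grid), cycle.size))
b = np.zeros(len(parameter_grid))
params, probs = identify_process(cycle, W, b, parameter_grid)
```

Because the forward pass reduces to a handful of matrix operations, millisecond-scale inference of the kind the abstract reports is plausible on commodity hardware.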
Robotic vision plays a major role in applications ranging from factory automation to service robotics. However, traditional frame-based cameras limit continuous visual feedback due to their low sampling rate, poor performance in low-light conditions, and the redundant data they produce during real-time image processing, especially in high-speed tasks. Neuromorphic event-based vision is a recent technology that provides human-like vision capabilities, such as observing dynamic changes asynchronously at a high temporal resolution (1 µs) with low latency and a wide dynamic range. In this paper, for the first time, we present a purely event-based visual servoing method using a neuromorphic camera in an eye-in-hand configuration for the grasping pipeline of a robotic manipulator. We devise three surface layers of active events to directly process the incoming stream of events generated by relative motion. A purely event-based approach is used to detect corner features, localize them robustly using heatmaps, and generate virtual features for tracking and grasp alignment. Based on the visual feedback, the motion of the robot is controlled to make the upcoming event features converge to the desired events in spatiotemporal space. The controller switches its operation so that it explores the workspace, reaches the target object, and achieves a stable grasp. The event-based visual servoing (EBVS) method is comprehensively studied and validated experimentally using a commercial robot manipulator in an eye-in-hand configuration for both static and dynamic targets. Experimental results show the superior performance of the EBVS method over frame-based vision, especially in high-speed operations and poor lighting conditions. As such, EBVS overcomes the issues of motion blur, lighting, and exposure timing that affect conventional frame-based visual servoing methods.
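As a rough, self-contained illustration of the event-stream bookkeeping such a pipeline relies on, the sketch below maintains a surface of active events (a per-pixel map of the latest event timestamp) and derives an exponentially decayed activity heatmap from it. The resolution, decay constant, and all names are assumptions; this does not reproduce the authors' three-layer design, corner detector, or controller.

```python
import numpy as np

def update_sae(sae, events):
    # Write each event's timestamp into the surface at its pixel location.
    for x, y, t, polarity in events:
        sae[y, x] = t
    return sae

def activity_heatmap(sae, t_now, tau=0.02):
    # Exponentially decayed view of the surface: pixels that fired recently
    # are near 1, stale pixels (including never-fired ones) decay toward 0.
    return np.exp(-(t_now - sae) / tau)

height, width = 480, 640                          # assumed sensor resolution
sae = np.full((height, width), -np.inf)           # no events observed yet
events = [(320, 240, 0.0010, 1), (321, 240, 0.0012, 0)]  # toy (x, y, t, polarity) events
sae = update_sae(sae, events)
heat = activity_heatmap(sae, t_now=0.002)
```

A heatmap of this form is one plausible substrate for the robust corner localization the abstract describes, since recent, spatially clustered activity stands out against the decayed background.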
Modern aircraft require the assembly of thousands of components with high accuracy and reliability. The normality of drilled holes is a critical geometrical tolerance that must be achieved to realize an efficient assembly process; failure to achieve it leads to structures prone to fatigue problems and assembly errors. Elastomer-based tactile sensors have been used to help robots acquire useful physical interaction information from their environments. However, current tactile sensors have not yet been developed to support robotic machining in achieving the tight tolerances of aerospace structures. In this paper, a novel elastomer-based tactile sensor was developed for cobot machining. Three commercial silicone-based elastomer materials were characterised by mechanical testing in order to select the material with the best deformability. A finite element model was developed to simulate the deformation of the tactile sensor upon contact with surfaces of different normality. Additive manufacturing was employed to fabricate the tactile sensor mould, which was chemically etched to improve its surface quality. The tactile sensor was obtained by directly casting and curing the optimum elastomer material onto the additively manufactured mould. A machine learning model was trained on the simulated and experimental data obtained from the sensor. The capability of the developed vision-based tactile sensor was evaluated in real-world experiments with various inclination angles, achieving a mean perpendicularity tolerance of 0.34°. The developed sensor opens a new perspective on low-cost precision cobot machining.
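As a hedged sketch of the learning step only, the code below fits a ridge-regularized least-squares regressor from per-contact feature vectors (standing in for elastomer deformation measurements) to inclination angles. The features, labels, and dimensions are synthetic placeholders; the paper's actual model, feature definition, and dataset are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(1)
n_samples, n_features = 200, 16
X = rng.standard_normal((n_samples, n_features))       # stand-in for simulated + experimental features
true_w = rng.standard_normal(n_features)
angles = X @ true_w + 0.1 * rng.standard_normal(n_samples)  # inclination-angle labels (degrees)

# Ridge-regularized least squares as a simple stand-in for the trained ML model
lam = 1e-2
w = np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ angles)

pred = X @ w
mae = float(np.mean(np.abs(pred - angles)))            # mean absolute angular error (degrees)
```

On real sensor data, an angular-error metric of this kind is what a figure like the reported 0.34° mean perpendicularity tolerance would summarize.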