Laser welding is a rapidly developing technology of utmost importance in a number of industrial processes. The physics of the process has been investigated over the past 50 years and is mostly well understood. Nevertheless, online laser-quality monitoring remains an open issue to this day due to the dynamic complexity of the process. This paper supplements existing approaches in the field of in situ and real-time laser-quality monitoring by presenting a novel combination of state-of-the-art sensors and machine learning for data processing. The investigations were carried out using laser welding of titanium workpieces. The quality was estimated a posteriori by visual inspection of cross-sections of the welded joints. Four quality categories were defined to cover the two main laser welding regimes: conduction and keyhole. The signals from the laser back reflection and the optical and acoustic emissions were recorded during the laser welding process and decomposed with M-band wavelets. The relative energies of narrow frequency bands were taken as descriptive features. The extracted features were correlated with the laser welding quality using a Laplacian graph support vector machine classifier. In addition, an adaptive kernel for the classifier, constructed from Gaussian mixtures, was developed to improve the analysis of the complex feature distributions. The presented laser welding setup and the developed adaptive kernel algorithm were able to classify the quality of every 2 µm of the welded joint with an accuracy ranging between 85.9% and 99.9%. Finally, the results of the developed adaptive kernel were compared with state-of-the-art machine learning methods.
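The feature-extraction step described above, decomposing each sensor signal into frequency bands and taking the relative band energies as features, can be sketched as follows. This is an illustrative sketch only: it uses a plain Haar wavelet as a stand-in for the M-band wavelets of the paper, and the function name and parameters are assumptions, not the authors' implementation.

```python
import numpy as np

def haar_band_energies(signal, levels=3):
    """Decompose a 1-D signal with a Haar wavelet (a simple stand-in for
    the paper's M-band wavelets) and return the relative energy of each
    frequency band as a normalized feature vector."""
    approx = np.asarray(signal, dtype=float)
    energies = []
    for _ in range(levels):
        if len(approx) % 2:                     # pad to an even length
            approx = np.append(approx, approx[-1])
        # Orthonormal Haar step: detail = high-pass band, approx = low-pass band
        detail = (approx[0::2] - approx[1::2]) / np.sqrt(2)
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)
        energies.append(np.sum(detail ** 2))
    energies.append(np.sum(approx ** 2))        # energy of the final low band
    total = sum(energies)
    return [e / total for e in energies]        # relative energies sum to 1

# Feature vector for a synthetic narrowband signal
features = haar_band_energies(np.sin(np.linspace(0, 40 * np.pi, 1024)))
```

Because the Haar transform is orthonormal, the band energies sum to the total signal energy, so the relative energies form a proper distribution over frequency bands, which is what makes them usable as classifier inputs.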
Despite extensive research efforts in the field of laser welding, the imperfect repeatability of weld quality remains an open topic. Indeed, the inherent complexity of the underlying physical phenomena prevents the implementation of an effective controller using conventional regulators. To close this gap, we propose the application of Reinforcement Learning for closed-loop adaptive control of welding processes. The presented system is able to autonomously learn a control law that achieves a predefined weld quality independently of the starting conditions and without prior knowledge of the process dynamics. Specifically, our control unit influences the welding process by modulating the laser power and uses optical and acoustic emission signals as sensory input. The algorithm consists of three elements: a smart agent interacting with the process, a feedback network for quality monitoring, and an encoder that retains only the quality-critical events from the sensory input. Based on the data representation provided by the encoder, the smart agent decides the output laser power. The corresponding input signals are then analyzed by the feedback network to determine the resulting process quality. Depending on the distance to the targeted quality, a reward is given to the agent. The agent is designed to learn from its experience by taking the actions that maximize not just its immediate reward, but the sum of all the rewards it will receive from that moment on. Two learning schemes were tested for the agent, namely Q-Learning and Policy Gradient. The required training time to reach the targeted quality was 20 min for the former technique and 33 min for the latter.
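The first of the two learning schemes mentioned, Q-Learning, can be illustrated with a minimal tabular loop on a toy stand-in for the welding process. Everything here is an assumption for illustration: the discretized quality states, the three power-adjustment actions, and the distance-based reward are hypothetical, not the paper's actual environment.

```python
import random

# Toy stand-in for the welding process: states are discretized quality
# estimates, actions adjust the laser power (down / hold / up).
N_STATES, ACTIONS = 5, (-1, 0, +1)
TARGET = 2                               # index of the targeted quality state

def step(state, action):
    """Hypothetical process response: a power change shifts the quality."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = -abs(nxt - TARGET)          # closer to the target => higher reward
    return nxt, reward

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
random.seed(0)
state = 0
for _ in range(2000):
    # Epsilon-greedy action selection
    if random.random() < eps:
        a = random.choice(ACTIONS)
    else:
        a = max(ACTIONS, key=lambda x: Q[(state, x)])
    nxt, r = step(state, a)
    # Q-learning update: bootstrap on the greedy value of the next state
    best_next = max(Q[(nxt, x)] for x in ACTIONS)
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = nxt

# Greedy policy after training: raise power below the target, hold at it
policy = {s: max(ACTIONS, key=lambda x: Q[(s, x)]) for s in range(N_STATES)}
```

The key property mirrored from the abstract is that the update maximizes the discounted sum of future rewards rather than the immediate reward alone, via the `gamma * best_next` bootstrap term.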
One of the greatest challenges in the design of modern wearable devices is energy efficiency. While data processing and communication have received considerable attention from industry and academia, leading to highly efficient microcontrollers and transmission devices, sensor data acquisition in medical devices is still based on a conservative paradigm that requires regular sampling at the Nyquist rate of the target signal. This requirement is usually excessive for signals that are typically sparse and highly non-stationary, leading to data overload and a waste of resources in the full processing pipeline. In this work, we propose a new system to create event-based heart-rate analysis devices, including a novel algorithm for QRS detection that is able to process electrocardiogram signals acquired irregularly and well below the theoretically required Nyquist rate. This technique allows us to drastically reduce the average sampling frequency of the signal and, hence, the energy needed to process it and extract the relevant information. We implemented both the proposed event-based algorithm and a state-of-the-art version based on regular sampling on an ultra-low-power hardware platform, and the experimental results show that the event-based version reduces the energy consumption at runtime by up to a factor of 15.6, while the detection performance is maintained at an average F1 score of 99.5%.
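The core idea of acquiring samples only on signal events, rather than at a fixed Nyquist rate, can be illustrated with a level-crossing sampler followed by a naive event-domain peak detector. This is a generic sketch of the event-based acquisition principle under assumed thresholds, not the paper's QRS detection algorithm.

```python
def level_crossing_sample(signal, delta):
    """Emit (index, value) events only when the input has moved more than
    `delta` from the last stored sample -- an event-based alternative to
    regular Nyquist-rate sampling (illustrative, not the paper's ADC)."""
    events = [(0, signal[0])]
    last = signal[0]
    for i, v in enumerate(signal[1:], start=1):
        if abs(v - last) >= delta:
            events.append((i, v))
            last = v
    return events

# Synthetic "ECG": flat baseline with two sharp spikes standing in for
# QRS complexes (real ECG morphology is far richer; this is a toy signal).
ecg = [0.0] * 200
for peak in (50, 150):
    ecg[peak - 1], ecg[peak], ecg[peak + 1] = 0.5, 1.0, 0.5

events = level_crossing_sample(ecg, delta=0.3)
# Naive event-domain detector: an event counts as a QRS candidate when
# its amplitude exceeds a fixed threshold.
qrs = [i for i, v in events if v > 0.8]
```

On this toy signal the sampler keeps only a handful of events out of 200 samples while the spikes remain detectable, which is the mechanism behind the energy savings the abstract reports.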
Event-based sensors have the potential to optimize energy consumption at every stage in the signal processing pipeline, including data acquisition, transmission, processing and storage. However, almost all state-of-the-art systems are still built upon classical Nyquist-based periodic signal acquisition. In this work, we design and validate the Polygonal Approximation Sampler (PAS), a novel circuit that implements a general-purpose event-based sampler using a polygonal approximation algorithm as the underlying sampling trigger. The circuit can be dynamically reconfigured to produce a coarse or a detailed reconstruction of the analog input by adjusting the error threshold of the approximation. The proposed circuit is designed at the Register Transfer Level and processes each input sample received from the ADC in a single clock cycle. The PAS has been tested with three different types of archetypal signals captured by wearable devices (electrocardiogram, accelerometer and respiration data) and compared with a standard periodic ADC. These tests show that single-channel signals with slow variations and constant segments (such as the single-lead ECG and respiration signals used here) benefit greatly from the proposed sampling technique, reducing the amount of data used by up to 99% without significant performance degradation. At the same time, multi-channel signals (such as the six-dimensional accelerometer signal) can still benefit from the designed circuit, achieving a reduction factor of up to 80% with minor performance degradation. These results open the door to new types of wearable sensors with reduced size and higher battery lifetime.
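The trigger logic of a polygonal approximation sampler can be sketched in software: a new vertex is emitted whenever the straight line from the last kept point can no longer represent all buffered samples within the error threshold. This is an illustrative software reconstruction of the general technique under an assumed chord-deviation criterion, not the single-cycle RTL design of the PAS itself.

```python
def polygonal_approx(samples, eps):
    """Online polygonal approximation: keep only the vertices needed so
    that the piecewise-linear reconstruction stays within `eps` of the
    buffered input samples (software sketch of a PAS-style trigger)."""
    vertices = [0]
    start = 0
    for end in range(2, len(samples)):
        x0, y0 = start, samples[start]
        x1, y1 = end, samples[end]
        # Maximum vertical deviation of the buffered points from the chord
        for x in range(start + 1, end):
            y_line = y0 + (y1 - y0) * (x - x0) / (x1 - x0)
            if abs(samples[x] - y_line) > eps:
                vertices.append(end - 1)   # previous point becomes a vertex
                start = end - 1
                break
    vertices.append(len(samples) - 1)      # always keep the final sample
    return vertices

# Constant segments collapse to their endpoints; a linear ramp needs no
# inner vertices -- the behavior that favors ECG and respiration signals.
sig = [0.0] * 50 + [i * 0.1 for i in range(50)] + [5.0] * 50
kept = polygonal_approx(sig, eps=0.05)
```

On this flat-ramp-flat signal only four vertices survive out of 150 samples, matching the abstract's observation that signals with slow variations and constant segments compress best, while raising `eps` would trade reconstruction detail for an even coarser output.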