Strong scattering media pose great difficulties for optical imaging, a problem that also arises in medical imaging and many other fields. The optical memory effect makes it possible to image through a strong random scattering medium; however, it restricts the angular field of view (FOV), which prevents practical application. In this paper, a practical convolutional neural network called PDSNet is proposed, which effectively breaks through the FOV limitation imposed by the optical memory effect. Experiments are conducted to show that scattered patterns can be reconstructed accurately in real time by PDSNet, and that it is widely applicable to retrieving complex objects of random scales through different scattering media.
Imaging through scattering media is one of the hotspots in optics, and impressive results have been demonstrated via deep learning (DL). However, most DL approaches are purely data-driven and lack the related physics priors, which results in limited generalization capability. In this paper, by effectively combining speckle-correlation theory with DL, we demonstrate a physics-informed learning method for scalable imaging through an unknown thin scattering medium, which achieves high reconstruction fidelity for sparse objects after training with only one diffuser. The method solves the inverse problem with broader applicability: objects of different complexity and sparsity can be reconstructed accurately through unknown scattering media, even when the diffusers have different statistical properties. The approach also extends the field of view (FOV) of traditional speckle-correlation methods. It gives impetus to the development of scattering imaging in practical scenes and provides an instructive reference for using DL methods to solve optical problems.
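The speckle-correlation theory the abstract refers to rests on the well-known memory-effect relation: within the memory-effect range, the captured speckle image I is the convolution of the object O with a shift-invariant point-spread function S, so the speckle autocorrelation essentially equals the object autocorrelation. A sketch of that relation (not restated in the abstract, and with symbols I, O, S introduced here for illustration):

```latex
I = O * S
\;\Rightarrow\;
I \star I = (O \star O) * (S \star S) \approx O \star O ,
```

where $*$ denotes convolution, $\star$ denotes autocorrelation, and $S \star S$ is a sharply peaked function for a fully developed speckle pattern, so it acts approximately as a delta function.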
This paper focuses on the control of a simulated robot hand using a data glove, which gathers information from resistors that measure the flexion of the user's fingers. The data glove used was a commercially available controller with five flex sensors, one on each finger, to collect data. An Arduino Uno R3 board, based on the ATmega328P microcontroller, was used to power the data glove and read the sensors. CoppeliaSim was used as the simulation platform, with a compatible hand model found in the software's community. C++ is used in both the CoppeliaSim and Arduino environments, with a micro-USB cable connecting the two. The proposed data glove allows real-time control, transmitting its data as a string of five values. In the future, adjustments and modifications could be made to allow more precise control and eventually more degrees of freedom. A video demo of the proposed design can be seen at: https://youtu.be/w-GqzigfkXY.
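The abstract only says the glove transmits "a string of 5 different values"; the exact serial format is not specified. A minimal C++ sketch of one plausible encoding, with `packReadings` (glove side) and `parseReadings` (simulator side) as hypothetical helper names, might look like:

```cpp
#include <array>
#include <sstream>
#include <string>
#include <vector>

// Glove side: pack five flex-sensor readings (e.g. 10-bit ADC counts,
// 0-1023) into a comma-separated line for transmission over serial.
std::string packReadings(const std::array<int, 5>& flex) {
    std::ostringstream out;
    for (std::size_t i = 0; i < flex.size(); ++i) {
        if (i > 0) out << ',';
        out << flex[i];
    }
    return out.str();
}

// Simulator side: parse the received line back into per-finger values
// that can be mapped onto the hand model's joint targets.
std::vector<int> parseReadings(const std::string& line) {
    std::vector<int> values;
    std::istringstream in(line);
    std::string token;
    while (std::getline(in, token, ','))
        values.push_back(std::stoi(token));
    return values;
}
```

On an actual Arduino, `packReadings` would be fed from `analogRead` calls and the result written with `Serial.println`; the parsing side would run in the CoppeliaSim script that drives the hand model.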
A scattering medium makes it very difficult to locate and reconstruct objects, especially when the objects are distributed at different positions. In this paper, a novel physics- and learning-heuristic method is presented to locate and image an object through a strong scattering medium. A physics-informed framework, named DINet, is constructed to predict the depth and the image of the hidden object from the captured speckle pattern. With the phase-space constraint and an efficient network structure, the proposed method locates the object with a mean depth error of less than 0.05 mm and images it with an average peak signal-to-noise ratio (PSNR) above 24 dB over depths ranging from 350 mm to 1150 mm. DINet is the first to solve the problem of quantitative locating and imaging from a single speckle pattern over a large depth range. Compared with traditional methods, it paves the way to practical applications requiring multi-physics sensing through scattering media.
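For reference, the PSNR figure quoted above is the standard image-quality metric, defined for an $M \times N$ reconstruction $\hat{I}$ of a ground-truth image $I$ with peak value $\mathrm{MAX}_I$ as:

```latex
\mathrm{PSNR} = 10\,\log_{10}\frac{\mathrm{MAX}_I^{2}}{\mathrm{MSE}},
\qquad
\mathrm{MSE} = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}
\bigl(I(i,j)-\hat{I}(i,j)\bigr)^{2}.
```

A PSNR above 24 dB thus corresponds to a per-pixel RMS error below roughly 6% of the peak intensity.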
As an important part of the manufacturing industry, redundant robots can undertake heavy and tough tasks that are difficult for human operators to sustain. Such onerous and repetitive industrial manipulations, such as positioning and carrying, impose heavy load-bearing burdens on redundant robots' joints. Under long-term, intense industrial operation, the joints of redundant robots are prone to functional failure, which may cause abrupt joint lock or freeze at unknown time instants. Task accuracy at the end-effector therefore tends to diminish considerably because of broken-down joints. In this paper, a sparsity-based method for fault-tolerant motion planning of redundant robots is provided for the first time. The developed fault-tolerant redundancy resolution approach is defined as an L1-norm based optimization with intermediate variables involved to avoid discontinuity in the dynamic solution process. Meanwhile, potential faulty joint(s) can be located by the designed fault observer with the proposed fault-diagnosis algorithm. The proposed fault-tolerant motion planning method with fault diagnosis is dynamically optimized by the resultant primal-dual neural networks with provable convergence. Moreover, the sparsity of joint actuation achieved by the proposed method is enhanced by around 43.87% and 36.51% for tracking circle and square paths, respectively. Simulation and experimental results on a redundant robot (KUKA iiwa) verify the efficacy of the developed sparsity-based fault-tolerant approach.
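The abstract does not state the exact optimization; a generic sketch of an L1-norm (sparsity-promoting) redundancy resolution at the joint-velocity level, with the symbols $J$, $\theta$, $r_d$ and the frozen-joint constraint introduced here as assumptions, could read:

```latex
\min_{\dot{\theta}} \;\; \|\dot{\theta}\|_{1}
\quad \text{s.t.} \quad
J(\theta)\,\dot{\theta} = \dot{r}_{d},
\qquad
\dot{\theta}_{k} = 0 \;\; \text{for each diagnosed faulty joint } k,
```

where $J(\theta)$ is the manipulator Jacobian and $\dot{r}_{d}$ the desired end-effector velocity. Minimizing the L1 norm drives as many joint velocities as possible to zero, which is what makes the actuation sparse; the paper's formulation additionally introduces intermediate variables so the dynamic solution remains continuous.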