Strongly scattering media bring great difficulties to optical imaging, a problem shared by medical imaging and many other fields. The optical memory effect makes it possible to image through strongly scattering random media. However, this approach is restricted to a narrow angular field-of-view (FOV), which prevents it from being applied in practice. In this paper, a practical convolutional neural network called PDSNet is proposed, which effectively breaks the FOV limitation imposed by the optical memory effect. Experiments are conducted to show that scattered patterns can be reconstructed accurately in real time by PDSNet, and that it is widely applicable to retrieving complex objects of random scales through different scattering media.
Imaging through scattering media is one of the hotspots in the optical field, and impressive results have been demonstrated via deep learning (DL). However, most DL approaches are purely data-driven and lack the related physics prior, which results in limited generalization capability. In this paper, through an effective combination of speckle-correlation theory and the DL method, we demonstrate a physics-informed learning method for scalable imaging through unknown thin scattering media, which achieves high reconstruction fidelity for sparse objects after training with only one diffuser. The method solves the inverse problem with more general applicability, so that objects with different complexity and sparsity can be reconstructed accurately through unknown scattering media, even when the diffusers have different statistical properties. This approach can also extend the field of view (FOV) of traditional speckle-correlation methods. It gives impetus to the development of scattering imaging in practical scenes and provides an enlightening reference for using DL methods to solve optical problems.
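For context, the speckle-correlation prior invoked here states that, within the memory-effect range, the autocorrelation of the captured speckle pattern approximates the autocorrelation of the hidden object. The minimal NumPy sketch below estimates that autocorrelation via the Wiener-Khinchin theorem; the function name and the input array are illustrative assumptions, not code from the paper.

```python
import numpy as np

def speckle_autocorrelation(speckle):
    """Estimate the object autocorrelation from a single speckle image.

    Within the optical memory effect, the speckle autocorrelation
    approximates the autocorrelation of the hidden object, which is the
    physics prior referenced in the abstract above.
    """
    s = speckle.astype(np.float64)
    s = (s - s.mean()) / s.std()             # remove the DC background
    spectrum = np.abs(np.fft.fft2(s)) ** 2   # power spectrum of the speckle
    ac = np.fft.ifft2(spectrum).real         # Wiener-Khinchin theorem
    return np.fft.fftshift(ac)               # center the zero-lag peak

# Usage (hypothetical camera frame as a 2D array):
# ac = speckle_autocorrelation(camera_frame)
```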
Deep learning-based fringe projection profilometry (FPP) shows potential for challenging three-dimensional (3D) reconstruction of objects with dynamic motion, complex surfaces, and extreme environments. However, previous deep learning-based methods are all supervised, which makes them difficult to apply to scenes different from the training data and requires large training datasets. In this paper, we propose a new geometric constraint-based phase unwrapping (GCPU) method that enables untrained deep learning-based FPP for the first time. An untrained convolutional neural network is designed to achieve correct phase unwrapping through optimization in the network parameter space. The loss function of the optimization enforces 3D, structural, and phase consistency. The designed untrained network directly outputs the desired fringe order from the input phase and fringe background. Experiments verify that the proposed GCPU method provides higher robustness than traditional GCPU methods, resulting in accurate 3D reconstruction for objects with complex surfaces. Unlike the commonly used temporal phase unwrapping, the proposed GCPU method does not require additional fringe patterns and can also be used for dynamic 3D measurement.
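For readers unfamiliar with the term, the fringe order is the integer number of 2π jumps that, added to the wrapped phase, yields the absolute phase used for 3D reconstruction. The sketch below illustrates only this basic relation, with hypothetical names that are not the paper's API.

```python
import numpy as np

def unwrap_with_fringe_order(wrapped_phase, fringe_order):
    """Recover the absolute phase from the wrapped phase and a per-pixel
    integer fringe order (the quantity the untrained network is described
    as predicting). Names here are illustrative only."""
    return wrapped_phase + 2.0 * np.pi * fringe_order

# Example: wrap a known phase map, then restore it with its fringe orders.
# wrapped = np.angle(np.exp(1j * true_phase))          # values in (-pi, pi]
# order_map = np.round((true_phase - wrapped) / (2 * np.pi))
# absolute = unwrap_with_fringe_order(wrapped, order_map)
```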
Salient object detection remains one of the most important and active research topics in computer vision, with wide-ranging applications to object recognition, scene understanding, image retrieval, context-aware image editing, image compression, etc. Most existing methods determine salient objects directly by exploring various salient object features. Here, we propose a novel graph-based ranking method to detect and segment the most salient object in a scene according to its relationship to image border (background) regions, i.e., the background feature. First, we use regions/super-pixels as graph nodes, which are fully connected to enable both long-range and short-range relations to be modeled. The relationship of each region to the image border (background) is evaluated in two stages: (i) ranking with hard background queries, and (ii) ranking with soft foreground queries. We experimentally show that this two-stage ranking-based salient object detection method is complementary to traditional methods, and that the integrated results outperform both. Our method exploits intrinsic image structure to achieve high-quality salient object determination using a quadratic optimization framework with a closed-form solution that can be easily computed. Extensive evaluation and comparison on three challenging saliency datasets demonstrate that our method consistently outperforms 10 state-of-the-art models by a large margin.
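The abstract does not spell out its quadratic objective, but a common graph-ranking formulation with a closed-form solution looks like the following sketch; this is an illustrative manifold-ranking-style example under assumed notation, not necessarily the paper's exact formulation.

```python
import numpy as np

def graph_ranking(W, y, alpha=0.99):
    """Rank graph nodes against query indicators with a closed-form
    solution of a quadratic smoothness-plus-fitting objective.

    W     : (n, n) symmetric affinity matrix between regions/super-pixels
    y     : (n,) query indicator vector (e.g. border/background nodes in
            the first stage, soft foreground queries in the second stage)
    alpha : trade-off between graph smoothness and fitting the queries
    """
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    S = D_inv_sqrt @ W @ D_inv_sqrt                 # normalized affinity
    n = W.shape[0]
    # Closed-form minimizer of the quadratic ranking objective.
    f = np.linalg.solve(np.eye(n) - alpha * S, y)
    return f
```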