A novel resolution-enhancement method for integral imaging microscopy that applies interpolation and deep learning is proposed, and a complete system with both hardware and software components is implemented. The resolution of the captured elemental image array is increased by generating intermediate-view elemental images between neighboring elemental images, and an orthographic-view visualization of the specimen is reconstructed. A deep learning algorithm then upscales each reconstructed directional-view image to the maximum possible resolution with improved quality. Because a pretrained model is applied, the proposed system processes the images directly without additional training. The experimental results indicate that the proposed system produces resolution-enhanced directional-view images, and quantitative evaluation metrics for reconstructed images such as the peak signal-to-noise ratio and the power spectral density confirm that the proposed system improves image quality.
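The abstract cites the peak signal-to-noise ratio as one of its evaluation metrics but does not give a formula. As a minimal sketch of the standard PSNR definition (the function name and the 8-bit peak value are our assumptions, not details from the paper):

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between a reference and a test image."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0:
        # Identical images: PSNR is unbounded.
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)
```

Higher PSNR indicates that the enhanced directional-view image deviates less from the reference.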
In this paper, we propose an advanced three-dimensional visualization method for an integral imaging microscope system that simultaneously improves the resolution and quality of the reconstructed image. The main advance of the proposed method is that it generates a high-quality three-dimensional model without resolution limitations by combining a high-resolution two-dimensional color image with depth data obtained through a fully convolutional neural network. First, the high-resolution two-dimensional image and an elemental image array of the specimen are captured, and the orthographic-view image is reconstructed from the elemental image array. Then, after the brightness of the input images is normalized, a convolutional neural network-based depth estimation produces a more accurate and improved depth image, and noise in the resulting depth image is filtered out. Subsequently, the estimated depth data is combined with the high-resolution two-dimensional image and transformed into a high-quality three-dimensional model. The experiments confirmed that the displayed high-quality three-dimensional model closely resembles the original image.
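The final step, fusing the estimated depth map with the high-resolution color image into a 3D model, can be sketched as a per-pixel back-projection into a colored point cloud. The abstract does not specify the camera model, so the orthographic back-projection, scale factor, and function name below are our assumptions:

```python
import numpy as np

def depth_to_point_cloud(color: np.ndarray, depth: np.ndarray,
                         depth_scale: float = 1.0) -> np.ndarray:
    """Combine an (h, w, 3) color image and an (h, w) depth map into an
    (h*w, 6) array of [x, y, z, r, g, b] points (orthographic assumption)."""
    h, w = depth.shape
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([xs.ravel(), ys.ravel(),
                    (depth * depth_scale).ravel()], axis=1)
    cols = color.reshape(-1, 3)
    return np.hstack([pts, cols.astype(np.float64)])
```

In practice the point cloud would then be meshed and rendered; a perspective camera model would replace the orthographic one where calibration data is available.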
Unmanned aerial vehicles and battleships are equipped with infrared search and track (IRST) systems to search for and detect targets even in low-visibility environments. However, infrared sensors are easily affected by diverse conditions, so most IRST systems apply advanced contrast enhancement (CE) methods to cope with the low dynamic range of the sensor output and with image saturation. General histogram equalization of infrared images has unwanted side effects such as low contrast expansion and saturation. Local-area processing for saturation reduction has also been studied to address saturation and non-uniformity. We propose a cross-fusion-based adaptive contrast enhancement with three non-uniformity countermeasures. We evaluate the proposed method and compare it with conventional CE methods using discrete entropy, PSNR, SSIM, RMSE, and computation time. We present experimental results for images from various products using several datasets, including infrared, multi-spectral satellite, surveillance, and general gray and color images, as well as video sequences. The results, compared using an integrated image quality measurement index, show that the proposed method maintains its performance on various degraded datasets.
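For reference, the baseline that the abstract criticizes, global histogram equalization, and one of its evaluation metrics, discrete entropy, are both short to state. A minimal numpy sketch for 8-bit grayscale images (function names are ours; the paper's adaptive cross-fusion method itself is not reproduced here):

```python
import numpy as np

def hist_equalize(img: np.ndarray) -> np.ndarray:
    """Global histogram equalization of an 8-bit grayscale image.
    This is the conventional baseline, prone to over-expansion/saturation."""
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf_min = cdf[cdf > 0][0]  # first nonzero CDF value
    lut = np.round((cdf - cdf_min) / (cdf[-1] - cdf_min) * 255).astype(np.uint8)
    return lut[img]

def discrete_entropy(img: np.ndarray) -> float:
    """Shannon entropy (bits) of the gray-level distribution; higher values
    indicate richer gray-level content after enhancement."""
    p = np.bincount(img.ravel(), minlength=256) / img.size
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

Note that `hist_equalize` assumes a non-constant image; a constant input would make the denominator zero.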
We propose a light-field microscopy display system that provides improved image quality and realistic three-dimensional (3D) measurement information. Our approach sequentially acquires both a high-resolution two-dimensional (2D) image and a light-field image of the specimen. We put forward a matting Laplacian-based depth estimation algorithm that yields nearly realistic 3D surface data, allowing depth values relatively close to the actual surface, together with measurement information, to be calculated from the light-field images of specimens. High-reliability regions of the focus measure map and the spatial affinity information of the matting Laplacian are used to estimate these depths. This process provides a reference value for the light-field microscopy depth range that was not previously available. A 3D model is regenerated by combining the depth data with the high-resolution 2D image. The elemental image array is rendered from the 3D model through a simplified direction-reversal calculation method driven by user interaction and is displayed on the 3D display device. We confirm that the proposed system increases the accuracy of depth estimation and measurement and improves the quality of visualization and of the 3D display images.
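The abstract relies on a focus measure map but does not name the operator. A common choice in shape-from-focus work is the modified Laplacian, sketched below as an illustration only; the wrap-around border handling, the absence of window summation, and the function name are our simplifications, not details from the paper:

```python
import numpy as np

def focus_measure_map(img: np.ndarray) -> np.ndarray:
    """Modified-Laplacian focus measure: high values mark in-focus pixels.
    Borders wrap around via np.roll (a simplification)."""
    f = img.astype(np.float64)
    # |2*I(x,y) - I(x-1,y) - I(x+1,y)| + |2*I(x,y) - I(x,y-1) - I(x,y+1)|
    lap_x = np.abs(2 * f - np.roll(f, 1, axis=1) - np.roll(f, -1, axis=1))
    lap_y = np.abs(2 * f - np.roll(f, 1, axis=0) - np.roll(f, -1, axis=0))
    return lap_x + lap_y
```

Pixels where this measure is high and stable form the high-reliability regions; elsewhere, a propagation scheme such as the matting Laplacian's spatial affinity fills in the depth.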