Plasmon-enhanced polymer photovoltaic cells based on large aspect ratio gold nanorods and the related working mechanism
We propose an approach that reconstructs spectra using artificial neural networks (ANNs) instead of directly solving a matrix equation with calibration coefficients. ANNs are particularly effective at reconstructing spectra in noisy environments because they learn the relationship between inputs and outputs from large amounts of training data. Several training methods exist for ANNs. Compared with the scaled conjugate gradient algorithm and the Levenberg–Marquardt algorithm, the Bayesian regularization (BR) algorithm is demonstrated to be the better training algorithm for spectral reconstruction. We also compare the spectral reconstruction of the BR algorithm with that of traditional algorithms. Experimental results indicate that the spectrum reconstructed with the BR algorithm agrees closely with that measured by a commercial spectrometer, whereas obvious deviations occur in the reconstructions of the traditional algorithms due to inevitable background noise, rounding errors, and temperature variations. Therefore, spectral reconstruction using an ANN trained with the BR algorithm is the more suitable choice for the disordered-dispersion spectrometer.
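The contrast between direct matrix inversion and a regularized reconstruction can be illustrated with a minimal linear sketch. Everything here is synthetic (the calibration matrix, the Gaussian test spectrum, and the regularization weight are all assumed for illustration), and ridge regression stands in for Bayesian regularization, which on a linear model amounts to L2 weight decay with evidence-tuned hyperparameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                             # wavelength bins = detector channels

# Hypothetical ill-conditioned calibration matrix with decaying singular values
U, _ = np.linalg.qr(rng.normal(size=(n, n)))
V, _ = np.linalg.qr(rng.normal(size=(n, n)))
sv = 0.8 ** np.arange(n)
A = U @ np.diag(sv) @ V.T

# Synthetic Gaussian spectrum and noisy detector readout
wl = np.arange(n)
s_true = np.exp(-0.5 * ((wl - 30) / 5.0) ** 2)
y = A @ s_true + 0.01 * rng.normal(size=n)

# Traditional reconstruction: direct pseudo-inverse amplifies measurement noise
s_pinv = np.linalg.pinv(A) @ y

# Ridge-regularized reconstruction (linear analogue of Bayesian regularization)
lam = 1e-2
s_ridge = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

err_pinv = np.linalg.norm(s_pinv - s_true)
err_ridge = np.linalg.norm(s_ridge - s_true)
```

With the small singular values of `A`, even 1% noise dominates the pseudo-inverse solution, while the regularized solution stays near the true spectrum; this is the failure mode the abstract attributes to the traditional algorithms.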
Underwater images acquired by marine detectors inevitably suffer quality degradation from color distortion and the haze effect. Traditional methods are ineffective at removing haze, and the residual haze is intensified during color-correction and contrast-enhancement operations. Recently, deep-learning-based approaches have achieved greatly improved performance. However, most existing networks focus on the RGB color space while ignoring factors such as saturation and hue, which matter more to the human visual system. Motivated by these observations, we propose a two-step triple-color-space feature fusion and reconstruction network (TCRN) for underwater image enhancement. Briefly, in the first step, we extract LAB, HSV, and RGB feature maps of the image via a parallel U-Net-like network and introduce a dense pixel attention module (DPM) to filter the haze noise from the feature maps. In the second step, we first employ fully connected layers to strengthen the long-term dependence between high-dimensional features of the different color spaces; then, a group structure is used to reconstruct specific spatial features. On the UFO dataset, our method improves PSNR by 0.21% and SSIM by 0.1% compared with the second-best method. Extensive experiments show that our TCRN delivers competitive results against state-of-the-art methods in both qualitative and quantitative analyses.
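The saturation and hue factors the abstract emphasizes come from the HSV representation. A pure-NumPy RGB-to-HSV conversion illustrates what the HSV branch of such a network sees; this is a standalone sketch of the standard color-space transform, not the TCRN code:

```python
import numpy as np

def rgb_to_hsv(img):
    """Convert an (H, W, 3) float RGB image in [0, 1] to HSV, all channels in [0, 1]."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    v = img.max(axis=-1)                       # value = max channel
    c = v - img.min(axis=-1)                   # chroma
    s = np.where(v > 0, c / np.maximum(v, 1e-12), 0.0)  # saturation

    # Hue: which channel is the maximum decides the sextant
    hue = np.zeros_like(v)
    mask = c > 0
    rm = mask & (v == r)
    gm = mask & (v == g) & ~rm
    bm = mask & ~rm & ~gm
    hue[rm] = ((g - b)[rm] / c[rm]) % 6
    hue[gm] = (b - r)[gm] / c[gm] + 2
    hue[bm] = (r - g)[bm] / c[bm] + 4
    return np.stack([hue / 6.0, s, v], axis=-1)
```

Hazy underwater pixels tend toward low saturation, which is why a saturation-aware feature map gives the network a cue that the RGB channels alone blur together.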
Abstract. A dense point cloud with rich, realistic texture can be generated from multiview images using dense reconstruction algorithms such as Multi-View Stereo (MVS). However, its spatial precision depends on the performance of the matching and dense reconstruction algorithms, and outliers are usually unavoidable owing to mismatched image features. The lidar point cloud lacks texture but offers better spatial precision because it avoids such computational errors. This paper proposes a multiresolution patch-based 3D dense reconstruction method that integrates multiview images with a laser point cloud. A sparse point cloud is first generated from the multiview images by Structure from Motion (SfM) and then registered with the laser point cloud to establish the mapping between the laser point cloud and the multiview images. The laser point cloud is reprojected onto the multiview images, and the optimal level of the image pyramid is predicted from the distance distribution of the projected pixels; this level serves as the starting level for patch optimization during dense reconstruction. The laser points are used as stable seed points for patch growth and expansion and are stored in a dynamic octree structure. Subsequently, the corresponding patches are optimized and expanded over the pyramid images to achieve multiscale, multiresolution dense reconstruction. In addition, the octree's spatial index structure facilitates highly efficient parallel computing. The experimental results show that the proposed method is superior to traditional MVS in terms of model accuracy and completeness and has broad application prospects for high-precision 3D modeling of large scenes.
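The idea of choosing a starting pyramid level from the distance distribution of reprojected laser points could be sketched as below. The heuristic, its parameters, and the function name are all hypothetical illustrations, not the paper's actual predictor:

```python
import numpy as np

def pyramid_start_level(px, max_level=4):
    """Pick a starting image-pyramid level from reprojected laser points (hypothetical heuristic).

    px: (N, 2) pixel coordinates of laser points projected into one image.
    Dense projections suggest starting at a fine level (0); sparse ones at a coarse level.
    """
    # Median nearest-neighbour spacing (brute force; fine for a small sketch)
    d = np.linalg.norm(px[:, None, :] - px[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    spacing = np.median(d.min(axis=1))
    # Each pyramid level halves the resolution, so a spacing of ~2^k pixels
    # maps naturally to level k
    return int(np.clip(np.floor(np.log2(max(spacing, 1.0))), 0, max_level))
```

For example, points projected roughly one pixel apart would start patch optimization at the full-resolution level, while points eight pixels apart would start three levels up the pyramid.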
Abstract. In large-scale projects such as hydropower and transportation, real-time acquisition and generation of a 3D tunnel model provide an important basis for analyzing and evaluating tunnel stability. Simultaneous Localization And Mapping (SLAM) technology has the advantages of low cost and strong real-time performance, which can greatly improve data acquisition efficiency during tunnel excavation. Feature tracking and matching are critical steps in traditional 3D reconstruction technologies such as Structure from Motion (SfM) and SLAM. However, the complicated rock mass structures on the tunnel surface and the limited lighting make feature tracking and matching difficult. Manhattan SLAM is a technique integrating superpixels and the Manhattan-world assumption, in which both line features and planar features can be extracted more reliably. Rock mass discontinuities, including traces and structural planes, are distributed over the inner surface of tunnels and can be extracted for feature tracking and matching. Therefore, this paper proposes a 3D reconstruction pipeline for tunnels in which the Manhattan SLAM algorithm is applied for camera pose estimation and sparse point cloud generation, and Patch-based Multi-View Stereo (PMVS) is adopted for dense reconstruction. The Azure Kinect DK sensor is used for data acquisition. Experiments show that the proposed pipeline based on Manhattan SLAM and PMVS exhibits good robustness and feasibility for 3D tunnel reconstruction.
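Fitting a plane to a cluster of points, the basic operation behind extracting the structural planes mentioned above, can be sketched with a least-squares SVD fit. This is an illustrative standalone snippet, not the Manhattan SLAM implementation:

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to a point cluster.

    points: (N, 3) array. Returns (centroid, unit normal), where the normal is
    the direction of least variance of the centered points.
    """
    c = points.mean(axis=0)
    # The right singular vector with the smallest singular value is the
    # direction along which the centered points vary least, i.e. the normal.
    _, _, vt = np.linalg.svd(points - c, full_matrices=False)
    n = vt[-1]
    return c, n / np.linalg.norm(n)
```

A structural plane recovered this way (centroid plus normal) is a compact, lighting-independent feature, which is what makes planar features attractive under the poor illumination described above.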