Hyperspectral imaging captures information beyond what conventional RGB cameras can record, enabling applications such as material identification and spectral analysis. However, like many camera systems, most existing hyperspectral cameras are passive imaging systems: they require an external light source to illuminate objects in order to capture spectral intensity. As a result, the collected images depend heavily on environmental lighting, and the imaging system cannot function in dark or low-light environments. This work develops a prototype system for active hyperspectral imaging, which cycles through diverse single-wavelength light sources at a set frequency during imaging. This concept has several advantages: first, with controlled lighting, the magnitudes of the individual bands are more standardized, making reflectance information easier to extract; second, the system can focus on a desired spectral range by adjusting the number and type of LEDs; third, an active system can be mechanically simpler to manufacture, since it does not require the complex band filters used in passive systems. Lab experiments show that such a design is feasible and can yield informative hyperspectral images in low-light or dark environments: (1) spectral analysis: the system's hyperspectral images improve the discernibility of food ripening stages and stone types over RGB images; (2) interpretability: the system's hyperspectral images improve machine learning accuracy. The system can therefore benefit academic and industrial domains such as geochemistry, earth science, subsurface energy, and mining.
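As a concrete illustration of the band-sequential capture concept described in this abstract, the following Python sketch cycles through a set of single-wavelength LEDs, records one frame per band, and stacks the frames into a hyperspectral cube. This is a minimal sketch under stated assumptions, not the prototype's actual implementation: the LED driver and camera interfaces (`set_led`, `capture_frame`) and the wavelength list are hypothetical placeholders.

```python
# Minimal sketch of band-sequential active hyperspectral capture.
# set_led / capture_frame are hypothetical stand-ins for real hardware calls.
import numpy as np

LED_WAVELENGTHS_NM = [450, 500, 550, 600, 650, 700, 750]  # assumed band set

def set_led(wavelength_nm, on):
    """Hypothetical driver call: switch the single-wavelength LED on/off."""
    pass

def capture_frame(height=480, width=640):
    """Hypothetical camera call: return one grayscale intensity frame."""
    return np.random.rand(height, width)  # stand-in for a real sensor read

def capture_hyperspectral_cube():
    bands = []
    dark = capture_frame()  # dark frame: no LED, ambient-only baseline
    for wl in LED_WAVELENGTHS_NM:
        set_led(wl, on=True)
        frame = capture_frame()
        set_led(wl, on=False)
        # Subtracting the dark frame isolates the actively lit signal,
        # which is what standardizes the per-band magnitudes.
        bands.append(np.clip(frame - dark, 0.0, None))
    return np.stack(bands, axis=-1)  # H x W x num_bands cube

cube = capture_hyperspectral_cube()
print(cube.shape)
```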
Quantifying the colors of objects is useful in a wide range of applications, including medical diagnosis, agricultural monitoring, and food safety. Accurate colorimetric measurement is a laborious process normally performed through color matching tests in the laboratory. A promising alternative is to use digital images for colorimetric measurement, owing to their portability and ease of use. However, image-based measurements suffer from errors caused by the non-linear image formation process and unpredictable environmental lighting. Existing solutions often perform relative color correction across multiple images using discrete color reference boards, which may yield biased results due to the lack of continuous observation. In this paper, we propose a smartphone-based solution that couples a designated color reference board with a novel color correction algorithm to achieve accurate, absolute color measurements. Our color reference board contains multiple color stripes with continuous color sampling along their sides. A novel correction algorithm based on a first-order spatially varying regression model performs the color correction, leveraging both absolute color magnitude and scale to maximize correction accuracy. The algorithm is implemented as a "human-in-the-loop" smartphone application, in which an augmented reality scheme with a marker tracking module guides users to take images at an angle that minimizes the impact of non-Lambertian reflectance. Our experimental results show that the colorimetric measurement is device independent and reduces color variance by up to 90% for images collected under different lighting conditions. In the application of reading pH values from test papers, our system performs 200% better than human reading. The designed color reference board, the correction algorithm, and the augmented reality guidance form an integrated system for measuring color with increased accuracy. The technique is flexible enough to improve color reading performance beyond existing applications, as evidenced by qualitative and quantitative experiments on example applications such as pH-test reading.
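The sketch below gives one plausible reading of the first-order spatially varying regression mentioned above (not the authors' code): the correction is affine in the observed RGB, with coefficients that vary linearly across pixel position, fit by least squares on reference-board samples whose true colors are known. All names and the toy data are assumptions for illustration.

```python
# Minimal sketch of a first-order spatially varying color correction,
# fit on reference-board samples with known ground-truth colors.
import numpy as np

def design(obs_rgb, coords):
    """Features for an affine RGB map whose coefficients vary linearly
    with position: [1, x, y, rgb, rgb*x, rgb*y] -> shape (N, 12)."""
    x, y = coords[:, :1], coords[:, 1:2]
    return np.hstack([np.ones_like(x), x, y,
                      obs_rgb, obs_rgb * x, obs_rgb * y])

def fit_correction(obs_rgb, coords, true_rgb):
    """Least-squares fit of the (12, 3) weight matrix, one output channel
    per column."""
    W, *_ = np.linalg.lstsq(design(obs_rgb, coords), true_rgb, rcond=None)
    return W

def apply_correction(W, obs_rgb, coords):
    return design(obs_rgb, coords) @ W

# Toy usage with synthetic reference-board samples under a simulated
# spatially varying illumination gain:
rng = np.random.default_rng(0)
coords = rng.uniform(0, 1, (200, 2))
true = rng.uniform(0, 1, (200, 3))
gain = 0.8 + 0.3 * coords[:, :1]          # shading varies across the board
obs = np.clip(true * gain + 0.05, 0, 1)
W = fit_correction(obs, coords, true)
err = np.abs(apply_correction(W, obs, coords) - true).mean()
print(f"mean residual after correction: {err:.4f}")
```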
Recovering surface albedo from photogrammetric images for realistic rendering and synthetic environments can greatly facilitate downstream applications in VR/AR/MR and digital twins. Textured 3D models from standard photogrammetric pipelines are suboptimal for these applications because their textures are derived directly from images that intrinsically embed spatially and temporally varying environmental lighting, such as sun illumination and direction; this causes inconsistent surface appearance and makes such models less realistic when rendered under synthetic lighting. Conversely, since albedo images are less affected by environmental lighting, they can in turn benefit basic photogrammetric processing. In this paper, we address the problem of albedo recovery from aerial images within the photogrammetric process and demonstrate the benefit of albedo recovery for photogrammetric data processing through enhanced feature matching and dense matching. To this end, we propose an image formation model for outdoor aerial imagery under natural illumination conditions; we then derive the inverse model to estimate the albedo, using typical photogrammetric products as an initial approximation of the geometry. The estimated albedo images are evaluated on intrinsic image decomposition, relighting, feature matching, and dense matching/point cloud generation. Both synthetic and real-world experiments demonstrate that our method outperforms existing methods and can enhance photogrammetric processing.
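A generic outdoor image formation model of the kind alluded to above can be written as a sun term plus a sky term, and inverting it yields the albedo estimate. The following is a standard textbook sketch, not necessarily the paper's exact model; the shadow-visibility and sky-visibility factors are assumed symbols.

```latex
% Generic outdoor image-formation sketch (not necessarily the paper's
% exact model). I is observed intensity, rho the albedo, n the surface
% normal from the photogrammetric geometry, l the sun direction, s the
% shadow visibility, and a the sky visibility (ambient occlusion).
\[
  I(p) = \rho(p)\,\bigl( E_{\text{sun}} \max(0,\, \mathbf{n}(p)\cdot\mathbf{l})\, s(p)
         + E_{\text{sky}}\, a(p) \bigr)
\]
% Inverting the model gives the albedo estimate wherever the shading
% term is nonzero:
\[
  \hat{\rho}(p) = \frac{I(p)}{E_{\text{sun}} \max(0,\, \mathbf{n}(p)\cdot\mathbf{l})\, s(p)
         + E_{\text{sky}}\, a(p)}
\]
```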
Conflating/stitching 2.5D raster digital surface models (DSMs) into a single large model is common practice in geoscience applications; however, conflating full-3D mesh models, such as those from oblique photogrammetry, is extremely challenging. In this letter, we propose a novel approach that conflates multiple full-3D oblique photogrammetric models into a single, seamless mesh for high-resolution site modeling. Given two or more individually collected and created photogrammetric meshes, we first create a virtual camera field (with a panoramic field of view) to populate a virtual space represented by a Truncated Signed Distance Field (TSDF), an implicit volumetric representation amenable to linear 3D fusion; we then adaptively leverage the truncation bound of the meshes in the TSDF to conflate them into a single, accurate full-3D site model. With drone-based 3D meshes, we show that our approach significantly improves upon traditional conflation methods, opening new potential for creating very large, accurate full-3D mesh models in support of geoscience and environmental applications.
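The "linear 3D fusion" property of TSDFs referred to above is conventionally realized as weighted averaging of truncated signed distances, in the style of Curless and Levoy. The sketch below illustrates that standard step only, not the letter's full pipeline; the truncation bound and the toy sphere SDFs (standing in for fields sampled from two photogrammetric meshes) are assumptions.

```python
# Minimal sketch of TSDF-style linear fusion of two distance fields.
import numpy as np

TRUNC = 0.05  # truncation bound, in scene units (assumed value)

def truncate(sdf):
    return np.clip(sdf, -TRUNC, TRUNC)

def fuse_tsdf(sdf_a, w_a, sdf_b, w_b):
    """Linear fusion: weighted average of truncated distances.
    Weights are zero where a model contributes no observation."""
    num = w_a * truncate(sdf_a) + w_b * truncate(sdf_b)
    den = np.maximum(w_a + w_b, 1e-9)
    return num / den

# Toy usage on a 64^3 grid: two slightly offset spheres stand in for
# distance fields sampled from two individually built meshes.
g = np.linspace(-1, 1, 64)
X, Y, Z = np.meshgrid(g, g, g, indexing="ij")
sdf_a = np.sqrt(X**2 + Y**2 + Z**2) - 0.5
sdf_b = np.sqrt((X - 0.05)**2 + Y**2 + Z**2) - 0.5
w_a = (np.abs(sdf_a) < TRUNC).astype(float)  # observed only near surface
w_b = (np.abs(sdf_b) < TRUNC).astype(float)
fused = fuse_tsdf(sdf_a, w_a, sdf_b, w_b)
print(fused.shape)  # the zero level set of `fused` is the conflated surface
```

The conflated mesh would then be extracted from the zero level set of the fused volume, e.g. with marching cubes.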