With the increased use of virtual and augmented reality applications, the importance of point cloud data rises. High-quality capturing of point clouds is still expensive and thus, the need for point cloud super-resolution or point cloud upsampling techniques emerges. In this paper, we propose an interpolation scheme for color upsampling of three-dimensional color point clouds. As a point cloud represents an object's surface in three-dimensional space, we first conduct a local transform of the surface into a two-dimensional plane. Second, we propose to apply a novel Frequency-Selective Mesh-to-Mesh Resampling (FSMMR) technique for the interpolation of the points in 2D. FSMMR generates a model of weighted superpositions of basis functions on scattered points. This model is then evaluated at the final point positions in order to increase the resolution of the original point cloud. Our evaluation shows that our approach outperforms common interpolation schemes. Visual comparisons of the jaguar point cloud underline the quality of our upsampling results. The high performance of FSMMR holds for various sampling densities of the input point cloud.
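The two stages of the abstract above (local planarization, then a model of weighted basis functions fitted to scattered 2D samples) can be sketched as follows. This is a minimal illustration, not the authors' FSMMR implementation: the PCA plane projection is standard, and a plain least-squares fit of a 2D cosine basis stands in for the iterative, spectrally weighted model estimation of FSMMR. All function names and the choice of basis are illustrative assumptions.

```python
import numpy as np

def pca_plane_projection(points):
    """Project 3D points onto their best-fit plane via PCA (SVD)."""
    centroid = points.mean(axis=0)
    centered = points - centroid
    # Rows of vt are principal directions; the first two span the plane.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T  # 2D coordinates within the plane

def fit_cosine_model(coords2d, values, n_freq=4):
    """Least-squares fit of a 2D cosine basis to scattered samples.

    Stand-in for FSMMR's model estimation: FSMMR selects and weights
    basis functions iteratively, while this sketch solves one plain
    least-squares problem over a fixed cosine dictionary.
    """
    lo, hi = coords2d.min(axis=0), coords2d.max(axis=0)
    span = np.maximum(hi - lo, 1e-12)  # avoid division by zero

    def design(c):
        uv = (c - lo) / span  # normalize to [0, 1] for conditioning
        cols = [np.cos(np.pi * k * uv[:, 0]) * np.cos(np.pi * l * uv[:, 1])
                for k in range(n_freq) for l in range(n_freq)]
        return np.stack(cols, axis=1)

    coeffs, *_ = np.linalg.lstsq(design(coords2d), values, rcond=None)
    # Evaluate the fitted model at arbitrary 2D query positions.
    return lambda query: design(query) @ coeffs
```

Evaluating the returned model at a denser set of 2D positions, then lifting those positions back into 3D, yields the upsampled colored points.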
Many applications in image processing require resampling of arbitrarily located samples onto regular grid positions. This is important in frame-rate up-conversion, super-resolution, and image warping, among others. A state-of-the-art high-quality model-based resampling technique is frequency-selective mesh-to-grid resampling, which requires pre-estimation of key points. In this paper, we propose a new key point agnostic frequency-selective mesh-to-grid resampling (AFSMR) that does not depend on pre-estimated key points. Hence, the number of data points that are included is reduced drastically and the run time decreases significantly. To compensate for the key points, a spectral weighting function is introduced that models the optical transfer function in order to favor low frequencies over high ones. Thereby, resampling artifacts like ringing are suppressed reliably and the resampling quality increases. The new AFSMR is conceptually simpler, gains up to 1.2 dB in terms of PSNR compared to the original mesh-to-grid resampling, and is approximately 14.5 times faster.
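The spectral weighting idea described above can be illustrated with a small sketch. This is not the weighting function from the paper: the Gaussian shape, the normalization, and the `sigma` parameter are assumptions chosen only to show how low basis-function frequencies are favored over high ones, which is the mechanism that suppresses ringing.

```python
import numpy as np

def spectral_weight(k, l, n_freq, sigma=0.5):
    """Gaussian low-pass weight over 2D basis-function indices (k, l).

    Stand-in for the OTF-motivated spectral weighting in AFSMR: low
    frequencies receive weight near 1, high frequencies are attenuated,
    so basis functions prone to ringing contribute less. sigma is a
    free parameter of this sketch, not a value from the paper.
    """
    f = np.hypot(k, l) / n_freq          # normalized radial frequency
    return np.exp(-(f / sigma) ** 2)

def weight_coefficients(coeffs, n_freq, sigma=0.5):
    """Attenuate an (n_freq x n_freq) coefficient grid with the weight."""
    k, l = np.meshgrid(np.arange(n_freq), np.arange(n_freq), indexing="ij")
    return coeffs * spectral_weight(k, l, n_freq, sigma)
```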
High frame rates are desired in many fields of application. As in many cases the frame repetition rate of an already captured video has to be increased, frame rate up-conversion (FRUC) is of high interest. We pursue a motion-compensated approach. From two neighboring frames, the motion is estimated and the neighboring pixels are shifted along the motion vector into the frame to be reconstructed. For displaying, these irregularly distributed mesh pixels have to be resampled onto regularly spaced grid positions. We use the model-based key point agnostic frequency-selective mesh-to-grid resampling (AFSMR) for this task and show that AFSMR works best for applications that contain irregular meshes with varying densities. AFSMR gains up to 3.2 dB in contrast to the already high-performing frequency-selective mesh-to-grid resampling (FSMR). Additionally, AFSMR reduces the run time by a factor of 11 relative to FSMR.
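The pipeline sketched in the abstract (shift pixels along motion vectors into the intermediate frame, then resample the resulting irregular mesh back onto the grid) can be outlined as below. The bilinear splatting here is only a crude stand-in for AFSMR, and the dense `(H, W, 2)` flow layout in `(dy, dx)` order is an assumption of this sketch.

```python
import numpy as np

def mc_intermediate_mesh(frame0, flow):
    """Shift the pixels of frame0 halfway along their motion vectors.

    Returns irregularly located sample positions and their intensities,
    i.e. the 'mesh' that a mesh-to-grid resampler (such as AFSMR) would
    turn back into a regular image. flow is dense, shape (H, W, 2),
    in (dy, dx) order; frame0 is a 2D grayscale array.
    """
    h, w = frame0.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    pos = np.stack([ys + 0.5 * flow[..., 0],
                    xs + 0.5 * flow[..., 1]], axis=-1)
    return pos.reshape(-1, 2), frame0.reshape(-1)

def splat_to_grid(positions, values, shape):
    """Bilinear splatting onto regular grid positions (stand-in for AFSMR)."""
    h, w = shape
    acc = np.zeros(shape)
    wgt = np.zeros(shape)
    for (y, x), v in zip(positions, values):
        y0, x0 = int(np.floor(y)), int(np.floor(x))
        for dy in (0, 1):
            for dx in (0, 1):
                yy, xx = y0 + dy, x0 + dx
                if 0 <= yy < h and 0 <= xx < w:
                    wb = (1 - abs(y - yy)) * (1 - abs(x - xx))
                    acc[yy, xx] += wb * v
                    wgt[yy, xx] += wb
    # Normalize by accumulated weights; empty cells stay zero.
    return np.divide(acc, wgt, out=np.zeros(shape), where=wgt > 0)
```

A model-based resampler such as AFSMR replaces `splat_to_grid` and is what gives the reported quality gains on irregular, varying-density meshes.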
The demand for high-resolution point clouds has increased in recent years. However, capturing high-resolution point clouds is expensive and thus frequently replaced by upsampling of low-resolution data. Most state-of-the-art methods are either restricted to a rastered grid, incorporate normal vectors, or are trained for a single use case. We propose to use the frequency selectivity principle, where a frequency model is estimated locally that approximates the surface of the point cloud. Then, additional points are inserted into the approximated surface. Our novel frequency-selective geometry upsampling shows superior results in terms of subjective as well as objective quality compared to state-of-the-art methods for scaling factors of 2 and 4. On average, our proposed method shows a 4.4 times smaller point-to-point error than PU-Net, the second-best state-of-the-art method, for a scale factor of 4.
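The insertion step (estimate a local surface model, then place new points on it) can be illustrated with a deliberately simplified sketch in which the local frequency model is replaced by a best-fit plane. The real method fits a richer frequency model that captures curvature; everything below, including the grid-based placement and the `scale` handling, is an assumption of this illustration.

```python
import numpy as np

def upsample_planar_patch(points, scale=2):
    """Insert new points on a locally fitted plane.

    Crude stand-in for frequency-selective geometry upsampling: a PCA
    plane replaces the local frequency model, and new samples are laid
    out on a regular 2D grid within the patch, then lifted back to 3D.
    """
    centroid = points.mean(axis=0)
    centered = points - centroid
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    axes = vt[:2]                       # in-plane directions (2 x 3)
    uv = centered @ axes.T              # 2D patch coordinates
    lo, hi = uv.min(axis=0), uv.max(axis=0)
    n = int(np.ceil(np.sqrt(len(points)) * scale))
    uu, vv = np.meshgrid(np.linspace(lo[0], hi[0], n),
                         np.linspace(lo[1], hi[1], n))
    dense_uv = np.stack([uu.ravel(), vv.ravel()], axis=1)
    return centroid + dense_uv @ axes   # lift 2D samples back to 3D
```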