A new approach based on a mixture of Gaussian and quadrilateral functions was developed to process bathymetric lidar waveforms. The approach was tested on two simulated data sets obtained from the existing Water-LIDAR (Wa-LID) waveform simulator. The first simulated data set corresponds to a sensor configuration modeled after a possible future satellite bathymetric lidar sensor that was previously studied. The second simulated data set corresponds to an airborne lidar configuration modeled using the HawkEye airborne lidar parameters. In the proposed approach, the lidar waveform is fitted with a combination of three functions: two Gaussians for the water surface and water bottom contributions and a quadrilateral function for the water column contribution. The results show more accurate bathymetry estimates than either a triangular function fitted to the column contribution or a simple peak detection method. For the satellite configuration, the bias is improved by 16.8 and 0.8 cm compared with the peak detection method and the triangular function, respectively. For the airborne configuration, the bias is improved by 10.0 and 2.4 cm compared with the peak detection method and the triangular function, respectively. The proposed waveform fitting using the quadrilateral function underestimates the bathymetry by −5.0 and −6.1 cm for the simulated satellite and airborne data sets, respectively. The standard deviations of the bathymetry estimates are 6.0 and 8.2 cm, respectively. The obtained biases are inherent to overlaps between the functions fitting the water surface, column, and bottom contributions.
Bathymetry is usually determined using the positions of the water surface and water bottom peaks of the green LiDAR waveform. The water bottom peak characteristics are known to be sensitive to the bottom slope, which induces pulse stretching. However, the effects of a more complex bottom geometry within the footprint below semitransparent media are less understood. In this letter, the effects of the water bottom geometry on the shifting of the bottom peaks in the waveforms were modeled. For the sake of simplicity, the bottom geometry is modeled as a 1D sequence of successive contiguous segments with various slopes. The positions of the peaks in the waveforms were deduced using a conventional peak detection process on simulated waveforms. The waveforms were simulated using the existing Wa-LID waveform simulator, which was extended in this study to account for a 1D complex bottom geometry. An experimental design covering various water depths, bottom slopes, and LiDAR footprint sizes, chosen according to the design of satellite sensors, was used for the waveform simulation. Power laws explaining the peak time shifting as a function of the footprint size and the water bottom slope were approximated. Peak shifting induces a bias of up to 92% of the true water depth in bathymetry estimates based on peak detection. This bias may also explain the frequent underestimation of water depth observed in various empirical studies of bathymetric airborne LiDAR surveys.
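The two ingredients above, conventional peak detection for bathymetry and a power-law fit of the peak time shift, can be sketched as follows. The waveform, the footprint sizes, and the shift values are illustrative stand-ins, not Wa-LID output, and the power-law form `shift = k * f**b` is an assumption for the demonstration.

```python
import numpy as np
from scipy.signal import find_peaks

C_NS = 0.299792458   # speed of light in vacuum, m/ns
N_WATER = 1.33       # refractive index of water

def gaussian(t, a, mu, sigma):
    return a * np.exp(-0.5 * ((t - mu) / sigma) ** 2)

def depth_from_waveform(t, w, prominence=0.05):
    # conventional peak detection: first peak = surface, last peak = bottom
    peaks, _ = find_peaks(w, prominence=prominence)
    dt = t[peaks[-1]] - t[peaks[0]]        # two-way travel time in water (ns)
    return C_NS * dt / (2.0 * N_WATER)     # depth in metres

# toy waveform: surface echo at 10 ns, bottom echo at 40 ns (true depth ~3.38 m)
t = np.linspace(0.0, 60.0, 600)
w = gaussian(t, 1.0, 10.0, 1.5) + gaussian(t, 0.3, 40.0, 2.0)
depth = depth_from_waveform(t, w)

# power-law model of the peak shift, shift = k * f**b, fitted in log-log space
f = np.array([5.0, 10.0, 20.0, 40.0])      # footprint diameters (m), hypothetical
shift = 0.02 * f ** 1.3                    # bottom-peak time shifts (ns), hypothetical
b, log_k = np.polyfit(np.log(f), np.log(shift), 1)
```

Fitting in log-log space turns the power law into a straight line, so the exponent `b` is simply the slope returned by the linear fit.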
Generation and manipulation of digital images based on deep learning (DL) are receiving increasing attention for both benign and malevolent uses. As the importance of satellite imagery grows, DL has also started being used for the generation of synthetic satellite images. However, the direct use of techniques developed for computer vision applications is not possible, due to the different nature of satellite images. The goal of our work is to describe a number of methods to generate manipulated and synthetic satellite images. To be specific, we focus on two different types of manipulations: full image modification and local splicing. In the former case, we rely on generative adversarial networks commonly used for style transfer applications, adapting them to implement two different kinds of transfer: (i) land cover transfer, aiming at modifying the image content from vegetation to barren and vice versa, and (ii) season transfer, aiming at modifying the image content from winter to summer and vice versa. With regard to local splicing, we present two different architectures. The first one uses an image generative pretrained transformer and is trained on pixel sequences in order to predict pixels in semantically consistent regions identified using watershed segmentation. The second technique uses a vision transformer operating on image patches rather than on a pixel-by-pixel basis. We use the trained vision transformer to generate synthetic image segments and splice them into a selected region of the to-be-manipulated image. All the proposed methods generate highly realistic synthetic satellite images. Among the possible applications of the proposed techniques, we mention the generation of proper datasets for the evaluation and training of tools for the analysis of satellite images.
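The generative models themselves (GANs, image GPT, vision transformer) are far too large to reproduce here, but the final splicing step can be sketched as a feathered paste of a synthetic segment into the to-be-manipulated image. The function name, the feather width, and the linear blending scheme are illustrative assumptions, not the paper's implementation, and the sketch handles single-band arrays only.

```python
import numpy as np

def feathered_splice(base, patch, top, left, feather=4):
    """Splice `patch` into `base` at (top, left), linearly blending a
    `feather`-pixel border so the seam is less visible.
    Both arrays are 2-D (single-band), float-valued in [0, 1]."""
    out = base.astype(float).copy()
    h, w = patch.shape
    # distance of each row/column from the nearest patch edge
    ys = np.minimum(np.arange(h), np.arange(h)[::-1])
    xs = np.minimum(np.arange(w), np.arange(w)[::-1])
    # per-pixel weight: 1 in the interior, ramping down toward the edge
    wy = np.clip((ys + 1) / feather, 0.0, 1.0)
    wx = np.clip((xs + 1) / feather, 0.0, 1.0)
    mask = np.outer(wy, wx)
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = mask * patch + (1.0 - mask) * region
    return out

# paste a uniform "generated" segment into an empty scene
base = np.zeros((32, 32))
patch = np.ones((8, 8))
out = feathered_splice(base, patch, 8, 8)
```

A hard copy-paste (`feather=0` behaviour) leaves a visible seam that forensic tools detect easily; feathering is the simplest mitigation, while the paper's point is to make the spliced content itself semantically consistent.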