Speckle filtering is an unavoidable step in applications that involve amplitude or intensity images acquired by coherent systems, such as Synthetic Aperture Radar (SAR). Speckle is a target-dependent phenomenon; its estimation and reduction therefore require identifying specific properties of the image features. Speckle filtering is one of the most prominent topics in the SAR image processing research community, which first tackled the problem with handcrafted feature-based filters. Although classical algorithms have progressively achieved better performance, more recent Convolutional Neural Networks (CNNs) have proven to be a promising alternative, given their outstanding capability to efficiently learn task-specific filters. To date, only simplistic CNN architectures have been exploited for the speckle filtering task. While these architectures outperform classical algorithms, they still show weaknesses in texture preservation. In this work, a deep encoder–decoder CNN architecture tailored to SAR images is proposed to enhance speckle filtering capabilities while preserving texture. This objective is addressed by adapting the U-Net CNN, which has been modified and optimized accordingly. The architecture extracts features at different scales and produces detailed reconstructions through its system of skip connections. A two-phase learning strategy is adopted: the model is first pre-trained on a synthetic dataset and then adapted to the real SAR image domain through a fast fine-tuning procedure. During the fine-tuning phase, a modified version of total variation (TV) regularization is introduced to improve the network performance on real SAR data. Finally, experiments are carried out on simulated and real data to compare the performance of the proposed method with state-of-the-art methodologies.
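As a rough illustration of how a TV term can be attached to the fine-tuning objective, the following PyTorch sketch adds a standard anisotropic total variation penalty to a reconstruction loss. It is a minimal sketch only: the abstract describes a *modified* TV regularization whose exact form is not given here, and the function names and the `tv_weight` hyperparameter are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def total_variation(img: torch.Tensor) -> torch.Tensor:
    """Anisotropic total variation of a batch of single-channel images (B, 1, H, W)."""
    dh = (img[:, :, 1:, :] - img[:, :, :-1, :]).abs().mean()
    dw = (img[:, :, :, 1:] - img[:, :, :, :-1]).abs().mean()
    return dh + dw

def finetune_loss(pred: torch.Tensor, target: torch.Tensor, tv_weight: float = 1e-4) -> torch.Tensor:
    """Reconstruction loss plus a TV penalty on the filtered output (illustrative weights)."""
    return F.mse_loss(pred, target) + tv_weight * total_variation(pred)
```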
Interferometric SAR (InSAR) algorithms exploit synthetic aperture radar (SAR) images to estimate ground displacements over wide areas, updated at each new satellite acquisition. The analysis of the resulting time series finds application, among others, in monitoring tasks regarding seismic faults, subsidence, landslides, and urban structures, for which an accurate and timely response is required. A typical analysis consists of identifying, among the numerous time series, the ones that exhibit an anomalous displacement and therefore deserve further investigation. In practice, this is realised by selecting the time series characterised by trend changes with respect to their historical behaviour. In this work, we propose a Deep Learning approach for change point detection in InSAR time series. The designed architecture combines Long Short-Term Memory (LSTM) cells, to model the temporal correlation among samples in the input time series, with Time-Gated LSTM (TGLSTM) cells, to consider the sampling rate as additional information during learning. We further address the lack of ground truth by developing a suitable pipeline for realistic data simulation. The method has been developed and validated through a large suite of experiments. Both quantitative and qualitative analyses demonstrate the detection capabilities of the learned model and show that it is a valid alternative to the statistical reference algorithm. We also applied the method in a real continuous monitoring project to analyse InSAR time series over the Tuscany region in Italy, proving its effectiveness in the real domain.
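To make the idea of per-sample change-point scoring over an irregularly sampled series concrete, the sketch below is a generic PyTorch model that runs an LSTM over (displacement, sampling interval) pairs and emits a change-point probability per time step. This is not the authors' architecture: the TGLSTM cell is replaced here by simply feeding the sampling interval as an extra input feature, and the class and parameter names are hypothetical.

```python
import torch
import torch.nn as nn

class ChangePointDetector(nn.Module):
    """Illustrative sketch: LSTM over (displacement, delta_t) with a per-step sigmoid head."""
    def __init__(self, hidden_size: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size, batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, displacement: torch.Tensor, delta_t: torch.Tensor) -> torch.Tensor:
        # displacement, delta_t: (batch, time) -> stacked into (batch, time, 2)
        x = torch.stack([displacement, delta_t], dim=-1)
        h, _ = self.lstm(x)
        # change-point probability for every time step
        return torch.sigmoid(self.head(h)).squeeze(-1)
```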
Survival analysis studies time-modeling techniques for an event of interest occurring in a population. It has found widespread applications in healthcare, engineering, and the social sciences. However, the data needed to train survival models are often distributed, incomplete, censored, and confidential. In this context, federated learning can substantially improve the quality of models trained on distributed data while preserving user privacy. However, federated survival analysis is still in its early development, and there is no common benchmarking dataset for testing federated survival models. This work proposes a novel technique for constructing realistic heterogeneous datasets, in a reproducible way, starting from existing non-federated datasets. Specifically, we provide two novel dataset-splitting algorithms based on the Dirichlet distribution to assign each data sample to a carefully chosen client: quantity-skewed splitting and label-skewed splitting. Furthermore, these algorithms allow different levels of heterogeneity to be obtained by changing a single hyperparameter. Finally, numerical experiments provide a quantitative evaluation of the heterogeneity level using log-rank tests and a qualitative analysis of the generated splits. The implementation of the proposed methods is publicly available to support reproducibility and to encourage common practices for simulating federated environments for survival analysis.
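As an illustration of Dirichlet-based splitting, the sketch below implements a plausible quantity-skewed split: client proportions are drawn from a Dirichlet distribution governed by a single concentration parameter, and samples are assigned accordingly. This is an assumption-laden sketch, not the paper's exact algorithm; the function name and signature are hypothetical.

```python
import numpy as np

def quantity_skewed_split(n_samples: int, n_clients: int, alpha: float, seed: int = 0):
    """Assign sample indices to clients with Dirichlet-distributed client sizes.
    Smaller alpha -> more heterogeneous (skewed) client sizes."""
    rng = np.random.default_rng(seed)
    proportions = rng.dirichlet(alpha * np.ones(n_clients))
    assignments = rng.choice(n_clients, size=n_samples, p=proportions)
    return [np.where(assignments == c)[0] for c in range(n_clients)]

# Example: split 1000 samples across 5 clients with strong quantity skew
splits = quantity_skewed_split(n_samples=1000, n_clients=5, alpha=0.5)
print([len(s) for s in splits])
```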