Convolutional sparse representations are a form of sparse representation with a structured, translation-invariant dictionary. Most convolutional dictionary learning algorithms to date operate in batch mode, requiring simultaneous access to all training images during the learning process, which results in very high memory usage and severely limits the size of the training set that can be used. Very recently, however, a number of authors have considered the design of online convolutional dictionary learning algorithms that offer far better scaling of memory and computational cost with training set size than batch methods. This paper extends our prior work, improving a number of aspects of our previous algorithm; proposing an entirely new algorithm, with better performance, that also supports the inclusion of a spatial mask for learning from incomplete data; and providing a rigorous theoretical analysis of these methods.

Here $C = \{D \mid \|d_m\|_2^2 \leq 1, \ \forall m\}$ is the constraint set, which is necessary to resolve the scaling ambiguity between $D$ and $x$.

Batch dictionary learning methods (e.g. [18, 17, 2, 68]) sample a batch of training signals $\{s_1, s_2, \ldots, s_K\}$ before training and minimize an objective function such as

$$\min_{D \in C, \, \{x_k\}} \ \frac{1}{K} \sum_{k=1}^{K} \left( \frac{1}{2} \left\| D x_k - s_k \right\|_2^2 + \lambda \left\| x_k \right\|_1 \right) . \qquad (4)$$

These methods require simultaneous access to all the training samples during training. In contrast, online dictionary learning methods process training samples in a streaming fashion. Specifically, let $s^{(t)}$ be the chosen sample at the $t$-th training step. The framework of online dictionary learning is

$$x^{(t)} = \mathrm{SC}\big(D^{(t-1)}, s^{(t)}\big), \qquad D^{(t)} = \text{D-update}\big(\{x^{(\tau)}\}_{\tau=1}^{t}, \{s^{(\tau)}\}_{\tau=1}^{t}, D^{(t-1)}\big),$$

where SC denotes sparse coding, for instance (2), and the D-update computes a new dictionary $D^{(t)}$ given the past information. While each outer iteration of a batch dictionary learning algorithm involves computing the coefficient maps $x_k$ for all training samples, online learning methods compute the coefficient map $x^{(t)}$ for only one, or a small number, of training samples $s^{(t)}$ at each iteration; the other coefficient maps $\{x^{(\tau)}\}_{\tau=1}^{t-1}$ used in the D-update have already been computed in previous iterations. Thus, these algorithms can be applied to large sets of training data or to dynamically generated data.

Online D-update methods, and the corresponding online dictionary learning algorithms, can be divided into two classes:

Class I: first-order algorithms [55, 35, 1]. These are inspired by Stochastic Gradient Descent (SGD) and use only first-order information, the gradient of the loss function, to update the dictionary $D$.

Class II: second-order algorithms. These are inspired by Recursive Least Squares (RLS) [47, 15], Iterative Reweighted Least Squares (IRLS) [34, 56], Kernel RLS [21], second-order Stochastic Approximation (SA) [37, 51, 71, 48, 74, 30], etc. They use previous information to construct a surrogate function $F^{(t)}(D)$ that approximates the true loss function of $D$, and then update $D$ by minimizing this surrogate function. These surrogate functions involve both first-order and second-order information, i.e. the gradient and the Hessian of the loss function, respectively.

The most significant difference between the two classes is …
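As a concrete illustration of this online framework, the following is a minimal sketch (in Python, assuming only NumPy) of the streaming loop with an ISTA sparse-coding step and a Class I, projected-SGD D-update. For clarity it uses the standard (non-convolutional) model $D x \approx s$; in the convolutional setting the product $D x$ would be replaced by the sum of convolutions $\sum_m d_m * x_m$ and the projection onto $C$ would act on each filter $d_m$. The function names, step-size schedule, and parameter values are illustrative assumptions, not the algorithms analyzed in this paper.

```python
# Minimal sketch of the online dictionary learning framework described above,
# using a first-order (Class I, SGD-style) D-update. Non-convolutional model
# D @ x ~= s is used purely for brevity; the convolutional case replaces the
# matrix product by per-filter convolutions. All parameter choices are
# illustrative assumptions.
import numpy as np


def project_to_C(D):
    """Project each dictionary atom onto the constraint set ||d_m||_2^2 <= 1."""
    norms = np.maximum(np.linalg.norm(D, axis=0, keepdims=True), 1.0)
    return D / norms


def sparse_code(D, s, lam, n_iter=100):
    """ISTA for the sparse coding step: min_x 0.5*||Dx - s||_2^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2 + 1e-12        # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ x - s)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x


def sgd_dict_update(D, x, s, step):
    """Class I D-update: one projected SGD step on 0.5*||Dx - s||_2^2."""
    grad = np.outer(D @ x - s, x)                # gradient with respect to D
    return project_to_C(D - step * grad)


def online_dictionary_learning(stream, n_atoms, lam=0.1, step=0.1):
    """Process training samples one at a time: sparse code, then update D."""
    rng = np.random.default_rng(0)
    D = None
    for t, s in enumerate(stream, start=1):
        if D is None:                            # initialize from first sample's size
            D = project_to_C(rng.standard_normal((s.size, n_atoms)))
        x = sparse_code(D, s, lam)               # SC step: coefficients for s^(t) only
        D = sgd_dict_update(D, x, s, step / t)   # decaying step size, as in SGD
    return D


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    samples = (rng.standard_normal(64) for _ in range(200))  # synthetic stream
    D = online_dictionary_learning(samples, n_atoms=32)
    print(D.shape)  # (64, 32)
```

A Class II update would differ only in the last step of the loop: instead of a single gradient step, it would accumulate past information (e.g. running sums built from the coefficient maps and samples) into a surrogate $F^{(t)}(D)$ and minimize that surrogate over the constraint set at each iteration.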
Interferometric phase restoration has been investigated for decades, and most state-of-the-art methods achieve promising performance for InSAR phase restoration. These methods generally follow a nonlocal filtering processing chain aimed at circumventing the staircase effect and preserving the details of phase variations. In this paper, we propose an alternative approach for InSAR phase restoration, namely Complex Convolutional Sparse Coding (Com-CSC) and its gradient-regularized version. To the best of our knowledge, this is the first time the InSAR phase restoration problem has been solved in a deconvolutional fashion. The proposed methods not only suppress interferometric phase noise, but also avoid the staircase effect and preserve details. Furthermore, they provide insight into the elementary phase components of the interferometric phases. The experimental results on synthetic and real high- and medium-resolution datasets from TerraSAR-X StripMap and Sentinel-1 interferometric wide swath mode, respectively, show that our method outperforms previous state-of-the-art methods based on nonlocal InSAR filters, in particular InSAR-BM3D. The source code of this paper will be made publicly available for reproducible research within the community.

Index Terms: Convolutional dictionary learning, sparse coding, SAR interferometry (InSAR), nonlocal filtering