2021
DOI: 10.1109/tnnls.2020.2979546
Learning Convolutional Sparse Coding on Complex Domain for Interferometric Phase Restoration

Abstract: This is the pre-acceptance version; to read the final version, please go to IEEE Transactions on Neural Networks and Learning Systems on IEEE Xplore. Interferometric phase restoration has been investigated for decades, and most state-of-the-art methods have achieved promising performance for InSAR phase restoration. These methods generally follow the nonlocal filtering processing chain, aiming to circumvent the staircase effect and preserve the details of phase variations. In this paper, we propose a…
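As a generic illustration of the complex-domain ingredient that such sparse-coding formulations typically rely on (not the paper's actual implementation), the following sketch shows complex soft-thresholding, which shrinks a coefficient's magnitude while preserving its phase; it is the proximal step behind an l1 penalty on complex-valued codes:

```python
# Hypothetical sketch: complex soft-thresholding, the prox of t*||.||_1
# on complex numbers. It scales the magnitude down by t (clipping at
# zero) and keeps the phase untouched, which is why it suits
# interferometric phase data. Example values are illustrative only.
import numpy as np

def complex_soft_threshold(z, t):
    # Shrink magnitude by t, avoiding division by zero for null entries.
    mag = np.abs(z)
    scale = np.maximum(mag - t, 0.0) / np.where(mag > 0, mag, 1.0)
    return z * scale

z = np.array([3 * np.exp(1j * 0.7), 0.1 + 0.1j])
out = complex_soft_threshold(z, 0.5)
# First entry: magnitude 3 -> 2.5, phase 0.7 preserved.
# Second entry: magnitude ~0.14 < 0.5, so it is set exactly to zero.
```

Note the design choice: thresholding the magnitude rather than the real and imaginary parts separately keeps the restored phase unbiased.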

Cited by 61 publications (22 citation statements)
References 59 publications
“…The iterative alternating strategy used in Algorithm 1 is nothing but a block coordinate descent, whose convergence is theoretically guaranteed as long as each subproblem of (12) is exactly minimized [47]. Each subproblem optimized in Algorithm 2 is strongly convex, and thus the ADMM-based optimization strategy can converge to a unique minimum when the parameters are updated in finite steps [48], [49]. Moreover, we experimentally illustrate the convergence of J-Play and the proposed JPSA on the two HS datasets, where the relative error of the objective function value is recorded in each iteration (see Fig.…”
Section: Convergence Analysis (mentioning)
confidence: 99%
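The quoted statement argues convergence via block coordinate descent (BCD) with exactly minimized subproblems. A minimal sketch of that pattern, using a hypothetical strongly convex quadratic f(x, y) = (x - 1)^2 + (y - 2)^2 + 0.5*x*y whose block subproblems have closed-form minimizers (this is an illustrative objective, not the one from the cited works):

```python
# Minimal sketch of block coordinate descent on
#   f(x, y) = (x - 1)^2 + (y - 2)^2 + 0.5 * x * y.
# Each block subproblem is solved exactly in closed form, which is the
# condition under which BCD convergence is guaranteed for this class of
# problems. The objective is a hypothetical example.

def bcd(num_iters=100):
    x, y = 0.0, 0.0
    for _ in range(num_iters):
        # Minimize over x with y fixed: df/dx = 2(x - 1) + 0.5*y = 0
        x = 1.0 - 0.25 * y
        # Minimize over y with x fixed: df/dy = 2(y - 2) + 0.5*x = 0
        y = 2.0 - 0.25 * x
    return x, y

x, y = bcd()
# Converges geometrically to the unique minimizer (8/15, 28/15),
# since the Hessian [[2, 0.5], [0.5, 2]] is positive definite.
```

Each full sweep is a contraction with factor 1/16 here, so a hundred sweeps reach machine precision; the quoted works rely on the same exact-minimization property for their guarantee.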
“…Early attempts exploiting HSI mostly employed support vector machine (SVM) [11][12][13], k-nearest neighbor (KNN) [14], and multinomial logistic regression (MLR) [15] schemes. Traditional feature extraction mostly relies on feature extractors designed by human experts [16,17], exploiting domain knowledge and engineering experience. However, these feature extractors are not appealing in the HSI classification domain, as they ignore spatial correlation and local consistency and thus fail to exploit the spatial feature information of HSI.…”
Section: Introduction (mentioning)
confidence: 99%
“…To address the challenge, we employed the transfer sparse coding (TSC) approach [12], [13], which has successfully extended the applications of sparse coding [14][15][16][17] to two-domain problems. We extended TSC to the three-domain transfer learning problem based on the following key ideas.…”
Section: Introduction (mentioning)
confidence: 99%
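For readers unfamiliar with the sparse coding that TSC builds on, here is a minimal ISTA-style sketch of solving min_z 0.5*||Dz - x||^2 + lam*||z||_1 with a synthetic dictionary and signal. This is a generic illustration of sparse coding, not the TSC algorithm itself:

```python
# Minimal sketch of sparse coding via ISTA (iterative
# shrinkage-thresholding). The dictionary D and signal x are synthetic
# illustrations, not data from the cited works.
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of the l1 norm: shrink each entry toward zero by t.
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(D, x, lam=0.1, num_iters=200):
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    z = np.zeros(D.shape[1])
    for _ in range(num_iters):
        grad = D.T @ (D @ z - x)           # gradient of the smooth data term
        z = soft_threshold(z - grad / L, lam / L)
    return z

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
z_true = np.zeros(50)
z_true[[3, 17]] = [1.5, -2.0]              # sparse ground-truth code
x = D @ z_true
z = ista(D, x, lam=0.05)                   # recovered code is sparse
```

The l1 prox is what drives most coefficients exactly to zero; transfer variants such as TSC add terms that tie the codes of different domains together while keeping this same inner solver structure.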