2018
DOI: 10.1007/s00365-018-9447-1

Stochastic Subspace Correction in Hilbert Space

Abstract: We consider an incremental approximation method for solving variational problems in infinite-dimensional separable Hilbert spaces, where in each step a randomly and independently selected subproblem from an infinite collection of subproblems is solved. We show that convergence rates for the expectation of the squared error can be guaranteed under weaker conditions than previously established in [9].
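The iteration described in the abstract — repeatedly drawing a random subproblem and solving it exactly — can be illustrated in a finite-dimensional analog. The sketch below uses randomized block Gauss–Seidel on an SPD system with coordinate-block subspaces; all sizes, block choices, and names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal finite-dimensional sketch of stochastic subspace correction:
# solve A u = f (A symmetric positive definite) by repeatedly picking a
# random coordinate-block subspace and solving the restricted
# subproblem exactly (randomized block Gauss-Seidel).
rng = np.random.default_rng(0)
n = 50
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)          # well-conditioned SPD matrix
u_star = rng.standard_normal(n)      # exact solution
f = A @ u_star

u = np.zeros(n)
# an illustrative splitting into disjoint coordinate blocks of size 5
blocks = [np.arange(i, min(i + 5, n)) for i in range(0, n, 5)]

for step in range(2000):
    idx = blocks[rng.integers(len(blocks))]   # random subproblem
    r = f - A @ u                             # current residual
    # exact solve of the subproblem on the selected subspace
    u[idx] += np.linalg.solve(A[np.ix_(idx, idx)], r[idx])

# squared error in the energy norm, the quantity bounded in expectation
err = np.sqrt((u - u_star) @ A @ (u - u_star))
print(err)
```

Each step only touches one block row of the system, which is what makes the scheme attractive when the full collection of subproblems is large (or, in the paper's setting, infinite).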

Cited by 4 publications (11 citation statements) · References 15 publications
“…In Figure 3 (right), we show similar results for the accelerated method (12)–(13); see also Table 1 for the recorded iteration counts to termination. The parameters ξ = 0.3 and η = 0.577 were determined experimentally and are near-optimal in the sense that, for them, the iteration count of the additive Schwarz method for the given problem and error reduction level ε₀ is close to minimal.…”
Section: Master-Slave Network (supporting, confidence: 64%)
“… Here, κ̄ = λ̄/λ̲ is an upper bound for the condition κ of the space splitting. If in addition B holds, then the algorithm (12)–(13) converges in expectation for any u ∈ V, and…”
Section: Theoretical Results (mentioning, confidence: 99%)
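The condition κ of a space splitting, mentioned above, governs the convergence rate. As a concrete illustration (assuming a coordinate-block splitting of a finite-dimensional SPD problem; the setup and names are ours, not the paper's), κ is the ratio of the extreme eigenvalues of the additive Schwarz operator:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 30
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)            # SPD model problem

# illustrative coordinate-block subspaces V_i
blocks = [np.arange(i, min(i + 3, n)) for i in range(0, n, 3)]

# B = sum_i E_i A_ii^{-1} E_i^T, the additive Schwarz preconditioner
B = np.zeros((n, n))
for idx in blocks:
    B[np.ix_(idx, idx)] += np.linalg.inv(A[np.ix_(idx, idx)])

# eigenvalues of the A-self-adjoint operator B A, obtained from the
# similar symmetric matrix A^{1/2} B A^{1/2}
w, V = np.linalg.eigh(A)
Ahalf = V @ np.diag(np.sqrt(w)) @ V.T
lam = np.linalg.eigvalsh(Ahalf @ B @ Ahalf)
kappa = lam.max() / lam.min()          # condition of the splitting
print(kappa)
```

A larger κ means the subspaces are more "redundant" or unevenly scaled with respect to the energy norm, and the guaranteed rates degrade accordingly.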
“…This strategy is generalized in [11] to the collective setting (with αₖ = 1 − (k + 1)⁻¹ and βₖ minimizing the norm of the residual), and is proved to achieve convergence properties similar to those of the collective OMP algorithm.…”
Section: Convergence Analysis (mentioning, confidence: 99%)