2007
DOI: 10.1007/s00211-007-0076-z

On the balancing principle for some problems of Numerical Analysis

Abstract: We discuss the choice of weight in penalization methods. The motivation for using penalization in computational mathematics is to improve the conditioning of the numerical solution. One example of such an improvement is regularization, where penalization replaces an ill-posed problem with a well-posed one. In modern numerical methods for PDEs, penalization is used, for example, to enforce continuity of an approximate solution across non-matching grids. A choice of penalty weight should provide a balance b…
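The abstract is cut off above. As a rough illustration of the kind of rule it describes, the sketch below implements one common (Lepskii-type) formulation of the balancing principle for choosing the penalty weight in Tikhonov regularization. The toy forward operator, the geometric grid, the constant c = 4, and the function names are illustrative assumptions and are not taken from the paper itself.

```python
import numpy as np

def tikhonov_solution(A, y, alpha):
    """Penalized (Tikhonov) solution x_alpha = argmin ||A x - y||^2 + alpha ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def balancing_principle(A, y, delta, alpha0=1e-8, q=1.5, n_grid=40, c=4.0):
    """Lepskii-type balancing principle on the geometric grid alpha_i = q * alpha_{i-1}.

    Keeps increasing alpha as long as the current solution stays within the
    noise-driven tolerance c * delta / sqrt(alpha_j) of all solutions computed
    for smaller alpha_j; returns the last admissible (alpha, x_alpha) pair.
    The bound delta / sqrt(alpha) is the standard stability estimate for
    Tikhonov regularization; the constant c varies between formulations.
    """
    alphas = alpha0 * q ** np.arange(n_grid)
    xs = [tikhonov_solution(A, y, a) for a in alphas]
    chosen = 0
    for i in range(1, n_grid):
        # alpha_i is admissible if x_{alpha_i} is close to every x_{alpha_j}, j < i
        if all(np.linalg.norm(xs[i] - xs[j]) <= c * delta / np.sqrt(alphas[j])
               for j in range(i)):
            chosen = i
        else:
            break
    return alphas[chosen], xs[chosen]

# Toy usage: an ill-conditioned least-squares problem with noise of known norm delta.
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0.0, 1.0, 50), 8, increasing=True)
x_true = rng.standard_normal(8)
delta = 1e-3
noise = rng.standard_normal(50)
y = A @ x_true + delta * noise / np.linalg.norm(noise)
alpha_star, x_star = balancing_principle(A, y, delta)
print(alpha_star, np.linalg.norm(x_star - x_true))
```

The same template carries over to other penalized formulations; only the stability bound delta/sqrt(alpha) in the admissibility test would change.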


Cited by 33 publications (30 citation statements). References 24 publications.
“…Another analog [23] of the modified discrepancy principle uses the sequence α_i = α_{i−1}q with q > 1, as the popular balancing principle does [12,13,17], and chooses α(δ) = α_i, where i is the first index for which…”
Section: Rules For Choosing the Regularization Parameter (mentioning, confidence 99%)
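The condition defining the first index i is truncated in the quotation. Purely to show the shape of such a rule, the sketch below sweeps the geometric grid α_i = α_{i−1}q and stops at the first index that triggers a discrepancy-type test; the test itself (residual exceeding τδ), along with the names and default constants, is a placeholder assumption, not the rule of [23].

```python
import numpy as np

def tikhonov_residual(A, y, alpha):
    """Residual norm ||A x_alpha - y|| of the penalized solution for weight alpha."""
    n = A.shape[1]
    x = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)
    return np.linalg.norm(A @ x - y)

def first_index_rule(A, y, delta, alpha0=1e-10, q=1.5, tau=1.1, max_steps=60):
    """Sweep alpha_i = q * alpha_{i-1}, q > 1, and return the first alpha_i that
    triggers the stopping test.

    The test used here (residual exceeding tau * delta) is only a placeholder of
    discrepancy type; the actual condition of the cited rule is truncated in the
    quotation above.
    """
    alpha = alpha0
    for _ in range(max_steps):
        if tikhonov_residual(A, y, alpha) >= tau * delta:
            return alpha
        alpha *= q
    return alpha  # fallback: the test never triggered on the grid
```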
“…We first establish the estimates under the mere assumption that the original domain is Lipschitz. We then assume the domain boundary is smoothly curved and establish sharp estimates, and use the aforementioned regularization/penalty error estimate to determine the balance between the mesh-size and regularization/penalty parameter such that the overall accuracy of the finite element domain embedding method is optimized [20,19]. We shall see that the balance is rather delicate in some cases.…”
Section: Introduction (mentioning, confidence 99%)
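The error estimates referred to in this passage are not reproduced in the quotation. The display below is only a generic balancing calculation under the hypothetical assumption that the discretization and penalty contributions decay like h^r and ε^s, respectively; the actual estimates in [20,19] may couple the two parameters more intricately, which is presumably why the quoted authors call the balance delicate.

```latex
% Hypothetical error decomposition for a penalized finite element method:
% a discretization part ~ C_1 h^r and a penalty/regularization part ~ C_2 eps^s.
\[
  \| u - u_{h,\varepsilon} \| \;\lesssim\; C_1 h^{r} + C_2 \varepsilon^{s}.
\]
% Equating the two contributions fixes the penalty parameter in terms of h:
\[
  C_1 h^{r} \approx C_2 \varepsilon^{s}
  \quad\Longrightarrow\quad
  \varepsilon \sim h^{r/s},
  \qquad
  \| u - u_{h,\varepsilon} \| \;\lesssim\; h^{r}.
\]
```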
“…The literature abounds with techniques for choosing an optimal value of the regularization parameter; however, an optimal regularization parameter that makes a trade-off (in other words, balances the smoothness constraint and the data fitting) is possibly the most suitable one. Recently, selection of the regularization parameter based on the balancing principle has been receiving increased attention [9][10][11]. However, as demonstrated in [11], the method requires at least an ad hoc estimation of the noise level in the data.…”
Section: Theory (mentioning, confidence 99%)
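As the quoted passage points out, a rule of this type needs the noise level as an input. In the hypothetical balancing_principle sketch given after the abstract above, that input is the delta argument, and the admissibility threshold c·δ/√α_j scales linearly with it, so in practice an estimate has to be substituted:

```python
# delta is rarely known exactly; an ad-hoc estimate delta_hat is used instead.
# The threshold c * delta / sqrt(alpha_j) scales linearly with delta, so an
# over- or underestimate shifts the selected penalty weight accordingly.
delta_hat = 2.0 * delta  # e.g., a deliberately pessimistic estimate
alpha_hat, x_hat = balancing_principle(A, y, delta_hat)
```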