2015
DOI: 10.1088/0266-5611/31/7/075005

Aggregation of regularized solutions from multiple observation models

Abstract: Joint inversion of multiple observation models has important applications in many disciplines including geoscience, image processing and computational biology. One of the methodologies for joint inversion of ill-posed observation equations naturally leads to multi-parameter regularization, which has been intensively studied over the last several years. However, problems such as the choice of multiple regularization parameters remain unsolved. In the present study, we discuss a rather general approach to the re…

Cited by 23 publications (27 citation statements)
References 30 publications
“…The methods that we have chosen construct classifiers with different mathematical structures. Therefore each constructed classifier may capture different aspects of the ideal classifier effectively. Using a combination of classifiers constructed using different statistical learning methods may give rise to a new classifier with better accuracy (and closer to an ideal classifier) than each classifier taken individually (Chen et al 2015; Kriukova et al 2016). The design of appropriate combination strategies is another avenue that we may explore in the future.…”
Section: Discussion (mentioning)
confidence: 99%
“…Note that this kind of endeavor has not been reported in the aforementioned literature. In addition, in order to enable an automatic regularization in the present study, we use the idea (Chen et al 2015) of an aggregation of regularized solutions corresponding to different values of multiple regularization parameters.…”
Section: Introduction (mentioning)
confidence: 99%
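The aggregation idea referenced in the excerpt above, combining regularized solutions computed for several candidate regularization parameters, can be sketched on a toy problem. This is only an illustrative sketch, not the construction from Chen et al (2015): the operator, noise level, and parameter grid below are invented, and a simple residual-minimizing linear combination stands in for the paper's aggregation functional.

```python
import numpy as np

# Toy ill-conditioned linear model T x = y with additive noise (illustrative).
rng = np.random.default_rng(1)
n = 40
A = rng.standard_normal((n, n))
T = A @ A.T / n                      # symmetric, poorly conditioned operator
x_true = rng.standard_normal(n)
y_delta = T @ x_true + 0.01 * rng.standard_normal(n)

def tikhonov(T, y, alpha):
    """Tikhonov-regularized solution x_alpha = (T^T T + alpha I)^{-1} T^T y."""
    return np.linalg.solve(T.T @ T + alpha * np.eye(T.shape[1]), T.T @ y)

# Regularized solutions for several candidate regularization parameters.
alphas = [1e-4, 1e-3, 1e-2, 1e-1]
X = np.column_stack([tikhonov(T, y_delta, a) for a in alphas])

# Aggregate: coefficients c minimizing ||T X c - y_delta||_2, so the
# aggregator x_agg = X c uses only access to T and the noisy data y_delta.
c, *_ = np.linalg.lstsq(T @ X, y_delta, rcond=None)
x_agg = X @ c

# By construction the aggregator's data residual is no worse than that of
# any single regularized solution, since each of them lies in the span of X.
res_agg = np.linalg.norm(T @ x_agg - y_delta)
res_each = [np.linalg.norm(T @ X[:, j] - y_delta) for j in range(X.shape[1])]
```

Each individual solution corresponds to a coordinate vector of coefficients, so the least-squares mixture can only lower the residual; this is the elementary version of the "aggregation can only improve" property the citing papers refer to.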
“…Note that x^s_{agg,y^δ} can be effectively computed because it only uses access to T and y^δ. Then, by the same arguments as in the proof of Theorem 3.7 in [9], it follows from (5.2) that the accuracy of x^s_{agg} can only be better than that of x^δ_{α(y^δ)}. Moreover, from (5.1) and (5.4) it follows that the error of the effectively computed aggregator x^s_{agg,y^δ} differs from the error of x^s_{agg} by a quantity of higher order than the accuracy guaranteed by the standard quasi-optimality criterion.…”
Section: The Quasi-optimality Criterion in the Aggregation of the Reg… (mentioning)
confidence: 82%
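The quasi-optimality criterion discussed in the excerpt above can be illustrated on a toy Tikhonov problem: on a geometric grid of regularization parameters, it selects the value at which consecutive regularized solutions change the least. The operator, spectrum, grid, and noise level below are assumptions made up for illustration, not taken from [9].

```python
import numpy as np

# Toy severely smoothing operator with geometrically decaying spectrum.
rng = np.random.default_rng(0)
n = 50
U = np.linalg.qr(rng.standard_normal((n, n)))[0]   # random orthogonal basis
s = 0.9 ** np.arange(n)                            # decaying singular values
T = (U * s) @ U.T                                  # T = U diag(s) U^T
x_true = U @ (s * rng.standard_normal(n))          # a "smooth" exact solution
y_delta = T @ x_true + 0.001 * rng.standard_normal(n)

def tikhonov(T, y, alpha):
    """Tikhonov-regularized solution x_alpha = (T^T T + alpha I)^{-1} T^T y."""
    return np.linalg.solve(T.T @ T + alpha * np.eye(T.shape[1]), T.T @ y)

# Quasi-optimality: on a geometric grid alpha_j = alpha_0 * q^j, choose the
# index minimizing the jump between consecutive regularized solutions.
alphas = 1e-8 * (2.0 ** np.arange(25))
xs = [tikhonov(T, y_delta, a) for a in alphas]
jumps = [np.linalg.norm(xs[j + 1] - xs[j]) for j in range(len(xs) - 1)]
j_star = int(np.argmin(jumps))
x_qo = xs[j_star]

err_qo = np.linalg.norm(x_qo - x_true)
err_grid = [np.linalg.norm(x - x_true) for x in xs]
```

The rule is purely data-driven: it never touches `x_true`, which is exactly the feature that makes it attractive for combining it with aggregation strategies as in the excerpt.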
“…At the same time, as follows from our results, in the linear functional strategy, the above-mentioned convergence rate gap can be essentially reduced. This hints at an opportunity to use the linear functional strategy equipped with the quasi-optimality criterion for aggregating the constructed regularized approximants in a way described in [9]. Then from [9], it follows that such aggregation by the linear functional strategy can improve the accuracy compared to the aggregated regularized approximations, and this can be seen as a way to use the quasi-optimality criterion for mildly and severely ill-posed problems.…”
Section: Introduction (mentioning)
confidence: 99%