2022
DOI: 10.1142/s0219691322500102

Unified error estimate for weak biorthogonal greedy algorithms

Abstract: In this paper, we obtain a unified error estimate for some weak biorthogonal greedy algorithms with respect to dictionaries in Banach spaces by using a certain K-functional. From this estimate, we derive sufficient conditions for convergence and convergence rates on the sparse classes induced by the K-functional. The results on convergence and the convergence rates are sharp.
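For orientation, the following is a hedged sketch of the standard K-functional setup in greedy approximation theory; the notation A_1(D) for the convex-hull class, the m-term approximant G_m(f), and the rate sequence gamma(m) are illustrative assumptions, not quoted from the paper.

K(f, u) := \inf_{g \in A_1(\mathcal{D})} \bigl( \|f - g\|_X + u\,\|g\|_{A_1(\mathcal{D})} \bigr), \qquad u > 0,

where A_1(\mathcal{D}) denotes the closure in X of the convex hull of \pm\mathcal{D}. A unified error estimate of the kind described in the abstract typically takes the form

\|f - G_m(f)\|_X \le C\, K\bigl(f, \gamma(m)\bigr), \qquad \gamma(m) \to 0,

so that on the sparse class \{ f : K(f, u) \le M u \ \text{for all } u > 0 \} the m-th greedy approximant converges at the rate \gamma(m).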

Cited by 6 publications (6 citation statements) · References 17 publications

Citation statements (ordered by relevance):
“…Additionally, Jiang et al [5] developed a standardized benchmarking system for data valuation. They summarized four downstream machine learning tasks for evaluating the values estimated by different data valuation methods.…”
Section: A. Data Valuation (mentioning; confidence: 99%)
“…Following previous research [5], [11], we conduct experiments using twelve classification datasets that encompass tabular, text, and image types. Their information is summarized in Table II, which includes their sample size, input dimension, number of classes, source, and proportion of minor classes.…”
Section: A. Datasets and Baselines (mentioning; confidence: 99%)
“…Among others, simultaneous sparse approximation has been utilized in signal vector processing and multi-task learning (see [11][12][13][14]). It is well known that greedy-type algorithms are powerful tools for generating such sparse approximations (see [15][16][17][18][19]). In particular, vector greedy algorithms are very efficient at approximating a given finite number of target elements simultaneously (see [20][21][22][23]).…”
Section: Introduction (mentioning; confidence: 99%)
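The statement above points to greedy-type algorithms as the standard tool for building sparse approximations. As a hedged, self-contained illustration (a toy Hilbert-space analogue of the weak greedy idea, not the Banach-space biorthogonal algorithms studied in the paper; the function name weak_oga and the weakness parameter t are assumptions made for this sketch):

import numpy as np

def weak_oga(f, dictionary, m, t=0.5):
    """Approximate the vector f by m columns of dictionary.

    dictionary: array of shape (n, N) whose columns are unit-norm atoms.
    t: weakness parameter in (0, 1]; t = 1 gives the usual orthogonal
       greedy algorithm (orthogonal matching pursuit).
    """
    residual = f.copy()
    selected = []
    for _ in range(m):
        corr = np.abs(dictionary.T @ residual)
        # Weak selection: accept the first atom whose correlation with the
        # residual is at least t times the maximal correlation.
        idx = int(np.argmax(corr >= t * corr.max()))
        if idx not in selected:
            selected.append(idx)
        # Re-project f onto the span of all atoms selected so far.
        basis = dictionary[:, selected]
        coeffs, *_ = np.linalg.lstsq(basis, f, rcond=None)
        residual = f - basis @ coeffs
    return basis @ coeffs, selected

# Toy usage: approximate a random vector with 5 atoms of a random dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
f = rng.standard_normal(64)
approx, support = weak_oga(f, D, m=5)
print(np.linalg.norm(f - approx), sorted(support))

With t = 1 the selection step picks a maximally correlated atom, recovering the classical orthogonal greedy algorithm; t < 1 allows the relaxed selection that characterizes the "weak" variants discussed above.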