2016
DOI: 10.1007/s13042-016-0496-0

The improved learning rate for regularized regression with RKBSs

Cited by 7 publications (5 citation statements)
References 22 publications
“…In order to make the value range of equation (7) consistent with that of equation (1), suppose a = −b. In addition, the rapid growth of y = e^x is mainly reflected in the range x ≥ 0, but is not obvious for x < 0. Therefore, in order to ensure that equation (7) can achieve the expected effect, let…”
Section: Methods Improvement (mentioning)
confidence: 99%
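Since equations (1) and (7) are not reproduced in this excerpt, only the general point can be illustrated: e^x grows rapidly for x ≥ 0 but stays nearly flat (bounded in (0, 1)) for x < 0, which is the asymmetry motivating the constraint a = −b. A minimal Python sketch (the sample points are arbitrary):

```python
import math

# y = e^x: rapid growth for x >= 0, but bounded in (0, 1) for x < 0.
for x in [-4, -2, 0, 2, 4]:
    print(f"x = {x:+d}  ->  e^x = {math.exp(x):.4f}")
# Approximate output:
#   x = -4  ->  e^x = 0.0183
#   x = -2  ->  e^x = 0.1353
#   x = +0  ->  e^x = 1.0000
#   x = +2  ->  e^x = 7.3891
#   x = +4  ->  e^x = 54.5982
```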
“…The rate is provided in both the expected mean and the empirical mean. The results show that the uniform convexity influences the learning rate (see Liu et al.'s study [7]). Aiming at protecting users' commodity-viewing privacy on a commercial website, Wu et al. [8] propose constructing a group of dummy requests on a trusted client, which are then submitted together with the user's commodity-viewing request to the untrusted server side, so as to confuse and cover up the user's preferences.…”
Section: Introduction (mentioning)
confidence: 93%
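For illustration only, a minimal sketch of the dummy-request idea attributed to Wu et al. [8]; the function name and request format below are hypothetical, not taken from the cited paper:

```python
import random

def build_cover_batch(real_request: str, dummy_pool: list[str], k: int = 4) -> list[str]:
    """Mix k dummy commodity-viewing requests with the real one on the
    trusted client, so the untrusted server cannot tell which request
    reflects the user's actual preference."""
    batch = random.sample(dummy_pool, k) + [real_request]
    random.shuffle(batch)  # hide the real request's position in the batch
    return batch

# Hypothetical usage: "view:item-123" is the user's real request.
dummies = [f"view:item-{i}" for i in range(1000)]
print(build_cover_batch("view:item-123", dummies))
```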
“…Optimizers. For VGG16 and ResNet50, the RAdam [30] optimizer was used with an initial learning rate of 0.001 and a weight decay of 0.0001. In contrast, PyramidNet110-270 with ShakeDrop regularization was trained using stochastic gradient descent (SGD), consistent with the approach reported by Yamada et al. [7]. The learning rate was set to 0.1 initially and scaled down by a factor of 0.1 at the 75th and 150th epochs.…”
Section: Analysis for JPEG CIFAR-10 (mentioning)
confidence: 99%
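A minimal PyTorch sketch of the two optimizer setups described above; the model is a stand-in, the training loop is elided, and details the excerpt does not give (e.g. SGD momentum, total epoch count) are left unset or marked as assumptions:

```python
import torch
from torch import nn
from torch.optim import RAdam, SGD
from torch.optim.lr_scheduler import MultiStepLR

model = nn.Linear(3072, 10)  # stand-in for VGG16 / ResNet50 / PyramidNet110-270

# VGG16 / ResNet50: RAdam with initial lr 0.001 and weight decay 0.0001
radam = RAdam(model.parameters(), lr=0.001, weight_decay=0.0001)

# PyramidNet110-270 + ShakeDrop: SGD starting at lr 0.1,
# multiplied by 0.1 at the 75th and 150th epochs
sgd = SGD(model.parameters(), lr=0.1)  # momentum not stated in the excerpt
scheduler = MultiStepLR(sgd, milestones=[75, 150], gamma=0.1)

for epoch in range(200):  # total epoch count assumed
    # ... one training epoch over JPEG CIFAR-10 would run here ...
    scheduler.step()
```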
“…It is known that the learning rates of a kernel-based regularization algorithm are influenced by geometric properties, e.g. the capacity, the covering number, and the uniform convexity [4, 5, 20, 40]. Some other parameters of the RKHS also influence the learning rates.…”
(mentioning)
confidence: 99%
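For context, the textbook form of the regularized least-squares scheme whose learning rates this passage discusses, written here over an RKHS H with sample z = {(x_i, y_i)}_{i=1}^m; in the RKBS setting of the cited paper the norm is a Banach-space norm and the exponent on the penalty may differ:

```latex
f_{\mathbf{z},\lambda} \;=\; \arg\min_{f \in \mathcal{H}}
\left\{ \frac{1}{m} \sum_{i=1}^{m} \bigl(f(x_i) - y_i\bigr)^2
        + \lambda \,\|f\|_{\mathcal{H}}^{2} \right\},
\qquad \lambda > 0.
```

The geometric quantities named above (capacity, covering number, uniform convexity of the unit ball) enter the error analysis of this scheme and thereby govern how fast f_{z,λ} converges to the regression function as m grows.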