2020
DOI: 10.1007/s10462-020-09880-z
Major advancements in kernel function approximation

Cited by 16 publications (6 citation statements)
References 29 publications
“…Due to the use of the kernel trick (Aizerman, 1964; Francis and Raimond, 2021), it is not possible to directly examine the relationships between the estimation and specific feature variables (Üstün et al., 2007). In this instance, it was necessary to use a kernel-based methodology because the data was both cross-sectional and time-series, and the required output was a complex polynomial.…”
Section: Discussion
confidence: 99%
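To make the quoted point concrete, here is a minimal sketch, not drawn from the cited study: the degree-2 polynomial kernel evaluates an inner product in an implicit feature space, so a kernel model's fitted weights live in that space and cannot be read off against the original feature variables. The inputs and the choice of kernel are illustrative only.

```python
# Minimal sketch (illustrative, not from the cited study): the polynomial
# kernel k(x, z) = (x . z + 1)^2 equals an inner product in an implicit
# feature space phi, which is why feature-level effects are not directly
# inspectable once the kernel trick is used.
import numpy as np

def poly_kernel(x, z):
    """Kernel evaluated directly on the inputs (no explicit features needed)."""
    return (x @ z + 1.0) ** 2

def phi(x):
    """One explicit degree-2 feature map for 2-D inputs, shown for comparison."""
    x1, x2 = x
    return np.array([x1 ** 2, x2 ** 2,
                     np.sqrt(2) * x1 * x2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2,
                     1.0])

x = np.array([0.5, -1.2])
z = np.array([2.0, 0.3])

# Both routes return the same value (about 2.6896 here), but only the kernel
# route avoids materialising phi -- and with it any per-feature view.
print(poly_kernel(x, z))
print(phi(x) @ phi(z))
```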
“…In this way, SVR aims to minimise the distance between the estimated function output and the true output, in this case the collision rate, by optimising the weight and bias coefficients of the input feature vectors. A radial basis function kernel was used to accommodate both a non-linear output response and a high-dimensional cross-sectional input (Aizerman, 1964; Francis and Raimond, 2021). As we are using time-series data, it is highly likely that the current estimation will be similar to the previous estimation.…”
Section: Methods
confidence: 99%
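As a hedged illustration of the set-up described in that snippet, the sketch below fits a support vector regression model with a radial basis function kernel to synthetic data; the feature matrix, target, and hyperparameter values are stand-ins, not the authors' collision-rate pipeline.

```python
# Hedged sketch: SVR with an RBF kernel on synthetic data standing in for
# the cross-sectional/time-series inputs described above.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))                       # 200 observations, 5 features
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # non-linear response

# epsilon defines the tube within which deviations are not penalised;
# C trades off model flatness against tolerated violations of that tube.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
model.fit(X, y)
print(model.predict(X[:3]))
```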
“…There are two main approaches to large-scale kernel approximation, and both can be applied to KGL. The former focuses on approximating the kernel matrix using methods such as Nyström sampling [65], while the latter approximates the kernel function itself, using methods such as Random Fourier Features [66]. Nonetheless, this paper focuses on the modelling perspective, and we leave the algorithmic improvements as a future direction.…”
Section: Algorithm 1 Kernel Graph Learning (KGL)
confidence: 99%
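The two families named in that passage can be sketched side by side. The snippet below uses scikit-learn's Nystroem transformer (kernel-matrix approximation from sampled landmarks) and RBFSampler (Random Fourier Features approximating the kernel function); the data and parameter values are illustrative and not tied to KGL.

```python
# Sketch of the two kernel-approximation families: Nystroem approximates the
# kernel matrix from landmark points, RBFSampler approximates the kernel
# function itself with random Fourier features.
import numpy as np
from sklearn.kernel_approximation import Nystroem, RBFSampler
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))

K_exact = rbf_kernel(X, gamma=0.1)            # exact Gram matrix, O(n^2) to form

nystroem = Nystroem(gamma=0.1, n_components=100, random_state=0)
Z_nys = nystroem.fit_transform(X)             # n x 100 landmark-based feature map
K_nys = Z_nys @ Z_nys.T

rff = RBFSampler(gamma=0.1, n_components=500, random_state=0)
Z_rff = rff.fit_transform(X)                  # n x 500 random Fourier features
K_rff = Z_rff @ Z_rff.T

# Relative approximation error of each Gram matrix.
print(np.linalg.norm(K_exact - K_nys) / np.linalg.norm(K_exact))
print(np.linalg.norm(K_exact - K_rff) / np.linalg.norm(K_exact))
```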
“…Much lower runtime complexity can be achieved with the use of efficient linear models. Kernel function approximations are one of the key methods for kernelizing algorithms based on linear models, and they are applied successfully in a wide range of methods, especially in machine learning [12].…”
Section: Introduction
confidence: 99%
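Read that way, the passage describes the common pattern of mapping inputs through an approximate kernel feature map and then training an efficient linear model. The sketch below assumes scikit-learn's RBFSampler and Ridge purely to illustrate the idea; it is not the specific method of [12].

```python
# Minimal sketch (assumed components: RBFSampler features + Ridge regression):
# an approximate kernel feature map lets an efficient linear model stand in
# for a full kernel machine at much lower runtime cost.
import numpy as np
from sklearn.kernel_approximation import RBFSampler
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 10))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=5000)

# Training the linear model on m random features costs roughly O(n * m^2),
# instead of the O(n^2)-O(n^3) of an exact kernel solver on n samples.
approx_kernel_model = make_pipeline(
    RBFSampler(gamma=0.5, n_components=300, random_state=0),
    Ridge(alpha=1.0),
)
approx_kernel_model.fit(X, y)
print(approx_kernel_model.predict(X[:3]))
```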