DOI: 10.32657/10220/47835

Kernel learning for visual perception

Abstract: The contributions of the co-authors are as follows: • I proposed the idea, designed the experiments, and prepared the manuscript. • Handuo Zhang, Thien-Minh Nguyen, and I conducted the experiments.

Cited by 5 publications (4 citation statements)
References 126 publications (238 reference statements)
“…It has recently also been widely applied to correlation filters to improve processing speed. For example, the kernelized correlation filter (KCF) [22] was proposed to speed up kernel ridge regression by bypassing a large matrix inversion, but it assumes that all the data are circular shifts of each other [49], so it can only predict signal translation. To break this theoretical limitation, the kernel cross-correlator (KCC) was proposed in [54] by defining the correlator directly in the frequency domain, resulting in a closed-form solution with computational complexity O(N log N), where N is the signal length.…”
Section: Related Work
confidence: 99%
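
The frequency-domain construction described in this excerpt can be illustrated with a minimal sketch (assuming NumPy; the function name and sizes are illustrative, and the KCC's kernel mapping is omitted, keeping only the linear case):

```python
import numpy as np

def fft_cross_correlation(x, z):
    """Circular cross-correlation of two length-N 1-D signals in O(N log N).

    Multiplying the conjugate spectrum of x by the spectrum of z and
    inverting yields c[k] = sum_n x[n] * z[n + k], i.e. every shifted
    inner product at once, instead of the O(N^2) sliding computation.
    """
    X = np.fft.fft(x)
    Z = np.fft.fft(z)
    return np.real(np.fft.ifft(np.conj(X) * Z))

# A shifted copy of x produces a correlation peak at the shift,
# which is how a correlator predicts signal translation.
x = np.random.randn(256)
z = np.roll(x, 17)
print(np.argmax(fft_cross_correlation(x, z)))  # 17
```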
“…This property is crucial, since when invariances are present in the data, encoding them explicitly in an architecture provides an important source of regularization, which reduces the amount of training data required [23]. As mentioned in Section 2, the same property is also present in [22], where it is achieved by assuming that all the training samples are circular shifts of each other [49], while ours is inherited from convolution. Interestingly, the kernel cross-correlator (KCC) defined in [54] is equivariant to any affine transform (e.g., translation, rotation, and scale), which may be useful for further development of this work.…”
Section: Translational Equivariance
confidence: 99%
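
The equivariance property discussed in this excerpt can be checked numerically. Below is a minimal sketch (assuming NumPy; not code from the cited paper) showing that circular convolution commutes with translation, which is the sense in which the property is "inherited from convolution":

```python
import numpy as np

def circ_conv(x, g):
    """Circular convolution of two 1-D signals via the FFT."""
    return np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(g)))

x = np.random.randn(128)   # input signal
g = np.random.randn(128)   # an arbitrary filter
s = 9                      # translation amount

# Translational equivariance: shift-then-convolve equals convolve-then-shift.
lhs = circ_conv(np.roll(x, s), g)
rhs = np.roll(circ_conv(x, g), s)
print(np.allclose(lhs, rhs))  # True
```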
“…By the end of an epoch, cross-correlation, which is similar to discrete convolution except that the operator g is not an impulse function but an arbitrary filter signal [45], is applied to τ_epoch. This operation can be carried out in either a 1-D or 2-D spatial frame, depending on the specific problem and the physical-continuity logic among search-space loci.…”
Section: Extended ACO for Smart Storage System Management
confidence: 99%
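
The distinction this excerpt draws between cross-correlation and discrete convolution (the filter g is not flipped) can be made concrete with a short sketch (assuming NumPy; the name tau_epoch follows the excerpt, and the filter values are illustrative):

```python
import numpy as np

tau_epoch = np.random.randn(64)  # e.g., a 1-D trace accumulated over an epoch
g = np.array([0.6, 0.3, 0.1])    # an arbitrary, asymmetric filter signal

# Cross-correlation slides g over the signal as-is; convolution flips g first.
xcorr = np.correlate(tau_epoch, g, mode="same")
conv = np.convolve(tau_epoch, g, mode="same")

print(np.allclose(xcorr, conv))  # False: the two differ for asymmetric g
print(np.allclose(xcorr, np.convolve(tau_epoch, g[::-1], mode="same")))  # True
```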
“…Intuitively, to find the maximum cosine similarity, we would need to repeatedly compute (3) for the translated memory cube h × w times, resulting in very high computational complexity. To solve this problem, we leverage the fast Fourier transform (FFT) to compute the cross-correlation [35]. Recalling that 2-D cross-correlation gives the inner products between the first signal and the circular translations of the second signal [37], we can compute S_i(x, M_i) as…”
Section: Memory Reading
confidence: 99%
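
A minimal sketch of the FFT trick this excerpt describes (assuming NumPy; the shapes and helper name are illustrative, and equation (3) refers to the citing paper): the inner products between a query and all h × w circular 2-D translations of a memory slice are obtained from a single frequency-domain product, then normalized to cosine similarity.

```python
import numpy as np

def similarity_map(x, m):
    """All-shifts cosine similarity between query x and memory slice m.

    One 2-D FFT product yields inner[k, l] = sum_{u,v} m[u, v] * x[u+k, v+l],
    the inner product of x with every circular translation of m, instead of
    h * w separate shifted dot products.
    """
    inner = np.real(np.fft.ifft2(np.conj(np.fft.fft2(m)) * np.fft.fft2(x)))
    # Circular shifts preserve the norm of m, so one normalization suffices.
    return inner / (np.linalg.norm(x) * np.linalg.norm(m) + 1e-12)

# Brute-force check against one explicit translation (hypothetical sizes).
h, w = 8, 8
x = np.random.randn(h, w)
m = np.random.randn(h, w)
S = similarity_map(x, m)
shifted = np.roll(m, (2, 3), axis=(0, 1))
direct = (x * shifted).sum() / (np.linalg.norm(x) * np.linalg.norm(shifted))
print(np.isclose(S[2, 3], direct))  # True
```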