2020
DOI: 10.1007/s41095-020-0181-9

Kernel-blending connection approximated by a neural network for image classification

Abstract: This paper proposes a kernel-blending connection approximated by a neural network (KBNN) for image classification. A kernel mapping connection structure, guaranteed by the function approximation theorem, is devised to blend feature extraction and feature classification through neural network learning. First, a feature extractor learns features from the raw images. Next, an automatically constructed kernel mapping connection maps the feature vectors into a feature space. Finally, a linear classifier is used as …
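The abstract outlines a three-stage pipeline: a feature extractor, a kernel mapping connection, and a linear classifier. Below is a minimal PyTorch sketch of that pipeline, assuming a small CNN extractor, an RBF-style kernel mapping against learnable centers, and a linear head; the layer sizes, the RBF choice, and the names (KBNNSketch, feat_dim, num_centers) are illustrative assumptions, not the paper's exact design.

import torch
import torch.nn as nn

class KBNNSketch(nn.Module):
    # Illustrative three-stage pipeline from the abstract: feature extractor,
    # kernel mapping connection, linear classifier. The RBF mapping, the
    # learnable centers, and all sizes are assumptions, not the paper's design.
    def __init__(self, num_classes=10, feat_dim=64, num_centers=128):
        super().__init__()
        # Stage 1: a small CNN feature extractor (placeholder architecture).
        self.extractor = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
        # Stage 2: kernel mapping against learnable centers (RBF-style).
        self.centers = nn.Parameter(torch.randn(num_centers, feat_dim))
        self.log_gamma = nn.Parameter(torch.zeros(1))
        # Stage 3: a linear classifier on the kernel-mapped features.
        self.classifier = nn.Linear(num_centers, num_classes)

    def forward(self, x):
        f = self.extractor(x)                      # (B, feat_dim)
        d2 = torch.cdist(f, self.centers) ** 2     # squared distances to centers
        k = torch.exp(-self.log_gamma.exp() * d2)  # kernel features, (B, num_centers)
        return self.classifier(k)

logits = KBNNSketch()(torch.randn(4, 3, 32, 32))   # CIFAR-sized inputs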

Cited by 18 publications (8 citation statements)
References 25 publications
“As we all know, almost all images have high information redundancy either in the form of low rank or sparse representation [11,12]: many pixels share similar features. Based on a low-rank prior or sparse representation, images can be denoised [13][14][15][16][17].…”
Section: Motivation and Contribution
Citation type: mentioning
Confidence: 99%
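The excerpt's claim that redundancy (low rank or sparsity) enables denoising can be illustrated with a toy singular-value truncation; this is only a generic sketch of the low-rank prior, not the method of refs. [13]-[17].

import numpy as np

def svd_lowrank_denoise(img, rank):
    # Toy low-rank denoising: keep only the top-`rank` singular values of a
    # grayscale image, exploiting the redundancy the excerpt describes.
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    s[rank:] = 0.0                       # truncate small singular values
    return U @ np.diag(s) @ Vt

noisy = np.random.rand(64, 64)           # stand-in for a noisy grayscale image
denoised = svd_lowrank_denoise(noisy, rank=8)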
“…For the first problem, we decompose the image into two parts: the low frequency part representing the structure and the high frequency part containing the texture. In image super-resolution and image reconstruction, the high frequencies are usually considered to be the missing information in the scaling process used to refine the result [23][24][25].…”
Section: Global Sparse Decomposition
Citation type: mentioning
Confidence: 99%
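The low/high-frequency decomposition this excerpt describes can be sketched with a Gaussian low-pass filter: the blurred image carries the structure and the residual carries the texture. The Gaussian choice and the sigma value are assumptions; the citing paper may use a different filter.

import numpy as np
from scipy.ndimage import gaussian_filter

def frequency_split(img, sigma=2.0):
    # Low-frequency part (structure) via a Gaussian low-pass;
    # high-frequency part (texture) as the residual.
    low = gaussian_filter(img, sigma=sigma)
    high = img - low
    return low, high

img = np.random.rand(128, 128)
low, high = frequency_split(img)
assert np.allclose(img, low + high)       # the two parts sum back exactly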
“…We select three classical datasets, i.e., CIFAR-10 [15], CIFAR-100 [15] and UC-M [16] for our experiments. The datasets CIFAR-10 and CIFAR-100 are both 32×32 color images, containing 10 and 100 categories respectively.…”
Section: Simulation Experiments Analysis
Citation type: mentioning
Confidence: 99%
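For reference, the two CIFAR datasets named in the excerpt can be loaded with the standard torchvision helpers; this is the usual setup (32×32 RGB images, 10 and 100 classes), not necessarily the citing paper's exact preprocessing.

import torchvision
import torchvision.transforms as T

# Standard loaders for CIFAR-10 and CIFAR-100 (UC-M has no torchvision loader).
transform = T.ToTensor()
cifar10 = torchvision.datasets.CIFAR10(root="./data", train=True,
                                       download=True, transform=transform)
cifar100 = torchvision.datasets.CIFAR100(root="./data", train=True,
                                         download=True, transform=transform)
print(len(cifar10.classes), len(cifar100.classes))  # 10, 100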