2016
DOI: 10.1007/s00521-016-2758-x

Representation learning with deep extreme learning machines for efficient image set classification

Abstract: Efficient and accurate representation of a collection of images that belong to the same class is a major research challenge for practical image set classification. Existing methods either make prior assumptions about the data structure or perform heavy computations to learn the structure from the data itself. In this paper, we propose an efficient image set representation that does not make any prior assumptions about the structure of the underlying data. We learn the nonlinear structure of image sets with deep…
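
The truncated abstract describes learning the nonlinear structure of image sets with deep extreme learning machines. As a rough illustration of that family of models (not the authors' exact architecture), the sketch below stacks ELM autoencoder layers, where each layer's encoder is the transpose of output weights solved in closed form; the function name and layer sizes are hypothetical.

```python
# Minimal sketch of an ELM autoencoder (ELM-AE), a common building block of
# deep extreme learning machines; illustrative only, not the paper's method.
import numpy as np

def elm_ae_layer(X, n_hidden, reg=1e-3, rng=np.random.default_rng(0)):
    """One ELM-AE layer: random hidden projection, output weights solved
    in closed form to reconstruct X; beta.T acts as the learned encoder."""
    n_features = X.shape[1]
    W = rng.standard_normal((n_features, n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden activations
    # Ridge-regularized least squares: beta = (H^T H + reg*I)^-1 H^T X
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ X)
    return np.tanh(X @ beta.T)                        # encoded representation

# Stack layers to obtain a deeper, more abstract set representation.
X = np.random.rand(50, 1024)    # e.g., 50 images of 32x32 pixels, flattened
h1 = elm_ae_layer(X, 256)
h2 = elm_ae_layer(h1, 64)
print(h2.shape)                 # (50, 64)
```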

Cited by 49 publications (22 citation statements; citing works published between 2017 and 2021). References 43 publications. The citation statements below are ordered by relevance.
“…SVM [26][27][28][29][30] is based on VC theory and the structural risk minimization criterion of statistical learning theory. The SVM algorithm combines many techniques and methods, such as kernels, sparse solutions, slack variables, convex quadratic programming, and maximum-margin hyperplanes, and it has advantages in problems involving small samples, nonlinearity, high dimensionality, local minima, global optimization, and generalization performance.…”
Section: SVM (mentioning)
confidence: 99%
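
To make the quoted SVM ingredients concrete (kernel trick, slack-variable penalty, maximum-margin hyperplane), here is a minimal sketch using scikit-learn, which is an assumed dependency rather than anything used by the cited works; C prices slack-variable violations and gamma is the RBF kernel parameter.

```python
# Small nonlinear classification problem solved with a kernel SVM.
from sklearn.datasets import make_moons
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = make_moons(n_samples=300, noise=0.2, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# C is the penalty factor on slack variables; gamma is the RBF kernel
# parameter. These are the two knobs the QPSO statement below tunes.
clf = SVC(kernel="rbf", C=1.0, gamma=0.5).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```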
“…Compared with particle swarm optimization, the iterative equation of the quantum particle swarm optimization algorithm does not require a particle velocity vector and has fewer parameters to adjust, so it is easier to implement. Experimental results on widely used benchmark functions show that the quantum particle swarm optimization algorithm performs better than the standard particle swarm optimization algorithm [25][26]. To optimize the classifier parameters, the quantum particle swarm optimization algorithm [40] will be used to tune the basic parameters (penalty factor, kernel parameter) and the adjustable parameters of the SVM to obtain the optimal parameter combination.…”
Section: Introduction (mentioning)
confidence: 99%
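
A hedged sketch of the tuning loop the statement describes: quantum-behaved PSO searching over the SVM penalty factor C and RBF kernel parameter gamma. The update rule follows the commonly published QPSO formulation (mean-best point, local attractor, contraction-expansion coefficient); it is not necessarily the exact variant of [40], and the data set and sizes are illustrative.

```python
import numpy as np
from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.25, random_state=0)
rng = np.random.default_rng(0)

def fitness(pos):
    C, gamma = np.exp(pos)  # search in log space so parameters stay positive
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

n_particles, n_iter, dim = 10, 15, 2
x = rng.uniform(-3, 3, (n_particles, dim))        # [log C, log gamma]
pbest, pbest_f = x.copy(), np.array([fitness(p) for p in x])
gbest = pbest[pbest_f.argmax()]

for t in range(n_iter):
    beta = 1.0 - 0.5 * t / n_iter                 # contraction-expansion coefficient
    mbest = pbest.mean(axis=0)                    # mean of personal bests
    for i in range(n_particles):
        phi = rng.random(dim)
        p = phi * pbest[i] + (1 - phi) * gbest    # local attractor
        u = rng.random(dim)
        sign = np.where(rng.random(dim) < 0.5, 1.0, -1.0)
        x[i] = p + sign * beta * np.abs(mbest - x[i]) * np.log(1.0 / u)
        f = fitness(x[i])
        if f > pbest_f[i]:
            pbest[i], pbest_f[i] = x[i].copy(), f
    gbest = pbest[pbest_f.argmax()]

print("best (C, gamma):", np.exp(gbest), "cv accuracy:", pbest_f.max())
```

No velocity vector is maintained, which is the practical simplification the quoted passage highlights relative to standard PSO.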
“…The extreme learning machine is a supervised learning algorithm originally designed for single-hidden-layer feedforward neural networks [3,21]. After extensive research in recent years, it has been modified and extended to work for deep neural networks as well; details can be found in [34][35][36][37]. We use the original form of the ELM to keep things simple and fast.…”
Section: Layer 2: Extreme Learning Machine Ensemble (mentioning)
confidence: 99%
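
For reference, the "original form of the ELM" mentioned above fits a single hidden layer with random, untrained input weights and solves the output weights in one regularized least-squares step. The sketch below is illustrative, not the cited ensemble's code; an ensemble would train several such ELMs with different seeds and combine their outputs.

```python
import numpy as np

def elm_train(X, Y, n_hidden=200, reg=1e-3, rng=np.random.default_rng(0)):
    W = rng.standard_normal((X.shape[1], n_hidden))  # random, never trained
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)                           # hidden-layer output matrix
    # Closed-form, ridge-regularized output weights.
    beta = np.linalg.solve(H.T @ H + reg * np.eye(n_hidden), H.T @ Y)
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# One-hot labels; random data used purely to show shapes and flow.
X = np.random.rand(120, 20)
Y = np.eye(3)[np.random.randint(0, 3, 120)]
W, b, beta = elm_train(X, Y)
pred = elm_predict(X, W, b, beta).argmax(axis=1)
print("train accuracy:", (pred == Y.argmax(axis=1)).mean())
```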
“…In terms of algorithms, various machine-learning models, such as Naïve Bayes [15,16], ensembles [17], or deep learning structures [18], have been used for crime prediction, but deep neural networks (DNNs) produced better results in our previous experiments. This study uses DNNs because they perform representation learning and have been applied to cross-lingual transfer [19], speech recognition [20][21][22][23], image recognition [24][25][26][27], sentiment analysis [28][29][30][31][32], and biomedical tasks [33]. Although the upper bound of prediction performance still depends on the problem and the data themselves, the automatic feature extraction of DNNs [34] allows rapid model building without manual feature processing, thus lowering the application threshold imposed by feature engineering.…”
Section: Related Work (mentioning)
confidence: 99%