2016
DOI: 10.1016/j.neunet.2015.10.008

One pass learning for generalized classifier neural network

Cited by 20 publications (6 citation statements)
References 16 publications
“…The discriminant boundary of the PNN is close to the Bayesian optimal decision surface. RBFNN uses RBF to construct the space of hidden layers to make the input data linearly separable [36]. GS-SVM uses a kernel function to map low-dimensional data and constructs a hyperplane to make the data separable.…”
Section: Taste Sensation Evaluation (mentioning)
confidence: 99%
“…Unlike other radial-basis-function-based artificial neural networks, L-GCNN uses a logarithmic function, which reduces the number of iterations needed to reach minimal error. The method has five main layers: the input, pattern, summation, normalization, and output layers [14].…”
Section: Logarithmic Learning for Generalized Classifier Neural Network (unclassified)
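The five-layer structure described in the excerpt above (input, pattern, summation, normalization, output) can be illustrated with a minimal sketch. This is a hypothetical simplification, not the paper's exact GCNN formulation: the pattern layer here is a plain Gaussian RBF over stored training samples, the summation layer accumulates activations per class, the normalization layer rescales them to sum to one, and the output layer picks the largest score. The function name and the `sigma` smoothing parameter are illustrative assumptions.

```python
import math

def gcnn_predict(train_X, train_y, x, sigma=1.0):
    """Sketch of a GCNN-style five-layer forward pass (assumed
    simplification): input -> pattern -> summation -> normalization
    -> output. One stored training sample per pattern neuron."""
    # Pattern layer: Gaussian RBF activation for each stored sample.
    kernels = [
        math.exp(-sum((a - b) ** 2 for a, b in zip(xi, x)) / (2 * sigma ** 2))
        for xi in train_X
    ]
    # Summation layer: accumulate activations per class label.
    sums = {}
    for k, label in zip(kernels, train_y):
        sums[label] = sums.get(label, 0.0) + k
    # Normalization layer: rescale so the class scores sum to one.
    total = sum(sums.values()) or 1.0
    scores = {label: s / total for label, s in sums.items()}
    # Output layer: the class with the largest normalized score.
    return max(scores, key=scores.get)
```

Because the pattern layer simply stores the training samples, "training" amounts to memorizing the data, which is what makes single-pass operation possible in networks of this family (GRNN, PNN, GCNN).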
“…For the convenience of description, the autoencoder model implemented by multi-layer perceptrons will be referred to as AE-MLP and the autoencoder model implemented by residual networks will be referred to as AE-RES in the remainder of this paper. The AE-MLP model needs to be trained in the iterative mode, while the AE-RES model can be trained in the one-pass learning mode [52,53]. (2) The high-dimensional features extracted by the autoencoders are reduced to 2D using the uniform manifold approximation and projection (UMAP) algorithm [54], which is a commonly used dimensionality reduction method in life sciences [55,56].…”
Section: Introduction (mentioning)
confidence: 99%
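The distinction the excerpt draws between iterative and one-pass training can be made concrete with a small sketch. In one-pass learning, the model's parameters are fixed after a single sweep through the data, with no repeated epochs or gradient refinement. The example below is a hypothetical stand-in (a nearest-class-centroid classifier built in one pass), not the cited papers' AE-RES or GCNN procedure; both function names are assumptions.

```python
def one_pass_fit(stream):
    """Build per-class centroids in exactly one sweep over the data:
    an illustration of one-pass learning, where no sample is revisited."""
    sums, counts = {}, {}
    for x, label in stream:  # single pass: each sample seen once
        acc = sums.setdefault(label, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    # Centroid = per-class mean of the accumulated feature sums.
    return {label: [s / counts[label] for s in acc]
            for label, acc in sums.items()}

def centroid_predict(centroids, x):
    # Assign x to the class whose centroid is nearest (squared Euclidean).
    return min(centroids,
               key=lambda c: sum((a - b) ** 2
                                 for a, b in zip(centroids[c], x)))
```

Iterative training, by contrast, would revisit the data over many epochs to minimize a loss; one-pass schemes trade that refinement for a single, bounded sweep.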