2017
DOI: 10.1016/j.infrared.2017.07.015

NIRS feature extraction based on deep auto-encoder neural network

Cited by 50 publications (18 citation statements)
References 11 publications
“…Further, many variants were proposed [156]. For example, Liu provided a reference example of a deep auto-encoder used to extract features for classification tasks [157]. To optimize feature extraction by obtaining the required network parameters with a proper learning rate, Song proposed a variable-learning-speed DAE (VLSAE) that adaptively adjusts its learning rate with the help of a multi-scale reconstruction error (MRE) and weight update correlation (WUC) [158].…”
Section: Deep Auto-Encoder
confidence: 99%
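As a rough illustration of the deep auto-encoder feature extraction referred to above, the sketch below compresses a NIR spectrum to a small bottleneck code that can replace the raw spectrum as classifier input. The layer widths, bottleneck size, and all names are illustrative assumptions, not the architecture reported in [157] or the VLSAE of [158].

```python
# Minimal sketch of a deep auto-encoder used as a feature extractor.
# Layer widths and the 10-dimensional bottleneck are assumptions.
import torch
import torch.nn as nn

class DeepAutoEncoder(nn.Module):
    def __init__(self, n_wavelengths=700, n_features=10):
        super().__init__()
        # Encoder compresses the NIR spectrum step by step to a small code.
        self.encoder = nn.Sequential(
            nn.Linear(n_wavelengths, 256), nn.ReLU(),
            nn.Linear(256, 64), nn.ReLU(),
            nn.Linear(64, n_features),
        )
        # Decoder mirrors the encoder and tries to rebuild the spectrum.
        self.decoder = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 256), nn.ReLU(),
            nn.Linear(256, n_wavelengths),
        )

    def forward(self, x):
        code = self.encoder(x)      # low-dimensional features
        recon = self.decoder(code)  # reconstruction of the input spectrum
        return code, recon
```

The bottleneck activations (`code`) stand in for the raw spectra as input to any downstream classifier.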
“…This results in a so-called autoencoder architecture. Unsupervised learning without labels has been used to train autoencoders, which is a major advantage since labeled spectral data can be very expensive to produce [28]. More precisely, autoencoder architectures are used to reconstruct the input, whose size is compressed as it passes through the internal hidden layers.…”
Section: NIR Data Processing Using the Neural Network Anninet
confidence: 99%
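A minimal sketch of the label-free reconstruction training described above, assuming the DeepAutoEncoder class from the earlier sketch and a tensor `spectra` of unlabeled NIR spectra with shape (n_samples, n_wavelengths); the optimizer, learning rate, and epoch count are placeholder assumptions.

```python
import torch

# `spectra` is assumed to be a float tensor of unlabeled NIR spectra.
model = DeepAutoEncoder(n_wavelengths=spectra.shape[1])
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = torch.nn.MSELoss()

for epoch in range(100):
    optimizer.zero_grad()
    _, recon = model(spectra)
    loss = loss_fn(recon, spectra)  # reconstruction error; no labels needed
    loss.backward()
    optimizer.step()
```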
“…The detection of QS complexes is beyond the scope of this paper. The following correct ECG recordings from the MIT-BIH database were used for training and testing: 100, 101, 102, 103, 104, 105, 106, 109, 112, 113, 115, 116, 118, 119, 121, 122, 123, 201, 202, 208, 209, 212, 213, 214, 215, 217, 219, 220, 221, 222, 228, 230, 231, 232, and 234. Furthermore, the following correct ECG recordings from the European ST-T database were used for training and testing: e0103, e0104, e0111, e0112, e0113, e0115, e0116, e0118, e0123, e0127, e0136, e0147, e0151, e0154, e0159, e0161, e0166, e0170, e0203, e0204, e0206, e0207, e0208, e0210, e0212, e0303, e0306, e0404, e0406, e0408, e0409, e0410, e0411, e0417, e0418, e0509, e0601, e0606, e0607, e0609, e0610, e0611, e0612, e0613, e0615, e0704, e0818, and e1304.…”
Section: Data Preparation
confidence: 99%
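For reference, one plausible way to read the MIT-BIH recordings listed above is the `wfdb` Python package; the excerpt does not state which tooling the cited work used, so the package choice and the PhysioNet directory name below are assumptions, and only a few record IDs are shown for brevity.

```python
# Sketch of loading a subset of the listed MIT-BIH recordings with wfdb.
import wfdb

mitbih_records = ["100", "101", "102", "103", "104"]  # subset of the list above

signals = {}
for rec in mitbih_records:
    record = wfdb.rdrecord(rec, pn_dir="mitdb")  # fetches the record from PhysioNet
    signals[rec] = record.p_signal               # waveform array in physical units
```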
“…For the MIT-BIH NST, the following recordings are used for training and testing: 100, 104, 108, 113, 117, 122, 201, 207, 212, 217, 222, 231, 101, 105, 109, 114, 118, 123, 208, 213, 219, 223, 232, 102, 106, 111, 115, 119, 124, 203, 209, 214, 220, 228, 233, 103, 107, 112, 116, 121, 200, 205, 210, 215, 221, 230, and 234. The QRS complex inverter is applied to the MIT-BIH database as it contains many inverted QRS complexes.…”
Section: MIT-BIH NST
confidence: 99%
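The excerpt does not describe how the QRS complex inverter works; the sketch below is only a simplified guess at the idea (flipping the polarity of a lead whose R peaks point downward), and the detection rule based on the mean R-peak amplitude is an assumption for illustration.

```python
import numpy as np

def invert_if_negative(ecg, r_peak_indices):
    """Return the ECG with polarity flipped when its R peaks are mostly negative."""
    if np.mean(ecg[r_peak_indices]) < 0:
        return -ecg  # flip inverted QRS complexes upright
    return ecg
```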