2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2014.6854517
Introducing shared-hidden-layer autoencoders for transfer learning and their application in acoustic emotion recognition

Cited by 70 publications (47 citation statements) | References 15 publications
“…The encoder function can be written in the form $h = f(Wx + b)$ (3), where $f$ is a nonlinear activation function, typically a logistic sigmoid [6], [5], $W$ is the weight matrix, and $b$ is the bias vector.…”
Section: A. Local Invariant Feature Learning (mentioning)
confidence: 99%
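A minimal NumPy sketch of this encoder, assuming the logistic sigmoid as the activation $f$; the names sigmoid, encode, W, and b are illustrative choices, not taken from the cited paper.

import numpy as np

def sigmoid(z):
    # Logistic sigmoid, the activation typically used for f in Eq. (3).
    return 1.0 / (1.0 + np.exp(-z))

def encode(x, W, b):
    # Encoder h = f(Wx + b): W is the weight matrix, b the bias vector.
    return sigmoid(W @ x + b)

# Usage: map a 4-dimensional input to a 3-unit hidden representation.
rng = np.random.default_rng(0)
h = encode(rng.standard_normal(4), rng.standard_normal((3, 4)), np.zeros(3))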
“…$W' = W^\top$ is the weight matrix shared with the encoder and $b'$ is the bias vector. The auto-encoder trains the network by adjusting the parameters on the collected training set to minimize the total reconstruction error $J(W, b) = \frac{1}{N} \sum_{i=1}^{N} L(x_i, \hat{x}_i)$ (5), where $L(x, \hat{x}) = \lVert x - \hat{x} \rVert^2$ is the reconstruction error that is computed using the squared error. In order to encourage units to maintain a low average activation, an additional penalty term is added into (5): $J_{\text{sparse}}(W, b) = J(W, b) + \beta \sum_{j=1}^{K} \mathrm{KL}(\rho \,\Vert\, \hat{\rho}_j)$ (6), where $\beta \sum_{j} \mathrm{KL}(\rho \,\Vert\, \hat{\rho}_j)$ is the sparse penalty term, $\hat{\rho}_j$ is the average activation of hidden unit $j$ (averaged over the training set), $K$ denotes the number of active units, and $N$ denotes the number of training samples.…”
Section: A. Local Invariant Feature Learning (mentioning)
confidence: 99%
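A compact sketch of this training objective, assuming tied weights (the decoder reuses the transposed encoder matrix), sigmoid activations, mean squared reconstruction error for Eq. (5), and a KL-divergence sparsity penalty for Eq. (6); the function name and the values of rho and beta are illustrative assumptions, not taken from the cited paper.

import numpy as np

def sparse_autoencoder_loss(X, W, b_enc, b_dec, rho=0.05, beta=3.0):
    # X: (N, d) training set; W: (K, d) weight matrix shared by encoder and decoder.
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    H = sig(X @ W.T + b_enc)        # hidden activations f(Wx + b)
    X_hat = sig(H @ W + b_dec)      # reconstruction through the tied decoder
    recon = np.sum((X - X_hat) ** 2) / X.shape[0]  # squared error term, Eq. (5)
    rho_hat = H.mean(axis=0)        # average activation of each hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat)
                + (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))  # sparsity term, Eq. (6)
    return recon + beta * kl

# Usage: evaluate the loss on a random 10-sample, 4-feature set with 3 hidden units.
rng = np.random.default_rng(0)
loss = sparse_autoencoder_loss(rng.random((10, 4)), rng.standard_normal((3, 4)),
                               np.zeros(3), np.zeros(4))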
“…e.g., by DAE neural networks [12, 11, 13]. In fact, emotional data resources often come in very different labellings, such as (often different) categories or dimensions; transferring by suited machine-learning approaches can also be of help in this respect.…”
Section: Transfer Learning (mentioning)
confidence: 99%
“…Prior works on speech emotion recognition utilize various methods to deal with low-resource training samples, including unsupervised representation learning [6, 7] and transfer learning [8, 9]. Unsupervised representation learning takes full advantage of the information from unlabeled data.…”
Section: Introduction (mentioning)
confidence: 99%