2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2016.7472689
DNN speaker adaptation using parameterised sigmoid and ReLU hidden activation functions

Cited by 56 publications (37 citation statements); references 20 publications.
“…A bias is added to the weighted sum to determine whether this neuron should be activated. Commonly used activation functions include the linear function, the sigmoid function, the exponential linear unit (ELU), and the rectified linear unit (ReLU). In computational materials science, the feature values are obtained by numerical simulations, corresponding to one or more target values such as potential energy or mechanical properties.…”
Section: Approaches In Computational Materials Science
confidence: 99%
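The activation functions named in this excerpt can be sketched in a few lines. These are the standard textbook formulations, not definitions taken from the cited paper:

```python
import numpy as np

def linear(x):
    # Identity activation: output equals the pre-activation value.
    return x

def sigmoid(x):
    # Logistic sigmoid, squashing inputs into (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

def relu(x):
    # Rectified linear unit: zero for negative inputs, identity otherwise.
    return np.maximum(0.0, x)

def elu(x, alpha=1.0):
    # Exponential linear unit: identity for x > 0, alpha*(exp(x)-1) otherwise.
    return np.where(x > 0, x, alpha * np.expm1(x))

z = np.array([-2.0, 0.0, 2.0])
print(relu(z))       # [0. 0. 2.]
print(sigmoid(0.0))  # 0.5
```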
“…Here η and γ still refer to the output and input value scaling vectors for tanh_{η,γ}(·) and ReLU_η(·). Other types of parameterised activation functions have also been investigated for both conventional modelling [22][23][24][25] and speaker adaptation [26][27][28][29].…”
Section: Parameterised Activation Function For STUs
confidence: 99%
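A minimal sketch of the parameterised activations described above, assuming the common form in which η scales the activation output and γ scales its input, applied element-wise. The exact parameterisation used in the paper may differ:

```python
import numpy as np

def p_tanh(a, eta, gamma):
    # tanh_{eta,gamma}(a) = eta * tanh(gamma * a), element-wise:
    # gamma scales the input, eta scales the output.
    return eta * np.tanh(gamma * a)

def p_relu(a, eta):
    # ReLU_eta(a) = eta * max(0, a), element-wise output scaling.
    return eta * np.maximum(0.0, a)

a = np.array([-1.0, 0.5, 2.0])
eta = np.ones_like(a)    # speaker-independent initialisation: unit scaling
gamma = np.ones_like(a)  # recovers the standard tanh / ReLU
out = p_tanh(a, eta, gamma)
```

With η and γ initialised to ones, the parameterised forms reduce to the standard activations; adaptation would then re-estimate these vectors per speaker while keeping the weight matrices fixed.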
“…Alternatively, the scaling values can also be linear as in [11], but this will not be investigated in this paper. Subsequently, the lth hidden layer activation outputs are computed by…”
Section: Feature Based Hidden Output Scaling
confidence: 99%
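The hidden-output scaling idea referred to in this excerpt can be illustrated as follows. All names (W, b, s) are hypothetical, and the standard form h_l = ReLU(W_l h_{l-1} + b_l) with an element-wise speaker-dependent scaling vector is assumed:

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))  # layer-l weight matrix (illustrative sizes)
b = np.zeros(4)                  # layer-l bias
h_prev = rng.standard_normal(3)  # outputs of layer l-1

# Speaker-independent hidden outputs of layer l.
h = np.maximum(0.0, W @ h_prev + b)

# Speaker-dependent element-wise scaling of the hidden outputs:
# only s is re-estimated per speaker; W and b stay fixed.
s = np.full(4, 0.9)
h_adapted = s * h
```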
“…Model based adaptation approaches adapt models containing acoustic condition dependent parameters. These parameters can be linear transformations of the Gaussian mixture model (GMM) parameters, such as maximum likelihood linear regression (MLLR) transforms [3,4,5], weight vectors for GMM mean or DNN transformation interpolation [6,7,8,9], or scaling values applied to the DNN hidden outputs [10,11]. The adaptation can be applied to the evaluation data directly, or after adaptive training [12], which jointly trains both speaker dependent and speaker independent parameters on the training data.…”
Section: Introduction
confidence: 99%