2015
DOI: 10.1109/taslp.2015.2412466

Laplace Group Sensing for Acoustic Models

Abstract: This paper presents group sparse learning for acoustic models, where a sequence of acoustic features is driven by a Markov chain and each feature vector is represented by groups of basis vectors. The group of common bases represents the features shared across Markov states within a regression class, while the group of individual bases compensates for the intra-state residual information. A Laplace distribution is used as the sparse prior on the sensing weights for the group basis representation. The Laplace parameter serves as regularizati…
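As a hedged reading of this setup (the notation below is assumed for illustration, not taken from the paper), each feature vector can be written as a combination of common bases shared across Markov states in a regression class and individual bases that absorb the intra-state residual, with a Laplace prior on the sensing weights:

\[
\mathbf{x}_t \approx \Phi^{c}\,\mathbf{w}^{c}_t + \Phi^{s}\,\mathbf{w}^{s}_t,
\qquad
p(w \mid \lambda) = \frac{\lambda}{2}\exp\!\left(-\lambda\,|w|\right),
\]

so the negative log-prior contributes an L1 penalty \(\lambda \sum_k |w_k|\) and the Laplace parameter \(\lambda\) plays the role of the regularization weight mentioned in the abstract.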

Cited by 11 publications (8 citation statements); References: 25 publications.
“…The paradigm of combining traditional and modern machine learning based on probabilistic models and neural networks [8] is addressed. The second section surveys a number of Bayesian methods ranging from latent variable models to variational inference [5,9,36], sampling methods [4,6,7], deep unfolding [11] and Bayesian neural networks [29]. In the third section, a series of advanced deep models including end-to-end memory networks [12,37], sequence-to-sequence networks [20,22], convolutional networks [8,16,25,38], dilated networks [2] and attention networks [13,17,35] are introduced.…”
Section: Bayesian Information Processing (mentioning, confidence: 99%)
“…A new paradigm called symbolic neural learning is introduced to extend how data analysis is performed, from language processing to semantic learning and memory networking. Secondly, we address a number of Bayesian models ranging from latent variable models to VB inference (Chien and Chang, 2014; Chien and Chueh, 2011; Chien, 2015b), MCMC sampling (Watanabe and Chien, 2015) and BNP learning (Chien, 2016; Chien, 2015a; Chien, 2018) for hierarchical, thematic and sparse topics from natural language. In the third part, a series of deep models including deep unfolding (Chien and Lee, 2018), Bayesian RNN (Gal and Ghahramani, 2016; Chien and Ku, 2016), sequence-to-sequence learning (Graves et al., 2006; Gehring et al., 2017), CNN (Kalchbrenner et al., 2014; Xingjian et al., 2015), GAN (Tsai and Chien, 2017) and VAE are introduced.…”
Section: Description of Tutorial Content (mentioning, confidence: 99%)
“…With the estimated posterior distributions, the original parameters can be effectively reconstructed in polynomial fitting problems, and the BALSON framework is found to perform better than conventional methods. Index Terms: Bayesian learning, least squares optimization, L1-norm constraint, Dirichlet distribution, sampling method. 1. INTRODUCTION: In machine learning and statistics, optimization methods, including Newton's method [1], the quasi-Newton method [1], the sequential quadratic programming (SQP) method [2], the gradient descent method [3], the interior-point (IP) method [4], and Bayesian methods [5,6,7], are widely applied. Least squares optimization (LSO), one of the unconstrained optimization problems, uses the residual sum of squares (RSS) error as its objective function.…”
(mentioning, confidence: 99%)
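For reference, a minimal sketch of that objective in generic notation (the symbols here are assumptions, not taken from the cited paper): with observations y, design matrix X and weights w, the unconstrained LSO problem is

\[
\min_{\mathbf{w}} \; \mathrm{RSS}(\mathbf{w}) \;=\; \sum_{n=1}^{N}\bigl(y_n - \mathbf{x}_n^{\top}\mathbf{w}\bigr)^2 \;=\; \|\mathbf{y} - \mathbf{X}\mathbf{w}\|_2^2,
\]

and the L1-norm constraint discussed in the next excerpt adds a penalty \(\lambda\,\|\mathbf{w}\|_1\) to this objective.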
“…For example, with the L1-norm constraint, the prior distribution is usually assumed to be a Laplacian [6,13,14]. Chien [5] proposed a Bayesian framework based on the Laplace prior of model parameters for sparse representation of sequential data. Finding the mode of the posterior distribution for a Gaussian likelihood and Laplacian prior can solve the sparse optimization problem with numerical simulation. There exists another type of regularization with a nonnegative L1-norm constraint, i.e., the regularization term contains nonnegative elements only [9].…”
(mentioning, confidence: 99%)
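To make the Gaussian-likelihood/Laplace-prior connection concrete, here is a minimal sketch (my own illustration, not the implementation from [5]; the function names and the ISTA solver are assumptions) of the resulting MAP estimate as an L1-regularized least squares problem:

import numpy as np

def soft_threshold(z, tau):
    # Proximal operator of tau * ||.||_1 (elementwise soft-thresholding).
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def map_laplace_ista(X, y, lam, n_iter=500):
    # MAP estimate for y ~ N(Xw, I) with an i.i.d. Laplace prior on w:
    # maximizing the posterior is the same as minimizing
    #     0.5 * ||y - Xw||^2 + lam * ||w||_1,
    # solved here with ISTA (proximal gradient descent).
    w = np.zeros(X.shape[1])
    step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y)                 # gradient of the Gaussian (RSS) term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy usage: recover a sparse weight vector from noisy observations.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))
w_true = np.zeros(20)
w_true[[2, 7, 11]] = [1.5, -2.0, 0.8]
y = X @ w_true + 0.1 * rng.normal(size=100)
w_map = map_laplace_ista(X, y, lam=1.0)

A larger lam (a sharper Laplace prior) drives more weights exactly to zero, which is the sparsity effect the quoted passage refers to.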