1983
DOI: 10.1002/j.1538-7305.1983.tb03114.x

An Introduction to the Application of the Theory of Probabilistic Functions of a Markov Process to Automatic Speech Recognition

Cited by 791 publications (320 citation statements)
References 16 publications
“…BKT models are usually fit using the expectation maximization method (EM) [2], Conjugate Gradient Search [1], or discretized brute-force search [7].…”
Section: Bayesian Knowledge Tracing (mentioning)
confidence: 99%
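The statement above lists three ways BKT models are typically fit. The simplest to illustrate is the discretized brute-force search: evaluate the likelihood of the observed response sequence on a grid over the four standard BKT parameters (initial mastery, learning rate, slip, guess) and keep the best point. The sketch below is an illustration of that idea, not code from any of the cited papers; the function names and the grid step are my own choices.

```python
import itertools

def bkt_likelihood(obs, p_init, p_learn, p_slip, p_guess):
    """Likelihood of a binary response sequence under standard BKT.

    obs: list of 0/1 (incorrect/correct) responses for one skill.
    """
    p_known = p_init
    likelihood = 1.0
    for correct in obs:
        p_correct = p_known * (1 - p_slip) + (1 - p_known) * p_guess
        likelihood *= p_correct if correct else (1 - p_correct)
        # Bayesian update of the mastery estimate given the response ...
        if correct:
            p_known = p_known * (1 - p_slip) / p_correct
        else:
            p_known = p_known * p_slip / (1 - p_correct)
        # ... then apply the learning transition.
        p_known = p_known + (1 - p_known) * p_learn
    return likelihood

def brute_force_fit(obs, step=0.05):
    """Discretized brute-force search over the four BKT parameters."""
    grid = [i * step for i in range(1, int(round(1 / step)))]
    best_params, best_ll = None, -1.0
    for params in itertools.product(grid, repeat=4):
        ll = bkt_likelihood(obs, *params)
        if ll > best_ll:
            best_params, best_ll = params, ll
    return best_params, best_ll
```

In practice the grid is often constrained further (e.g. slip and guess bounded below 0.5) to avoid degenerate "model-flipped" solutions, and EM or gradient methods are preferred when the number of skills is large.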
“…Instead of a traditional Expectation Maximization (EM) method for learning BKT parameters, we base our method on the so-called optimization techniques approach described in [2] for the following reasons. First, EM does not directly optimize a likelihood of the student observations given BKT parameters (a standard metric for HMM).…”
Section: Bayesian Knowledge Tracing With Student-Specific Parameters (mentioning)
confidence: 99%
“…First, we set some initial values for λ. Then, we obtain new values of these parameters in each iteration, using increasing transformations, applying the Baum-Eagon inequality [10,11]. It is guaranteed that the new estimated values increase the value of the objective function and, therefore, its convergence.…”
Section: Cyclic Training (mentioning)
confidence: 99%
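The re-estimation step this quote describes rests on the Baum-Eagon inequality: for a polynomial P with nonnegative coefficients in row-stochastic variables, the growth transform x_i ← x_i (∂P/∂x_i) / Σ_k x_k (∂P/∂x_k) never decreases P, so iterating it converges monotonically. A minimal sketch of one such update, with a toy objective of my own choosing rather than anything from the cited work:

```python
def baum_eagon_step(x, grad):
    """One Baum-Eagon growth-transform update.

    x    : probability vector (sum(x) == 1) of a polynomial objective P
           with nonnegative coefficients.
    grad : grad[i] = dP/dx[i] evaluated at x.

    The transform x_i <- x_i * grad_i / sum_k(x_k * grad_k) is guaranteed
    not to decrease P, which gives the monotone convergence the quote
    refers to.
    """
    weighted = [xi * gi for xi, gi in zip(x, grad)]
    z = sum(weighted)
    return [w / z for w in weighted]

# Toy objective: P(x) = x0^2 * x1, maximized on the simplex at (2/3, 1/3).
def toy_grad(x):
    return [2 * x[0] * x[1], x[0] ** 2]
```

For a single monomial like this toy P, the transform lands on the maximizer in one step; for the HMM likelihood (a large sum of monomials in the a_ij and b_ij) it yields the familiar Baum-Welch-style monotone iteration.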
“…Let A = {a_ij} be the matrix of transition probabilities between states (1 ≤ i, j ≤ n, where n is the number of states) and let B = {b_ij} be the matrix of observation probabilities (1 ≤ i ≤ n and 1 ≤ j ≤ w) [2]. As we know that Σ_{j=0}^{n} a_ij = 1 for 0 ≤ i ≤ n and that (3) is a polynomial with respect to A, the new estimate, ā_ij, can be obtained with the Baum-Eagon inequality [10,11]. Applying logarithms and [10] to (3), we conclude that:…”
Section: Cyclic Training (mentioning)
confidence: 99%
“…[19][20][21][22][23][24][25] We have previously reported the application of hidden Markov models to the "two-color problem" where a molecule fluctuates between two states that can be distinguished based on the color of the fluorescence. 26 In that work, it was observed that estimation of two-state kinetic parameters was robust even in the presence of considerable background and spectral crosstalk in the data and for kinetic rates comparable to the photon count rates.…”
Section: Introduction (mentioning)
confidence: 99%