2010
DOI: 10.1007/s12559-010-9041-8
Bidirectional LSTM Networks for Context-Sensitive Keyword Detection in a Cognitive Virtual Agent Framework

Abstract: Robustly detecting keywords in human speech is an important precondition for cognitive systems, which aim at intelligently interacting with users. Conventional techniques for keyword spotting usually show good performance when evaluated on well articulated read speech. However, modeling natural, spontaneous, and emotionally colored speech is challenging for today's speech recognition systems and thus requires novel approaches with enhanced robustness. In this article, we propose a new architecture for vocabula…

Cited by 75 publications (46 citation statements)
References 51 publications
“…Many linguistic problems feature dependencies at longer distances, which LSTMs are better able to capture than convolutional or plain recurrent approaches. Bidirectional LSTM (Bi-LSTM) networks (Graves and Schmidhuber, 2005; Graves et al., 2005; Wöllmer et al., 2010) also use future context, and recent work has shown advantages of Bi-LSTM networks for sequence labeling and named entity recognition (Huang et al., 2015; Chiu and Nichols, 2015; Wang et al., 2015; Lample et al., 2016; Ma and Hovy, 2016; Plank et al., 2016).…”
Section: Related Work
confidence: 99%
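The citation statement above notes that Bi-LSTMs use future as well as past context: one recurrent pass runs left-to-right, a second runs right-to-left, and each timestep's representation concatenates the two. The sketch below illustrates only that bidirectional wiring, using a plain tanh RNN cell in place of a full gated LSTM cell for brevity; all function names and dimensions are illustrative, not from the paper.

```python
import numpy as np

def rnn_pass(xs, Wx, Wh, b):
    """Run a simple tanh RNN over a sequence; return the hidden state
    at every timestep (stand-in for an LSTM's recurrent pass)."""
    h = np.zeros(Wh.shape[0])
    states = []
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)
        states.append(h)
    return states

def bidirectional_states(xs, Wx, Wh, b):
    """Concatenate forward and backward hidden states per timestep,
    so each position sees both past and future context."""
    fwd = rnn_pass(xs, Wx, Wh, b)            # left-to-right pass
    bwd = rnn_pass(xs[::-1], Wx, Wh, b)[::-1]  # right-to-left, realigned
    return [np.concatenate([f, bk]) for f, bk in zip(fwd, bwd)]

rng = np.random.default_rng(0)
T, d_in, d_h = 5, 3, 4                       # toy sequence and sizes
xs = [rng.standard_normal(d_in) for _ in range(T)]
Wx = rng.standard_normal((d_h, d_in)) * 0.1
Wh = rng.standard_normal((d_h, d_h)) * 0.1
b = np.zeros(d_h)
states = bidirectional_states(xs, Wx, Wh, b)
print(len(states), states[0].shape)          # 5 timesteps, each 2*d_h = 8 dims
```

In a real Bi-LSTM the two directions use separate weight matrices and gated LSTM cells, but the per-timestep concatenation shown here is the same mechanism that gives sequence labelers access to right context.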
“…Long Short-Term Memory networks have shown excellent performance in many pattern recognition disciplines [30]-[33].…”
Section: Long Short-Term Memory
confidence: 99%
“…Tasks that would benefit from such algorithms abound, including distributed multimedia classification [12], event detection with arrays of microphones [13], classification of texts in cluster environments [14], and prediction of highly nonlinear time-series in wireless sensor networks [4]. Still, it is known that training an RNN model is a challenging task even in a centralized context, which is far from being fully solved [15]-[18]. As such, there is a lack of available distributed protocols for RNN models satisfying all the requirements discussed above.…”
Section: Introduction
confidence: 99%