Interspeech 2016
DOI: 10.21437/interspeech.2016-1583

LatticeRnn: Recurrent Neural Networks Over Lattices

Abstract: We present a new model called LATTICERNN, which generalizes recurrent neural networks (RNNs) to process weighted lattices as input, instead of sequences. A LATTICERNN can encode the complete structure of a lattice into a dense representation, which makes it suitable to a variety of problems, including rescoring, classifying, parsing, or translating lattices using deep neural networks (DNNs). In this paper, we use LATTICERNNs for a classification task: each lattice represents the output from an automatic speech…
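The abstract comes with no reference code, but the input structure it describes is easy to picture: a weighted lattice is a directed acyclic graph whose arcs carry word hypotheses and scores, and any LatticeRNN-style encoder has to visit its nodes in topological order. Below is a minimal, purely illustrative Python sketch; the Arc layout and the toy lattice are assumptions for illustration, not taken from the paper.

```python
from collections import namedtuple, defaultdict, deque

# One arc of a weighted word lattice: a word hypothesis with a posterior
# score, connecting two lattice nodes (states).
Arc = namedtuple("Arc", ["src", "dst", "word", "posterior"])

# Toy lattice for "a cat sat" vs. "a mat sat" (node 0 = start, node 3 = final).
arcs = [
    Arc(0, 1, "a",   1.00),
    Arc(1, 2, "cat", 0.65),
    Arc(1, 2, "mat", 0.35),
    Arc(2, 3, "sat", 1.00),
]

def topological_order(arcs):
    """Kahn's algorithm over lattice nodes; lattices are DAGs, so this always
    succeeds and yields the order in which a lattice encoder can visit nodes."""
    succ, indeg, nodes = defaultdict(list), defaultdict(int), set()
    for a in arcs:
        succ[a.src].append(a.dst)
        indeg[a.dst] += 1
        nodes.update((a.src, a.dst))
    queue = deque(n for n in nodes if indeg[n] == 0)
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in succ[n]:
            indeg[m] -= 1
            if indeg[m] == 0:
                queue.append(m)
    return order

print(topological_order(arcs))  # e.g. [0, 1, 2, 3]
```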

Cited by 66 publications (53 citation statements)
References 21 publications
“…However, to the best of our knowledge they have not been used with Neural Semantic Parsers implemented by Recurrent Neural Networks (RNNs) or similar architectures. The closest work would be [8], who propose to traverse an input lattice in topological order and use the RNN hidden state of the lattice final state as the dense vector representing the entire lattice. However, word confusion networks provide a much better and more efficient solution thanks to token alignments.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
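The contrast this statement draws with word confusion networks can be made concrete: because a confusion network is a sequence of aligned bins of alternatives, the token posteriors can simply mix the embeddings inside each bin, and the result can be fed to an ordinary sequence RNN with no lattice-specific machinery. The sketch below is a rough illustration of that idea; the embedding table, weight names, and toy confusion network are assumptions, not taken from the cited papers.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = {"a": 0, "cat": 1, "mat": 2, "sat": 3}
emb_dim, hid_dim = 8, 16
E = rng.normal(size=(len(vocab), emb_dim))        # toy word embeddings
W_x = rng.normal(size=(hid_dim, emb_dim)) * 0.1   # input-to-hidden weights
W_h = rng.normal(size=(hid_dim, hid_dim)) * 0.1   # hidden-to-hidden weights

# A confusion network: one bin per time step, each bin a list of
# (word, posterior) alternatives that are already aligned.
confusion_net = [
    [("a", 1.00)],
    [("cat", 0.65), ("mat", 0.35)],
    [("sat", 1.00)],
]

h = np.zeros(hid_dim)
for bin_ in confusion_net:
    # Posterior-weighted mixture of the bin's embeddings: this alignment is
    # what lets a plain sequence RNN consume the structure directly.
    x = sum(p * E[vocab[w]] for w, p in bin_)
    h = np.tanh(W_x @ x + W_h @ h)                # simple RNN cell update

print(h.shape)  # (16,) -- a dense vector for the whole confusion network
```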
“…Those DAGs are highly flexible structures that can be additionally enriched with a wide range of features [21,22]. Recently there has been much interest in examining neural network extensions to DAGs and other general graph structures [23,18,24]. The key question that any such approach needs to answer is how information associated with multiple graph arcs or nodes is combined.…”
Section: Lattice Recurrent Neural Network (citation type: mentioning; confidence: 99%)
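The "key question" raised here, how to combine information carried by several arcs meeting at one node, admits several standard answers. The sketch below illustrates three common pooling choices (posterior-weighted averaging, element-wise max, and attention-style weighting); the parameter names and shapes are invented for illustration and are not taken from any of the cited papers.

```python
import numpy as np

rng = np.random.default_rng(1)
hid_dim = 16

# States of three arcs entering the same lattice node, with their posteriors.
arc_states = rng.normal(size=(3, hid_dim))
posteriors = np.array([0.5, 0.3, 0.2])

# 1) Posterior-weighted average: respects the lattice's own arc weights.
h_weighted = posteriors @ arc_states

# 2) Element-wise max pooling: keeps the strongest activation per dimension.
h_max = arc_states.max(axis=0)

# 3) Attention-style pooling: score each arc state with a learned vector,
#    normalize with a softmax, then take the weighted sum.
w_att = rng.normal(size=hid_dim)
scores = arc_states @ w_att
alphas = np.exp(scores - scores.max())
alphas /= alphas.sum()
h_att = alphas @ arc_states

print(h_weighted.shape, h_max.shape, h_att.shape)  # (16,) (16,) (16,)
```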
“…"LatticeRNN" [7] was originally introduced for the task of classifying user intents in natural language processing. For a topologically-sorted hypothesis lattice, the feature vector of each arc is input to the neural network along with the state vector of the arc's source node, and the output of the neural network becomes the arc's state.…”
Section: The Bidirectional LatticeRNN (citation type: mentioning; confidence: 99%)
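The arc-level recursion described in this statement can be sketched directly: each arc's state is computed from its feature vector together with the state of its source node, node states are pooled from their incoming arc states, and the state at the final node serves as the dense representation of the lattice. The following numpy sketch follows that reading; the posterior-weighted pooling and all parameter names are illustrative choices, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(2)
feat_dim, hid_dim = 8, 16
W_f = rng.normal(size=(hid_dim, feat_dim)) * 0.1   # arc-feature weights (illustrative)
W_s = rng.normal(size=(hid_dim, hid_dim)) * 0.1    # source-node-state weights

# Arcs as (src, dst, posterior, feature_vector), listed so every arc appears
# after all arcs entering its source node (topologically sorted).
# Node 0 is the start node, node 3 the final node.
arcs = [
    (0, 1, 1.00, rng.normal(size=feat_dim)),
    (1, 2, 0.65, rng.normal(size=feat_dim)),
    (1, 2, 0.35, rng.normal(size=feat_dim)),
    (2, 3, 1.00, rng.normal(size=feat_dim)),
]

node_state = {0: np.zeros(hid_dim)}   # the start node gets a zero state
incoming = {}                         # dst node -> list of (posterior, arc state)

for src, dst, post, feat in arcs:
    # Arc state: the arc's features and the source node's state go through
    # the network together, and the output becomes the arc's state.
    arc_state = np.tanh(W_f @ feat + W_s @ node_state[src])
    incoming.setdefault(dst, []).append((post, arc_state))
    # Node state: pool the incoming arc states (posterior-weighted average
    # here; other pooling choices are possible, see the sketch above).
    posts, states = zip(*incoming[dst])
    node_state[dst] = np.average(np.stack(states), axis=0, weights=posts)

lattice_vec = node_state[3]           # dense vector representing the whole lattice
print(lattice_vec.shape)              # (16,)
```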
“…To actively overcome some of the LVCSR's errors, we consider the use of a statistical model that can interpret the hypothesis lattice in a discriminative, data-driven manner. We propose the use of a bidirectional version of "LatticeRNN" [7] for this purpose, and show that a significant gain in accuracy can be obtained compared to using the simple posterior probability.…”
Section: Introduction (citation type: mentioning; confidence: 99%)