2019
DOI: 10.1016/j.specom.2019.02.003

Text normalization using memory augmented neural networks

Abstract: We perform text normalization, i.e. the transformation of words from the written to the spoken form, using a memory augmented neural network. With the addition of a dynamic memory access and storage mechanism, we present a neural architecture that can serve as a language-agnostic text normalization system while avoiding the kind of unacceptable errors made by LSTM-based recurrent neural networks. By successfully reducing the frequency of such mistakes, we show that this novel architecture is indeed a better…
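To make the abstract's idea concrete, here is a minimal sketch of a memory-augmented sequence model: an LSTM controller reads and writes a small external memory via content-based (cosine) addressing, in the spirit of the DNC-style network the paper builds on. All names, sizes, and the simplified single-head read/write scheme are illustrative assumptions, not the authors' implementation; for brevity the sketch emits one output per input token, whereas the paper uses a full sequence-to-sequence setup.

```python
# Hypothetical sketch of a memory-augmented text normalizer (not the
# paper's exact DNC architecture). Requires PyTorch.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryAugmentedNormalizer(nn.Module):
    def __init__(self, vocab_in, vocab_out, hidden=128, slots=32, width=64):
        super().__init__()
        self.hidden, self.slots, self.width = hidden, slots, width
        self.embed = nn.Embedding(vocab_in, hidden)
        self.controller = nn.LSTMCell(hidden + width, hidden)
        self.read_key = nn.Linear(hidden, width)   # content-based read address
        self.write_key = nn.Linear(hidden, width)  # content-based write address
        self.write_val = nn.Linear(hidden, width)  # value blended into memory
        self.out = nn.Linear(hidden + width, vocab_out)

    def _address(self, mem, key):
        # Content-based addressing: softmax over cosine similarities.
        sim = F.cosine_similarity(mem, key.unsqueeze(1), dim=-1)  # (B, slots)
        return F.softmax(sim, dim=-1)

    def forward(self, tokens):                      # tokens: (B, T) int ids
        B, T = tokens.shape
        h = torch.zeros(B, self.hidden)
        c = torch.zeros(B, self.hidden)
        mem = torch.zeros(B, self.slots, self.width)
        read = torch.zeros(B, self.width)
        emb = self.embed(tokens)                    # (B, T, hidden)
        logits = []
        for t in range(T):
            h, c = self.controller(torch.cat([emb[:, t], read], dim=-1), (h, c))
            # Write step: blend a new value into content-addressed slots.
            w = self._address(mem, self.write_key(h)).unsqueeze(-1)
            mem = mem + w * self.write_val(h).unsqueeze(1)
            # Read step: weighted sum of slots feeds the next step and output.
            r = self._address(mem, self.read_key(h)).unsqueeze(-1)
            read = (r * mem).sum(dim=1)
            logits.append(self.out(torch.cat([h, read], dim=-1)))
        return torch.stack(logits, dim=1)           # (B, T, vocab_out)

model = MemoryAugmentedNormalizer(vocab_in=64, vocab_out=96)
x = torch.randint(0, 64, (2, 12))                  # dummy character batch
print(model(x).shape)                              # torch.Size([2, 12, 96])
```

The design point the abstract gestures at: the external memory persists across time steps, so the model can store an exact token (e.g. a long digit string) and copy it back out, rather than squeezing it through a fixed-size LSTM state where single-digit errors arise.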

Cited by 24 publications (25 citation statements)
References 19 publications
“…As the innovation in the modeling process, a matching network based on memory and attention is proposed, which makes rapid learning possible. As the innovation in the training process, this work builds on a principle of traditional ML: training and testing are to be carried out under the same conditions [35], [36]. It is proposed that during training the network repeatedly sees only the few available samples of each type, which keeps training consistent with the testing process.…”
Section: Matching Network
Citation type: mentioning (confidence: 99%)
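The quoted passage describes a matching network that classifies a query by attending over a small support set, so that training episodes mirror test-time conditions. A minimal sketch of that core step follows; the raw 2-D "embeddings" and the tiny episode are invented for illustration only.

```python
# Hypothetical sketch of matching-network attention over a support set.
import torch
import torch.nn.functional as F

def matching_predict(support_x, support_y, query_x, num_classes):
    """support_x: (N, D) embeddings, support_y: (N,) labels, query_x: (D,)."""
    # Attention weights: softmax over cosine similarities to each support item.
    attn = F.softmax(
        F.cosine_similarity(support_x, query_x.unsqueeze(0), dim=-1), dim=0)
    # Prediction: attention-weighted sum of one-hot support labels.
    one_hot = F.one_hot(support_y, num_classes).float()
    return attn @ one_hot                        # (num_classes,) distribution

# Tiny episode: four support points in two classes, one query near class 0.
sx = torch.tensor([[1.0, 0.0], [0.9, 0.1], [0.0, 1.0], [0.1, 0.9]])
sy = torch.tensor([0, 0, 1, 1])
print(matching_predict(sx, sy, torch.tensor([0.95, 0.05]), 2))
```

Because prediction is a lookup over the support set rather than weights fitted per class, the same few-shot episode structure can be used at both training and test time, which is the "same conditions" principle the quote invokes.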
“…The next generation of text normalisation systems used a combination of rules and language models (Sproat et al., 2001; Graliński et al., 2006; Brocki et al., 2012). The latest research has focused on neural networks (Sproat and Jaitly, 2016, 2017; Zare and Rohatgi, 2017; Pramanik and Hussain, 2018; Zhang et al., 2019). Recurrent neural networks (RNN) in particular have shown promising results, but they also tend to fail in some unexpected and unacceptable cases, such as translating a large number with a one-digit mistake or treating cm as kilometres (Zhang et al., 2019).…”
Section: Related Work
Citation type: mentioning (confidence: 99%)
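The failure modes named in the quote are easy to illustrate with a toy written-to-spoken mapping; the pairs below are invented examples, not data from any of the cited papers.

```python
# Invented examples of the written -> spoken task and the "unacceptable"
# RNN error classes described in the quoted passage.
gold = {
    "123":  "one hundred twenty three",
    "2 cm": "two centimeters",
}
# Typical unacceptable outputs per the quote: a one-digit slip inside a
# number, or a unit read as a different unit entirely.
bad = {
    "123":  "one hundred twenty four",   # single-digit mistake
    "2 cm": "two kilometers",            # cm treated as kilometres
}
for written, spoken in gold.items():
    print(f"{written!r} -> {spoken!r}  (unacceptable: {bad[written]!r})")
```

Such outputs are fluent and superficially plausible, which is exactly why they are called unacceptable: unlike an obvious garbling, a reader or listener cannot detect that the value has silently changed.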