2018
DOI: 10.1109/taslp.2018.2855968

Dependency-to-Dependency Neural Machine Translation

Cited by 62 publications (51 citation statements)
References 22 publications
“…As for tree-based NMT models, many different methods have been proposed. Trees can be used on the source side (Li, Xiong, Tu, Zhu, Zhang, and Zhou 2017), on the target side (Aharoni and Goldberg 2017), or on both (Wu et al. 2018); they can be encoded either with tree-structured neural networks (Eriguchi et al. 2016) or with the help of linearization (Sennrich and Haddow 2016); and they can be either constituent trees (Chen, Huang, Chiang, and Chen 2017) or dependency trees (Wu, Zhang, Yang, Li, and Zhou 2017). As for forest-based NMT models, Ma et al. (2018) is the first attempt, where linearized packed forests are encoded using RNNs in order to make the model robust to parsing errors.…”
Section: Syntax-based NMT
confidence: 99%
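To make the linearization strategy mentioned in this statement concrete, here is a minimal sketch in Python of turning a dependency tree into a bracketed token sequence that a standard sequence encoder can consume. The `Node` class and `linearize` helper are hypothetical names for illustration, not any cited paper's actual preprocessing.

```python
# Minimal sketch: linearize a dependency tree into bracketed tokens,
# one common alternative to tree-structured encoders for source-side syntax.
from dataclasses import dataclass, field

@dataclass
class Node:
    word: str
    deprel: str                      # dependency relation label
    children: list = field(default_factory=list)

def linearize(node: Node) -> list[str]:
    """Depth-first traversal emitting relation, word, and bracket tokens."""
    tokens = [f"({node.deprel}", node.word]
    for child in node.children:
        tokens.extend(linearize(child))
    tokens.append(")")
    return tokens

# Toy tree for "the cat sleeps": sleeps -nsubj-> cat -det-> the
tree = Node("sleeps", "root", [Node("cat", "nsubj", [Node("the", "det")])])
print(" ".join(linearize(tree)))
# (root sleeps (nsubj cat (det the ) ) )
```

The resulting token sequence can be fed to an ordinary RNN or Transformer encoder, which is what makes linearization attractive: syntax is injected without changing the model architecture.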
“…Syntactic information can be used on the source side (Eriguchi, Tsuruoka, and Cho 2017), on the target side (Aharoni and Goldberg 2017), or on both (Wu, Zhang, Zhang, Yang, Li, and Zhou 2018). Syntactic information can be represented as constituent trees (Eriguchi, Hashimoto, and Tsuruoka 2016), packed forests (Ma, Tamura, Utiyama, Zhao, and Sumita 2018), or graphs (Hashimoto and Tsuruoka 2017).…”
confidence: 99%
“…Incorporating morphological information into NMT is a challenging area of research. A significant number of works incorporate dependency structure on the source side (Eriguchi et al., 2016; Shi et al., 2016; Bastings et al., 2017; Chen et al., 2017; Hashimoto and Tsuruoka, 2017; Li et al., 2017; Wu et al., 2018; Zhang et al., 2019). Eriguchi et al. (2016) proposed a syntax-aware encoding mechanism that encodes the source sentence while maintaining the hierarchy of its dependency tree.…”
Section: Related Work
confidence: 99%
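Hierarchy-preserving encoders of the kind this statement attributes to Eriguchi et al. (2016) are usually built from a tree-structured recurrent cell. Below is a minimal sketch of a child-sum Tree-LSTM cell in the style of Tai et al. (2015); the class name, dimensions, and usage are illustrative assumptions, not the cited authors' code.

```python
import torch
import torch.nn as nn

class ChildSumTreeLSTM(nn.Module):
    # Child-sum Tree-LSTM cell (Tai et al., 2015 style): composes a node's
    # state from its word embedding and its children's states, bottom-up.
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.iou = nn.Linear(in_dim + hid_dim, 3 * hid_dim)  # input/output/update gates
        self.f = nn.Linear(in_dim + hid_dim, hid_dim)        # one forget gate per child

    def forward(self, x, child_h, child_c):
        # x: (in_dim,); child_h, child_c: (n_children, hid_dim)
        h_sum = child_h.sum(dim=0)                 # sum of children hidden states
        i, o, u = self.iou(torch.cat([x, h_sum])).chunk(3)
        i, o, u = torch.sigmoid(i), torch.sigmoid(o), torch.tanh(u)
        # a separate forget gate per child, conditioned on that child's state
        xs = x.expand(child_h.size(0), -1)
        f = torch.sigmoid(self.f(torch.cat([xs, child_h], dim=1)))
        c = i * u + (f * child_c).sum(dim=0)       # gated memory composition
        h = o * torch.tanh(c)
        return h, c

# Bottom-up pass over a toy two-node tree (leaf -> root).
cell = ChildSumTreeLSTM(4, 8)
empty = torch.zeros(0, 8)                          # leaves have no children
leaf_h, leaf_c = cell(torch.randn(4), empty, empty)
root_h, root_c = cell(torch.randn(4), leaf_h.unsqueeze(0), leaf_c.unsqueeze(0))
```

Applying the cell from the leaves up to the root yields one vector per tree node, so the decoder can attend over syntactic constituents rather than only over the flat token sequence.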
“…Long short-term memory (LSTM), a variant of RNN, can mine long-distance dependencies in time-series data [15]. It is widely used in machine translation [16,17], fault diagnosis [18,19], speech recognition [20,21], and electrocardiogram classification [22,23]. In [24], representations of the speech signal are automatically learned by a CNN, and the temporal structure of those features is then learned by an LSTM; in [25], features of wearable-sensor data are learned by a CNN, and the time dependence between actions is then modeled by an LSTM.…”
confidence: 99%
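The CNN-then-LSTM pattern this statement describes is straightforward to sketch. The following is a minimal, hypothetical PyTorch example, not taken from the cited papers: layer sizes, the pooling step, and the classification head are assumptions. A 1-D convolution extracts local features from the signal, and an LSTM models the temporal dependence between those features.

```python
import torch
import torch.nn as nn

class CNNLSTM(nn.Module):
    # Sketch of the CNN->LSTM pipeline: convolution for local features,
    # LSTM for temporal dependence. All dimensions are illustrative.
    def __init__(self, in_channels=1, conv_channels=32, hidden=64, n_classes=5):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, conv_channels, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),                   # halve the time axis
        )
        self.lstm = nn.LSTM(conv_channels, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                      # x: (batch, channels, time)
        feats = self.conv(x)                   # (batch, conv_channels, time/2)
        feats = feats.transpose(1, 2)          # LSTM expects (batch, time, features)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])           # classify from the last time step

logits = CNNLSTM()(torch.randn(8, 1, 100))     # toy batch: 8 signals, 100 samples each
```

The division of labor is the design point: the convolution is cheap and shift-invariant over short windows, while the LSTM carries information across the whole sequence, which is why the pattern recurs in speech, sensor, and ECG applications.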