Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP) 2014
DOI: 10.3115/v1/d14-1021
Syntactic SMT Using a Discriminative Text Generation Model

Abstract: We study a novel architecture for syntactic SMT. In contrast to the dominant approach in the literature, the system does not rely on translation rules, but treats translation as an unconstrained target sentence generation task, using soft features to capture lexical and syntactic correspondences between the source and target languages. Target syntax features and bilingual translation features are trained consistently in a discriminative model. Experiments using the IWSLT 2010 dataset show that the system achiev…
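The abstract describes scoring candidate translations directly with soft lexical and syntactic features instead of applying translation rules. The sketch below is only a rough illustration of that idea, not the paper's actual model: hypothetical bilingual word-pair features and target-side bigram features (standing in for real target-syntax features) share one weight vector trained consistently with a perceptron-style update.

```python
from collections import defaultdict

def extract_features(source_words, target_words):
    """Hypothetical feature templates: bilingual word-pair features as soft
    lexical correspondences, plus target-side bigrams as a stand-in for the
    richer target-syntax features described in the abstract."""
    feats = defaultdict(float)
    for s in source_words:
        for t in target_words:
            feats[f"bi:{s}|{t}"] += 1.0      # soft bilingual correspondence
    for a, b in zip(target_words, target_words[1:]):
        feats[f"tgt:{a}_{b}"] += 1.0         # target-side fluency/syntax proxy
    return feats

def score(weights, feats):
    """Linear discriminative score: one weight vector over both feature types."""
    return sum(weights[k] * v for k, v in feats.items())

def perceptron_update(weights, source, gold, predicted, lr=1.0):
    """Perceptron-style update (an assumption for illustration): reward features
    of the reference translation, penalize features of the wrong prediction."""
    for k, v in extract_features(source, gold).items():
        weights[k] += lr * v
    for k, v in extract_features(source, predicted).items():
        weights[k] -= lr * v

# Toy usage with made-up words: one update, then rescore the reference.
w = defaultdict(float)
perceptron_update(w, ["ich", "gehe"], ["i", "go"], ["i", "going"])
print(score(w, extract_features(["ich", "gehe"], ["i", "go"])))
```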

Cited by 10 publications (9 citation statements)
References 22 publications
“…[Figure 1: An illustration of our proposed benchmark, which includes diverse CTG instructions, can be used to evaluate whether large language models can properly respond to the control constraints specified in the instructions.] …controllable text generation (CTG) (Zhang et al 2022). While traditional CTG has been extensively studied (Dathathri et al 2019; Zhang and Song 2022), the formulation of control conditions is discrete variables, thus not directly applicable under the new instruction-following paradigm, as the latter entails natural language instructions instead.…”
Section: LLM2 Diversify Instructions
confidence: 99%
“…controllable text generation (CTG) (Zhang et al 2022). While traditional CTG has been extensively studied (Dathathri et al 2019; Zhang and Song 2022), the formulation of control conditions is discrete variables, thus not directly applicable under the new instruction-following paradigm, as the latter entails natural language instructions instead. Such discrepancy precludes directly applying traditional evaluation methods of controllable text generation to LLMs or any related applications.…”
Section: LLM2 Diversify Instructions
confidence: 99%
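To make the discrepancy in the quoted passages concrete, here is a small illustrative example (not taken from either paper) contrasting discrete CTG control conditions with an equivalent natural-language instruction under the instruction-following paradigm:

```python
# Traditional CTG: control conditions are discrete variables fed to the model.
discrete_control = {"sentiment": "positive", "topic": "sports", "max_words": 20}

# Instruction-following paradigm: the same constraints arrive as free-form text,
# so evaluation methods built around discrete conditions no longer apply directly.
instruction = "Write a positive sentence about sports in at most 20 words."
```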
“…The Critic engages in a debate with the Scorer and offers constructive criticism, playing the role of a Devil's Advocate. […] Eskenazi, 2020) is a knowledge-grounded human-to-human conversation dataset, and we refer to Zhong et al (2022) to evaluate four dimensions: naturalness, coherence, engagingness, and groundedness.…”
Section: Multi-agent Scoring Framework
confidence: 99%
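The quoted passage describes the Scorer and Critic roles only at a high level. Below is a minimal, hypothetical sketch of such a debate loop, assuming a generic text-in/text-out `llm` callable and placeholder prompts; it is not the citing paper's implementation.

```python
# Hypothetical Scorer/Critic debate loop: a Scorer rates one dimension of a
# response, and a Critic playing Devil's Advocate challenges that rating for a
# fixed number of rounds. `llm` is an assumed callable, not a real API.
def debate_score(llm, dialogue, response, dimension, rounds=2):
    assessment = llm(
        f"Rate the {dimension} of the response on a 1-5 scale and justify.\n"
        f"Dialogue: {dialogue}\nResponse: {response}"
    )
    for _ in range(rounds):
        critique = llm(
            f"Act as a Devil's Advocate. Offer constructive criticism of this "
            f"{dimension} assessment:\n{assessment}"
        )
        assessment = llm(
            f"Given this criticism, defend or revise your {dimension} rating.\n"
            f"Criticism: {critique}\nPrevious assessment: {assessment}"
        )
    return assessment  # final (possibly revised) rating with justification

# Stub usage so the sketch runs without any model backend.
echo = lambda prompt: f"[model reply to: {prompt[:40]}...]"
print(debate_score(echo, "A: Hi!  B: Hello!", "Nice to meet you.", "coherence"))
```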
“…We extensively evaluate the performance of DEBATE with eight baselines, including a traditional evaluator, ROUGE-L (Lin, 2004); the pretrained language model-based evaluators, BERTScore, MoverScore (Zhao et al, 2019), BARTScore (Yuan et al, 2021), and UniEval (Zhong et al, 2022); the recent LLM-based evaluators, GPTScore, G-Eval, and ChatEval (Chan et al, 2023). We also include MultiAgent, a framework similar to DEBATE but with the Critic assigned a neutral debating role, denoted as Plain.…”
Section: Baselines
confidence: 99%