Proceedings of the 14th Conference of the European Chapter of the Association for Computational Linguistics 2014
DOI: 10.3115/v1/e14-1064
Bilingual Sentiment Consistency for Statistical Machine Translation

Abstract: In this paper, we explore bilingual sentiment knowledge for statistical machine translation (SMT). We propose to explicitly model the consistency of sentiment between the source and target side with a lexicon-based approach. The experiments show that the proposed model significantly improves Chinese-to-English NIST translation over a competitive baseline.
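The lexicon-based consistency idea in the abstract can be sketched roughly as follows. This is a minimal illustration, not the authors' actual feature: the toy lexicons, the polarity counting, and the binary scoring function are all assumptions. In practice such a score could serve as one feature in a log-linear SMT model.

```python
# Sketch of a lexicon-based sentiment-consistency score for a translation pair.
# Lexicons and scoring are illustrative assumptions, not the feature
# definition from Chen and Zhu (2014).

# Toy polarity lexicons: word -> +1 (positive) or -1 (negative).
SRC_LEXICON = {"好": 1, "美丽": 1, "坏": -1, "糟糕": -1}              # Chinese side
TGT_LEXICON = {"good": 1, "beautiful": 1, "bad": -1, "terrible": -1}  # English side

def polarity(tokens, lexicon):
    """Sum lexicon polarities over the tokens; the sign gives overall sentiment."""
    return sum(lexicon.get(tok, 0) for tok in tokens)

def sentiment_consistency(src_tokens, tgt_tokens):
    """Return 1.0 if source and target carry the same sentiment sign
    (or both are neutral), else 0.0."""
    s = polarity(src_tokens, SRC_LEXICON)
    t = polarity(tgt_tokens, TGT_LEXICON)
    if s == 0 and t == 0:
        return 1.0
    if s != 0 and t != 0 and (s > 0) == (t > 0):
        return 1.0
    return 0.0

print(sentiment_consistency(["好"], ["good"]))  # consistent pair -> 1.0
print(sentiment_consistency(["好"], ["bad"]))   # polarity flipped -> 0.0
```

A real system would compute this over phrase pairs during decoding and let tuning (e.g. MERT) weight the feature against the standard translation and language model scores.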

Cited by 14 publications (11 citation statements); references 20 publications.
“…Additionally, we deal with noisy social media texts as opposed to more polished news media texts. There exists research on using sentiment analysis to improve machine translation (Chen and Zhu, 2014), but that is beyond the scope of this paper.…”
Section: Multilingual Sentiment Analysis
Confidence: 99%
“…The authors observed that these German, Spanish and French sentiment analysis systems performed similar to the initial English sentiment classifier. There also exists research on using sentiment analysis to improve machine translation, such as the work by Chen and Zhu [45], but that is beyond the scope of the proposed work.…”
Section: Multilingual Sentiment Analysis
Confidence: 99%
“…Our approach does not break phrases down to words, but learns phrase embeddings directly. Chen et al. (2010) represent a rule in the hierarchical phrase table using a bag-of-words approach. Instead, we learn phrase vectors directly without resorting to their constituent words.…”
Section: Related Work
Confidence: 99%