2013
DOI: 10.1007/978-3-642-41644-6_10

A Mixed Model for Cross Lingual Opinion Analysis

Cited by 12 publications (11 citation statements)
References 5 publications
“…Note that in Table 3, the top performer of the NLP&CC 2013 CLOA evaluation is the HLT-HITSZ system (underscored in the table), which used the co-training method in transfer learning (Gui et al., 2013), proving that co-training is quite effective for cross-lingual analysis. With the additional negative transfer detection, our proposed approach achieves the best performance on this dataset, outperforming the top system (by HLT-HITSZ) by 2.97%, which translates to a 13.1% error reduction over this state-of-the-art system, as shown in the last row of the table. To further investigate the effectiveness of our method, the third set of experiments evaluates negative transfer detection (NTD) against co-training (CO) without negative transfer detection, as shown in Table 4 and Fig. 3. Taking all categories of data, our proposed method improves the overall average precision (in the best cases) from 79.4% to 80.1% compared to the state-of-the-art system, which translates to an error reduction of 3.40% (p-value ≤ 0.01 in the Wilcoxon signed-rank test).…”
Section: CLOA Experiments Results (mentioning)
confidence: 99%
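The error-reduction figures in this statement follow from simple arithmetic: relative error reduction is the accuracy gain divided by the baseline's remaining error. A minimal sketch (the function name is illustrative, and the ~77.3% baseline in the second check is inferred from the quoted numbers, not stated in the statement):

```python
def relative_error_reduction(old_acc: float, new_acc: float) -> float:
    """Fraction of the baseline's remaining error removed by the new system."""
    return (new_acc - old_acc) / (1.0 - old_acc)

# 79.4% -> 80.1% average precision, as quoted above:
print(f"{relative_error_reduction(0.794, 0.801):.2%}")  # ~3.40%

# The quoted 2.97-point gain maps to ~13.1% error reduction if the
# baseline was roughly 77.3% (inferred, not stated in the statement):
print(f"{relative_error_reduction(0.7733, 0.7733 + 0.0297):.2%}")  # ~13.1%
```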
“…To deal with this issue, several methods have been proposed to reduce translation errors, such as applying both directions of translation simultaneously (Hajmohammadi et al., 2014) or enriching the MT system with sentiment patterns (Hiroshi et al., 2004). In the case of supervised systems, self-training and co-training techniques have also been explored to improve performance (Gui et al., 2013; Gui et al., 2014).…”
Section: Multilingual SA (mentioning)
confidence: 99%
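As a rough illustration of the co-training idea these statements refer to, here is a minimal two-view sketch, assuming each document is available in an English view and a (machine-translated) Chinese view; the classifier choice, feature matrices, and top-k selection rule are assumptions, not the cited authors' implementation:

```python
# Minimal two-view co-training loop (a sketch, not the cited implementation).
# Each round, each view's classifier pseudo-labels its k most confident
# documents from the unlabeled pool; those documents are added, in BOTH
# views, to the labeled set so the other classifier also learns from them.
import numpy as np
from sklearn.linear_model import LogisticRegression

def cotrain(Xl_en, Xl_zh, yl, Xu_en, Xu_zh, rounds=10, k=5):
    pool = np.arange(len(Xu_en))  # indices of still-unlabeled documents
    for _ in range(rounds):
        if pool.size == 0:
            break
        for view in ("en", "zh"):
            Xl = Xl_en if view == "en" else Xl_zh
            Xu = Xu_en if view == "en" else Xu_zh
            clf = LogisticRegression(max_iter=1000).fit(Xl, yl)
            proba = clf.predict_proba(Xu[pool])
            top = np.argsort(-proba.max(axis=1))[:k]  # k most confident
            idx = pool[top]
            labels = clf.classes_[proba[top].argmax(axis=1)]
            Xl_en = np.vstack([Xl_en, Xu_en[idx]])
            Xl_zh = np.vstack([Xl_zh, Xu_zh[idx]])
            yl = np.concatenate([yl, labels])
            pool = np.setdiff1d(pool, idx)
            if pool.size == 0:
                break
    return (LogisticRegression(max_iter=1000).fit(Xl_en, yl),
            LogisticRegression(max_iter=1000).fit(Xl_zh, yl))
```

Self-training corresponds to the one-view degenerate case of the same loop.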
“…Then, they performed training and testing in two independent views: an English view and a Chinese view. Gui et al. (2013) combined the self-training approach with the co-training approach by estimating the confidence of each monolingual system. Li et al. (2013) selected the samples in the source language that were similar to those in the target language to decrease the gap between the two languages.…”
Section: Cross-Language Sentiment Classification (CLSC) (mentioning)
confidence: 99%
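One simple way to realize the confidence estimation mentioned above is a probability-margin heuristic, sketched below; the margin rule and function names are assumptions for illustration, not the estimator actually used by Gui et al. (2013):

```python
# Hypothetical confidence-based combination of two monolingual systems:
# confidence is the margin between a classifier's top two class
# probabilities, and the label comes from the more confident view.
import numpy as np

def confidence(proba_row: np.ndarray) -> float:
    """Margin between the two most probable classes."""
    top2 = np.sort(proba_row)[-2:]
    return top2[1] - top2[0]

def pick_label(p_en: np.ndarray, p_zh: np.ndarray, classes: np.ndarray):
    """Choose the label from whichever monolingual view is more confident."""
    row = p_en if confidence(p_en) >= confidence(p_zh) else p_zh
    return classes[row.argmax()]
```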
“…Gui et al (2013) combined self-training approach with co-training approach by estimating the confidence of each monolingual system. Li et al (2013) selected the samples in the source language that were similar to those in the target language to decrease the gap between two languages. Zhou et al (2014a) proposed a combination CLSC model, which adopted denoising autoencoders (Vincent et al, 2008) to enhance the robustness to translation errors of the input.…”
Section: Cross-language Sentiment Classification (Clsc)mentioning
confidence: 99%
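For context on the denoising-autoencoder building block (Vincent et al., 2008) mentioned in that statement, here is a minimal sketch; the layer sizes, sigmoid activations, and dropout-style masking corruption are illustrative assumptions, not Zhou et al.'s (2014a) exact architecture:

```python
# Minimal denoising autoencoder (after Vincent et al., 2008): corrupt the
# input, then train the network to reconstruct the CLEAN input, forcing
# the learned representation to tolerate input noise.
import torch
import torch.nn as nn

class DenoisingAutoencoder(nn.Module):
    def __init__(self, dim_in=5000, dim_hidden=500, corruption=0.3):
        super().__init__()
        self.corrupt = nn.Dropout(p=corruption)  # masking-style noise
        self.encoder = nn.Sequential(nn.Linear(dim_in, dim_hidden), nn.Sigmoid())
        self.decoder = nn.Sequential(nn.Linear(dim_hidden, dim_in), nn.Sigmoid())

    def forward(self, x):
        return self.decoder(self.encoder(self.corrupt(x)))

# Training reconstructs the uncorrupted input from the corrupted one:
# loss = nn.functional.binary_cross_entropy(model(x), x)
```

The intuition for CLSC is that representations which survive random input corruption should also be more tolerant of the noise introduced by faulty machine translation.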