Proceedings of the Ninth Workshop on Statistical Machine Translation 2014
DOI: 10.3115/v1/w14-3338

SHEF-Lite 2.0: Sparse Multi-task Gaussian Processes for Translation Quality Estimation

Abstract: We describe our systems for the WMT14 Shared Task on Quality Estimation (subtasks 1.1, 1.2 and 1.3). Our submissions use the framework of Multi-task Gaussian Processes, where we combine multiple datasets in a multi-task setting. Due to the large size of our datasets we also experiment with Sparse Gaussian Processes, which aim to speed up training and prediction by providing sensible sparse approximations.
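The record does not include code, but the modelling approach in the abstract — GP regression over sentence-level QE features, made scalable with a sparse approximation — can be sketched with the GPy toolkit. This is a minimal illustration under assumptions: the feature dimensionality, dataset size, and number of inducing points below are placeholders, not the submitted systems' actual configuration, and the multi-task extension is omitted.

```python
import numpy as np
import GPy

# Placeholder data: N sentences described by D black-box QE features,
# with quality labels y (e.g. post-editing effort scores).
N, D = 2000, 40
X = np.random.rand(N, D)
y = np.random.rand(N, 1)

# RBF covariance with one lengthscale per feature (ARD).
kern = GPy.kern.RBF(input_dim=D, ARD=True)

# Sparse GP: M inducing points approximate the full N x N covariance,
# reducing training cost from O(N^3) to roughly O(N * M^2).
model = GPy.models.SparseGPRegression(X, y, kernel=kern, num_inducing=100)
model.optimize(messages=False)

# Predictive mean and variance for unseen sentences.
X_new = np.random.rand(5, D)
mean, var = model.predict(X_new)
```

Combining multiple datasets in the multi-task setting described in the abstract would additionally place a coregionalization kernel over task identities (available in GPy via GPy.util.multioutput); that is left out of this sketch for brevity.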

Cited by 16 publications (18 citation statements)
References 8 publications
“…In this task, the standard GP model outperformed the baseline, with the sparse GP model following very closely. These figures represent significant improvements compared to our submission to the same task in last year's shared task (Beck et al., 2013), where we were not able to beat the baseline. The main differences between last year's and this year's models are the use of additional datasets and a higher number of features (25 vs. 40).…”
Section: Official Results and Discussion
confidence: 78%
“…Future work could use our proposed model to detect heavy sentences that need such pre-processing. Our findings can also inspire informative features for sentence quality estimation, in which the task is to predict sentence-level fluency (Beck et al., 2014). We have shown that heavy Chinese sentences are likely to lead to hard-to-read, disfluent sentences in English.…”
Section: Discussion
confidence: 99%
“…GP regression models were recently successfully employed for post-editing time and HTER prediction (Beck et al., 2013). Both used RBF kernels as the covariance function, so a natural extension is to apply the structured kernels of Section 3.1.…”
Section: Quality Estimation
confidence: 99%
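
For readers unfamiliar with the model, the GP regression with an RBF covariance that this quote refers to reduces to the standard predictive equations. The NumPy sketch below is the textbook computation (Rasmussen and Williams, Algorithm 2.1) rather than the cited systems' code; the noise level and kernel hyperparameters are illustrative placeholders.

```python
import numpy as np

def rbf_kernel(A, B, variance=1.0, lengthscale=1.0):
    # RBF covariance: k(x, x') = variance * exp(-||x - x'||^2 / (2 * lengthscale^2))
    sq = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * sq / lengthscale**2)

def gp_predict(X, y, X_new, noise=0.1, **kern):
    # Standard GP regression: posterior mean and variance at test points X_new.
    K = rbf_kernel(X, X, **kern) + noise * np.eye(len(X))
    K_s = rbf_kernel(X, X_new, **kern)
    K_ss = rbf_kernel(X_new, X_new, **kern)
    L = np.linalg.cholesky(K)                            # K = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))  # alpha = K^-1 y
    v = np.linalg.solve(L, K_s)
    mean = K_s.T @ alpha
    var = np.diag(K_ss) - np.sum(v**2, axis=0)
    return mean, var

# Usage: predict quality scores for 3 new sentences from 50 training examples.
X = np.random.rand(50, 17)
y = np.random.rand(50, 1)
mean, var = gp_predict(X, y, np.random.rand(3, 17))
```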