Proceedings of the 12th European Workshop on Natural Language Generation (ENLG 2009)
DOI: 10.3115/1610195.1610224
The TUNA-REG Challenge 2009

Abstract: The TUNA-REG'09 Challenge was one of the shared-task evaluation competitions at Generation Challenges 2009. TUNA-REG'09 used data from the TUNA Corpus of paired representations of entities and human-authored referring expressions. The shared task was to create systems that generate referring expressions for entities given representations of sets of entities and their properties. Four teams submitted six systems to TUNA-REG'09. We evaluated the six systems and two sets of human-authored referring expressions us…


Cited by 38 publications (6 citation statements)
References 13 publications
“…The comparison involving TUNA data is complicated by the fact that different studies make use of different portions of the corpus as test data, which may be a consequence of its gradual release along the first two TUNA shared tasks (Gatt and Belz 2007; Gatt et al 2008) before changes in the evaluation methodology were introduced (Gatt et al 2009), and Dice scores were no longer reported. We are unaware of any REG algorithm that has been evaluated over the entire set of TUNA singular descriptions as presented in the previous section, and we notice that the reported results tend to vary widely depending on the portion of test data under consideration.…”
Section: Results (mentioning, confidence: 99%)
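The quotation above refers to Dice scores, the set-overlap metric the early TUNA shared tasks used to compare a system's attribute selection against a human-authored description. A minimal sketch of the metric, with illustrative attribute sets that are not taken from the actual corpus:

```python
# Dice coefficient between two attribute sets, the overlap measure
# reported in the early TUNA REG evaluations. The example sets below
# are invented for illustration, not drawn from the TUNA corpus.

def dice(a: set, b: set) -> float:
    """2*|A ∩ B| / (|A| + |B|); 1.0 means the sets are identical."""
    if not a and not b:
        return 1.0
    return 2 * len(a & b) / (len(a) + len(b))

system = {("type", "chair"), ("colour", "red")}
human = {("type", "chair"), ("colour", "red"), ("size", "large")}
print(dice(system, human))  # 0.8
```

Because the score depends on exactly which human descriptions are in the test set, results computed on different portions of the corpus are not directly comparable, which is the difficulty the quoted passage describes.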
“…These include, for instance, the generation of boolean referring expressions (van Deemter 2002), the use of graphs to generate relational descriptions (Krahmer et al 2003), the use of lexical information to generate descriptions in open domains (Siddharthan and Copestake 2004), and the generation of overspecified descriptions to facilitate search (Paraboni and van Deemter 2013), among many others. In the series of REG shared tasks (Gatt and Belz 2007; Gatt, Belz and Kow 2008, 2009) a large number of algorithms have been proposed as well, including the 2007 best-performing IS-FBN algorithm (Bohnet 2007) and the 2008 winner JU-PTBSGRE (Paladhi and Bandyopadhyay 2008). For a comprehensive review of studies of this kind, we refer to Krahmer and van Deemter (2012).…”
Section: Introduction (mentioning, confidence: 99%)
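The REG algorithms listed in the quotation above are generally measured against the classic Incremental Algorithm of Dale and Reiter (1995), which adds properties one at a time until the target is distinguished from its distractors. A minimal sketch, with invented domain objects and preference order that do not follow the TUNA corpus schema:

```python
# A minimal sketch of the Incremental Algorithm (Dale & Reiter 1995),
# a common baseline for the REG systems cited above. The objects and
# preference order are illustrative assumptions only.

def incremental(target, distractors, preference):
    """Add properties of `target`, in preference order, until no distractor
    matches; return None if the target cannot be singled out."""
    description = []
    remaining = list(distractors)
    for attr in preference:
        value = target.get(attr)
        if value is None:
            continue
        still_matching = [d for d in remaining if d.get(attr) == value]
        if len(still_matching) < len(remaining):  # property rules something out
            description.append((attr, value))
            remaining = still_matching
        if not remaining:
            return description
    return None

target = {"type": "chair", "colour": "red", "size": "small"}
distractors = [
    {"type": "chair", "colour": "blue", "size": "small"},
    {"type": "desk", "colour": "red", "large": True},
]
print(incremental(target, distractors, ["type", "colour", "size"]))
# [('type', 'chair'), ('colour', 'red')]
```

The later algorithms named in the quotation (graph-based, Boolean, overspecified descriptions) extend or depart from this greedy scheme in various ways.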
“…Many previous research efforts have relied on the original TUNA corpus for the choice of conceptual information when generating referring expressions. These efforts range from several REG algorithms implemented based on the findings of the corpus, including the participants in the REG Shared Tasks (Gatt 2007; Belz and Gatt 2007; Gatt, Belz and Kow 2008b; Gatt, Belz and Kow 2009), to theoretical works assessing the psychological basis of existing algorithms (Belz and Gatt 2008; van Deemter et al 2012). Thanks to our work, it will be possible to extend these efforts in the choice of lexical forms for the conceptual information chosen.…”
Section: Discussion (mentioning, confidence: 99%)
“…These rules typically encode syntactic and semantic patterns to ensure the generated text is grammatically correct and coherent. Examples: One example of a rule-based NLG system is SimpleNLG, which is an open-source Java library for generating natural language text from structured data (Gatt et al, 2009). Another example is the RealPro NLG system, which is used for generating weather forecasts (Belz & Reiter, 2006).…”
Section: NLG Techniques and Algorithms, A. Rule-based NLG (mentioning, confidence: 99%)
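The rule-based approach described in the quotation above maps structured input to text through hand-written grammar and template rules. SimpleNLG itself is a Java library, so the following is only a toy Python sketch of the idea, with invented field names and rules that are not taken from SimpleNLG or RealPro:

```python
# A toy rule-based generator for weather statements, illustrating the
# template-plus-rules style of NLG described above. Field names and
# thresholds are invented for this sketch, not taken from any real system.

def forecast(data: dict) -> str:
    """Turn structured weather data into a single grammatical sentence."""
    parts = []
    if data.get("rain_mm", 0) > 0:
        kind = "heavy rain" if data["rain_mm"] > 10 else "light rain"
        parts.append(f"{kind} is expected")
    else:
        parts.append("no rain is expected")
    if "wind_kmh" in data:
        strength = "strong" if data["wind_kmh"] > 40 else "moderate"
        parts.append(f"winds will be {strength}")
    sentence = ", and ".join(parts)
    return sentence[0].upper() + sentence[1:] + "."

print(forecast({"rain_mm": 12, "wind_kmh": 50}))
# Heavy rain is expected, and winds will be strong.
```

Real systems such as SimpleNLG separate this into a lexicon, a phrase-specification layer, and a surface realiser, so that grammatical details (agreement, tense, punctuation) are handled once rather than per template.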