Proceedings of the Ninth Conference on Computational Natural Language Learning - CONLL '05 2005
DOI: 10.3115/1706543.1706574

A joint model for semantic role labeling

Abstract: We present a semantic role labeling system submitted to the closed track of the CoNLL-2005 shared task. The system, introduced by Toutanova et al. (2005), implements a joint model that captures dependencies among the arguments of a predicate using log-linear models in a discriminative re-ranking framework. We also describe experiments aimed at increasing the robustness of the system in the presence of syntactic parse errors. Our final system achieves F1 measures of 76.68 and 78.45 on the development and the WSJ p…
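The abstract describes scoring complete candidate labelings with a log-linear model in a re-ranking setup, so that features can look at combinations of a predicate's arguments jointly. A minimal sketch of that idea, with illustrative feature and function names that are not taken from the paper:

```python
import math

# Hypothetical sketch of log-linear re-ranking for SRL. Each candidate is a
# complete labeling of all arguments of one predicate, so features may
# depend on argument combinations (the "joint" part), not just on one
# argument at a time. Feature names here are invented for illustration.

def score(features, weights):
    """Linear score (dot product) of one candidate joint labeling."""
    return sum(weights.get(f, 0.0) * v for f, v in features.items())

def rerank(candidates, weights):
    """Return the candidate labeling with the highest log-linear probability.

    candidates: list of (labeling, feature_dict) pairs, e.g. the n-best
    outputs of a local per-argument classifier.
    """
    scores = [score(feats, weights) for _, feats in candidates]
    z = math.log(sum(math.exp(s) for s in scores))  # log partition function
    probs = [math.exp(s - z) for s in scores]
    best = max(range(len(candidates)), key=lambda i: probs[i])
    return candidates[best][0], probs[best]

# Toy usage: two candidate labelings for a verb with two arguments.
cands = [
    (("ARG0", "ARG1"), {"has_ARG0": 1.0, "pair_ARG0_ARG1": 1.0}),
    (("ARG0", "ARG0"), {"has_ARG0": 1.0, "repeated_core_arg": 1.0}),
]
w = {"pair_ARG0_ARG1": 2.0, "repeated_core_arg": -3.0}
labeling, p = rerank(cands, w)
print(labeling)  # ('ARG0', 'ARG1')
```

The joint feature `repeated_core_arg` is what a purely local, per-argument model cannot express: it penalizes the second candidate even though each of its argument labels may look good in isolation.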

Cited by 37 publications (42 citation statements)
References 5 publications
“…Joint models have been previously explored for other NLP problems (Haghighi et al, 2005;Moschitti et al, 2006;Moschitti, 2009). Our global inference model focuses on opinion polarity recognition task.…”
Section: Related Work
confidence: 99%
“…2 We merge the partial trees output by a semantic role labeller with the output of the parser on which it was trained, and compute PropBank parsing performance measures on the resulting parse trees. The third line, PropBank column of Table 1 reports such measures summarised for the five best semantic role labelling systems (Punyakanok et al, 2005b;Haghighi et al, 2005;Pradhan et al, 2005;Surdeanu and Turmo, 2005) in the CoNLL 2005 shared task. These systems all use (Charniak, 2000)'s parse trees both for training and testing, as well as various other information sources including sets of n-best parse trees, chunks, or named entities.…”
Section: Experiments and Discussion
confidence: 99%
“…Since no information about the other roles involved in a relation is available to NS and RC, a joint inference model can be learnt considering alternative outcomes of the classifiers. The joint inference step can be arbitrarily complex, ranging from label-sequence correction schemes [26] to whole probabilistic frameworks built on top of NS and RC output [12].…”
Section: Object Selection and Classification
confidence: 99%
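The excerpt above describes joint inference over the alternative outcomes of local classifiers (its "NS" and "RC" components). A minimal sketch of the simplest such scheme, with hypothetical identifiers: exhaustively search the cross product of each argument's candidate roles, keep only assignments that satisfy a global constraint, and pick the one maximizing the product of local scores:

```python
from itertools import product

# Illustrative sketch (names are hypothetical, not from the cited systems):
# given per-argument role distributions from a local classifier, choose the
# joint assignment maximizing the product of local probabilities, subject to
# a simple global constraint: no core role may appear twice.

def joint_inference(local_scores, core_roles=("ARG0", "ARG1", "ARG2")):
    """local_scores: one {role: probability} dict per argument position."""
    best, best_score = None, float("-inf")
    for assignment in product(*[d.keys() for d in local_scores]):
        # Global constraint: core roles must be unique in the assignment.
        cores = [r for r in assignment if r in core_roles]
        if len(cores) != len(set(cores)):
            continue
        s = 1.0
        for d, role in zip(local_scores, assignment):
            s *= d[role]
        if s > best_score:
            best, best_score = assignment, s
    return best

# The second argument locally prefers ARG0, but ARG0 is already taken,
# so joint inference flips it to ARG1.
scores = [
    {"ARG0": 0.6, "ARG1": 0.4},
    {"ARG0": 0.55, "ARG1": 0.45},
]
print(joint_inference(scores))  # ('ARG0', 'ARG1')
```

Exhaustive search is exponential in the number of arguments; practical systems replace it with n-best lists, dynamic programming, or integer linear programming, but the constraint-plus-local-scores structure is the same.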