Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics 2016
DOI: 10.18653/v1/s16-2012

Leveraging VerbNet to build Corpus-Specific Verb Clusters

Abstract: In this paper, we aim to close the gap between extensive, human-built semantic resources and corpus-driven unsupervised models. The particular resource explored here is VerbNet, whose organizing principle is that semantics and syntax are linked. To capture patterns of usage that can augment knowledge resources like VerbNet, we expand a Dirichlet process mixture model to predict a VerbNet class for each sense of each verb, allowing us to incorporate annotated VerbNet data to guide the clustering process. The resul…
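
The abstract names the core mechanism: a Dirichlet process mixture over verb senses in which annotated VerbNet data guides the clustering. The sketch below shows one minimal way such partial supervision can enter a collapsed Gibbs sampler: senses with a known VerbNet class stay clamped to it, while unlabeled senses are resampled under a Chinese-restaurant-process prior. The predictive-likelihood function `log_pred` and the clamping scheme are illustrative assumptions, not the paper's published model.

```python
import numpy as np

def gibbs_sweep(assignments, data, labeled, alpha, log_pred):
    """One sweep of collapsed Gibbs sampling for a Dirichlet process
    mixture with partial supervision. `assignments` maps sense index ->
    cluster id; `labeled` is the set of indices whose VerbNet class is
    annotated and therefore never resampled; `log_pred(x, members)` is
    an assumed stand-in for the predictive log-likelihood of x under the
    cluster whose current members are `members` (empty = new cluster)."""
    for i, x in enumerate(data):
        if i in labeled:              # supervision: annotated senses stay put
            continue
        del assignments[i]            # remove i before scoring its options
        members_of = {}
        for j, z in assignments.items():
            members_of.setdefault(z, []).append(j)
        options, scores = [], []
        for z, members in members_of.items():
            # existing cluster: CRP prior proportional to cluster size
            options.append(z)
            scores.append(np.log(len(members)) + log_pred(x, members))
        # brand-new cluster: prior mass proportional to the concentration alpha
        options.append(max(members_of, default=-1) + 1)
        scores.append(np.log(alpha) + log_pred(x, []))
        scores = np.asarray(scores)
        probs = np.exp(scores - scores.max())
        assignments[i] = int(np.random.choice(options, p=probs / probs.sum()))
    return assignments
```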

Cited by 7 publications (5 citation statements) | References 9 publications
“…First, in the current work we have operated with standard single-sense/single-prototype representations, thus effectively disregarding the problem of verb polysemy. While several polysemy-aware verb classification models for English were developed recently (Kawahara et al., 2014; Peterson et al., 2016), the current lack of polysemy-aware evaluation sets in other languages impedes this line of research. Evaluation issues aside, one idea for future work is to use the ATTRACT-REPEL specialisation framework for sense-aware cross-lingual transfer, relying on recently developed multi-sense/prototype word representations (Neelakantan et al., 2014; Pilehvar and Collier, 2016, inter alia).…”
Section: Further Discussion and Future Work (mentioning)
confidence: 99%
“…Adding partial supervision to these models significantly improves the clustering quality. Supervision in the Step-wise model (Peterson et al. 2016) dramatically boosts the mPU score of the clusters, improving absolute F1 by nearly 10%, but requires a significant increase in computational complexity. Adding supervision to the Joint model using our method significantly improves both mPU and iPU of the clusters, producing a nearly 12% absolute F1 score improvement without increasing computational complexity.…”
Section: Quantitative Evaluation Results (mentioning)
confidence: 99%
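
For context on the metrics quoted above: mPU (modified purity) scores how homogeneous the induced clusters are, iPU (inverse purity) scores how well gold classes are kept together, and F1 is their harmonic mean. Below is a minimal sketch following the definitions common in this verb-clustering literature; the convention of discarding singleton prevalent counts only on the mPU side is an assumption to check against the cited papers.

```python
from collections import Counter

def _purity(cells, label_of, drop_singletons):
    """Fraction of items covered by the prevalent label of their cell.
    With drop_singletons, cells whose prevalent label covers only one
    item contribute nothing (the usual mPU convention)."""
    total = sum(len(cell) for cell in cells)
    covered = 0
    for cell in cells:
        prevalent = Counter(label_of[i] for i in cell).most_common(1)[0][1]
        if prevalent > 1 or not drop_singletons:
            covered += prevalent
    return covered / total

def mpu_ipu_f1(clusters, gold_classes, gold_of, cluster_of):
    mpu = _purity(clusters, gold_of, drop_singletons=True)
    ipu = _purity(gold_classes, cluster_of, drop_singletons=False)
    return mpu, ipu, 2 * mpu * ipu / (mpu + ipu)

# Toy example: six verb senses, three induced clusters, three gold classes.
clusters     = [{0, 1, 2}, {3, 4}, {5}]
gold_classes = [{0, 1}, {2, 3, 4}, {5}]
gold_of      = {0: 0, 1: 0, 2: 1, 3: 1, 4: 1, 5: 2}
cluster_of   = {0: 0, 1: 0, 2: 0, 3: 1, 4: 1, 5: 2}
print(mpu_ipu_f1(clusters, gold_classes, gold_of, cluster_of))  # ~ (0.667, 0.833, 0.741)
```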
“…Adding partial supervision to probabilistic clustering techniques can help recover the desired clusters. Peterson et al. (2016) added VerbNet class preferences to the clustering step of the step-wise process, but did not directly use labeled sentences to guide the clustering. An automatically-acquired sense may contain sentences from multiple VerbNet classes, so there is no straightforward way to extrapolate from sentence labels to labels for the senses.…”
Section: Introduction (mentioning)
confidence: 99%
“…Palmer et al. (2005) for VerbNet's coverage of the Penn Treebank II). An automated analysis and linking of networks to verbal entries in corpora will use existing computational methods for verb sense disambiguation (Loper et al., 2007; Chen and Palmer, 2009; Brown et al., 2011; Peterson et al., 2016) to accomplish a correct match of verb senses to verbal networks.…”
Section: Discussion (mentioning)
confidence: 99%