2022
DOI: 10.48550/arxiv.2204.07241
Preprint

The Art of Prompting: Event Detection based on Type Specific Prompts

Abstract: We compare various forms of prompts to represent event types and develop a unified framework to incorporate the event-type-specific prompts for supervised, few-shot, and zero-shot event detection. The experimental results demonstrate that a well-defined and comprehensive event type prompt can significantly improve the performance of event detection, especially when the annotated data is scarce (few-shot event detection) or not available (zero-shot event detection). By leveraging the semantics of event types, our …
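As a rough illustration of the zero-shot setting the abstract describes, here is a minimal sketch (not the authors' implementation): it encodes an event-type prompt (a definition plus seed triggers, one of several prompt forms the paper compares), mean-pools it into a single type vector, and flags sentence tokens whose contextual embeddings are similar to it as candidate triggers. The encoder choice, mean-pooling, and similarity threshold are all assumptions made for illustration only.

```python
# Minimal sketch of prompt-based zero-shot event trigger detection.
# NOT the paper's architecture: the encoder, pooling, and threshold
# below are illustrative assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # assumption: any BERT-style encoder
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
encoder = AutoModel.from_pretrained(MODEL_NAME)

def encode(text: str):
    """Return (wordpiece tokens, contextual embeddings of shape (seq_len, hidden))."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        hidden = encoder(**inputs).last_hidden_state[0]
    tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
    return tokens, hidden

def detect_triggers(sentence: str, type_prompt: str, threshold: float = 0.45):
    """Flag tokens whose embedding is close to the pooled type-prompt embedding.

    The 0.45 cosine threshold is an arbitrary placeholder, not a tuned value.
    """
    tokens, sent_vecs = encode(sentence)
    _, prompt_vecs = encode(type_prompt)
    # Mean-pool the prompt tokens into one event-type vector (an assumption;
    # the paper compares richer prompt forms such as definitions, seed
    # triggers, and event-type structures).
    type_vec = prompt_vecs.mean(dim=0, keepdim=True)  # (1, hidden)
    sims = torch.nn.functional.cosine_similarity(sent_vecs, type_vec, dim=-1)
    return [(t, round(s.item(), 3)) for t, s in zip(tokens, sims) if s.item() >= threshold]

# Hypothetical prompt for a Conflict.Attack-style event type.
prompt = "attack: a violent physical act causing harm or damage, e.g. invade, bomb, assault"
print(detect_triggers("Rebels bombed the village at dawn.", prompt))
```

In a supervised or few-shot setting, the same prompt encoding would instead condition a trained classifier rather than a raw similarity threshold; the sketch only conveys how type semantics can substitute for annotations when none are available.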

Cited by 2 publications (2 citation statements)
References: 30 publications
“…When moving to new types, domains, or languages, we have to start from scratch by creating annotations and re-training the extraction models. In this part of the tutorial, we will cover recent advances in improving the transferability of IE, including (1) cross-lingual transfer by leveraging adversarial training (Chen et al., 2019a; Huang et al., 2019; Zhou et al., 2019), language-invariant representations (Huang et al., 2018a; Subburathinam et al., 2019) and resources (Tsai et al., 2016; Pan et al., 2017), pre-trained multilingual language models (Wu and Dredze, 2019; Conneau et al., 2020), as well as data projection (Ni et al., 2017; Yarmohammadi et al., 2021); (2) cross-type transfer, including zero-shot and few-shot IE, by learning prototypes (Huang et al., 2018b; Chan et al., 2019; Huang and Ji, 2020), reading definitions (Chen et al., 2020b; Logeswaran et al., 2019; Obeidat et al., 2019; Yu et al., 2022; Wang et al., 2022a), and answering questions (Levy et al., 2017; Liu et al., 2020; Lyu et al., 2021); and (3) transfer across different benchmark datasets (Xia and Van Durme, 2021; Wang et al., 2021b). Finally, we will also discuss progress on life-long learning for IE (Cao et al., 2020; Yu et al., 2021; Liu et al., 2022) to enable knowledge transfer across incrementally updated models.…”
Section: Transferability of IE Systems [35min]
Citation type: mentioning
confidence: 99%