Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, 2022
DOI: 10.18653/v1/2022.naacl-main.342
GenIE: Generative Information Extraction

Abstract: Structured and grounded representation of text is typically formalized by closed information extraction, the problem of extracting an exhaustive set of (subject, relation, object) triplets that are consistent with a predefined set of entities and relations from a knowledge base schema. Most existing works are pipelines prone to error accumulation, and all approaches are only applicable to unrealistically small numbers of entities and relations. We introduce GenIE (generative information extraction), the first …
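The closed-IE task described in the abstract can be pictured as producing (subject, relation, object) triplets that must all be drawn from a fixed knowledge-base schema. A minimal sketch of that output format and its schema-consistency constraint follows; the entity and relation names are illustrative assumptions, not taken from the paper or from GenIE's code.

```python
from typing import NamedTuple

class Triplet(NamedTuple):
    """One closed-IE output: a (subject, relation, object) triplet."""
    subject: str
    relation: str
    object: str

# Toy schema: in closed IE, valid subjects/objects and relations come
# from a predefined knowledge-base schema (names here are made up).
ENTITIES = {"Barack Obama", "Hawaii"}
RELATIONS = {"place_of_birth"}

def is_schema_consistent(t: Triplet) -> bool:
    """A triplet is valid only if every part belongs to the KB schema."""
    return (t.subject in ENTITIES
            and t.relation in RELATIONS
            and t.object in ENTITIES)

print(is_schema_consistent(Triplet("Barack Obama", "place_of_birth", "Hawaii")))  # True
```

A pipeline system would produce such triplets in stages (mention detection, entity linking, relation classification), each stage able to introduce errors; a generative approach like GenIE instead decodes the triplets directly, constrained to the schema.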

Cited by 31 publications (45 citation statements)
References 22 publications
“…There have been two branches exploring the use of LLMs in few-shot IE tasks with the aid of ICL. The first branch (Ding et al., 2022; Josifoski et al., 2023) views LLMs as annotators and generates abundant samples with (pseudo) labels via ICL approaches. They then train SLMs using the augmented data to achieve superior performance.…”
Section: ICL in Information Extraction Tasks
confidence: 99%
“…Moreover, the number of outputs and the extracted span within each output are not fixed. In line with Josifoski et al. (2023), we believe ICL approaches are not well suited to such task formats.…”
Section: Large Language Models
confidence: 99%
“…Similarly, a self-describing mechanism was used for few-shot NER, leveraging mention describing and entity generation. GenIE (Josifoski et al. 2022) uses a transformer model to extract relational triples from unstructured text under global structural constraints. LightNER addresses class transfer by constructing a unified learnable verbalizer of entity categories and tackles domain transfer with a pluggable guidance module.…”
Section: Related Work
confidence: 99%
“…Our experiments and analysis show that Wikidata Parser produces more accurate triples, improving both precision and recall compared with the state-of-the-art generative information extraction methods [6,19,32].…”
Section: Knowledge Generation and Linking: Wikidata Parser
confidence: 99%
“…[Table residue: metric columns MD-F1, TYPE-F1, EL-F1, RN-F1, REL-P, REL-R, REL-F1; row: SOTA IE Pipeline [19].] For both subject and object, we generate the surface form mention, canonical label, type label, and relation label.…”
Section: Knowledge Generation and Linking: Wikidata Parser
confidence: 99%