Proceedings of the 30th ACM International Conference on Multimedia 2022
DOI: 10.1145/3503161.3547877
Query-driven Generative Network for Document Information Extraction in the Wild

Cited by 11 publications (1 citation statement)
References 18 publications
“…The task of visually rich document understanding (VRDU), which involves extracting information from VRDs [2,18], requires models that can handle various types of documents, such as invoices, receipts, forms, emails, and advertisements, and various types of information, including rich visuals, large amounts of text, and complex document layouts [27,17,25]. Recently, fine-tuning based on pre-trained visual document understanding models has yielded impressive results in extracting information from VRDs [40,12,21,22,14,20], suggesting that the use of large-scale, unlabeled training documents in pre-training document understanding models can benefit information extraction from VRDs.…”
Section: Introduction
Confidence: 99%