OCR-Free Document Understanding Transformer
2022 · DOI: 10.1007/978-3-031-19815-1_29

Cited by 107 publications (46 citation statements) · References 47 publications
“…Using visual encoders like ResNet (He et al., 2016), visual features of an input image are also being incorporated into recent VDU backbones (Xu et al., 2021a; Huang et al., 2022). More recently, with the advances in the Vision Transformer (ViT) (Dosovitskiy et al., 2021), training a Transformer encoder-decoder VDU backbone without OCR has also been attempted (Kim et al., 2022; Davis et al., 2022; Lee et al., 2022). Our engine can be used together with various VDU backbones.…”
Section: VDU Backbones
Confidence: 99%
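As a concrete illustration of the OCR-free encoder-decoder design this statement refers to (the cited paper is Donut, Kim et al., 2022), below is a minimal inference sketch using the publicly released checkpoint via the Hugging Face transformers library. The checkpoint name, task prompt, and input file are taken from the model card or assumed for illustration; they are not part of the quoted statement.

```python
# Minimal sketch: OCR-free document parsing with Donut (Kim et al., 2022).
# Assumes the Hugging Face `transformers` library and the released
# `naver-clova-ix/donut-base-finetuned-cord-v2` checkpoint (receipt parsing).
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base-finetuned-cord-v2")

image = Image.open("receipt.png").convert("RGB")  # hypothetical input image
pixel_values = processor(image, return_tensors="pt").pixel_values  # ViT-style patch features, no OCR step

# The decoder is prompted with a task token and generates structured output directly.
task_prompt = "<s_cord-v2>"
decoder_input_ids = processor.tokenizer(task_prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)
sequence = processor.batch_decode(outputs)[0]
print(processor.token2json(sequence))  # convert generated tokens into a JSON-like dict
```

The point of the sketch is that the pipeline never calls an OCR engine: the encoder consumes raw pixels and the decoder emits the structured annotation as a token sequence.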
“…To construct a rich corpus, in the traditional pipeline, large-scale real-world document images (e.g., IIT-CDIP) and an OCR engine (e.g., the CLOVA OCR API) are used. The quality of the OCR engine significantly affects the downstream processes (Kim et al., 2022; Davis et al., 2022). Hence, there have been difficulties in training and testing the VDU backbone.…”
Section: Visual Corpus Construction for VDU
Confidence: 99%
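The OCR-dependent corpus construction this statement describes can be sketched as follows. Here pytesseract stands in for the proprietary CLOVA OCR API named in the quote, and the directory layout and output schema are illustrative assumptions, not details from the cited work.

```python
# Illustrative sketch of OCR-based visual corpus construction. pytesseract is a
# stand-in for the proprietary CLOVA OCR API named in the quoted statement;
# the paths and JSON schema here are assumptions, not from the cited work.
import json
from pathlib import Path

from PIL import Image
import pytesseract

def build_corpus(image_dir: str, out_path: str) -> None:
    """Run OCR over every document image and store word-level text with boxes."""
    records = []
    for img_path in sorted(Path(image_dir).glob("*.png")):
        data = pytesseract.image_to_data(Image.open(img_path), output_type=pytesseract.Output.DICT)
        words = [
            {"text": text, "box": [left, top, left + width, top + height]}
            for text, left, top, width, height in zip(
                data["text"], data["left"], data["top"], data["width"], data["height"]
            )
            if text.strip()  # drop empty OCR tokens
        ]
        records.append({"image": img_path.name, "words": words})
    Path(out_path).write_text(json.dumps(records, indent=2))

build_corpus("documents/", "corpus.json")
```

Any recognition error made here is frozen into the corpus, which is exactly the downstream sensitivity to OCR quality that the statement highlights.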
“…The task of visually rich document understanding (VRDU), which involves extracting information from VRDs [2,18], requires models that can handle various types of documents, such as invoices, receipts, forms, emails, and advertisements, and various types of information, including rich visuals, large amounts of text, and complex document layouts [27,17,25]. Recently, fine-tuning pre-trained visual document understanding models has yielded impressive results in extracting information from VRDs [40,12,21,22,14,20], suggesting that the use of large-scale, unlabeled training documents in pre-training document understanding models can benefit information extraction from VRDs.…”
Section: Introduction
Confidence: 99%
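The fine-tuning recipe this statement alludes to can be sketched with the same encoder-decoder interface used above. The checkpoint, learning rate, input file, and target string below are placeholder assumptions; any Donut-style pre-trained seq2seq VDU checkpoint would fit the same pattern.

```python
# Minimal fine-tuning sketch for a pre-trained encoder-decoder VDU model.
# Checkpoint, learning rate, and the target sequence are illustrative
# assumptions; real training iterates over a labeled dataset.
import torch
from transformers import DonutProcessor, VisionEncoderDecoderModel
from PIL import Image

processor = DonutProcessor.from_pretrained("naver-clova-ix/donut-base")
model = VisionEncoderDecoderModel.from_pretrained("naver-clova-ix/donut-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)

# One (image, target-sequence) training pair for illustration.
image = Image.open("invoice.png").convert("RGB")   # hypothetical document image
target = "<s><total>42.00</total></s>"             # hypothetical structured target

pixel_values = processor(image, return_tensors="pt").pixel_values
labels = processor.tokenizer(target, add_special_tokens=False, return_tensors="pt").input_ids

model.train()
loss = model(pixel_values=pixel_values, labels=labels).loss  # teacher-forced cross-entropy
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

Because the pre-trained backbone already maps pixels to token sequences, fine-tuning reduces to supervising the decoder with task-specific target strings, which is why comparatively little labeled data yields the strong extraction results the statement reports.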