The task of visually rich document understanding (VRDU), which involves extracting information from VRDs [2,18], requires models that can handle diverse document types, such as invoices, receipts, forms, emails, and advertisements, as well as diverse types of information, including rich visuals, large amounts of text, and complex document layouts [27,17,25]. Recently, fine-tuning pre-trained visual document understanding models has yielded impressive results on information extraction from VRDs [40,12,21,22,14,20], suggesting that pre-training document understanding models on large-scale, unlabeled document collections benefits information extraction from VRDs.