2021
DOI: 10.1007/978-3-030-69535-4_18
Synthetic-to-Real Unsupervised Domain Adaptation for Scene Text Detection in the Wild

Cited by 15 publications
(8 citation statements)
References 32 publications
“…Recent deep-learning-based methods [35,36,32,43,42,40,41] have made tremendous progress on image-level text detection. CTPN [32] adopted Faster R-CNN [25] and modified the RPN to detect horizontal text.…”
Section: Text Detection and Tracking
confidence: 99%
“…Domain adaptation aims to reduce the domain gap between training and testing data. Several methods [19,20,21,22] address the domain adaptation problem in scene text detection. GA-DAN [21] converts a source-domain image into multiple images with different spatial views, matching those in the target domain.…”
Section: Related Work
confidence: 99%
“…GA-DAN [21] converts a source-domain image into multiple images with different spatial views, matching those in the target domain. Wu et al. [22] address the substantial domain gap between synthetic and real-world data, proposing a synthetic-to-real domain adaptation method for scene text detection that transfers knowledge from synthetic data to real-world data. In this work, we focus on using unlabeled real-world data to improve the pre-trained model, obtaining a better initialization and better final performance during fine-tuning.…”
Section: Related Work
confidence: 99%
“…Instead of adapting data, it is possible to learn features that are robust to the differences between domains [13,57]. Wu et al. [71] mix real and synthetic data through a domain classifier to learn domain-invariant features for text detection, and Saleh et al. [56] exploit the observation that shape is less affected by the domain gap than appearance for scene semantic segmentation.…”
Section: Training With Synthetic Data
confidence: 99%
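
Domain classifiers of the kind cited above are commonly trained through a gradient-reversal layer (as in DANN-style domain-adversarial training): features pass through unchanged in the forward direction, while the gradient of the domain-classification loss is negated on the way back, pushing the feature extractor toward domain-invariant representations. The following is a minimal NumPy sketch of that one mechanism only; the function names and the scaling parameter `lam` are our illustration, not code from the cited papers.

```python
import numpy as np

def grad_reverse_forward(x):
    """Forward pass of a gradient-reversal layer: the identity."""
    return x

def grad_reverse_backward(grad, lam=1.0):
    """Backward pass: negate (and optionally scale by lam) the incoming
    gradient, so the feature extractor is trained to *confuse* the
    domain classifier rather than help it."""
    return -lam * grad

# Toy check: features are unchanged forward, gradients flip sign backward.
features = np.array([1.0, -2.0, 3.0])
upstream_grad = np.array([0.5, 0.5, 0.5])

out = grad_reverse_forward(features)          # identical to features
rev = grad_reverse_backward(upstream_grad)    # sign-flipped gradient
```

In a full pipeline, `rev` would flow into the shared feature extractor while the domain classifier itself receives the unmodified gradient, which is what drives the features toward domain invariance.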