2022
DOI: 10.1007/978-3-031-19815-1_21
Dynamic Low-Resolution Distillation for Cost-Efficient End-to-End Text Spotting

Cited by 2 publications (1 citation statement)
References 53 publications

“…This approach significantly improved real-time performance without sacrificing accuracy. Chen et al [29] proposed a low-cost and efficient dynamic resolution distillation framework, offering varying resolutions for different input texts and images to strike a balance between recognition accuracy and computational efficiency. Additionally, they introduced sequence knowledge distillation to achieve superior recognition results, particularly for low-resolution images.…”
Section: End-to-End Text Recognition
Mentioning confidence: 99%
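
The quoted statement describes two ideas from the cited paper: choosing an input resolution dynamically per image, and sequence-level knowledge distillation in which a high-resolution teacher guides a low-resolution student recognizer. The sketch below is only a hedged illustration of those two ideas, not the authors' implementation; PyTorch is assumed as the framework, and the module names, candidate scale set, and tensor shapes are all illustrative assumptions.

# Hedged sketch (not the authors' code): a minimal illustration of
# (1) per-image dynamic resolution selection and (2) sequence-level
# knowledge distillation between a high-resolution teacher and a
# low-resolution student. All names and shapes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ResolutionSelector(nn.Module):
    """Predicts a downscale factor for each image from a cheap global feature."""

    def __init__(self, in_channels=3, scales=(1.0, 0.75, 0.5)):
        super().__init__()
        self.scales = scales
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(in_channels, len(scales)),
        )

    def forward(self, images):
        # Soft scores over candidate scales; argmax can be used at inference.
        logits = self.head(images)
        return F.softmax(logits, dim=-1)  # (B, num_scales)


def resize_batch(images, scale):
    """Downscale a batch of images by a single factor (shared per batch here)."""
    if scale == 1.0:
        return images
    return F.interpolate(images, scale_factor=scale, mode="bilinear", align_corners=False)


def sequence_distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between teacher and student character distributions
    at every decoding step; logits have shape (B, T, num_classes)."""
    t = temperature
    log_p_student = F.log_softmax(student_logits / t, dim=-1)
    p_teacher = F.softmax(teacher_logits / t, dim=-1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * (t * t)


if __name__ == "__main__":
    # Toy usage: pick the most likely scale, downscale the batch for the student,
    # and distil from a teacher assumed to run at full resolution.
    images = torch.randn(4, 3, 64, 256)
    selector = ResolutionSelector()
    probs = selector(images)
    scale = selector.scales[int(probs.mean(0).argmax())]
    low_res = resize_batch(images, scale)

    # Stand-ins for real recognizer outputs: (B, T=25 steps, 97 character classes).
    student_logits = torch.randn(4, 25, 97)
    teacher_logits = torch.randn(4, 25, 97)
    loss = sequence_distillation_loss(student_logits, teacher_logits)
    print(low_res.shape, float(loss))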