2022
DOI: 10.3390/electronics11142137

Semantic Super-Resolution of Text Images via Self-Distillation

Abstract: This research develops an effective single-image super-resolution (SR) method that increases the resolution of scanned text or document images and improves their readability. To this end, we introduce a new semantic loss and propose a semantic SR method that guides an SR network to learn implicit text-specific semantic priors through self-distillation. Experiments on the enhanced deep SR (EDSR) model, one of the most popular SR networks, confirmed that semantic loss can contribute to further improving the qual…
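The abstract describes a semantic loss that steers an SR generator toward category-specific (e.g., text) priors through self-distillation. A minimal sketch of that idea is given below, not the paper's exact formulation: each SR output's features are pulled toward the batch-mean feature of same-category SR images. The names sr_net, feature_extractor, and lambda_sem are assumptions introduced only for illustration.

```python
# Minimal sketch of a semantic self-distillation loss for SR training
# (illustrative; sr_net, feature_extractor, and lambda_sem are assumed names).
import torch
import torch.nn.functional as F

def semantic_self_distillation_loss(features):
    """Pull per-image features toward the detached batch-mean feature,
    treated here as the category-level semantic prior."""
    prior = features.mean(dim=0, keepdim=True).detach()
    return F.l1_loss(features, prior.expand_as(features))

def training_step(sr_net, feature_extractor, lr_batch, hr_batch, lambda_sem=0.1):
    sr = sr_net(lr_batch)                       # super-resolved images
    rec_loss = F.l1_loss(sr, hr_batch)          # standard reconstruction loss
    feats = feature_extractor(sr).flatten(1)    # (B, D) semantic features of SR outputs
    sem_loss = semantic_self_distillation_loss(feats)
    return rec_loss + lambda_sem * sem_loss
```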

Cited by 3 publications (9 citation statements). References 23 publications.
“…Inspired by Park's method [5], the semantic priors are incorporated into the SR process via self-distillation. That is, we distill the category-specific semantic information from SR images of the same category (e.g., "text" or "face") while training an SR generator.…”
Section: Proposed Semantic SR Methods
Mentioning (confidence: 99%)
“…Therefore, it was successfully used as a regularization technique for matching the predictive distributions of the network between different samples of the same label [10]. It was also used for matching semantic features between different images of the same category, guiding the network to learn the semantic information [5]. We also use self-distillation for semantic learning in this study.…”
Section: Related Work, A. Self-Distillation
Mentioning (confidence: 99%)
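The prediction-matching use of self-distillation referred to by [10] in the statement above can be sketched roughly as follows; the sample pairing, temperature, and scaling are illustrative assumptions rather than the cited method's exact formulation.

```python
# Rough sketch of self-distillation as a regularizer that matches the
# predictive distributions of two different samples sharing the same label.
# The pairing of x_a/x_b and the temperature are assumed for illustration.
import torch
import torch.nn.functional as F

def class_wise_self_distillation(model, x_a, x_b, temperature=4.0):
    """x_a and x_b hold different images whose labels match index-wise;
    the detached prediction on x_b is the soft target for x_a."""
    logits_a = model(x_a)
    with torch.no_grad():
        logits_b = model(x_b)                   # the "teacher" is the model itself
    soft_target = F.softmax(logits_b / temperature, dim=1)
    log_pred = F.log_softmax(logits_a / temperature, dim=1)
    return F.kl_div(log_pred, soft_target, reduction="batchmean") * temperature ** 2
```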
“…In Section 5, we present the final conclusion and future work. [1] and Enhanced Deep Residual Networks (EDSR) [2] were applied, as well as the SRDenseNet model built on the subsequent DenseNet structure [3]. Lai et al. combined a series of structural features with the image pyramid structure to propose the more complex Laplacian pyramid super-resolution structure [4], achieving multi-scale super-resolution reconstruction of images.…”
Section: Introduction
Mentioning (confidence: 99%)