2018
DOI: 10.3390/info9010011

Image Super-Resolution Algorithm Based on an Improved Sparse Autoencoder

Abstract: Due to the limitations of the resolution of the imaging system and the influence of scene changes and other factors, sometimes only low-resolution images can be acquired, which cannot satisfy the requirements of practical applications. To improve the quality of low-resolution images, a novel super-resolution algorithm based on an improved sparse autoencoder is proposed. Firstly, in the training-set preprocessing stage, the high- and low-resolution image training sets are constructed, respectively, by using high-f…
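For orientation, the sketch below illustrates the general idea of patch-based super-resolution with a sparse autoencoder: low-resolution patches are mapped to high-resolution patches, with a KL-divergence penalty encouraging sparse hidden activations. The architecture, patch sizes, and hyperparameters are illustrative assumptions and do not reproduce the paper's exact method.

```python
# Hypothetical sparse-autoencoder sketch for patch-based super-resolution.
# Patch sizes, layer widths, and the sparsity target are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseAutoencoder(nn.Module):
    def __init__(self, lr_dim=25, hid_dim=128, hr_dim=81):
        super().__init__()
        self.encoder = nn.Linear(lr_dim, hid_dim)   # e.g. flattened 5x5 LR patches
        self.decoder = nn.Linear(hid_dim, hr_dim)   # e.g. flattened 9x9 HR patches

    def forward(self, x):
        h = torch.sigmoid(self.encoder(x))          # hidden activations in (0, 1)
        return self.decoder(h), h

def kl_sparsity(h, rho=0.05, eps=1e-8):
    # KL divergence between the target sparsity rho and the mean activation rho_hat
    rho_hat = h.mean(dim=0).clamp(eps, 1 - eps)
    return (rho * torch.log(rho / rho_hat)
            + (1 - rho) * torch.log((1 - rho) / (1 - rho_hat))).sum()

def loss_fn(model, lr_patches, hr_patches, beta=0.1):
    pred, h = model(lr_patches)
    return F.mse_loss(pred, hr_patches) + beta * kl_sparsity(h)
```

At inference time, an input image would be split into overlapping low-resolution patches, each patch passed through the trained network, and the predicted high-resolution patches averaged back together.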

Cited by 5 publications (4 citation statements); references 26 publications.
“…Traditional image processing algorithms mainly rely on basic digital image processing techniques. Generally, there are three categories: interpolation-based algorithms [21][22][23], degenerate-model-based algorithms [24][25][26] and learning-based algorithms [27][28][29][30].…”
Section: Traditional Methods (mentioning)
confidence: 99%
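As a concrete illustration of the first category, interpolation-based upscaling can be done with a single library call; the sketch below uses Pillow's bicubic resampling (file names and the scale factor are placeholders).

```python
# Interpolation-based upscaling (bicubic) with Pillow; paths and scale are illustrative.
from PIL import Image

def bicubic_upscale(path, scale=2):
    img = Image.open(path)
    return img.resize((img.width * scale, img.height * scale), resample=Image.BICUBIC)

# Example: bicubic_upscale("low_res.png", scale=3).save("upscaled_bicubic.png")
```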
“…Instead, an AE learns an approximation of the input features to identify useful properties of the data. AEs are vital tools for dimensionality reduction (Hinton and Salakhutdinov 2006), feature learning (Vincent et al 2008), image colorization (Zhang et al 2016), higher-resolution data generation (Huang et al 2018), and latent space clustering (Yeh et al 2017). Additionally, other versions of AEs such as variational autoencoders (VAEs) (Kingma and Welling 2014) can be used as generative models.…”
Section: Autoencoder (mentioning)
confidence: 99%
“…Instead, an AE learns an approximation of the input features to identify useful properties of the data. AEs are vital tools for dimensionality reduction [63], feature learning [64], image colorization [65], higher-resolution data generation [66], and latent space clustering [67]. Additionally, other versions of AEs such as variational autoencoders (VAEs) [68] can be used as generative models.…”
Section: Autoencoder (mentioning)
confidence: 99%
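The reconstruction-based training that these statements describe can be summarized with a minimal vanilla autoencoder; the sketch below is a generic illustration (layer sizes are assumptions), trained to reproduce its own input so that the low-dimensional bottleneck serves as a learned representation.

```python
# Minimal vanilla autoencoder: the network reconstructs its own input, and the
# bottleneck code z can be reused for dimensionality reduction or feature learning.
import torch
import torch.nn as nn

class Autoencoder(nn.Module):
    def __init__(self, in_dim=784, bottleneck=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                     nn.Linear(128, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 128), nn.ReLU(),
                                     nn.Linear(128, in_dim))

    def forward(self, x):
        z = self.encoder(x)            # compressed latent code
        return self.decoder(z), z      # reconstruction and code

# Training minimizes reconstruction error, e.g. nn.MSELoss()(recon, x).
```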