2021
DOI: 10.1016/j.jvcir.2021.103328
Multi-focus image fusion based on multi-scale sparse representation

Cited by 9 publications (2 citation statements)
References 46 publications
“…The algorithm is divided into three stages: dictionary construction, sparse representation of the image, and fusion with reconstruction [36]. The dictionary is learned adaptively from source image patches using the K-SVD algorithm [37]. To preserve the signal of each source image, a joint dictionary is built, and the batch-OMP algorithm is used to estimate the coefficient vectors.…”
Section: Sparse Representation
confidence: 99%
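The pipeline described in the statement above — sparse-code each source patch against a shared dictionary, then fuse by sparse-coefficient activity — can be sketched minimally in NumPy. This is an illustrative assumption of the general scheme, not the paper's exact method: `omp` is a plain greedy Orthogonal Matching Pursuit (not the batch-OMP variant cited), the dictionary `D` is assumed to be pre-learned (e.g. by K-SVD), and the max-l1 fusion rule is one common choice.

```python
import numpy as np

def omp(D, y, k):
    """Greedy OMP: approximate y with at most k atoms (columns) of D."""
    residual = y.copy()
    idx = []
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in idx:
            idx.append(j)
        # least-squares refit on all selected atoms
        coef, *_ = np.linalg.lstsq(D[:, idx], y, rcond=None)
        residual = y - D[:, idx] @ coef
    x = np.zeros(D.shape[1])
    x[idx] = coef
    return x

def fuse_patches(D, patches_a, patches_b, k=3):
    """Fuse corresponding patches: keep the sparse code with larger l1 activity,
    then reconstruct each fused patch as D @ x."""
    fused = []
    for ya, yb in zip(patches_a, patches_b):
        xa, xb = omp(D, ya, k), omp(D, yb, k)
        x = xa if np.abs(xa).sum() >= np.abs(xb).sum() else xb
        fused.append(D @ x)
    return np.array(fused)
```

The l1-norm of the sparse code acts as a focus/activity measure: the sharper source patch typically needs larger coefficients, so its code wins the per-patch comparison.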
“…Along with these applications, different terms have been used to denote the different techniques, such as multi-sensor image fusion for remote sensing (Abdikan et al., 2014), multi-modal image fusion for medical diagnosis (Hermessi et al., 2021), and multi-focus image fusion for optical microscopy (Liu et al., 2020). Meanwhile, more and more advanced techniques have been integrated into fusion algorithms, such as sparse representation (Ma et al., 2021; Zhang et al., 2021) and deep learning (Mustafa et al., 2020; Li and Wu, 2018). While image fusion algorithms have developed rapidly, fusion quality assessment has received less attention.…”
Section: Introduction
confidence: 99%