2024
DOI: 10.3390/electronics13071266

Deep Convolutional Dictionary Learning Denoising Method Based on Distributed Image Patches

Luqiao Yin,
Wenqing Gao,
Jingjing Liu

Abstract: To address susceptibility to noise interference in Micro-LED displays, a deep convolutional dictionary learning denoising method based on distributed image patches is proposed in this paper. In the preprocessing stage, the entire image is partitioned into locally consistent image patches, and a dictionary is learned based on the non-local self-similar sparse representation of distributed image patches. Subsequently, a convolutional dictionary learning method is employed for global self-similarity matching. Loc…
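The abstract outlines a pipeline of patch partitioning, dictionary learning over non-locally similar patches, and convolutional matching. As a rough illustration of the generic patch-based dictionary-learning denoising step this builds on, the Python sketch below uses scikit-learn; the patch size, dictionary size, and sparsity level are assumed values, and the paper's distributed, non-local, and deep convolutional components are not reproduced here.

```python
# Minimal patch-based dictionary-learning denoising sketch (not the paper's
# full method): extract patches, learn a dictionary on them, sparse-code each
# patch, and stitch the reconstructions back into an image.
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import (
    extract_patches_2d,
    reconstruct_from_patches_2d,
)

def dictionary_denoise(noisy, patch_size=(8, 8), n_atoms=128, n_nonzero=4):
    """Denoise a 2-D grayscale image with a learned patch dictionary.

    patch_size, n_atoms, and n_nonzero are illustrative choices, not values
    taken from the paper.
    """
    # 1) Partition the image into overlapping patches.
    patches = extract_patches_2d(noisy, patch_size)
    flat = patches.reshape(patches.shape[0], -1).astype(np.float64)

    # Remove per-patch mean so the dictionary models structure, not intensity.
    means = flat.mean(axis=1, keepdims=True)
    flat -= means

    # 2) Learn a dictionary from a subsample of the patches.
    dico = MiniBatchDictionaryLearning(
        n_components=n_atoms,
        alpha=1.0,
        batch_size=256,
        transform_algorithm="omp",
        transform_n_nonzero_coefs=n_nonzero,
        random_state=0,
    )
    dico.fit(flat[::10])  # subsample for speed

    # 3) Sparse-code every patch and reconstruct it from the dictionary atoms.
    codes = dico.transform(flat)
    recon = codes @ dico.components_ + means

    # 4) Average the overlapping patch estimates back into a full image.
    recon = recon.reshape(patches.shape)
    return reconstruct_from_patches_2d(recon, noisy.shape)

# Example: denoise a synthetic noisy image.
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.1 * rng.standard_normal(clean.shape)
denoised = dictionary_denoise(noisy)
```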

Cited by 1 publication (1 citation statement)
References 49 publications
“…This approach is more advantageous for high-level semantic feature extraction. Specifically, this paper adopts the multi-head attention [32] mechanism proposed in Transformer [33], whereby the three modular inputs are passed through a convolutional block to obtain Q, K, and V. Subsequently, the acquired Q, K, and V are employed to execute multi-head attention computation. This attention mechanism enables the generation of a more comprehensive feature representation by leveraging the multi-head attention.…”
Section: Model Frame
Citation type: mentioning (confidence: 99%)
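The citing work quoted above describes passing three modular inputs through convolutional blocks to obtain Q, K, and V and then applying multi-head attention. Since the quote gives no code, the following PyTorch sketch shows that general pattern; the channel count, head count, and 1x1 convolutions are assumptions, not the citing paper's exact configuration.

```python
# Sketch of multi-head attention over convolutional features: three conv
# blocks project the inputs to Q, K, and V feature maps, which are flattened
# into token sequences and fed to torch.nn.MultiheadAttention.
# Hyperparameters are illustrative, not taken from the citing paper.
import torch
import torch.nn as nn

class ConvMultiHeadAttention(nn.Module):
    def __init__(self, channels=64, num_heads=4):
        super().__init__()
        # One convolutional block per projection (Q, K, V).
        self.to_q = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_k = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)
        self.attn = nn.MultiheadAttention(embed_dim=channels,
                                          num_heads=num_heads,
                                          batch_first=True)

    def forward(self, x_q, x_k, x_v):
        b, c, h, w = x_q.shape
        # Project the three modular inputs to Q, K, V with conv blocks.
        q, k, v = self.to_q(x_q), self.to_k(x_k), self.to_v(x_v)
        # Flatten spatial positions into sequences of H*W tokens.
        q, k, v = (t.flatten(2).transpose(1, 2) for t in (q, k, v))
        out, _ = self.attn(q, k, v)  # multi-head attention computation
        # Restore the spatial layout of the attended features.
        return out.transpose(1, 2).reshape(b, c, h, w)

# Usage: attend over three 64-channel feature maps of the same size.
f1, f2, f3 = (torch.randn(2, 64, 16, 16) for _ in range(3))
attended = ConvMultiHeadAttention()(f1, f2, f3)  # -> (2, 64, 16, 16)
```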