2020
DOI: 10.1364/oe.396159

ProDebNet: projector deblurring using a convolutional neural network

Abstract: Projection blur can occur in practical use cases that have non-planar and/or multi-projection display surfaces with various scattering characteristics because the surface often causes defocus and subsurface scattering. To address this issue, we propose ProDebNet, an end-to-end real-time projection deblurring network that synthesizes a projection image to minimize projection blur. The proposed method …

Citations: cited by 13 publications (18 citation statements)
References: 9 publications
“…Other researchers achieved projector deblurring with fewer artifacts by applying a constrained optimization technique [52] or an inverse light transport matrix technique [50], though they were computationally expensive. Kageyama et al balanced the trade-off between the deblurring accuracy and the computational complexity using a deep neural network (DNN) [28]. Grosse et al also balanced the trade-off by applying a coded aperture to the projector optics, which preserves the high-frequency components of a projected image more than do normal circular apertures and, consequently, reduces the ringing artifacts caused by Wiener filtering [13].…”
Section: Projector Deblurring (mentioning)
confidence: 99%
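The Wiener-filtering trade-off mentioned in the quote above can be made concrete with a small sketch. The snippet below is a minimal, illustrative pre-compensation by Wiener deconvolution under the simplifying assumption of a single shift-invariant defocus PSF (the cited works handle pixel-dependent PSFs and more elaborate optimization). The function name and the `nsr` parameter are hypothetical; `nsr` is what governs the ringing-versus-sharpness trade-off the citing authors refer to.

```python
import numpy as np

def wiener_precompensate(target, psf, nsr=0.01):
    """Pre-sharpen a target image so that, after projector defocus blur
    (modelled as convolution with `psf`), the on-surface result approximates
    the target. Single shift-invariant PSF; illustrative sketch only."""
    h, w = target.shape
    # Zero-pad the PSF to the image size and centre it at the origin.
    ph, pw = psf.shape
    psf_pad = np.zeros((h, w))
    psf_pad[:ph, :pw] = psf / psf.sum()
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))

    H = np.fft.fft2(psf_pad)
    T = np.fft.fft2(target)
    # Wiener filter: conj(H) / (|H|^2 + NSR). A larger nsr suppresses ringing
    # but leaves residual blur; a smaller nsr sharpens but amplifies ringing.
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    compensated = np.real(np.fft.ifft2(W * T))
    # A projector cannot emit negative light, so clip to the displayable range.
    return np.clip(compensated, 0.0, 1.0)

# Usage with a synthetic Gaussian defocus PSF (placeholder data):
y, x = np.mgrid[-7:8, -7:8]
psf = np.exp(-(x**2 + y**2) / (2 * 2.5**2))
target = np.random.rand(240, 320)              # stand-in for a target image
projected_input = wiener_precompensate(target, psf, nsr=0.01)
```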
“…In these aforementioned techniques, the pixel-dependent PSFs must be estimated for the deconvolution. The estimation is done by projecting either dot patterns [9,13,50,52] or original images (i.e., target images) [28,38] in advance. Therefore, unblurred images cannot be continuously displayed on a moving projection surface, where the PSFs vary in time.…”
Section: Projector Deblurring (mentioning)
confidence: 99%
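To illustrate the dot-pattern PSF estimation step described in this quote, the following sketch crops a window around each known dot location in a camera capture of the projected pattern and normalizes it into a local PSF estimate. It assumes projector-camera geometric registration has already been done, and all names are hypothetical; it is not the exact procedure of any cited paper.

```python
import numpy as np

def estimate_local_psfs(captured, dot_positions, win=15):
    """Estimate pixel-dependent PSFs from a camera image of a projected
    sparse dot pattern. `dot_positions` holds the (row, col) camera
    coordinates of the projected dots, assumed to lie away from the image
    border. Returns one normalized PSF patch per dot. Sketch only."""
    half = win // 2
    psfs = {}
    for (r, c) in dot_positions:
        patch = captured[r - half:r + half + 1,
                         c - half:c + half + 1].astype(float)
        patch -= patch.min()          # crude background subtraction
        s = patch.sum()
        psfs[(r, c)] = patch / s if s > 0 else patch
    return psfs
```

Because such local PSFs must be re-measured (and the deconvolution re-run) whenever the surface moves, a capture-then-deconvolve pipeline cannot keep up with a dynamic scene, which is exactly the limitation the quote points out.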
“…In theory, the projector compensation process is a very complicated nonlinear function involving the camera and the projector sensor radiometric responses [38], lens distortion/vignetting [30], perspective transformations [23], [63], surface material reflectance [21], [41], defocus [31], [59], [61] and inter-reflection [53]. A great amount of effort has been dedicated to designing practical and accurate compensation models, which can be roughly categorized into two types: full compensation [4], [17], [44], [49], [51], [52], [58] and partial ones [1], [3], [13], [15], [20], [35], [38], [48], [53].…”
Section: Related Work (mentioning)
confidence: 99%
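As a heavily simplified instance of the compensation models surveyed in this quote, the classical per-pixel linear radiometric model treats the camera-observed color as c = V p + f, with p the projector input, V a 3x3 per-pixel color-mixing matrix, and f an ambient/black-level offset; compensation then inverts this relation pixel by pixel. The sketch below covers only this linear special case (function and variable names are illustrative), not the full nonlinear pipeline the quote describes.

```python
import numpy as np

def compensate_linear(target_rgb, V, f):
    """Per-pixel linear radiometric compensation.
    target_rgb: (H, W, 3) desired camera-observed colors in [0, 1]
    V:          (H, W, 3, 3) per-pixel color-mixing matrices
    f:          (H, W, 3) ambient / black-level offset
    Returns the projector input p solving V @ p + f = target, clipped to [0, 1]."""
    residual = target_rgb - f
    # Solve the 3x3 linear system independently at every pixel (broadcasted).
    p = np.linalg.solve(V, residual[..., None])[..., 0]
    # Clip to the projector's displayable range; values outside it are the
    # physically infeasible cases that full compensation methods must handle.
    return np.clip(p, 0.0, 1.0)
```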
“…With the rapid development of deep learning in computer vision and image processing, it is possible to use deep learning algorithms to obtain a clear image of the specimen with a large DOF. For example, the super-resolution (SR) and deblurring algorithms are used to improve the resolution of images [1][2][3][4][5][6][7][8][9]. Recently, the deep learning single-image super-resolution (SISR) model [10][11][12] and convolutional neural network (CNN) model [13] have been applied to improve the resolution and definition in microscope images, which indicates that they have great potential in microscopes.…”
Section: Introduction (mentioning)
confidence: 99%
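The CNN-based deblurring approach referred to in this quote can be sketched as a minimal residual encoder-decoder in PyTorch. This is an illustrative toy model with placeholder data, not the architecture of ProDebNet or of any cited SISR/CNN model.

```python
import torch
import torch.nn as nn

class TinyDeblurNet(nn.Module):
    """A minimal convolutional encoder-decoder for single-image deblurring,
    predicting a residual correction that is added back to the blurred input
    (a common practice in deblurring/SR networks)."""
    def __init__(self, ch=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 3, 3, padding=1),
        )

    def forward(self, blurred):
        return blurred + self.decoder(self.encoder(blurred))

# Single training step sketch: minimize L1 loss against the sharp image.
model = TinyDeblurNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
blurred = torch.rand(4, 3, 128, 128)   # placeholder batch of blurred images
sharp = torch.rand(4, 3, 128, 128)     # placeholder ground-truth sharp images
optimizer.zero_grad()
loss = nn.functional.l1_loss(model(blurred), sharp)
loss.backward()
optimizer.step()
```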