2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2018.00112

WESPE: Weakly Supervised Photo Enhancer for Digital Cameras

Abstract: Low-end and compact mobile cameras demonstrate limited photo quality, mainly due to space, hardware and budget constraints. In this work, we propose a deep learning solution that automatically translates photos taken by cameras with limited capabilities into DSLR-quality photos. We tackle this problem by introducing a weakly supervised photo enhancer (WESPE), a novel image-to-image Generative Adversarial Network-based architecture. The proposed model is trained under weak supervision: unlike previous works, …
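The abstract describes a GAN-based enhancer trained on unpaired phone and DSLR photos, which is what makes the supervision "weak": no pixel-aligned ground truth is required. The sketch below is a minimal, illustrative rendering of that kind of objective, not the paper's implementation; the tiny networks, the L1 content term (standing in for a perceptual/feature loss), the WGAN-style critics, and all loss weights are assumptions made for illustration.

```python
# Minimal sketch of a weakly supervised, GAN-based enhancement objective.
# Everything here is a placeholder: tiny conv nets stand in for the real
# generator/discriminators, and the loss weights are arbitrary.
import torch
import torch.nn as nn
import torch.nn.functional as F

def tiny_net(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
        nn.Conv2d(32, out_ch, 3, padding=1),
    )

G = tiny_net(3, 3)          # phone photo -> enhanced photo
G_inv = tiny_net(3, 3)      # enhanced photo -> phone domain (closes the weak-supervision cycle)
D_color = tiny_net(3, 1)    # critic on blurred images: judges global color/brightness
D_texture = tiny_net(1, 1)  # critic on grayscale images: judges local texture

blur = lambda x: F.avg_pool2d(x, 5, stride=1, padding=2)  # crude stand-in for Gaussian blur
gray = lambda x: x.mean(dim=1, keepdim=True)

def generator_loss(x_phone):
    y = G(x_phone)                                  # enhanced candidate
    content = F.l1_loss(G_inv(y), x_phone)          # content consistency on unpaired data
    adv = -D_color(blur(y)).mean() - D_texture(gray(y)).mean()  # fool both critics
    tv = (y[..., 1:, :] - y[..., :-1, :]).abs().mean() + \
         (y[..., :, 1:] - y[..., :, :-1]).abs().mean()          # total-variation smoothness
    return content + 5e-3 * adv + 1e-1 * tv

def critic_loss(x_phone, y_dslr):
    y_fake = G(x_phone).detach()
    return (D_color(blur(y_fake)).mean() - D_color(blur(y_dslr)).mean()
            + D_texture(gray(y_fake)).mean() - D_texture(gray(y_dslr)).mean())

x = torch.rand(2, 3, 64, 64)    # unpaired low-quality photos
y = torch.rand(2, 3, 64, 64)    # unpaired DSLR photos (seen only by the critics)
print(generator_loss(x).item(), critic_loss(x, y).item())
```

The key point mirrored from the abstract is that the target DSLR images are only used adversarially, never as paired ground truth for the same scene.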

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
4
1

Citation Types

1
102
0

Year Published

2018
2018
2019
2019

Publication Types

Select...
5
4

Relationship

3
6

Authors

Journals

citations
Cited by 177 publications
(103 citation statements)
references
References 32 publications
(53 reference statements)
1
102
0
Order By: Relevance
“…For the albedo, we use a fully convolutional network without downsampling or upsampling blocks. This results in a small receptive field for the network and better preserves the texture details while avoiding large structural changes [16,17]. As shown in Figure 6, allowing downsampling blocks in the …” [The citing paper's Figure 4 caption, interleaved in the extraction: "To finish the backward cycle, the real image is first translated to the PBR domain."]
Section: PBR-to-Real Image Translation
confidence: 99%
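The design choice in the quote above (a fully convolutional network with no downsampling or upsampling, hence a small receptive field that limits the model to local, texture-level edits) can be made concrete with a short sketch; the depth and channel widths below are placeholders, not values from the cited paper.

```python
# Toy fully convolutional network without down/upsampling blocks: stacking
# stride-1 3x3 convolutions grows the receptive field by only 2 pixels per
# layer, so the network cannot make large structural changes to the image.
import torch
import torch.nn as nn

def fully_conv_net(depth=8, width=64, in_ch=3, out_ch=3):
    layers = [nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU()]
    for _ in range(depth - 2):
        layers += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
    layers += [nn.Conv2d(width, out_ch, 3, padding=1)]
    return nn.Sequential(*layers)

net = fully_conv_net()
x = torch.rand(1, 3, 128, 128)
assert net(x).shape == x.shape                   # spatial resolution preserved end to end
print("receptive field:", 2 * 8 + 1, "pixels")   # 8 stride-1 3x3 convs -> 17x17
```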
“…Nowadays, various deep learning models can be found in nearly any mobile device. Among the most popular tasks are different computer vision problems like image classification [38,82,23], image enhancement [27,28,32,30], image super-resolution [17,42,83], bokeh simulation [85], object tracking [87,25], optical character recognition [56], face detection and recognition [44,70], augmented reality [3,16], etc. Another important group of tasks running on mobile devices is related to various NLP (Natural Language Processing) problems, such as natural language translation [80,7], sentence completion [52,24], sentence sentiment analysis [77,72,33], voice assistants [18] and interactive chatbots [71].…”
Section: Introduction
confidence: 99%
“…The ability of humans to easily imagine what a black-haired person would look like if they were blond, or with a different type of eyeglasses, or to imagine a winter scene as summer, is formulated as the image-to-image (I2I) translation problem in the computer vision community. Since the recent introduction of Generative Adversarial Networks (GANs) [19], a plethora of problems such as video analysis [51,7], super resolution [33,9], semantic synthesis [26,10], photo enhancement [24,25], photo editing [49,14], and most recently domain adaptation [21,43] have been addressed as I2I translation problems.…”
Section: Introduction
confidence: 99%
“…However, this approach is impractical because the full representation of the cross-domain mapping is, in most cases, intractable. Existing techniques try to perform deterministic I2I translation with unpaired images to map from one domain into another (one-to-one) [55,4,37,25], or into multiple domains (one-to-many) [12,46,20]. Nevertheless, many problems are fundamentally stochastic, as there are countless mappings from one domain to another, e.g., a day↔night or cat↔dog translation.…”
Section: Introduction
confidence: 99%
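The one-to-many point in the last excerpt can be illustrated with a toy stochastic translator: conditioning the generator on a sampled latent code lets a single input map to many plausible outputs. The model below is a generic stand-in for that idea, not any specific published architecture.

```python
# A deterministic generator G(x) yields exactly one output per input; adding a
# sampled latent code z turns the mapping into one-to-many (e.g. several
# plausible "night" renderings of one "day" image).
import torch
import torch.nn as nn

class StochasticTranslator(nn.Module):
    def __init__(self, z_dim=8):
        super().__init__()
        self.z_dim = z_dim
        self.body = nn.Sequential(
            nn.Conv2d(3 + z_dim, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, x, z):
        # Broadcast the latent code over the spatial grid and concatenate.
        z_map = z.view(z.size(0), self.z_dim, 1, 1).expand(-1, -1, x.size(2), x.size(3))
        return self.body(torch.cat([x, z_map], dim=1))

G = StochasticTranslator()
x = torch.rand(1, 3, 64, 64)                          # one "day" image
outs = [G(x, torch.randn(1, 8)) for _ in range(3)]    # three distinct translation candidates
```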