2021 · DOI: 10.1109/tip.2021.3051462

EnlightenGAN: Deep Light Enhancement Without Paired Supervision

Cited by 1,369 publications (887 citation statements)
References 44 publications
“…To demonstrate the efficiency of our method, we evaluate it qualitatively and quantitatively on MIT-Adobe5k and LOL, and compare it with seven state-of-the-art CNN-based methods (RetinexNet [4], KinD [5], EnGAN [25], MIRNet [26], PMANet [27], DeepUPE [17], DRBN [18]). For a fair comparison, the results are reproduced with the publicly available models released by the authors.…”
Section: Comparison With State-of-the-Art Methods
Citation type: mentioning
Confidence: 99%
“…where l_t is the latency threshold and l_r is the latency relaxation range. We propose the relaxation range because networks with a latency slightly higher than the latency threshold usually contain one or two choices shared with low-latency networks. We believe that these choices should also be well trained.…”
[Figure: visual comparison of Input, KinD [5], EnGAN [25], DeepUPE [17], ZeroDCE [20], DRBN [18], MIRNet [26], and Ours]
Section: Preliminaries
Citation type: mentioning
Confidence: 99%
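The relaxation rule quoted above can be sketched in a few lines: a candidate network is kept for training if its measured latency falls below the threshold plus the relaxation margin, rather than below the hard threshold alone. This is a minimal illustrative sketch, assuming the rule is a simple additive bound; the function and variable names are hypothetical, not from the cited paper.

```python
def keep_for_training(latency, l_t, l_r):
    """Decide whether a candidate network stays in the search space.

    latency: measured latency of the candidate (e.g. in ms)
    l_t: latency threshold
    l_r: relaxation range beyond the threshold

    Candidates slightly over l_t are kept, since they often share
    choices with low-latency networks that should also be trained.
    """
    return latency <= l_t + l_r

# Made-up latencies in ms, with a 10.0 ms threshold and 0.5 ms relaxation.
candidates = [9.5, 10.2, 11.8]
kept = [c for c in candidates if keep_for_training(c, l_t=10.0, l_r=0.5)]
```

Here the 10.2 ms candidate survives only because of the relaxation range; with a hard threshold of 10.0 ms it would be discarded.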
“…Guo et al [2] designed a series of non-reference loss functions to enable the network to perform end-to-end training without any reference images. Jiang et al [18] proposed an efficient, unsupervised generative adversarial network, EnlightenGAN, for the low-light image enhancement problem, which can be trained without low/normal-light image pairs. N. Anantrasirichai and David Bull [19] used an adaptation of the CycleGAN structure to colorize and denoise images.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%
“…Unsupervised methods. Jiang et al [26] were the first to apply unpaired training to underexposed image enhancement, avoiding the dependence on paired data. They construct a dual-discriminator structure to process global information and local information separately.…”
Section: Related Work
Citation type: mentioning
Confidence: 99%