…As far as results are concerned, images captured in low-light scenes are degraded by distracting factors such as blur and noise. A hybrid architecture based on Retinex theory and a Generative Adversarial Network (GAN) can be used to deal with this type of problem. For vision tasks in the dark or under low-light conditions, the image is first decomposed into an illumination image and a reflectance image, and an enhancement stage then generates a high-quality, clear image, with the design aimed at minimizing the blur and noise introduced during generation. The method introduces a Structural Similarity (SSIM) loss to suppress the side effect of blurring (a schematic sketch of the decomposition and this loss is given after this passage). However, qualified real-world pairs of low-light and normal-light images are not easily acquired, so training input is in short supply, while a sufficiently large dataset is required to maximize the performance of the algorithm. The trained model also falls short of real-time performance, which is insufficient for practical needs. In general, the algorithm only addresses image blur and noise, minimizing the impact of these two factors; problems remain in other respects, and the network structure needs further optimization [186].

This class of problem can also be approached by exploring multiple diffusion spaces to estimate the light component, whose maximum diffusion values serve as bright pixels for enhancing the low-light image (a loose illustration of this bright-pixel idea follows below). This approach generates high-fidelity images without significant distortion and minimizes the problem of noise amplification [187].

Later, the DiFaReli method employed a conditional denoising diffusion implicit model (DDIM) to decode the encoding of the decomposed light. Puntawat Ponglertnapakorn et al. proposed a novel conditioning technique that eases the modeling of the complex interaction between light and geometry by using a rendered shading reference to spatially modulate the DDIM. The method enables single-view face relighting in the wild; however, it has limitations in removing shadows cast by external objects and is susceptible to image ambiguity [188].…
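To make the Retinex-plus-GAN pipeline of [186] more concrete, the following is a minimal PyTorch sketch of the decomposition step and a simplified SSIM loss. The network depth, channel counts, the image-global (non-windowed) SSIM, and the stand-in tensors are illustrative assumptions; the GAN-based enhancement stage and its discriminator are omitted.

```python
import torch
import torch.nn as nn

class DecomNet(nn.Module):
    """Tiny Retinex-style decomposition net: predicts a 3-channel
    reflectance map and a 1-channel illumination map from an RGB image.
    A real system would use a deeper network plus a separate GAN-based
    enhancement stage; this is only the decomposition skeleton."""
    def __init__(self, ch=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 4, 3, padding=1),
        )

    def forward(self, x):
        out = torch.sigmoid(self.body(x))           # keep both maps in [0, 1]
        reflectance, illumination = out[:, :3], out[:, 3:]
        return reflectance, illumination

def ssim_loss(x, y, c1=0.01 ** 2, c2=0.03 ** 2):
    """Simplified, image-global SSIM loss. Production code would use a
    sliding Gaussian window (e.g. the pytorch-msssim package) instead."""
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = ((x - mu_x) * (y - mu_y)).mean()
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))
    return 1.0 - ssim                                # lower is better

# Usage: decompose a low-light image and penalize blur in the reconstruction.
net = DecomNet()
low = torch.rand(1, 3, 64, 64)                       # stand-in low-light input
normal = torch.rand(1, 3, 64, 64)                    # stand-in reference image
r, l = net(low)
recon = r * l                                        # Retinex model: I = R * L
loss = (recon - low).abs().mean() + ssim_loss(recon, normal)
loss.backward()
```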
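The next snippet is a very loose NumPy illustration of the bright-pixel idea behind [187]: a per-pixel channel maximum stands in for the maximum diffusion value, and Gaussian smoothing stands in for the multi-diffusion-space estimation. It is not the actual algorithm of [187], only a Retinex-flavored stand-in showing how an estimated light map drives the enhancement.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def enhance_max_light(img, sigma=15.0, eps=1e-3):
    """Enhance a low-light image via a smoothed bright-pixel light map.
    img: float32 array of shape (H, W, 3) with values in [0, 1]."""
    light = img.max(axis=2)                      # brightest channel per pixel
    light = gaussian_filter(light, sigma=sigma)  # smooth the light map
    light = np.maximum(light, eps)               # avoid division by zero
    enhanced = img / light[..., None]            # Retinex-style division
    return np.clip(enhanced, 0.0, 1.0)

# Usage with a synthetic dark image:
dark = (np.random.rand(128, 128, 3) * 0.2).astype(np.float32)
bright = enhance_max_light(dark)
```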
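For the spatial modulation in [188], the sketch below shows a SPADE-style conditioning block in PyTorch: a rendered shading reference is mapped to per-pixel scale and shift factors that modulate the intermediate features of a denoising network. This mirrors the general idea of the conditioning technique, not DiFaReli's exact architecture; all layer shapes and channel counts are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShadingModulation(nn.Module):
    """SPADE-style spatial modulation: a shading reference produces per-pixel
    scale (gamma) and shift (beta) maps applied to denoiser features."""
    def __init__(self, feat_ch, cond_ch=1, hidden=32):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Conv2d(cond_ch, hidden, 3, padding=1), nn.ReLU(inplace=True))
        self.to_gamma = nn.Conv2d(hidden, feat_ch, 3, padding=1)
        self.to_beta = nn.Conv2d(hidden, feat_ch, 3, padding=1)

    def forward(self, feat, shading):
        # Resize the shading map to the feature resolution, then modulate.
        shading = F.interpolate(shading, size=feat.shape[-2:],
                                mode="bilinear", align_corners=False)
        h = self.shared(shading)
        return feat * (1.0 + self.to_gamma(h)) + self.to_beta(h)

# Usage: modulate a 64-channel feature map with a grayscale shading render.
mod = ShadingModulation(feat_ch=64)
feat = torch.rand(1, 64, 32, 32)
shading = torch.rand(1, 1, 128, 128)
out = mod(feat, shading)                         # shape (1, 64, 32, 32)
```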