Image dehazing is widely used as a preprocessing step in applications such as remote sensing, long-range imaging, and advanced driver assistance systems. Images acquired under low illumination, fog, or snow typically exhibit low contrast and low brightness, which degrade their visual quality to the human eye and severely limit the performance of machine vision systems. Salient features in images captured in low light or heavy fog often cannot be extracted reliably by standard computer vision pipelines. A common way to obtain an enhanced image is to estimate the transmission map (the haze density or low-illumination parameters of the atmospheric light) at each pixel of the input image. In this article, an improved U-Net architecture is proposed to enhance such images and deliver robust performance relative to existing methods. In this model, the pooling operations of the generalized U-Net architecture are replaced by discrete wavelet transform (DWT) based down- and up-sampling. An attention module is developed that fuses the down-sampled and up-sampled features to recover the low-level information missing from the up-sampled features. The proposed architecture is evaluated on several datasets: the See-in-the-Dark (SID) dataset, the Exclusively Dark Image Dataset (ExDark), the Realistic Single Image Dehazing (RESIDE) dataset, and a few real-world images, and it achieves superior performance in terms of PSNR, MSE, and SSIM when compared with other state-of-the-art methods.
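As a rough illustration of how pooling can be replaced by wavelet-based down- and up-sampling, the following is a minimal PyTorch sketch using a single-level Haar DWT; the class names, the Haar wavelet choice, and the subband ordering are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): single-level Haar DWT blocks that
# could stand in for pooling in a U-Net encoder, with the inverse transform
# for decoder up-sampling. Names and subband ordering are assumptions.
import torch
import torch.nn as nn

class HaarDWTDownsample(nn.Module):
    """Halves spatial resolution without discarding information: each channel
    is split into four Haar subbands stacked along the channel axis."""
    def forward(self, x):
        # x: (N, C, H, W), H and W assumed even
        a = x[:, :, 0::2, 0::2]   # even rows, even cols
        b = x[:, :, 0::2, 1::2]   # even rows, odd cols
        c = x[:, :, 1::2, 0::2]   # odd rows, even cols
        d = x[:, :, 1::2, 1::2]   # odd rows, odd cols
        ll = (a + b + c + d) / 2  # low-frequency approximation
        lh = (a + b - c - d) / 2  # detail subband
        hl = (a - b + c - d) / 2  # detail subband
        hh = (a - b - c + d) / 2  # detail subband
        return torch.cat([ll, lh, hl, hh], dim=1)   # (N, 4C, H/2, W/2)

class HaarIDWTUpsample(nn.Module):
    """Exact inverse of HaarDWTDownsample, usable on the decoder path."""
    def forward(self, x):
        ll, lh, hl, hh = torch.chunk(x, 4, dim=1)
        n, c_ch, h, w = ll.shape
        out = ll.new_zeros(n, c_ch, h * 2, w * 2)
        out[:, :, 0::2, 0::2] = (ll + lh + hl + hh) / 2
        out[:, :, 0::2, 1::2] = (ll + lh - hl - hh) / 2
        out[:, :, 1::2, 0::2] = (ll - lh + hl - hh) / 2
        out[:, :, 1::2, 1::2] = (ll - lh - hl + hh) / 2
        return out

if __name__ == "__main__":
    x = torch.randn(1, 3, 64, 64)
    y = HaarDWTDownsample()(x)                   # (1, 12, 32, 32)
    x_rec = HaarIDWTUpsample()(y)                # (1, 3, 64, 64)
    print(torch.allclose(x, x_rec, atol=1e-6))   # True: transform is lossless
```

Because the four subbands retain all of the input's information, the decoder can fuse them with the up-sampled features (for instance, through an attention module such as the one described in the abstract) without the detail loss that max-pooling would introduce.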