Complex lighting conditions and the limited dynamic range of imaging devices result in captured images with ill exposure and information loss. Existing image enhancement methods based on histogram equalization, Retinex-inspired decomposition models, and deep learning either require manual tuning or generalize poorly. In this work, we report a self-supervised image enhancement method against ill exposure that enables tuning-free correction. First, a dual illumination estimation network estimates the illumination of under- and over-exposed areas, yielding the corresponding intermediate corrected images. Second, since these intermediate corrected images have different best-exposed areas, Mertens' multi-exposure fusion strategy is used to fuse them into a single well-exposed image. This correction-fusion scheme adapts to various types of ill-exposed images. Finally, we study a self-supervised learning strategy that learns global histogram adjustment for better generalization. Compared to training on paired datasets, we need only ill-exposed images, which is crucial when paired data are inaccessible or imperfect. Experiments show that our method reveals more details with better visual perception than other state-of-the-art methods. Moreover, on five real-world image datasets, the weighted average scores of the image naturalness metrics NIQE and BRISQUE and the contrast metrics CEIQ and NSS improve by 7%, 15%, 4%, and 2%, respectively, over a recent exposure correction method.
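The fusion step above follows Mertens' exposure fusion. As a rough illustration of the idea, the sketch below is a simplified, single-scale version (the full algorithm blends with Laplacian pyramids): each intermediate image is weighted per pixel by contrast, saturation, and well-exposedness, and the inputs are averaged with the normalized weights. The function name, the Gaussian width `sigma`, and the naive (non-pyramid) blend are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def exposure_fusion(images, sigma=0.2, eps=1e-12):
    """Simplified Mertens-style exposure fusion (no pyramid blending).

    images: list of float RGB arrays in [0, 1], each of shape (H, W, 3).
    Returns a fused image of the same shape.
    """
    weights = []
    for img in images:
        gray = img.mean(axis=2)
        # Contrast: magnitude of a discrete Laplacian response.
        contrast = np.abs(
            -4.0 * gray
            + np.roll(gray, 1, axis=0) + np.roll(gray, -1, axis=0)
            + np.roll(gray, 1, axis=1) + np.roll(gray, -1, axis=1)
        )
        # Saturation: per-pixel standard deviation across channels.
        saturation = img.std(axis=2)
        # Well-exposedness: Gaussian preference for intensities near 0.5.
        well_exposed = np.exp(-((img - 0.5) ** 2) / (2.0 * sigma ** 2)).prod(axis=2)
        weights.append(contrast * saturation * well_exposed + eps)
    weights = np.stack(weights)
    weights /= weights.sum(axis=0, keepdims=True)  # normalize across inputs
    fused = sum(w[..., None] * img for w, img in zip(weights, images))
    return np.clip(fused, 0.0, 1.0)
```

Because the normalized weights form a per-pixel convex combination, the fused image stays within the dynamic range of the inputs while favoring the best-exposed regions of each.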