Owing to the optical properties and water quality of underwater environments, underwater images suffer from light attenuation, scattering, and absorption, resulting in low image quality: blurriness, dimness, and loss of detail. These degradations hinder underwater vision tasks. However, most existing methods overlook the role of the original underwater image in feature extraction and learning. To address this, we introduce Residual Dense Blocks (RDB) and Contrastive Regularization (CR) to amplify the influence of the original underwater image, and build an end-to-end fully convolutional network for underwater image enhancement, which we term RDCR. By leveraging the local and global feature fusion of RDB and the contrastive learning of CR, the model extracts multi-level features from the original image, adaptively preserves hierarchical features, and achieves high-quality underwater image enhancement by learning from the original image. Experimental results on four datasets, covering both subjective visual perception and objective evaluation, show that the proposed model outperforms the comparison algorithms.

INDEX TERMS Underwater image enhancement, deep learning methods, residual dense network.
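To illustrate the contrastive idea behind CR, the following is a minimal NumPy sketch of a contrastive regularization term: the enhanced output (anchor) is pulled toward the clear reference image (positive) and pushed away from the raw degraded input (negative). Plain pixel values stand in for the deep feature representations typically used in contrastive regularization; the function names and this simplification are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def l1_distance(a, b):
    """Mean absolute difference between two images or feature maps."""
    return np.mean(np.abs(a - b))

def contrastive_regularization(anchor, positive, negative, eps=1e-8):
    """Hypothetical sketch of a contrastive regularization (CR) term.

    The loss is the ratio of the anchor's distance to the positive
    (clear reference) over its distance to the negative (degraded
    input). Minimizing it pulls the enhanced result toward the
    reference while pushing it away from the original degraded image.
    Pixel-space L1 is used here in place of the deep feature
    distances a real implementation would compute.
    """
    return l1_distance(anchor, positive) / (l1_distance(anchor, negative) + eps)
```

Smaller values indicate an enhanced result that is closer to the reference than to the degraded input, which is the direction training would drive the network.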