A low-light image enhancement method based on a deep symmetric encoder–decoder convolutional network (LLED-Net) is proposed in this paper. In surveillance and tactical reconnaissance, collecting visual information from a dynamic environment and processing it accurately are critical to making sound decisions and ensuring mission success. However, due to the cost and technical limitations of camera sensors, it is difficult to capture clear images or videos in low-light conditions. In this paper, a symmetric encoder–decoder convolutional network is designed that exploits multi-scale feature maps and incorporates skip connections to mitigate the vanishing-gradient problem. To preserve image texture as much as possible, the model is trained with a structural similarity (SSIM) loss on datasets covering different brightness levels, enabling it to adaptively enhance images captured in low-light environments. Experimental results show that the proposed algorithm achieves significant quantitative improvements over RED-Net and several other representative image-enhancement algorithms.
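To make the two core ideas concrete, the sketch below shows a minimal symmetric encoder–decoder with skip connections and an SSIM-based training loss in PyTorch. This is not the authors' released code: the layer widths, network depth, additive (rather than concatenative) skip connections, and the uniform-window SSIM approximation are all illustrative assumptions, and the class name LLEDNetSketch is hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class LLEDNetSketch(nn.Module):
    """Toy symmetric encoder-decoder (an assumption, not the paper's exact
    architecture). Skip connections carry encoder features at each scale
    to the matching decoder stage, easing gradient flow."""
    def __init__(self, ch=32):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv2d(3, ch, 3, padding=1), nn.ReLU(inplace=True))
        self.enc2 = nn.Sequential(nn.Conv2d(ch, ch * 2, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.enc3 = nn.Sequential(nn.Conv2d(ch * 2, ch * 4, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec3 = nn.Sequential(nn.ConvTranspose2d(ch * 4, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec2 = nn.Sequential(nn.ConvTranspose2d(ch * 2, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True))
        self.dec1 = nn.Conv2d(ch, 3, 3, padding=1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(e1)
        e3 = self.enc3(e2)                 # coarsest multi-scale features
        d3 = self.dec3(e3) + e2            # skip connection (element-wise sum)
        d2 = self.dec2(d3) + e1            # skip connection at full resolution
        return torch.sigmoid(self.dec1(d2))

def ssim_loss(pred, target, window=11, c1=0.01 ** 2, c2=0.03 ** 2):
    """1 - mean local SSIM; images assumed in [0, 1]. Uses a uniform
    averaging window via avg_pool2d, a simplification of the Gaussian
    window in the standard SSIM definition."""
    pad = window // 2
    mu_p = F.avg_pool2d(pred, window, 1, pad)
    mu_t = F.avg_pool2d(target, window, 1, pad)
    var_p = F.avg_pool2d(pred * pred, window, 1, pad) - mu_p ** 2
    var_t = F.avg_pool2d(target * target, window, 1, pad) - mu_t ** 2
    cov = F.avg_pool2d(pred * target, window, 1, pad) - mu_p * mu_t
    ssim = ((2 * mu_p * mu_t + c1) * (2 * cov + c2)) / \
           ((mu_p ** 2 + mu_t ** 2 + c1) * (var_p + var_t + c2))
    return 1.0 - ssim.mean()

# One training step on a (low-light, well-lit reference) pair; the random
# tensors stand in for a real paired brightness-level dataset.
model = LLEDNetSketch()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
low = torch.rand(1, 3, 64, 64)
ref = torch.rand(1, 3, 64, 64)
loss = ssim_loss(model(low), ref)
loss.backward()
opt.step()

Minimizing 1 - SSIM rather than a pixel-wise loss such as MSE is what the abstract credits for texture preservation: SSIM compares local means, variances, and covariances, so the gradient rewards matching local structure instead of averaging it away.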