Numerous single-image dehazing algorithms have been developed, ranging from physics-based models to state-of-the-art deep-learning methods. However, conventional deep-learning approaches, particularly those built on standard convolutional neural networks (CNNs), often leave residual fog patches in images with dense or unevenly distributed haze. To address this problem, we propose the multi-feature fusion image dehazing network (MFID-Net), which learns the mapping from hazy to haze-free images in an end-to-end manner. Central to our approach is a novel multi-feature fusion (MF) module designed to capture channel and pixel characteristics in regions of uneven or high fog concentration. The module removes haze effectively at low computational cost, thereby mitigating the problem of residual fog patches. Experimental results show that our algorithm outperforms comparable dehazing methods in structural similarity (SSIM) and peak signal-to-noise ratio (PSNR) while running faster. Moreover, MFID-Net is markedly better at restoring detail in large monochromatic regions such as skies and white walls.
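The abstract does not specify the internal structure of the MF module, so the following PyTorch sketch only illustrates one common way such a block could combine channel-wise and pixel-wise attention behind a residual connection. All names (`MFModule`, `ChannelAttention`, `PixelAttention`) and hyperparameters (e.g., the reduction ratio of 8) are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel weighting (assumed design)."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # global context per channel
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Reweight each channel by a learned, input-dependent scalar.
        return x * self.fc(self.pool(x))


class PixelAttention(nn.Module):
    """Per-pixel weighting to emphasize regions of dense or uneven haze (assumed design)."""

    def __init__(self, channels: int):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Conv2d(channels, channels // 8, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 8, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Produce a single-channel spatial mask and apply it everywhere.
        return x * self.fc(x)


class MFModule(nn.Module):
    """Hypothetical multi-feature fusion block: two convolutions followed by
    channel and pixel attention, fused with the input through a residual path."""

    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.ca = ChannelAttention(channels)
        self.pa = PixelAttention(channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.conv(x)
        y = self.ca(y)   # select informative channels
        y = self.pa(y)   # focus on haze-dense pixels
        return x + y     # residual fusion keeps gradients stable


if __name__ == "__main__":
    block = MFModule(64)
    hazy_features = torch.randn(1, 64, 128, 128)
    print(block(hazy_features).shape)  # torch.Size([1, 64, 128, 128])
```

Under these assumptions, the channel branch decides *which* feature maps matter while the pixel branch decides *where* haze is concentrated, which is one plausible reading of the abstract's claim that the module handles both channel and pixel characteristics in unevenly hazed regions.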