Strong and frequently occurring impulses corrupt the signals across all subcarriers and thus pose a serious threat to Orthogonal Frequency Division Multiplexing (OFDM) systems, whose performance is further degraded by multipath fading. Recently, a deep neural network (DNN) receiver has been shown to be capable of not only implicitly estimating channel state information but also explicitly recovering the transmitted symbols, without assuming knowledge of the signal-to-noise ratio (SNR). The DNN model is trained on data generated from computer simulations, with the WINNER II channel model representing the fading channels and additive white Gaussian noise (AWGN) serving as the background noise. Like its conventional counterparts, which assume AWGN as the only interfering source, this DNN receiver is prone to substantial performance loss when subjected to impulse noise. To address this obstacle, this paper incorporates the notion of fine-tuning into the DNN model, substituting impulse noise-laced samples for the subsequent training data, with the aim of enhancing representation learning. To attest to the efficacy of the proposed DNN receiver, the bit error rate (BER) achieved by a compressive sensing-based receiver, enabled by the consensus alternating direction method of multipliers (ADMM), is used as a benchmark for performance comparison. Notably, the resulting BER is comparable to that of a clipping-based receiver, whose threshold, however, requires knowledge of the SNR value, an assumption that is relaxed by the proposed DNN receiver. Furthermore, extensive simulations demonstrate that the deep learning-based approach is robust against mismatch between the impulse noise models used in training and testing.
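The abstract does not specify the impulse noise model or the fine-tuning data pipeline; as an illustration only, the following sketch shows how impulse noise-laced fine-tuning samples might be generated, assuming a Bernoulli-Gaussian impulse model on top of AWGN. All function names and parameter values here (occurrence probability `p`, impulse standard deviation `sigma_imp`) are hypothetical choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_gaussian_impulses(n, p=0.05, sigma_imp=3.0):
    """Hypothetical impulse model: a Bernoulli occurrence mask gates
    complex Gaussian amplitudes with a large standard deviation."""
    mask = rng.random(n) < p
    amp = (rng.normal(0.0, sigma_imp, n)
           + 1j * rng.normal(0.0, sigma_imp, n)) / np.sqrt(2)
    return mask * amp

def corrupt_for_finetuning(tx_time_samples, snr_db=10.0, p=0.05, sigma_imp=3.0):
    """Add AWGN plus impulse noise to time-domain OFDM samples,
    yielding corrupted inputs that could replace clean training data
    when fine-tuning the receiver."""
    sig_pow = np.mean(np.abs(tx_time_samples) ** 2)
    noise_pow = sig_pow / (10 ** (snr_db / 10))
    awgn = np.sqrt(noise_pow / 2) * (
        rng.normal(size=tx_time_samples.shape)
        + 1j * rng.normal(size=tx_time_samples.shape))
    impulses = bernoulli_gaussian_impulses(tx_time_samples.size, p, sigma_imp)
    return tx_time_samples + awgn + impulses
```

Under this sketch, fine-tuning would pair each corrupted output with the original transmitted symbols as labels, so the network learns a representation that is resilient to both AWGN and impulsive interference.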