The minimum resolvable temperature difference (MRTD) at which a four-rod target can be resolved is a critical parameter for assessing the overall performance of thermal imaging systems, and it is important for technological innovation in military and other fields. Recently, attempts have been made to replace the classical manual MRTD measurement approach, which is strongly affected by the experimenter's subjective psychological state and is limited in accuracy and speed, with an automatic, objective approach based on deep learning. However, the scale variability of four-rod targets and the low resolution of infrared thermal cameras make automatic MRTD measurement challenging. We propose a multiscale deblurred feature extraction network (MDF-Net), a backbone based on the YOLOv5 neural network, to address this problem. We first introduce a global attention mechanism (GAM) module to strengthen the feature representation of the four-rod targets. Next, a RepVGG module is introduced to reduce image blur. Our experiments show that the proposed method achieves the desired effect and state-of-the-art detection results, improving the accuracy of four-rod target detection to 82.3% and thus enabling thermal imagers to see further and to respond faster and more accurately.
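To illustrate the channel-then-spatial gating idea behind attention modules such as GAM, the following is a minimal NumPy sketch. It is not the MDF-Net or GAM implementation from the paper: the reduction MLP weights `w1`/`w2`, the reduction ratio, and the use of simple per-channel and cross-channel means are illustrative assumptions.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gam_like_attention(x, w1, w2):
    """Simplified channel-then-spatial attention gate (illustrative only).

    x  : feature map of shape (C, H, W)
    w1 : (C, C // r) weights of a hypothetical channel-reduction MLP
    w2 : (C // r, C) weights of the matching expansion layer
    """
    # Channel attention: MLP over per-channel global averages, sigmoid gate.
    chan_desc = x.mean(axis=(1, 2))               # (C,)
    chan_gate = sigmoid(chan_desc @ w1 @ w2)      # (C,) values in (0, 1)
    x = x * chan_gate[:, None, None]
    # Spatial attention: gate each location by its cross-channel mean response.
    spat_gate = sigmoid(x.mean(axis=0))           # (H, W) values in (0, 1)
    return x * spat_gate[None, :, :]

# Example: random feature map with C = 8 channels, reduction ratio r = 4.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16, 16))
w1 = rng.standard_normal((8, 2)) * 0.1
w2 = rng.standard_normal((2, 8)) * 0.1
y = gam_like_attention(x, w1, w2)
print(y.shape)  # (8, 16, 16)
```

Because both gates lie in (0, 1), the module can only rescale features, never amplify them; in a detection backbone this lets the network emphasize channels and spatial locations that respond to the four-rod pattern while suppressing background clutter.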