In the absence of high-frequency visual information, low-resolution (LR) targets (e.g., objects, human body keypoints) are intrinsically difficult to detect in unconstrained images. This challenge is further exacerbated by the typical downsampling operations (e.g., pooling, striding) of existing deep networks (e.g., CNNs). To address this challenge, we introduce a generic High-Frequency Information Preservation (HFIP) block as a replacement for existing downsampling operations. It is composed of two key components: (1) a decoupled high-frequency learning component, which extracts high-frequency information along the vertical and horizontal directions separately, and (2) a dilated frequency-aware channel correlation component, which decomposes the input feature map into multiple smaller ones in a dilated manner, concatenates them along the channel dimension, and then correlates the combined channels in the frequency space. Our block can be readily integrated into existing detection architectures (e.g., YOLO, HRNet). Extensive experiments on low-resolution human pose estimation and object detection tasks show that HFIP consistently and significantly boosts the performance of state-of-the-art detection models, e.g., improving the object detection accuracy of YOLOv5s by an absolute margin of 3.30% mAP at an input resolution of 640 × 640 on the COCO benchmark.
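To make the two components concrete, the following is a minimal PyTorch sketch of how such a block could be structured, based only on the description above. The class name `HFIPBlock`, the 2 × 2 dilated decomposition factor, the separable 3 × 1 / 1 × 3 convolutions, and the rFFT-based channel mixing are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.fft


class HFIPBlock(nn.Module):
    """Hypothetical sketch of an HFIP-style downsampling block.

    (1) Decoupled high-frequency learning: separable 1-D convolutions that
        respond to vertical and horizontal high-frequency content.
    (2) Dilated frequency-aware channel correlation: split the feature map
        into 2x2 dilated sub-maps, concatenate them channel-wise, and mix
        the combined channels in the frequency domain.
    """

    def __init__(self, in_channels: int, out_channels: int):
        super().__init__()
        # (1) stride-2 separable convolutions perform the downsampling while
        # attending to vertical / horizontal high-frequency structure
        self.vertical_hf = nn.Conv2d(in_channels, out_channels // 2,
                                     kernel_size=(3, 1), stride=2, padding=(1, 0))
        self.horizontal_hf = nn.Conv2d(in_channels, out_channels // 2,
                                       kernel_size=(1, 3), stride=2, padding=(0, 1))
        # (2) 1x1 convolution that correlates the channels of the dilated
        # decomposition; it is applied to the real FFT of the feature map
        self.channel_mix = nn.Conv2d(4 * in_channels, out_channels, kernel_size=1)
        self.fuse = nn.Conv2d(2 * out_channels, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # ---- (1) decoupled vertical / horizontal high-frequency branch ----
        hf = torch.cat([self.vertical_hf(x), self.horizontal_hf(x)], dim=1)

        # ---- (2) dilated decomposition: sample every second pixel to form
        # four half-resolution sub-maps, then concatenate along channels ----
        sub = torch.cat([x[..., 0::2, 0::2], x[..., 0::2, 1::2],
                         x[..., 1::2, 0::2], x[..., 1::2, 1::2]], dim=1)

        # correlate the combined channels in frequency space: rFFT over the
        # spatial dims, channel mixing on real/imaginary parts, inverse rFFT
        freq = torch.fft.rfft2(sub, norm="ortho")
        real = self.channel_mix(freq.real)
        imag = self.channel_mix(freq.imag)
        corr = torch.fft.irfft2(torch.complex(real, imag),
                                s=sub.shape[-2:], norm="ortho")

        # fuse the two branches into the downsampled output
        return self.fuse(torch.cat([hf, corr], dim=1))


if __name__ == "__main__":
    block = HFIPBlock(in_channels=64, out_channels=128)
    x = torch.randn(1, 64, 80, 80)
    print(block(x).shape)  # torch.Size([1, 128, 40, 40])
```

In this sketch the block halves the spatial resolution, like the pooling or strided layers it would replace, while the dilated sub-maps retain every input pixel in the channel dimension rather than discarding them.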