With the accumulation of remote sensing images in satellite data centers, rapidly detecting objects of interest in large-scale remote sensing images has become both a research focus and a practical requirement. Although cutting-edge object detection algorithms for remote sensing images achieve high accuracy (mAP), their inference is slow and demands expensive hardware, which makes them unsuitable for real-time object detection in large-scale remote sensing images. To address this issue, we propose a fast inference framework for object detection in large-scale remote sensing images. On the one hand, we introduce the α-IoU loss into the YWCSL model to obtain adaptively weighted losses and gradients, achieving 64.62% and 79.54% mAP on the DIOR-R and DOTA test sets, respectively. More importantly, the inference speed of the YWCSL model reaches 60.74 FPS on a single NVIDIA GeForce RTX 3080 Ti, 2.87 times faster than the current state-of-the-art one-stage detector S²A-Net. On the other hand, we build a distributed inference framework for fast inference on large-scale remote sensing images. Specifically, we store the images on HDFS for distributed storage and deploy the pretrained YWCSL model on a Spark cluster. In addition, we use a custom partitioner, RankPartition, to repartition the data and further improve cluster performance. With 5 nodes, the cluster reaches a speedup of 9.54, which is 90.80% higher than the theoretical linear speedup of 5.00. Our distributed inference framework significantly reduces the dependence of object detection on expensive hardware resources, which is of great significance for the wide application of object detection in remote sensing images.
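For reference, the basic power form of the α-IoU loss generalizes the standard IoU loss by a power parameter α. The abstract does not state which α-IoU variant or which value of α the YWCSL model adopts, so the equation below shows only the canonical form rather than our exact configuration:

$$ \mathcal{L}_{\alpha\text{-IoU}} = 1 - \mathrm{IoU}^{\alpha}, \qquad \alpha > 0 $$

Larger values of α place more weight on high-IoU boxes, which is what yields the adaptive re-weighting of losses and gradients mentioned above; α = 3 is a commonly reported default in the α-IoU family.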
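The following is a minimal sketch of the distributed inference pipeline described above, assuming a PySpark deployment. The HDFS paths, the load_ywcsl helper and its detect call, and the rank_partition function are hypothetical stand-ins: the abstract does not specify the actual RankPartition criterion or the model-loading API, so this illustrates only the overall structure (images on HDFS, repartitioning with a custom partitioner, per-partition inference).

# Minimal PySpark sketch of the distributed inference pipeline (illustrative only).
# Assumptions not taken from the paper: load_ywcsl() is a placeholder for loading
# the pretrained detector, and rank_partition() stands in for RankPartition,
# whose exact ranking criterion is not given in this abstract.
from pyspark import SparkContext

sc = SparkContext(appName="ywcsl-distributed-inference")

NUM_PARTITIONS = 5  # one partition group per worker node in the 5-node setup

def load_ywcsl(weights_path):
    # Placeholder: a real pipeline would deserialize the pretrained YWCSL detector;
    # this stub only keeps the sketch self-contained and runnable.
    class _Stub:
        def detect(self, image_path):
            return []  # would return detected (oriented) bounding boxes
    return _Stub()

def rank_partition(key):
    # Placeholder for RankPartition: map a (rank) key to a partition id so that
    # work is spread across executors before inference runs.
    return key % NUM_PARTITIONS

def detect_partition(records):
    # Load the model once per partition, then run inference on every image path
    # assigned to this partition.
    model = load_ywcsl("hdfs:///models/ywcsl_pretrained.pt")  # hypothetical path
    for _, path in records:
        yield path, model.detect(path)

# Image paths are stored on HDFS; each path is keyed by a rank before repartitioning.
paths = sc.textFile("hdfs:///rs_images/paths.txt")            # hypothetical path
keyed = paths.zipWithIndex().map(lambda x: (x[1], x[0]))      # (rank, image_path)
results = (keyed.partitionBy(NUM_PARTITIONS, rank_partition)
                .mapPartitions(detect_partition))
results.saveAsTextFile("hdfs:///rs_images/detections")        # hypothetical path

The design point this sketch illustrates is that repartitioning with a custom partitioner controls how image paths are distributed across executors before mapPartitions runs the detector, so the pretrained model is loaded once per partition rather than once per image.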