Object detection from aerial and satellite remote sensing images has been an active research topic over the past decade. Thanks to the increase in computational resources and data availability, deep learning-based object detection methods have achieved numerous successes in computer vision, and more recently in remote sensing. However, the ability of current detectors to deal with (very) small objects remains limited. In particular, the fast detection of small objects in large observed scenes is still an open question. In this work, we address this challenge and introduce an enhanced one-stage deep learning-based detection model, called You Only Look Once (YOLO)-fine, which is based on the structure of YOLOv3. Our detector is designed to detect small objects with high accuracy and high speed, enabling real-time applications in operational contexts. We also investigate its robustness to the appearance of new backgrounds in the validation set, thus tackling the domain adaptation issue that is critical in remote sensing. Experimental studies conducted on both aerial and satellite benchmark datasets show significant improvements of YOLO-fine over other state-of-the-art object detectors.