Leveraging the excellent feature representation capabilities of neural networks, deep learning methods have been widely adopted for object detection in synthetic aperture radar (SAR) images. However, SAR ship detection remains challenging because of small ship sizes, high noise levels, multiple targets, and large scale variations. To address these complexities, this paper presents MSFA-YOLO, a novel multiscale SAR ship detection algorithm empowered by a fused attention mechanism. The proposed algorithm incorporates several key enhancements. First, the fused attention c2fSE module is integrated into the YOLOv8n baseline network to optimize feature extraction for SAR ships. Second, the DenseASPP module is incorporated to improve the model's adaptability to ships of varying scales, strengthening its capability to accommodate larger ships at lower model scales. Third, the Wise-IoU loss function with a dynamic non-monotonic focusing mechanism is adopted for the bounding box loss, significantly enhancing the model's ability to handle low-quality images. Extensive experiments on three benchmark datasets, namely the SAR-Ship-Dataset, SSDD, and HRSID, validate the robustness and reliability of the proposed model. Compared with YOLOv8n, MSFA-YOLO improves mAP75 by 3.1% and mAP50-95 by 2.1% on the SAR-Ship-Dataset, mAP75 by 0.7% and mAP50-95 by 0.5% on SSDD, and mAP75 by 1.8% and mAP50-95 by 0.7% on HRSID. Exhibiting accuracy superior to existing SAR ship detection models, MSFA-YOLO represents a significant advancement and establishes itself as the current state-of-the-art in SAR ship detection.
INDEX TERMS ship detection; SAR image; YOLO
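The abstract names a fused attention module, c2fSE, built on squeeze-and-excitation (SE) channel attention inside the YOLOv8 C2f block. As a rough illustration only (the exact c2fSE design is specified in the paper body), the following is a minimal PyTorch sketch of an SE block and one plausible way to attach it to a C2f-style block's output; the class names `SEBlock` and `C2fSE`, the reduction ratio, and the wiring are assumptions made for illustration.

```python
import torch
import torch.nn as nn


class SEBlock(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average per channel
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction, bias=False),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels, bias=False),
            nn.Sigmoid(),  # excitation: per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # reweight channels by their learned importance


class C2fSE(nn.Module):
    """Hypothetical fusion: SE attention applied to the output of an existing C2f block."""

    def __init__(self, c2f_block: nn.Module, out_channels: int):
        super().__init__()
        self.c2f = c2f_block
        self.se = SEBlock(out_channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.se(self.c2f(x))


if __name__ == "__main__":
    # Stand-in for a real YOLOv8 C2f block, used only to show the wiring.
    dummy_c2f = nn.Conv2d(64, 128, kernel_size=3, padding=1)
    block = C2fSE(dummy_c2f, out_channels=128)
    y = block(torch.randn(1, 64, 80, 80))
    print(y.shape)  # torch.Size([1, 128, 80, 80])
```

In this sketch the SE gates rescale feature channels after the C2f block, which is the general mechanism by which channel attention can emphasize small, low-contrast ship responses over sea-clutter channels; the actual placement and parameters used by MSFA-YOLO are those described in the paper.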