Because of the imaging distances and viewing angles involved in satellite remote sensing, ships occupy only a small pixel area in images, which leads to insufficient feature representation. This results in suboptimal ship detection performance, including missed and false detections. Moreover, the complex backgrounds of remote sensing ship images and the dense clustering of vessels further degrade detection accuracy. To address these issues, this paper proposes an optimized model named SSMA-YOLO, built on YOLOv8n. First, it introduces a newly designed SSC2f structure that incorporates spatial and channel reconstruction convolution (SCConv) and the spatial group-wise enhancement (SGE) attention mechanism. This design reduces spatial and channel redundancy within the network, improving detection accuracy while reducing the model's parameter count. Second, the newly designed MC2f structure employs the multidimensional collaborative attention (MCA) mechanism to efficiently model spatial and channel features, improving recognition in complex backgrounds. Additionally, an asymptotic feature pyramid network (AFPN) structure is adopted to progressively fuse multi-level features from the backbone, mitigating the challenges posed by multi-scale variation. Experiments on the ship dataset show that the proposed model achieves a 4.4% increase in mAP over the state-of-the-art single-stage object detection model YOLOv8n while reducing the number of parameters by 23%.
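To illustrate the SGE attention mechanism referenced above, the following is a minimal PyTorch sketch of a spatial group-wise enhancement block. It is an illustrative reconstruction based on the published SGE design, not the SSMA-YOLO implementation; the module name, group count, and parameter shapes are assumptions.

```python
# Minimal sketch of a spatial group-wise enhancement (SGE) attention block,
# following the published SGE design; hyperparameters (e.g. groups=8) are
# illustrative assumptions, not values from SSMA-YOLO.
import torch
import torch.nn as nn


class SpatialGroupEnhance(nn.Module):
    def __init__(self, groups: int = 8):
        super().__init__()
        self.groups = groups
        self.avg_pool = nn.AdaptiveAvgPool2d(1)
        # Learnable per-group scale and shift for the attention map.
        self.weight = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.bias = nn.Parameter(torch.zeros(1, groups, 1, 1))
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        # Split channels into groups and treat each group independently.
        x = x.view(b * self.groups, -1, h, w)
        # Similarity between each position and the group's global descriptor.
        sim = (x * self.avg_pool(x)).sum(dim=1, keepdim=True)
        # Normalize the similarity map within each group.
        t = sim.view(b * self.groups, -1)
        t = (t - t.mean(dim=1, keepdim=True)) / (t.std(dim=1, keepdim=True) + 1e-5)
        t = t.view(b, self.groups, h, w) * self.weight + self.bias
        # Re-weight each spatial position with a sigmoid-gated attention map.
        x = x * self.sigmoid(t.view(b * self.groups, 1, h, w))
        return x.view(b, c, h, w)


# Example usage: enhance a feature map from a backbone stage.
if __name__ == "__main__":
    feat = torch.randn(2, 64, 32, 32)  # (batch, channels, height, width)
    print(SpatialGroupEnhance(groups=8)(feat).shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch, attention is computed per channel group from the similarity between each position and the group-averaged feature, which emphasizes spatially coherent responses such as small targets against cluttered backgrounds.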