To address the narrow visibility available to ship navigators, the limited field of view of a single camera, and the complexity of maritime environments, this study proposes a panoramic visual perception technology for navigation assistance. A region-of-interest search method based on structural similarity (SSIM) and an elliptical weighted fusion method are introduced, resulting in the ship panoramic image stitching algorithm SSIM-EW. In addition, the YOLOv8s model is improved by increasing the size of the detection head, introducing GhostNet, and replacing the regression loss with the WIoU loss, yielding the YOLOv8-SGW perception model for maritime target detection. Experimental results show that SSIM-EW achieves the highest PSNR (25.736), effectively reducing stitching seams and significantly improving the quality of the panoramic images. Compared with the baseline model, YOLOv8-SGW improves precision (P), recall (R), and mAP50 by 1.5%, 4.3%, and 2.3%, respectively; its mAP50 also exceeds that of other target detection models, and its ability to detect small maritime targets is markedly improved. Deploying these algorithms in port tugboat operations broadens the navigators' field of view and enables the identification of targets missed by AIS and radar, thereby ensuring operational safety and advancing vessel intelligence.
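To make the stitching idea concrete, the sketch below shows one way an SSIM-guided overlap search followed by weighted fusion might be structured for two adjacent camera frames. It is a minimal illustration under stated assumptions: the function names, the sliding search over candidate overlap widths, and the simple horizontal linear blend are illustrative stand-ins, not the paper's SSIM-EW pipeline, and the elliptical weighting itself is not reproduced.

```python
# Minimal sketch (assumptions): same-height grayscale uint8 frames, a brute-force
# search over candidate overlap widths scored with SSIM, and a linear blend used
# as a placeholder for the paper's elliptical weighted fusion.
import numpy as np
from skimage.metrics import structural_similarity as ssim


def find_overlap_width(left: np.ndarray, right: np.ndarray,
                       min_w: int = 40, max_w: int = 400) -> int:
    """Return the candidate overlap width whose strips score highest under SSIM."""
    best_w, best_score = min_w, -1.0
    limit = min(max_w, left.shape[1], right.shape[1])
    for w in range(min_w, limit):
        a = left[:, -w:]   # right edge strip of the left frame
        b = right[:, :w]   # left edge strip of the right frame
        score = ssim(a, b)
        if score > best_score:
            best_w, best_score = w, score
    return best_w


def blend_panorama(left: np.ndarray, right: np.ndarray, w: int) -> np.ndarray:
    """Fuse the frames over an overlap of width w with a horizontal linear weight."""
    h = min(left.shape[0], right.shape[0])
    alpha = np.linspace(1.0, 0.0, w)[None, :]            # weight of the left frame
    overlap = alpha * left[:h, -w:] + (1.0 - alpha) * right[:h, :w]
    return np.hstack([left[:h, :-w], overlap, right[:h, w:]]).astype(left.dtype)
```

In practice the two functions would be chained, e.g. `pano = blend_panorama(left, right, find_overlap_width(left, right))`, with the linear weight replaced by the elliptical weighting described in the paper.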