Artificial intelligence (AI) will play an important role in realizing maritime autonomous surface ships (MASSs). However, as a double-edged sword, this new technology also brings forth new threats. The purpose of this study is to raise awareness among stakeholders regarding the potential security threats posed by AI in MASSs. To this end, we propose a hypothetical attack scenario in which a clean-label poisoning attack is executed against an object detection model, causing boats to be misclassified as ferries and thereby preventing the detection of pirates approaching a boat. We used the poison frog algorithm to generate poisoning instances and trained a YOLOv5 model on both clean and poisoned data. Despite its high overall accuracy, the model misclassified boats as ferries owing to the poisoned target instance. Although the experiment was conducted under limited conditions, it confirmed vulnerabilities in the object detection algorithm. Such misclassifications could lead to inaccurate AI decision-making and accidents. The hypothetical scenario proposed in this study highlights the vulnerability of object detection models to clean-label poisoning attacks and the need for mitigation strategies against security threats posed by AI in the maritime industry.
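To make the attack concrete, the sketch below shows the core of the poison frog optimization (Shafahi et al., 2018): it crafts a poison image that remains visually close to a base image (so its label still looks correct to a human annotator) while colliding with the target instance in feature space. This is a minimal illustration, not the exact pipeline used in the study; the feature extractor `feature_extractor` (e.g., a pooled backbone output), image tensors in [0, 1], and the hyperparameters `beta`, `lr`, and `steps` are all assumptions chosen for clarity.

```python
import torch

def poison_frog(feature_extractor, base_img, target_img,
                beta=0.25, lr=0.01, steps=1000):
    """Craft a clean-label poison image.

    Approximately solves
        argmin_x ||f(x) - f(target)||^2 + beta * ||x - base||^2
    with alternating forward (gradient) and backward (proximal) steps,
    following the poison frog scheme of Shafahi et al. (2018).
    `feature_extractor` is assumed to be a PyTorch module mapping an
    image tensor to a feature tensor; shapes and hyperparameters here
    are illustrative.
    """
    feature_extractor.eval()
    with torch.no_grad():
        target_feat = feature_extractor(target_img)

    x = base_img.clone().detach()
    for _ in range(steps):
        # Forward step: move x toward the target in feature space.
        x.requires_grad_(True)
        feat_loss = torch.sum((feature_extractor(x) - target_feat) ** 2)
        grad, = torch.autograd.grad(feat_loss, x)
        with torch.no_grad():
            x_hat = x - lr * grad
            # Backward (proximal) step: pull x back toward the base
            # image so the poison keeps its plausible, "clean" label.
            x = (x_hat + lr * beta * base_img) / (1 + lr * beta)
            x = x.clamp(0.0, 1.0)  # keep a valid image
        x = x.detach()
    return x
```

Poison images produced this way are then labeled with the base class and mixed into the training set; after retraining, the model embeds the target instance near the base class in feature space, which is how a boat can end up classified as a ferry without any visibly mislabeled training example.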