“…In the first part, both the PP-PicoDet and NanoDet models are anchor-free, while the YOLOv5 model used the K-means method to obtain anchors such as [[23,24, 27,28, 26,34], [32,33, 31,41, 37,38], [38,48, 54,58, 66,69]]. The images were preprocessed before model training by resizing them to the input size required by each model (640×640 for YOLOv5 and 416×416 for both PP-PicoDet and NanoDet) and normalizing the pixel values to the range [0, 1].…”
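A minimal sketch of the preprocessing step described above, assuming OpenCV and NumPy are used; the function name `preprocess_image` and the `INPUT_SIZES` table are illustrative, not part of the original pipeline:

```python
import cv2
import numpy as np

# Input sizes taken from the text above (width, height)
INPUT_SIZES = {"yolov5": (640, 640), "pp_picodet": (416, 416), "nanodet": (416, 416)}

def preprocess_image(path: str, model: str) -> np.ndarray:
    """Resize an image to the model's required input size and scale pixels to [0, 1]."""
    img = cv2.imread(path)                    # BGR image, uint8, shape (H, W, 3)
    if img is None:
        raise FileNotFoundError(path)
    w, h = INPUT_SIZES[model]
    img = cv2.resize(img, (w, h))             # resize to the size required by the model
    img = img.astype(np.float32) / 255.0      # normalize pixel values to [0, 1]
    return img

# Example usage: prepare one image as YOLOv5 training input
# x = preprocess_image("sample.jpg", "yolov5")   # x.shape == (640, 640, 3)
```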