Identifying individual birds in harsh environments, such as poultry facilities, presents significant challenges. The complexity of these settings demands robust, adaptive algorithms for accurately detecting and tracking birds over time, ensuring reliable data analysis. This study aims to improve methods for automated chicken identification in video, addressing the dynamic and non-standardized nature of poultry-farming environments. The YOLOv8n model was chosen for chicken detection because of its high portability. The developed algorithm promptly identifies and labels chickens as they appear in the frame; the process is illustrated in two parallel flowcharts that emphasize different aspects of image processing and behavioral analysis. Extraneous regions, such as the chickens' heads and tails, are excluded so that body area can be calculated more accurately, yielding a precise measure of each chicken's size and shape; the YOLO model achieved an accuracy above 0.98 and a loss below 0.1. Three scenarios were tested with the modified deep-learning algorithm: (1) a reappearing chicken after temporary invisibility; (2) multiple missing chickens under object occlusion; and (3) multiple missing chickens that coalesce into a single region. In all scenarios, the modified algorithm improved the accuracy of maintaining chicken identities, enabling the simultaneous tracking of several chickens with error rates of 0, 0.007, and 0.017, respectively. Morphological identification, based on features extracted from each chicken, proved an effective strategy for enhancing tracking accuracy.
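The re-identification step described above can be illustrated with a minimal sketch: when a previously lost chicken reappears, its freshly measured body area (with head and tail regions already excluded) is compared against the stored areas of lost tracks. All names here (`Track`, `match_reappearance`, `area_tol`) are hypothetical and not taken from the paper; this is only one plausible way to realize morphology-based ID recovery, not the authors' implementation.

```python
# Hypothetical sketch of morphology-based re-identification.
# Assumption: each track stores a body area (head/tail excluded) in px^2.
from dataclasses import dataclass

@dataclass
class Track:
    chicken_id: int
    body_area: float   # last measured body area, head/tail excluded
    visible: bool      # False while the chicken is occluded or missing

def match_reappearance(tracks, detected_area, area_tol=0.05):
    """Return the ID of the lost track whose stored body area is closest
    to the new detection, within a relative tolerance; else None."""
    best_id, best_diff = None, area_tol
    for t in tracks:
        if t.visible:
            continue  # only currently lost tracks are candidates
        diff = abs(t.body_area - detected_area) / t.body_area
        if diff < best_diff:
            best_id, best_diff = t.chicken_id, diff
    return best_id

# Example: a new detection of ~1000 px^2 is matched to lost track 2 (980 px^2).
tracks = [Track(1, 1200.0, True), Track(2, 980.0, False), Track(3, 1500.0, False)]
recovered_id = match_reappearance(tracks, detected_area=1000.0)
```

In a full pipeline, the detected area would come from a YOLOv8n bounding box or segmentation mask per frame; additional morphological features (shape descriptors, aspect ratio) could be folded into the same nearest-match logic.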