UAV-based traffic monitoring offers great practical convenience, but classical object detection algorithms struggle to balance high speed and accuracy when processing UAV images on edge devices. To address this problem, this paper introduces an efficient and slim YOLO variant with low computational overhead, named LES-YOLO. To enrich the feature representation of small and medium objects in UAV images, the backbone is redesigned, and a C2f module combined with Coordinate Attention (CA) is used to focus on key features. To enrich cross-scale information and reduce feature loss during network transmission, a novel structure called EMS-PAN (Enhanced Multi-Scale PAN) is designed. In addition, to alleviate the problem of class imbalance, Focal-EIoU replaces CIoU in the loss calculation. To minimize redundancy and keep the architecture slim, the P5 layer is removed from the model, and verification experiments show that LES-YOLO without P5 is more efficient and more compact. LES-YOLO is trained and tested on the VisDrone2019 dataset. Compared with YOLOv8n-p2, mAP@0.5 and recall increase by 7.4% and 7%, respectively, and the number of parameters is reduced by over 50%, from 2.9 M to 1.4 M; FLOPs rise to 18.8 GFLOPs, but the overall computational overhead remains small. Moreover, compared with YOLOv8s-p2, both the parameter count and FLOPs are significantly reduced while performance is similar. In terms of real-time capability, LES-YOLO reaches 138 FPS on a GPU and up to 78 FPS on UAV edge devices.
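To make the loss substitution concrete, below is a minimal sketch of the Focal-EIoU computation for a single box pair, following the published Focal-EIoU formulation (IoU^γ weighting of the EIoU loss). The corner-coordinate box format, the γ value, and the function name are illustrative assumptions, not details taken from this paper.

```python
def focal_eiou_loss(box_a, box_b, gamma=0.5):
    """Focal-EIoU loss for two boxes in (x1, y1, x2, y2) format.

    L_EIoU = 1 - IoU + rho^2/c^2 + (w_a - w_b)^2/c_w^2 + (h_a - h_b)^2/c_h^2
    L_Focal-EIoU = IoU^gamma * L_EIoU   (gamma is a tunable focusing parameter)
    """
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection-over-Union.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    iou = inter / (area_a + area_b - inter)

    # Smallest enclosing box: diagonal c^2, width c_w, height c_h.
    cw = max(ax2, bx2) - min(ax1, bx1)
    ch = max(ay2, by2) - min(ay1, by1)
    c2 = cw ** 2 + ch ** 2

    # Squared distance between box centers.
    rho2 = (((ax1 + ax2) - (bx1 + bx2)) / 2) ** 2 + \
           (((ay1 + ay2) - (by1 + by2)) / 2) ** 2

    # Width/height discrepancy terms (the EIoU additions over DIoU).
    wa, ha = ax2 - ax1, ay2 - ay1
    wb, hb = bx2 - bx1, by2 - by1
    l_eiou = (1 - iou) + rho2 / c2 + (wa - wb) ** 2 / cw ** 2 \
             + (ha - hb) ** 2 / ch ** 2

    # Focal reweighting: down-weights low-IoU (low-quality) pairs.
    return iou ** gamma * l_eiou
```

For identical boxes every penalty term vanishes and the loss is 0; for partially overlapping boxes the center-distance term keeps the gradient informative even at equal aspect ratio, which is the motivation for preferring EIoU over CIoU here.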