Detecting multiple tiny objects from diverse perspectives in unmanned aerial vehicle (UAV) imagery using onboard edge devices is a significant and complex challenge in computer vision. To address this challenge, we propose LE-YOLO, a lightweight and efficient tiny-object-detection algorithm based on the YOLOv8n architecture. To improve detection performance and optimize model efficiency, we present the LHGNet backbone, a more extensive feature extraction network that integrates depth-wise separable convolution and channel shuffle modules. This integration enables a thorough exploration of the inherent features at deeper layers of the network, promoting the fusion of local detail information and channel characteristics. Furthermore, we introduce the LGS bottleneck and the LGSCSP fusion module into the neck, aiming to decrease the computational complexity while preserving the detector's accuracy. Additionally, we enhance the detection accuracy by modifying the detector's structure and the sizes of its feature maps. These improvements significantly strengthen the model's capability to capture tiny objects. The proposed LE-YOLO detector is evaluated in ablation and comparative experiments on the VisDrone2019 dataset. Compared with YOLOv8n, the proposed LE-YOLO model achieves a 30.0% reduction in parameter count together with a 15.9% increase in mAP(0.5). These comprehensive experiments indicate that our approach can significantly enhance detection accuracy and optimize model efficiency through the organic combination of the proposed enhancements.
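The paper's LHGNet, LGS, and LGSCSP modules are not reproduced here; as a minimal illustrative sketch (not the authors' implementation), the two generic building blocks named above, depth-wise separable convolution and channel shuffle, can be written in plain NumPy as follows. All function and parameter names are hypothetical.

```python
import numpy as np

def channel_shuffle(x, groups):
    """Channel shuffle (as popularized by ShuffleNet): reshape the channel
    axis into (groups, channels_per_group), transpose, and flatten, so that
    information can mix across the groups of a grouped convolution.
    x: feature map of shape (N, C, H, W); C must be divisible by groups."""
    n, c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    x = x.reshape(n, groups, c // groups, h, w)
    x = x.transpose(0, 2, 1, 3, 4)  # swap the group and per-group axes
    return x.reshape(n, c, h, w)

def depthwise_separable_conv(x, dw_kernels, pw_weights):
    """Depth-wise separable convolution: a per-channel k x k spatial filter
    (depth-wise step) followed by a 1x1 point-wise channel mixer, which cuts
    parameters roughly by a factor of k*k versus a standard convolution.
    x: (N, C, H, W); dw_kernels: (C, k, k); pw_weights: (C_out, C)."""
    n, c, h, w = x.shape
    k = dw_kernels.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (0, 0), (pad, pad), (pad, pad)))
    dw = np.zeros_like(x)
    for ch in range(c):  # depth-wise: one k x k filter per input channel
        for i in range(h):
            for j in range(w):
                dw[:, ch, i, j] = np.sum(
                    xp[:, ch, i:i + k, j:j + k] * dw_kernels[ch], axis=(1, 2))
    # point-wise 1x1 convolution mixes channels across the depth-wise output
    return np.einsum('oc,nchw->nohw', pw_weights, dw)
```

In practice these operators would be implemented with a deep learning framework's grouped-convolution primitives; the loops above only make the arithmetic explicit.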