The growing use of drones in precision agriculture highlights the need for greater operational efficiency, especially in detection tasks and, in particular, segmentation. Although deep-learning-based computer vision has made remarkable progress over the past decade, the segmentation of images captured by Unmanned Aerial Vehicle (UAV) cameras, a fine-grained detection task, still faces a trade-off between high precision and low inference latency. To address this dilemma, we propose IA-YOLO (Inverted Attention You Only Look Once), an efficient model built on the IA-Block (Inverted Attention Block), with the aim of providing practical strategies for real-time detection with UAV cameras. The contributions of this paper are outlined as follows: (1) We construct a component named IA-Block and integrate it into the YOLOv8-seg structure to form IA-YOLO. It specializes in pixel-level classification of UAV camera images, facilitating the creation of precise maps to guide agricultural strategies. (2) In experiments on the Vatica dataset, IA-YOLO achieves at least a 3.3% higher mAP (mean Average Precision) than other lightweight segmentation models, and further validation on datasets of other species confirms its robust generalization. (3) Without resorting to complex attention mechanisms or ever-deeper networks, a stem that incorporates efficient feature extraction components, the IA-Block, still possesses credible modeling capability.
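The abstract does not spell out the internals of the IA-Block, so the following is a purely illustrative sketch rather than the authors' design: it assumes PyTorch and shows one lightweight way an "inverted" channel-gating block could be wired into a convolutional stem. The class name InvertedAttentionBlock, the reduction parameter, and the gating scheme are all assumptions introduced here for illustration.

```python
# Hypothetical sketch of a lightweight, attention-style block (NOT the paper's IA-Block):
# depthwise spatial mixing followed by an inverted channel gate, with a residual path.
import torch
import torch.nn as nn


class InvertedAttentionBlock(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Depthwise 3x3 convolution keeps per-channel spatial mixing cheap.
        self.dw = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # Squeeze-and-excitation-style gate producing per-channel weights in (0, 1).
        self.gate = nn.Sequential(
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.dw(x)
        w = self.gate(self.pool(y))   # per-channel attention weights
        return x + y * (1.0 - w)      # "inverted" gating plus residual connection


if __name__ == "__main__":
    block = InvertedAttentionBlock(64)
    out = block(torch.randn(1, 64, 80, 80))
    print(out.shape)  # torch.Size([1, 64, 80, 80]); shape-preserving, so it can drop into a stem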