The development of smart agriculture holds great significance for ensuring both the supply security and the cybersecurity of agricultural production. With the advancement of intelligent technologies, unmanned robots collaborating with the Internet of Things (IoT) play increasingly crucial roles in smart agriculture and have become effective means of ensuring agricultural safety and supply security. However, the pursuit of unmanned agronomic applications faces an urgent challenge: these intelligent systems generally show low target-detection accuracy when relying on visual perception, owing to fine-grained variations and diverse postures of crops. To address this issue, we propose a novel multi-target detection approach that incorporates graph representation learning and multi-cross attention techniques. The proposed model first uses a lightweight backbone network to accurately identify the characteristics and conditions of crops. A higher-order graph feature extractor is then designed to comprehensively capture fine-grained features and latent graph relationships among dense crops, improving the perception capabilities of agricultural robots and allowing them to adapt to complex environments. Additionally, bi-level routing attention is combined with ghost attention and rotated annotations to handle the continuous posture changes and mutual occlusion that arise during crop growth. Extensive experiments demonstrate that the proposed approach outperforms various advanced crop-detection methods, achieving identification accuracies of up to 89.6% mAP and 94.7% AP50. Ablation studies further confirm the model's stability: its parameter size is only 628 MB, while it maintains a high processing speed of 89 frames per second. These results provide strong support for applying the technique in smart agricultural production and in safeguarding its supply and cyber security.
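For illustration only, the sketch below shows one plausible way the components named above (a ghost-style lightweight backbone, a higher-order graph feature extractor, a sparse routing attention, and a rotated-box prediction head) might be composed in PyTorch. All class names, hyperparameters, and the simplification of bi-level routing to token-level top-k attention are our own assumptions for exposition, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class GhostConv(nn.Module):
    """Ghost-style convolution (after GhostNet): half the output
    channels come from a cheap depthwise op, reducing parameters."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        c_half = c_out // 2
        self.primary = nn.Conv2d(c_in, c_half, k, padding=k // 2)
        self.cheap = nn.Conv2d(c_half, c_out - c_half, k,
                               padding=k // 2, groups=c_half)
    def forward(self, x):
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)

class GraphFeatureExtractor(nn.Module):
    """Higher-order graph aggregation over spatial tokens: build a
    k-NN affinity in feature space, propagate messages for several
    hops ('higher-order'), then apply a residual projection."""
    def __init__(self, dim, k=8, hops=2):
        super().__init__()
        self.k, self.hops = k, hops
        self.proj = nn.Linear(dim, dim)
    def forward(self, x):                          # x: (B, N, C)
        sim = x @ x.transpose(1, 2)                # (B, N, N) affinity
        topk = sim.topk(self.k, dim=-1).indices
        adj = torch.zeros_like(sim).scatter_(-1, topk, 1.0)
        adj = adj / adj.sum(-1, keepdim=True)      # row-normalize
        h = x
        for _ in range(self.hops):                 # multi-hop passing
            h = adj @ h
        return x + self.proj(h)                    # residual update

class SparseRoutingAttention(nn.Module):
    """Simplified stand-in for bi-level routing attention: each query
    attends only to its top-k keys by dot-product score, mimicking
    the sparse routing step at token (rather than region) level."""
    def __init__(self, dim, heads=4, k=16):
        super().__init__()
        self.heads, self.k = heads, k
        self.qkv = nn.Linear(dim, dim * 3)
        self.out = nn.Linear(dim, dim)
    def forward(self, x):                          # x: (B, N, C)
        B, N, C = x.shape
        d = C // self.heads
        q, kk, v = self.qkv(x).chunk(3, dim=-1)
        split = lambda t: t.view(B, N, self.heads, d).transpose(1, 2)
        q, kk, v = split(q), split(kk), split(v)   # (B, H, N, d)
        scores = (q @ kk.transpose(-2, -1)) / d ** 0.5
        top = min(self.k, N)
        topv, topi = scores.topk(top, dim=-1)
        mask = torch.full_like(scores, float('-inf'))
        mask.scatter_(-1, topi, topv)              # keep top-k only
        y = mask.softmax(dim=-1) @ v               # (B, H, N, d)
        return self.out(y.transpose(1, 2).reshape(B, N, C))

class CropDetector(nn.Module):
    """End-to-end skeleton: ghost backbone -> graph extractor ->
    sparse routing attention -> per-token head predicting class
    logits plus a rotated box (cx, cy, w, h, angle)."""
    def __init__(self, num_classes=4, dim=64):
        super().__init__()
        self.backbone = nn.Sequential(
            GhostConv(3, dim), nn.ReLU(), nn.MaxPool2d(2),
            GhostConv(dim, dim), nn.ReLU(), nn.MaxPool2d(2))
        self.graph = GraphFeatureExtractor(dim)
        self.attn = SparseRoutingAttention(dim)
        self.head = nn.Linear(dim, num_classes + 5)
    def forward(self, img):                        # img: (B, 3, H, W)
        f = self.backbone(img)                     # (B, C, H/4, W/4)
        tokens = f.flatten(2).transpose(1, 2)      # (B, N, C)
        tokens = self.attn(self.graph(tokens))
        return self.head(tokens)                   # (B, N, cls + 5)

if __name__ == "__main__":
    out = CropDetector()(torch.randn(2, 3, 64, 64))
    print(out.shape)   # torch.Size([2, 256, 9])
```

A real detector would additionally need target assignment, rotated-box losses, and rotated non-maximum suppression; the sketch only fixes the data flow between the stages the abstract describes.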