Deep neural networks (DNNs) have become essential for aerial detection. However, DNNs are vulnerable to adversarial examples, which raises serious concerns for security-critical systems. To physically evaluate the vulnerability of DNN-based aerial detection methods, researchers have recently devised adversarial patches. Nonetheless, adversarial patches generated by existing algorithms are not strong enough and are extremely time-consuming to produce. Moreover, complicated physical factors are not well accommodated during optimization. In this paper, a novel adaptive-patch-based physical attack (AP-PA) framework is proposed to alleviate these problems, achieving state-of-the-art performance in both accuracy and efficiency. Specifically, AP-PA generates adversarial patches that are adaptive to both physical dynamics and varying scales, and that can hide particular targets from detection. Furthermore, the adversarial patch remains effective against all targets of the same class even when placed outside the target (there is no need to smear the targeted objects) and is robust in the physical world. In addition, a new loss is devised that exploits more of the information available from detected objects to optimize the adversarial patch, which significantly improves the patch's attack efficacy (average precision drops of up to 87.86% and 85.48% in white-box and black-box settings, respectively) and optimization efficiency. We also establish one of the first comprehensive, coherent, and rigorous benchmarks for evaluating the attack efficacy of adversarial patches on aerial detection tasks. Finally, several proportionally scaled experiments are performed in the physical world to demonstrate that the elaborated adversarial patches can successfully deceive aerial detection algorithms in dynamic physical circumstances. The code is available at https://github.com/JiaweiLian/AP-PA.
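To make the overall idea concrete, the sketch below shows a generic patch-optimization loop in PyTorch: a patch is repeatedly transformed with random scale and brightness changes (a crude stand-in for the physical dynamics and varying scales discussed above), composed onto training images, and updated by gradient descent to suppress the detector's confidence. This is only a minimal illustration under assumptions, not the AP-PA implementation or its loss; in particular, the `detector` callable (returning a scalar detection-confidence per image), the transformation ranges, and the fixed paste location are hypothetical placeholders. The authors' actual code is available at the repository linked above.

```python
import torch
import torch.nn.functional as F


def random_physical_transform(patch):
    # Random scale and brightness jitter to roughly mimic varying imaging
    # distance and lighting (assumed ranges, for illustration only).
    scale = torch.empty(1).uniform_(0.5, 1.5).item()
    size = max(1, int(patch.shape[-1] * scale))
    scaled = F.interpolate(patch.unsqueeze(0), size=(size, size),
                           mode="bilinear", align_corners=False).squeeze(0)
    brightness = torch.empty(1, device=patch.device).uniform_(0.8, 1.2)
    return (scaled * brightness).clamp(0.0, 1.0)


def paste_patch(image, patch, top, left):
    # Differentiable composition of the patch onto a CHW image via padding
    # and a binary mask (avoids non-differentiable in-place assignment).
    _, H, W = image.shape
    h, w = patch.shape[-2:]
    pad = (left, W - left - w, top, H - top - h)
    padded = F.pad(patch, pad)
    mask = F.pad(torch.ones_like(patch), pad)
    return image * (1 - mask) + padded


def optimize_patch(detector, images, steps=200, patch_size=64, lr=0.01):
    # Optimize a square patch that minimizes the detector's confidence on
    # patched images. `detector(batch)` is a hypothetical interface assumed
    # to return the highest detection confidence as a scalar tensor.
    patch = torch.rand(3, patch_size, patch_size, requires_grad=True)
    optimizer = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        losses = []
        for image in images:
            t_patch = random_physical_transform(patch)
            patched = paste_patch(image, t_patch, top=10, left=10)
            losses.append(detector(patched.unsqueeze(0)))
        loss = torch.stack(losses).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the patch a valid image
    return patch.detach()
```

Averaging the loss over random transformations in this way follows the familiar expectation-over-transformation idea for physically robust attacks; AP-PA's adaptive patch generation and its detection-aware loss differ from this generic baseline.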