In the past decade, deep learning has largely replaced traditional hand-crafted feature engineering with powerful learned representations, driving substantial improvements across conventional computer vision tasks. However, deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples crafted by adding small perturbations that are imperceptible to human observers yet cause DNNs to misbehave. Existing adversarial attacks can be divided into digital and physical adversarial attacks. The former pursue strong attack performance in laboratory settings but rarely remain effective when applied in the physical world. In contrast, the latter focus on physically deployable attacks, which are more robust under complex physical conditions (e.g., changes in brightness, occlusion). With the increasing deployment of DNN-based systems in the real world, enhancing the robustness of these systems has become urgent, and a thorough exploration of physical adversarial attacks is a precondition for doing so. To this end, this paper reviews the development of physical adversarial attacks against DNN-based computer vision tasks (i.e., image recognition and object detection), which can provide useful guidance for developing stronger physical adversarial attacks. For completeness, we also briefly introduce works that are not physical attacks themselves but are closely related to them. Specifically, we first propose a taxonomy to summarize current physical adversarial attacks. We then briefly discuss existing physical attacks, focusing on techniques for improving their robustness under complex physical conditions. Finally, we discuss the open issues of current physical adversarial attacks and suggest promising research directions.