Deep learning-based vulnerability detection models have received widespread attention; however, these models are susceptible to adversarial attacks, and generating adversarial examples is a primary research direction for improving model robustness. Existing adversarial example generation methods for source code tasks fall into three main categories: renaming identifiers, inserting dead code, and changing code structure. However, these methods cannot be directly applied to vulnerability detection. Therefore, we present the first study of adversarial attacks on vulnerability detection models. Specifically, we apply equivalent transformations to generate candidate statements and introduce an improved Monte Carlo tree search algorithm to guide the selection of candidate statements when generating adversarial examples. In addition, our approach is black-box and can therefore be applied to a wide range of vulnerability detection models. Experimental results show that our approach achieves attack success rates of 16.48%, 27.92%, and 65.20% on three vulnerability detection models of different granularities. Compared with ALERT, the state-of-the-art source code attack method, our method can handle models that use identifier name mapping, and its attack success rate is 27.59% higher on average.
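The abstract describes the search procedure only at a high level. As an illustration, the Python sketch below shows one plausible shape of an MCTS-guided black-box attack of this kind: a search tree over program variants produced by semantics-preserving edits, with UCB1 selection, expansion, random rollout, and backpropagation. `query_model`, the `TRANSFORMS` list, and all other names here are hypothetical stand-ins for exposition, not the paper's implementation, and the "improved" aspects of the paper's MCTS variant are not reproduced.

```python
import math
import random


def query_model(code: str) -> float:
    """Hypothetical black-box query: probability that the target detector
    labels `code` as vulnerable. A toy surrogate so the sketch runs."""
    return max(0.0, 0.9 - 0.02 * code.count(";"))


# Hypothetical stand-ins for equivalent (semantics-preserving) transformations;
# the paper instead derives candidate statements from the input program.
TRANSFORMS = [
    lambda c: c + "\nint unused_var;",    # insert an unused declaration
    lambda c: c + "\nif (0) { }",         # insert an unreachable branch
    lambda c: c + "\nint t = 0; t = t;",  # insert a self-assignment
]


class Node:
    """One search-tree node: a transformed program variant."""

    def __init__(self, code, parent=None):
        self.code, self.parent = code, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb1(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        exploit = self.value / self.visits
        explore = c * math.sqrt(math.log(self.parent.visits) / self.visits)
        return exploit + explore


def mcts_attack(code, target=0.5, iters=200, rollout_depth=4):
    """Search for a variant that drives the detector's score below `target`."""
    root = Node(code)
    for _ in range(iters):
        # Selection: descend by UCB1 while the node is fully expanded.
        node = root
        while node.children and len(node.children) == len(TRANSFORMS):
            node = max(node.children, key=Node.ucb1)
        # Expansion: apply one untried transformation.
        if len(node.children) < len(TRANSFORMS):
            t = TRANSFORMS[len(node.children)]
            child = Node(t(node.code), parent=node)
            node.children.append(child)
            node = child
        if query_model(node.code) < target:
            return node.code  # adversarial example found
        # Simulation: random rollout of further transformations.
        rollout = node.code
        for _ in range(rollout_depth):
            rollout = random.choice(TRANSFORMS)(rollout)
        reward = 1.0 - query_model(rollout)  # lower detector score = higher reward
        # Backpropagation: credit the rollout reward along the path to the root.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    return None  # attack failed within the query budget


if __name__ == "__main__":
    adversarial = mcts_attack("int f(int a) { return a; }")
    print(adversarial if adversarial else "no adversarial example found")
```

In this framing, UCB1 balances re-applying transformation sequences that have already lowered the detector's score against exploring untried ones, which is the exploration-exploitation trade-off that motivates using MCTS to guide candidate selection.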