Recent studies have shown that code models are susceptible to backdoor attacks. Once injected with a backdoor, a victim code model functions normally on benign samples but produces predetermined malicious outputs when the trigger is activated. However, previous backdoor attacks on code models rely on explicit triggers; in this study, we investigate the vulnerability of code models to stealthy backdoor attacks. To this end, we propose a backdoor attack approach based on Abstract Syntax Tree-based Triggers (ASTT) to achieve stealthiness: triggers are generated from abstract syntax trees using a clustering algorithm. We evaluate ASTT on deep learning-based code models across three downstream tasks (i.e., code translation, code repair, and defect detection). We find that the average attack success rate of ASTT reaches 92.71%. Moreover, ASTT is stealthy and can effectively bypass state-of-the-art defense approaches. Finally, we verify that the time overhead of ASTT is small and meets the requirements of real-world scenarios. Our findings demonstrate the security weaknesses of code models under stealthy backdoor attacks.
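The abstract does not specify the trigger-generation pipeline; as a minimal illustrative sketch (not the authors' implementation), one might cluster candidate code snippets by their AST node-type profiles and pick one representative per cluster as a trigger template. The helper names `ast_profile` and `select_triggers` are hypothetical; `KMeans` and `DictVectorizer` are standard scikit-learn components assumed to be available.

```python
# Hypothetical sketch: cluster AST node-type frequency vectors of candidate
# snippets and keep one representative per cluster as a trigger template.
# This is an assumption for illustration; the paper's pipeline may differ.
import ast
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction import DictVectorizer


def ast_profile(code: str) -> Counter:
    """Count AST node types in a snippet (a simple structural feature)."""
    tree = ast.parse(code)
    return Counter(type(node).__name__ for node in ast.walk(tree))


def select_triggers(candidates: list[str], n_clusters: int = 2) -> list[str]:
    """Cluster candidates by AST structure; return one snippet per cluster."""
    vec = DictVectorizer(sparse=False)
    features = vec.fit_transform(ast_profile(c) for c in candidates)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(features)
    # Keep the first candidate encountered for each cluster label.
    chosen: dict[int, str] = {}
    for snippet, label in zip(candidates, labels):
        chosen.setdefault(label, snippet)
    return list(chosen.values())


if __name__ == "__main__":
    # Dead-code candidates that leave program behavior unchanged.
    pool = [
        "if False:\n    pass",
        "while 0 == 1:\n    break",
        "x = 1 + 0",
        "assert True",
    ]
    print(select_triggers(pool, n_clusters=2))
```

Clustering on structural features rather than surface tokens is one plausible way to obtain triggers that blend into ordinary code, which is consistent with the stealthiness goal stated above.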