Adversarial transferability is an intriguing phenomenon in which adversarial examples crafted for one model can also deceive other models. By exploiting this property, various transfer-based methods have been proposed to conduct adversarial attacks without any knowledge of the target models, posing significant threats to practical black-box applications. However, these methods either have limited transferability or require high resource consumption. To bridge this gap, we investigate adversarial transferability from the optimization perspective and propose the ghost sample attack (GSA). GSA improves adversarial transferability by alleviating the overfitting of adversarial examples to the surrogate model. Based on the insight that slightly shifting an adversarial example has an effect similar to slightly perturbing the decision boundary, we aggregate the gradients of perturbed copies of the adversarial example (named ghost samples) to efficiently approximate the effect of computing gradients over multiple ensembled surrogate models. Extensive experiments demonstrate that GSA achieves state-of-the-art adversarial transferability under restricted resource budgets. On average, GSA improves the attack success rate by 4.8% on normally trained models compared to state-of-the-art attacks, while reducing the computational cost by 62% compared with TAIG-R. When combined with other methods, GSA further improves transferability, reaching attack success rates of 96.9% on normally trained models and 82.7% on robust models.
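
The core idea of aggregating gradients over perturbed copies of the current adversarial example can be illustrated with a minimal sketch. The snippet below is not the authors' reference implementation: it assumes a PyTorch classifier `model`, an untargeted cross-entropy objective, and hypothetical hyperparameters (`n_ghosts`, `ghost_radius`, `eps`, `alpha`, `n_iters`) chosen only for illustration.

```python
import torch
import torch.nn.functional as F

def ghost_sample_attack(model, x, y, eps=16/255, alpha=2/255,
                        n_iters=10, n_ghosts=8, ghost_radius=8/255):
    """Iterative attack sketch: average gradients over random 'ghost' copies
    of the current adversarial example instead of using a single gradient."""
    x_adv = x.clone().detach()
    for _ in range(n_iters):
        grad_sum = torch.zeros_like(x_adv)
        for _ in range(n_ghosts):
            # Ghost sample: a slightly shifted copy of the current adversary,
            # mimicking a small change of the surrogate's decision boundary.
            noise = torch.empty_like(x_adv).uniform_(-ghost_radius, ghost_radius)
            ghost = (x_adv + noise).clamp(0, 1).requires_grad_(True)
            loss = F.cross_entropy(model(ghost), y)
            grad_sum += torch.autograd.grad(loss, ghost)[0]
        # Sign step on the aggregated gradient, then project back into the eps-ball.
        x_adv = x_adv + alpha * grad_sum.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1).detach()
    return x_adv
```

Averaging the ghost gradients smooths the update direction, which is how the sketch captures the stated goal of reducing overfitting to a single surrogate without actually training or querying an ensemble of models.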