To enhance adversarial robustness, adversarial training (AT) trains deep neural networks on adversarial variants generated from natural data. However, as training progresses, the training data become less and less attackable, undermining the robustness enhancement. A straightforward remedy is to incorporate more training data, but this sometimes incurs an unaffordable cost. In this paper, to mitigate this issue, we propose the guided interpolation framework (GIF): in each epoch, the GIF employs the previous epoch's meta information to guide the interpolation of the data. Compared with vanilla mixup, the GIF provides a higher ratio of attackable data, which benefits the robustness enhancement; meanwhile, it mitigates the model's linear behavior between classes, a behavior favorable to standard training for generalization but not to adversarial training for robustness. As a result, the GIF encourages the model to predict invariantly within the cluster of each class. Experiments demonstrate that the GIF indeed enhances adversarial robustness across various adversarial training methods and various datasets.

Recent studies on AT suggest the unequal treatment of data (Ding et al., 2020; Wang et al., 2019; Zhang et al., 2021a). In particular, Zhang et al. (2021a) divided the training data into two categories, attackable data and guarded data: attackable data lie close to the class boundary and can be attacked, whereas guarded data lie far from it and cannot. To enhance adversarial robustness, attackable data are particularly useful in learning the decision boundary (Zhang et al., 2021a).

However, as the AT progresses, the ratio of attackable data decreases significantly, which jeopardizes the enhancement of adversarial robustness. In Figure 1, we plot the ratio of attackable data (left panel) and the robust accuracy (right panel). The red lines show the training dynamics of a typical AT method. As the training progresses, more and more training data become guarded; thus, the ratio of attackable data decreases. After Epoch 30 (when the learning rate is reduced), this ratio drops rapidly, while the robust accuracy ceases to rise and begins to drop. This strong correlation between the ratio of attackable data and the robustness urges us to introduce more attackable data into AT.

A straightforward remedy for the shortage of attackable data is to incorporate more training data. Hendrycks et al. (2019) showed that AT for learning a robust model requires substantially more training data than standard training (ST). Nevertheless, gathering additional data, especially with high-quality labels, can incur an unaffordable cost.
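For concreteness, the sketch below shows vanilla mixup (Zhang et al., 2018), the baseline the GIF is contrasted with: random pairs of examples and their one-hot labels are convexly interpolated with a Beta-distributed coefficient. The function name is illustrative, and the GIF's actual guidance rule, which uses the previous epoch's attackability information to choose interpolation pairs, is only indicated by a placeholder comment since its details are not given in this excerpt.

```python
import numpy as np
import torch

def mixup(x, y, alpha=1.0):
    """Vanilla mixup: interpolate random pairs of inputs and labels.

    x: batch of inputs, shape (B, ...); y: one-hot labels, shape (B, C), float.
    """
    lam = np.random.beta(alpha, alpha)       # interpolation coefficient in [0, 1]
    idx = torch.randperm(x.size(0))          # random pairing within the batch;
                                             # the GIF would instead pick pairs
                                             # guided by the previous epoch's
                                             # attackable/guarded information
    x_mix = lam * x + (1.0 - lam) * x[idx]   # interpolated inputs
    y_mix = lam * y + (1.0 - lam) * y[idx]   # interpolated soft labels
    return x_mix, y_mix
```

Under mixup's random pairing, the interpolated points encourage linear behavior between classes; as argued above, the GIF departs from this by steering interpolation toward producing attackable data.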