Studies have shown that deep neural networks (DNNs) are vulnerable to adversarial examples: perturbed inputs that cause DNN-based models to produce incorrect results. One robust adversarial attack in the NLP domain is synonym substitution, in which the adversary replaces words with their synonyms. Because synonym substitution perturbations aim to satisfy lexical, grammatical, and semantic constraints, they are difficult to detect with automatic syntax checks as well as by humans. In this work, we propose the first defensive method against synonym substitution perturbations that improves the robustness of DNNs on both clean and adversarial data. We improve the generalization of DNN-based classifiers by replacing the embeddings of the important words in the input samples with the average of their synonyms' embeddings, thereby reducing the model's sensitivity to particular words in the input. Our algorithm is generic enough to be applied in any NLP domain and to any model trained on any natural language.
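The core operation described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy embedding table, the synonym dictionary, and the choice to include the word's own embedding in the average are all assumptions made here for concreteness.

```python
import numpy as np

# Hypothetical toy embedding table and synonym dictionary (illustrative only;
# a real system would use pretrained embeddings and a resource such as WordNet).
rng = np.random.default_rng(0)
vocab = ["good", "great", "fine", "movie", "film", "bad"]
emb = {w: rng.standard_normal(4) for w in vocab}
synonyms = {"good": ["great", "fine"], "movie": ["film"]}

def smoothed_embedding(word, emb, synonyms):
    """Return the mean of the word's embedding and its synonyms' embeddings.

    Words with no listed synonyms keep their original embedding, so the
    substitution only affects words covered by the synonym dictionary.
    """
    group = [word] + synonyms.get(word, [])
    return np.mean([emb[w] for w in group], axis=0)

# The vector fed to the classifier for "good" is now the average over
# {good, great, fine}, so swapping "good" for "great" changes the input less.
v = smoothed_embedding("good", emb, synonyms)
```

In a full pipeline, this averaging would be applied only to the "important" words in each input (e.g. those ranked highly by some saliency score), before the embeddings are passed to the downstream classifier.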