The integration of adversarial machine learning with question answering (QA) systems has emerged as a critical area for understanding the vulnerabilities and robustness of these systems. This article reviews adversarial example-generation techniques in the QA field, covering both textual and multimodal contexts, and organizes the surveyed techniques through a systematic categorization. Beginning with an overview of traditional QA models, we survey adversarial example generation methods, from rule-based perturbations to advanced generative models. We then extend the scope to multimodal QA systems, analyzing attack methods built on generative models, sequence-to-sequence architectures, and hybrid methodologies. The review further covers defense strategies, adversarial datasets, and evaluation metrics, situating each within the broader literature on adversarial QA. Finally, we consider the future landscape of adversarial question generation, highlighting research directions that can advance the robustness of textual and multimodal QA systems against adversarial challenges.