In cybersecurity, the escalating sophistication of adversarial attacks poses a significant threat, particularly to machine learning models. Traditional defensive mechanisms often fall short in identifying and mitigating such attacks, primarily because their static nature prevents them from adapting to adversaries' evolving strategies. This limitation underscores the need for more dynamic and responsive approaches. To address this critical gap, our research introduces an Active Machine Learning Adversarial Attack Detection framework.

Central to our approach is the combination of careful data collection and preprocessing. We gather a diverse dataset encompassing both genuine and adversarial user feedback, annotated to differentiate between the two cases. The data is then tokenized and converted into numerical features through methods such as TF-IDF and word embeddings, paving the way for more nuanced analysis (a preprocessing sketch appears below).

The core of our model employs a variety of machine learning algorithms (Logistic Regression, Random Forest, SVM, CNN, and XGBoost), each fine-tuned through hyperparameter optimization. The novelty of our approach, however, lies in its active learning strategy, which improves labeling efficiency: by employing uncertainty sampling and query-by-committee, the model actively identifies and learns from the instances of highest informational value, continuously improving its detection capabilities (both strategies are sketched below).

Our framework further stands out in its post-training phases. The models are not only retrained with newly labeled data but also evaluated on separate test datasets, with accuracy, precision, recall, F1-score, and AUC computed to verify the robustness of the results. Deployed in a real-time environment, the model demonstrates strong efficacy in detecting adversarial attacks in user feedback, and continuous monitoring with periodic retraining allows it to adapt to new adversarial tactics.

The impact of our work is quantitatively significant: across the evaluated scenarios, our model outperforms existing methods with a 9.5% improvement in precision, 8.5% higher accuracy, 8.3% increased recall, 9.4% greater AUC, 4.5% higher specificity, and a 2.9% reduction in detection delay.
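To make the preprocessing step concrete, the following minimal sketch converts raw feedback text into TF-IDF features with scikit-learn. The sample texts, the 0/1 annotation scheme, and the vectorizer settings are illustrative assumptions, not the framework's actual configuration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical placeholder feedback; the real dataset mixes genuine
# and adversarial user feedback, annotated accordingly.
texts = [
    "The product works exactly as described, very satisfied.",
    "gr3at pr0duct buy n0w!!! visit this link",
]
labels = [0, 1]  # 0 = genuine, 1 = adversarial (annotation scheme assumed)

# TF-IDF handles tokenization internally and maps each document
# to a sparse numerical feature vector.
vectorizer = TfidfVectorizer(lowercase=True, ngram_range=(1, 2), max_features=5000)
X = vectorizer.fit_transform(texts)
print(X.shape)  # (n_documents, n_features)
```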
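Hyperparameter optimization for the individual classifiers could follow a standard grid search, as in this sketch; the Random Forest grid and the F1 scoring choice are assumptions for illustration, since the framework's actual search spaces are not reproduced here.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Synthetic stand-in for vectorized feedback features.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Illustrative grid; the actual search spaces per model are not specified here.
param_grid = {"n_estimators": [100, 300], "max_depth": [None, 10, 30]}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid,
                      scoring="f1", cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```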
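The uncertainty-sampling loop might look like the following sketch: a classifier trained on a small seed set repeatedly queries the unlabeled instances with the smallest margin between its top two predicted probabilities, then folds the newly labeled instances back into the training set. The synthetic data, seed size, batch size, and choice of Logistic Regression are all placeholder assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

def uncertainty_sampling(model, X_unlabeled, batch_size=10):
    """Return indices of the unlabeled samples with the smallest margin
    between the top two predicted class probabilities."""
    proba = model.predict_proba(X_unlabeled)
    sorted_proba = np.sort(proba, axis=1)
    margins = sorted_proba[:, -1] - sorted_proba[:, -2]
    return np.argsort(margins)[:batch_size]

# Synthetic stand-in for feature vectors derived from user feedback.
X, y = make_classification(n_samples=1000, n_features=50, random_state=0)
X_seed, y_seed = X[:100], y[:100]   # small initial labeled pool
X_pool, y_pool = X[100:], y[100:]   # unlabeled pool (labels held back)

model = LogisticRegression(max_iter=1000)
for _ in range(5):  # five active learning rounds
    model.fit(X_seed, y_seed)
    idx = uncertainty_sampling(model, X_pool)
    # Simulate annotation by revealing the held-back labels.
    X_seed = np.vstack([X_seed, X_pool[idx]])
    y_seed = np.concatenate([y_seed, y_pool[idx]])
    X_pool = np.delete(X_pool, idx, axis=0)
    y_pool = np.delete(y_pool, idx)
```

The margin criterion is one common uncertainty measure; predictive entropy or least-confidence scores would slot into the same loop unchanged.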
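Query-by-committee can be sketched similarly: a committee of diverse classifiers votes on each unlabeled instance, and the instances with the highest vote entropy (greatest disagreement) are queried. The committee members and the vote-entropy disagreement measure shown here are illustrative choices, not necessarily the framework's exact configuration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

def vote_entropy(committee, X_unlabeled):
    """Disagreement score per instance: entropy of the committee's votes."""
    votes = np.stack([clf.predict(X_unlabeled) for clf in committee])
    entropies = []
    for col in votes.T:  # one column of votes per instance
        _, counts = np.unique(col, return_counts=True)
        p = counts / counts.sum()
        entropies.append(-(p * np.log(p + 1e-12)).sum())
    return np.array(entropies)

X, y = make_classification(n_samples=500, n_features=20, random_state=0)
X_seed, y_seed, X_pool = X[:80], y[:80], X[80:]

# Illustrative committee; the actual members may differ.
committee = [LogisticRegression(max_iter=1000),
             RandomForestClassifier(n_estimators=100, random_state=0),
             SVC()]
for clf in committee:
    clf.fit(X_seed, y_seed)

# Query the 10 instances the committee disagrees on most.
query_idx = np.argsort(vote_entropy(committee, X_pool))[-10:]
```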
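Finally, the reported evaluation metrics map directly onto standard scikit-learn calls. This sketch computes accuracy, precision, recall, F1-score, AUC, and specificity on a synthetic held-out split; the data and model are stand-ins for the framework's actual test sets.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                             f1_score, roc_auc_score, confusion_matrix)
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=30, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
y_pred = model.predict(X_test)
y_score = model.predict_proba(X_test)[:, 1]  # scores needed for AUC

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy:   ", accuracy_score(y_test, y_pred))
print("precision:  ", precision_score(y_test, y_pred))
print("recall:     ", recall_score(y_test, y_pred))
print("F1-score:   ", f1_score(y_test, y_pred))
print("AUC:        ", roc_auc_score(y_test, y_score))
print("specificity:", tn / (tn + fp))  # true-negative rate
```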