As cyber threats continue to evolve in complexity, the need for robust intrusion detection systems (IDS) becomes increasingly critical. Machine learning (ML) models have demonstrated their effectiveness in detecting anomalies and potential intrusions. In this article, we explore intrusion detection through the application of four distinct ML models: XGBoost, Decision Trees, Random Forests, and Bagging, and we leverage the interpretability tools LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations) to explain the classification results. Our exploration begins with an in-depth analysis of each model, shedding light on its strengths, weaknesses, and suitability for intrusion detection. Because machine learning models often operate as "black boxes," explaining their inner workings is crucial, and LIME and SHAP serve as indispensable tools for that purpose. Throughout the article, we demonstrate the practical application of LIME and SHAP to interpret the output of our intrusion detection models. By doing so, we gain valuable insight into the decision-making process of these models, enhancing our ability to identify and respond to potential threats effectively.
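
To make the workflow concrete, the following is a minimal sketch (not the article's actual pipeline) of how an XGBoost classifier can be trained and then explained with SHAP and LIME. The synthetic dataset and feature names are hypothetical placeholders standing in for real network-traffic features.

```python
# Minimal illustrative sketch: train an XGBoost intrusion classifier on
# synthetic data, then explain one prediction with SHAP and LIME.
# The data and feature names below are hypothetical placeholders.
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic binary data standing in for "normal" vs. "intrusion" traffic
X, y = make_classification(n_samples=2000, n_features=10, random_state=42)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

model = xgb.XGBClassifier(n_estimators=100, max_depth=4, eval_metric="logloss")
model.fit(X_train, y_train)

# SHAP: additive feature attributions for the first test instance
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
print("SHAP attributions for first test flow:", shap_values[0])

# LIME: local surrogate explanation for the same instance
lime_explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["normal", "intrusion"],
    mode="classification",
)
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5
)
print(lime_exp.as_list())
```

SHAP attributes the prediction to each feature via additive Shapley values over the tree ensemble, while LIME fits a local surrogate model around the instance; comparing the two views is one practical way to sanity-check why a flow was flagged as an intrusion.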