With the explosive rise of internet usage and the proliferation of web applications across platforms, ensuring network and system security has become a critical concern. Networks and web services are particularly susceptible to targeted attacks, as hackers and intruders persistently attempt to gain unauthorized access. Artificial intelligence (AI) has emerged as a crucial tool for detecting intrusions and constructing effective Intrusion Detection Systems (IDSs) to counter cyber-attacks and malicious activities. IDSs built with machine learning (ML) and deep learning (DL) techniques have proven highly effective at detecting network attacks, offering machine-centric solutions. Nevertheless, mainstream adoption of, confidence in, and trust in these systems have been greatly impeded by the fact that ML/DL implementations tend to be “black boxes” that lack human interpretability, transparency, explainability, and logical reasoning in their prediction outputs. This limitation has raised questions about the accountability and comprehensibility of AI-driven intrusion detection systems. In this study, we propose four novel architectures that incorporate Explainable Artificial Intelligence (XAI) techniques to overcome the limited interpretability of ML/DL-based IDSs. We develop the ExplainDTC, SecureForest-RFE, RationaleNet, and CNNShield architectures for network security solutions and investigate their potential to turn untrustworthy architectures into trustworthy ones. The models scan network traffic and identify and report intrusions based on features extracted from the UNSW-NB15 dataset. To explain how each model reaches a decision and to add explainability at every stage of the machine learning pipeline, we integrate multiple XAI methods, namely LIME, SHAP, ELI5, and ProtoDash, on top of our architectures.
The generated explanations provide quantifiable insights into the influential factors and their respective impact on network intrusion predictions.
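To illustrate the kind of local, per-prediction explanation that methods such as LIME produce, the sketch below fits a locally weighted linear surrogate around a single input of a black-box classifier and reads feature importances off its coefficients. This is a minimal, self-contained illustration of the core LIME idea, not the paper's pipeline: the black-box model, the two feature names (loosely modelled on UNSW-NB15 fields such as flow duration and source bytes), and all parameter values are hypothetical.

```python
import numpy as np

# Hypothetical black-box IDS stand-in: flags traffic as attack (1.0)
# when a weighted combination of two scaled features exceeds a threshold.
# (Illustrative only; the real models in the paper are trained on UNSW-NB15.)
def black_box_predict(X):
    return (0.8 * X[:, 0] + 0.2 * X[:, 1] > 0.5).astype(float)

def local_surrogate_importances(instance, predict_fn,
                                n_samples=500, kernel_width=0.75, seed=0):
    """LIME-style local explanation: perturb around `instance`, weight
    samples by proximity, fit a weighted linear model, and return its
    per-feature coefficients as local importances."""
    rng = np.random.default_rng(seed)
    # 1) Perturb the instance to probe the black box locally.
    X = instance + rng.normal(scale=0.3, size=(n_samples, instance.size))
    y = predict_fn(X)
    # 2) Weight perturbed samples by an exponential proximity kernel.
    dist = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3) Weighted least squares on [1 | X] (intercept + feature columns).
    A = np.hstack([np.ones((n_samples, 1)), X])
    sw = np.sqrt(w)[:, None]
    coef, *_ = np.linalg.lstsq(A * sw, y * sw.ravel(), rcond=None)
    return coef[1:]  # drop the intercept; one importance per feature

# Explain one prediction, e.g. scaled (duration, source-bytes) = (0.6, 0.4).
instance = np.array([0.6, 0.4])
importances = local_surrogate_importances(instance, black_box_predict)
print(importances)  # first feature dominates the local decision
```

Because the hypothetical model weights the first feature four times more heavily, the surrogate's first coefficient comes out larger, which is exactly the quantifiable, per-feature influence the generated explanations report.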