Explainable Artificial Intelligence (XAI) is a cutting-edge development in the field of AI, motivated by the need for transparency in black-box models. This transparency enhances user trust, facilitates accountability, and enables a better understanding of an AI system's decisions, especially in critical applications where insight into the decision process is essential. This need has spurred growing research interest in XAI, which aims to provide techniques for interpreting and understanding the behavior of intelligent models. Counterfactual explanation is a popular model-interpretation technique based on altering a small number of features so that the outcome of an AI model changes. By analyzing these counterfactuals, users can identify the critical features or factors that influenced the AI system's decision. However, most counterfactual techniques lack desirable properties such as simplicity, robustness, and coherence. In this research, we propose Adaptive Feature Weight Genetic Explanation (AFWGE), a novel approach for generating counterfactual explanations of AI models that employs a custom genetic algorithm (GA) with adaptive feature weights to enhance the algorithm's performance. Experimental results on four benchmark datasets show that adapting feature weights during the evolutionary process produces more effective counterfactual explanations with superior proximity, sparsity, plausibility, and actionability. Furthermore, the resulting feature weights serve as reliable indicators of feature importance, providing valuable insights for interpreting the model. AFWGE not only advances the field of counterfactual explanation generation but also establishes a robust framework for assessing feature importance in machine learning models.
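To make the general idea concrete, the following is a minimal sketch (not the AFWGE implementation described above) of counterfactual search with a genetic algorithm and adaptive per-feature weights. The `predict` callable, the feature bounds `lower`/`upper`, the fitness terms, and the weight-update rule are illustrative assumptions chosen for demonstration only.

```python
# Illustrative sketch of GA-based counterfactual search with adaptive feature weights.
# All hyperparameters and the weight-adaptation rule are assumptions, not the paper's method.
import numpy as np

def generate_counterfactual(predict, x, target_class, lower, upper,
                            pop_size=50, generations=100, mutation_rate=0.2, rng=None):
    """Search for x' close to x such that predict(x') == target_class.

    predict      : callable mapping a (n_features,) array to a class label (assumed interface)
    x            : original instance, shape (n_features,)
    lower, upper : per-feature bounds used for initialization and mutation (assumed)
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.shape[0]
    span = upper - lower + 1e-12
    weights = np.ones(n)                                   # adaptive per-feature weights
    pop = np.clip(x + rng.normal(scale=0.1, size=(pop_size, n)) * span, lower, upper)

    def fitness(candidate):
        # Weighted distance from x (proximity) plus a penalty if the class is not flipped (validity).
        distance = np.sum(weights * np.abs(candidate - x) / span)
        validity = 0.0 if predict(candidate) == target_class else 10.0
        return distance + validity                          # lower is better

    best = pop[0]
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)
        pop = pop[order]
        best = pop[0]

        # Adapt feature weights: features that the best candidates consistently change
        # receive a smaller distance penalty, steering edits toward them (assumed rule).
        elite_change = np.mean(np.abs(pop[: pop_size // 5] - x), axis=0)
        weights = 1.0 / (1.0 + elite_change / span)

        # Crossover among the top half of the population, then mutate the offspring.
        parents = pop[: pop_size // 2]
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(n) < 0.5
            child = np.where(mask, a, b)
            mutate = rng.random(n) < mutation_rate
            child = np.where(mutate, child + rng.normal(scale=0.05, size=n) * span, child)
            children.append(np.clip(child, lower, upper))
        pop = np.vstack([parents, np.array(children)])

    return best, weights
```

In this sketch the returned `weights` play the role of feature-importance indicators: features whose penalty shrank the most are those the search had to change to flip the prediction.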