Deaf and Hard of Hearing (DHH) students encounter obstacles in higher education due to language and communication challenges. Although research aims to improve their academic performance, the potential of Machine Learning (ML) remains underutilized in DHH education, and the opacity of ML models further complicates their adoption. This study aims to fill this gap by developing a novel ML-based system with eXplainable AI (XAI), specifically utilizing Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP). The objective is twofold: predicting at-risk DHH students and explaining the underlying risk factors. By merging ML and XAI, this approach could positively impact DHH students' educational outcomes. A dataset of 454 DHH student records was collected, and synthetic data generation and SMOTE were used to address its limited size and class imbalance. Students were categorized into three performance levels, and the data were modeled with individual ML models, transfer learning models, ensemble models, and model combinations. Among these, the stacked model combining XGBoost, ExtraTrees, and Random Forest achieved the best performance, with an accuracy of 92.99%. The XAI analysis provided insights into crucial factors affecting academic performance, including communication mode, early intervention, schooling type, and family deafness history. LIME and SHAP explanations proved effective in deriving insights from the DHH student performance prediction framework; notably, communication mode strongly influences whether a student is classified as at risk. The major contribution of this study is a novel ML-based system together with XAI interpretations whose value lies in their social relevance, guiding stakeholders to enhance DHH students' academic achievement.
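To make the described pipeline concrete, the following is a minimal illustrative sketch, not the authors' implementation: it assumes scikit-learn, imbalanced-learn, xgboost, shap, and lime, and uses randomly generated placeholder data in place of the 454-record DHH dataset. Feature semantics (communication mode, early intervention, and so on) are hypothetical stand-ins.

```python
# Sketch of the abstract's pipeline: SMOTE oversampling, a stacked
# XGBoost / ExtraTrees / Random Forest classifier for three performance
# levels, and SHAP/LIME explanations of individual predictions.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.ensemble import (RandomForestClassifier, ExtraTreesClassifier,
                              StackingClassifier)
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import SMOTE
from xgboost import XGBClassifier
import shap
from lime.lime_tabular import LimeTabularExplainer

# Placeholder data: 454 samples, 8 features, 3 performance classes.
# This is NOT the study's dataset, only a stand-in for demonstration.
X, y = make_classification(n_samples=454, n_features=8, n_informative=5,
                           n_classes=3, n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y,
                                                    random_state=0)

# Balance the three performance levels with SMOTE before fitting.
X_res, y_res = SMOTE(random_state=0).fit_resample(X_train, y_train)

# Stacked ensemble of the three base learners named in the abstract.
stack = StackingClassifier(
    estimators=[("xgb", XGBClassifier(eval_metric="mlogloss")),
                ("et", ExtraTreesClassifier(random_state=0)),
                ("rf", RandomForestClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_res, y_res)
print("Held-out accuracy:", stack.score(X_test, y_test))

# SHAP: model-agnostic kernel explainer over the stacked model's probabilities.
shap_explainer = shap.KernelExplainer(stack.predict_proba, shap.sample(X_res, 50))
shap_values = shap_explainer.shap_values(X_test[:5])

# LIME: local explanation for a single (possibly at-risk) student.
lime_explainer = LimeTabularExplainer(X_res, mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], stack.predict_proba,
                                           num_features=5)
print(lime_exp.as_list())
```

In this sketch the kernel-based SHAP explainer is used because the stacked model is not a single tree ensemble; with the real dataset, the resulting SHAP and LIME attributions would surface feature-level drivers such as communication mode, analogous to the factors reported in the study.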