This paper introduces a novel approach to identifying at-risk students, with a focus on output interpretability, by analyzing learning activities at a finer, weekly granularity. Specifically, the approach converts the predicted outputs from previous weeks into meaningful probabilities that inform the predictions for the current week, preserving the continuity among learning activities. To demonstrate the efficacy of our model in identifying at-risk students, we compare the weekly AUCs and the averaged performance (i.e., accuracy, precision, recall, and F1-score) over each course against baseline models (i.e., Random Forest, Support Vector Machine, and Decision Tree). Furthermore, we adopt a Top-K metric to examine how many at-risk students the model can identify with high precision in each week. Finally, the model output is interpreted through a model-agnostic interpretation approach to support instructors in making informed recommendations for students' learning. The experimental results demonstrate the capability and interpretability of our model in identifying at-risk students in online learning settings. In addition, our work provides significant implications for building accountable machine learning pipelines that can automatically generate individualized learning interventions while considering fairness across different learning groups.
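
The paragraph above summarizes, rather than details, how predicted outputs from previous weeks are carried into the current week's predictions. The following is a minimal illustrative sketch of one way to realize such propagation, not the authors' implementation: it assumes the previous week's predicted risk probability is appended to the current week's feature vector before refitting a classifier. All names (`weekly_X`, `train_idx`, the use of logistic regression) are hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def weekly_risk_probabilities(weekly_X, y, train_idx, test_idx):
    """weekly_X: list of (n_students, n_features) arrays, one per week.
    y: binary at-risk labels. Returns per-week test-set probabilities."""
    n_students = weekly_X[0].shape[0]
    prev = np.full((n_students, 1), 0.5)   # uninformative prior for week 1
    out = []
    for X in weekly_X:
        X_aug = np.hstack([X, prev])       # carry forward last week's output
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X_aug[train_idx], y[train_idx])
        # This week's probabilities become an extra feature next week,
        # which is what keeps the weekly predictions consecutive.
        prev = clf.predict_proba(X_aug)[:, 1:2]
        out.append(prev[test_idx].ravel())
    return out
```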
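
The Top-K metric can be read as precision among the K students the model ranks as most at risk in a given week; the sketch below follows that reading, though the paper's exact definition may differ.

```python
import numpy as np

def top_k_precision(y_true, y_proba, k):
    """Fraction of truly at-risk students among the k highest-risk predictions."""
    top_k = np.argsort(y_proba)[::-1][:k]  # indices of the k largest probabilities
    return float(np.mean(np.asarray(y_true)[top_k]))

# Example: if 3 of the 5 highest-ranked students are truly at risk,
# top_k_precision returns 0.6 for k=5.
```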
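
The paragraph names a model-agnostic interpretation approach without specifying it here. Permutation importance is one such technique and serves purely as an illustrative stand-in below (the authors' actual choice may differ); the synthetic data is a placeholder for weekly activity features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for weekly activity features and at-risk labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
# Permutation importance needs only model predictions, not internals,
# which is what makes it model-agnostic.
imp = permutation_importance(clf, X_te, y_te, scoring="roc_auc",
                             n_repeats=20, random_state=0)
for i in np.argsort(imp.importances_mean)[::-1]:
    print(f"feature_{i}: {imp.importances_mean[i]:.3f}")
```

Rankings like these are what an instructor could use to see which activity features drive a student's predicted risk in a given week.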