To ensure the validity of an assessment programme, it is essential to align it with the intended learning outcomes (LO). We present a model for ensuring assessment validity that supports this constructive alignment and uses learning analytics (LA). The model is based on LA that include a comparison between ideal LO weights (expressing the prioritization of LOs), actual assessment weights (maximum assessment points per LO), and student assessment results (actually obtained assessment points per LO), as well as clustering and trace data analysis. These analytics are part of a continuous improvement cycle, including strategic planning and learning design (LD) supported by LO prioritization, and monitoring and evaluation supported by LA. To illustrate and test the model, we conducted a study using the example of a graduate‐level higher education course in applied mathematics, analysing student assessment results and activity in a learning management system. The study showed that the analyses provided valuable insights with practical implications for the development of sound LD, tailored educational interventions, databases of assessment tasks, recommendation systems, and self‐regulated learning. Future research should investigate the possibilities for automating such LA, to enable full exploitation of their potential and their use in everyday teaching and learning.
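The three-way comparison at the core of the model can be sketched as follows. This is a minimal illustration, not the paper's implementation: the LO labels, weights, and the total-variation distance used as a misalignment measure are all hypothetical choices; in the model, the ideal weights would come from MCDM-based prioritization, the assessment weights from the assessment scheme, and the result weights from students' obtained points.

```python
# Hypothetical course with four LOs; all numbers are illustrative.
def normalize(weights):
    """Scale non-negative weights so they sum to 1, making them comparable."""
    total = sum(weights)
    return [w / total for w in weights]

ideal = normalize([4, 3, 2, 1])           # ideal LO priorities (e.g. MCDM-derived)
assessment = normalize([30, 30, 25, 15])  # maximum assessment points per LO
results = normalize([24, 27, 15, 12])     # points actually obtained per LO

def misalignment(a, b):
    """Total variation distance between two weight distributions (0 = aligned)."""
    return 0.5 * sum(abs(x - y) for x, y in zip(a, b))

print(f"ideal vs assessment:   {misalignment(ideal, assessment):.3f}")
print(f"assessment vs results: {misalignment(assessment, results):.3f}")
```

A large ideal-vs-assessment distance would flag an assessment programme that under- or over-weights some LOs relative to their priority, while a large assessment-vs-results distance would flag LOs where student performance diverges from the assessment's emphasis.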
Practitioner notes
What is already known about this topic
To develop sound, student‐centred learning design (LD), it is essential to ensure that assessment is constructively aligned with the intended learning outcomes (LO).
This constructive alignment is crucial for ensuring the validity of an assessment programme.
Learning analytics (LA) can provide insights that help develop valid assessment programmes.
What this paper adds
As not all LOs are equally important, assessment programmes should reflect the prioritization of LOs, which can be determined using various multi‐criteria decision‐making (MCDM) methods.
This article presents, and illustrates with an empirical case, a model for the continuous improvement of LD, which uses LA to compare how LOs are reflected in students' (actual) results, in the (actual) assessment programme, and in the (ideal) MCDM-based prioritization of LOs.
The study shows how clustering students based on their assessment results can be used in LA to provide insights for educational interventions better targeted to students' needs.
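Clustering students by their per-LO results can be sketched with a plain k-means, as below. This is an illustrative sketch only: the student profiles (fractions of maximum points per LO), the choice of k-means, and k = 3 are all assumptions for demonstration, not the paper's method or data.

```python
import random

# Hypothetical per-LO scores (fraction of max points) for eight students.
students = {
    "s1": [0.90, 0.80, 0.85], "s2": [0.88, 0.90, 0.80],
    "s3": [0.50, 0.45, 0.40], "s4": [0.55, 0.50, 0.48],
    "s5": [0.90, 0.40, 0.30], "s6": [0.85, 0.45, 0.35],
    "s7": [0.52, 0.48, 0.50], "s8": [0.87, 0.82, 0.90],
}

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means over equal-length score vectors (Euclidean distance)."""
    rng = random.Random(seed)
    centroids = rng.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centroids[c])))
            clusters[i].append(p)
        # Recompute centroids as per-dimension means (keep old one if empty).
        centroids = [
            [sum(dim) / len(c) for dim in zip(*c)] if c else centroids[i]
            for i, c in enumerate(clusters)
        ]
    return centroids, clusters

centroids, clusters = kmeans(list(students.values()), k=3)
for i, c in enumerate(clusters):
    print(f"cluster {i}: {len(c)} students, "
          f"centroid {[round(x, 2) for x in centroids[i]]}")
```

Each cluster's centroid is a per-LO performance profile, so an intervention can be designed around the LOs on which a given cluster of students is weakest.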
Implications for practice and/or policy
The proposed LA can provide important insights for the development (or improvement) of LD in line with the intended course LOs, but also with the study programme LOs (provided course and study programme LOs are properly aligned).
The LA can also contribute to the development of databases of assessment tasks that are aligned with course LOs and of ensured validity, supporting sharing and reuse, as well as to the development of tailored educational interventions (eg, based on clustering).
The proposed LA can also contribute to the development of recommendation systems, offering recommendations for the improvement of LD to teachers or learning suggestions to students, and can support students' meta‐cognition and self‐regulated learning.
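The MCDM-based LO prioritization mentioned above can be sketched with an AHP-style weight derivation, one common MCDM technique (the paper does not prescribe a specific method). The pairwise-comparison matrix below is entirely hypothetical, and the row-geometric-mean calculation is a standard approximation of the AHP priority vector, not the paper's procedure.

```python
import math

# Hypothetical pairwise comparisons for three LOs (Saaty 1-9 scale):
# entry [i][j] expresses how much more important LO i is than LO j.
comparisons = [
    [1,     3,   5],
    [1 / 3, 1,   2],
    [1 / 5, 1 / 2, 1],
]

# Approximate the AHP priority vector by the normalized row geometric means.
geo_means = [math.prod(row) ** (1 / len(row)) for row in comparisons]
total = sum(geo_means)
weights = [g / total for g in geo_means]

print("ideal LO weights:", [round(w, 3) for w in weights])
```

The resulting normalized weights serve as the "ideal LO weights" against which the actual assessment weights and student results can be compared.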