“…To ensure quality, it seemed counterproductive to enforce top-down decisions and mandatory requirements on how much or what type of assessment information should be used for learning and decision-making. This approach likely results in tick-box activities and in both learners and teachers gaming or …”

Table 2. Inferred strategies from the literature to improve the value and use of programmatic assessment

Inferred strategy and exemplifying references:

- Build on creating a shared understanding of programmatic assessment by clearly introducing its nature and purpose, providing explanatory guidelines for individual assessments and how they are used in the system as a whole, and involving teachers and learners in the whole chain of the system [16, 19, 21, 29, 30, 32, 38, 40]
- Provide teachers and learners with feedback on the quality of the assessment information they provide and how their input contributes to the decision-making process [17, 21, 24, 40]
- Normalize daily feedback, observation, and follow-up, as well as reflection and continuous improvement [19, 21, 22, 28, 34, 38]
- Be cautious with mandatory requirements, excessive bureaucracy, and the use of summative signals in the design of programmatic assessment [17, 20-22, 24, 28, 33-35, 40], but keep the approach flexible, fit for purpose, and negotiable, specifically in relation to the information needs of different stakeholders and the realities of the educational context [16, 17, 20, 21, 24, 28, 33, 34, 41]
- Promote learner agency and the development of life-long learner capabilities by increasing learners' ownership over the assessment process [20, 28, 30, 34, 41]
- Address learners' and teachers' assessment beliefs and the implications o…”