Background: The Objective Structured Clinical Examination (OSCE) is a valid tool for assessing the clinical skills of medical students. Feedback after an OSCE is essential for student improvement and safe clinical practice, yet many examiners do not provide helpful or insightful feedback in the text space provided after OSCE stations, which may adversely affect learning outcomes. The aim of this systematic review was to identify the best determinants of high-quality written feedback in the field of medicine.

Methods: PubMed, Medline, Embase, CINAHL, Scopus, and Web of Science were searched for relevant literature up to February 2021. We included studies that described the characteristics of good or effective feedback in clinical skills assessment in the field of medicine. Four independent reviewers extracted the determinants used to assess the quality of written feedback. Percentage agreement and kappa coefficients were calculated for each determinant. The ROBINS-I (Risk Of Bias In Non-randomized Studies of Interventions) tool was used to assess the risk of bias.

Results: Fourteen studies were included in this systematic review, and ten determinants for assessing feedback were identified. The determinants with the highest agreement among reviewers were specific, described gap, balanced, constructive, and behavioural, with kappa values of 0.79, 0.45, 0.33, 0.33, and 0.26, respectively. All other determinants showed low agreement (kappa values below 0.22), indicating that, although they have been used in the literature, they may not be applicable indicators of good-quality feedback. The risk of bias was low or moderate overall.

Conclusions: This work suggests that good-quality written feedback should be specific, balanced, and constructive, and should describe the gap in student learning as well as the behavioural actions observed in the exam. Integrating these determinants into OSCE assessment will help guide and support educators in providing effective feedback to learners.
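
As a brief illustration of the agreement statistics reported above, the sketch below shows one way percentage agreement and a chance-corrected kappa could be computed for a single determinant rated by four reviewers. It is not the review's own analysis code: the binary present/absent ratings are hypothetical, and the use of Fleiss' kappa, NumPy, and statsmodels are assumptions made for illustration, since the review does not report raw data or specify which kappa variant or software was used.

```python
# Illustrative sketch only (hypothetical data, assumed Fleiss' kappa variant).
import numpy as np
from itertools import combinations
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Rows = feedback comments (subjects), columns = the four reviewers;
# 1 = "determinant present", 0 = "determinant absent" (hypothetical values).
ratings = np.array([
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [1, 0, 1, 1],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
])

# Percentage agreement: fraction of subjects on which each reviewer pair
# agrees, averaged over all reviewer pairs.
pairs = list(combinations(range(ratings.shape[1]), 2))
pair_agreement = np.mean(
    [np.mean(ratings[:, i] == ratings[:, j]) for i, j in pairs]
)
print(f"Mean pairwise percentage agreement: {pair_agreement:.2%}")

# Fleiss' kappa (a common choice for more than two raters); the input table
# is a subjects x categories count matrix built from the raw ratings.
table, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")
```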