Background Programmatic assessment that looks across a whole year may contribute to better decisions than those made from isolated assessments alone. The aim of this study is to describe and evaluate a programmatic system for handling student assessment results that is aligned not only with learning and remediation but also with defensibility. The key components are standards-based assessments, use of "Conditional Pass", and regular progress meetings.
Methods The new assessment system is described. The evaluation is based on years 4-6 of a 6-year medical course. The types of concerns staff had about students were clustered into themes alongside any interventions and outcomes for the students concerned. The likelihood of passing the year according to type of problem was compared before and after the phasing in of the new assessment system.
Results The new system was phased in over four years. In the fourth year of implementation, 701 students had 3539 assessment results, of which 4.1% were Conditional Pass. More in-depth analysis of 1516 results available from 447 students revealed that the odds ratio (95% confidence interval) for failure was highest for students with problems identified in more than one part of the course (18.8 (7.7-46.2), p < 0.0001) or with problems with professionalism (17.2 (9.1-33.3), p < 0.0001). The odds ratio for failure was lowest for problems with assignments (0.7 (0.1-5.2), NS). Compared with the previous system, more students failed the year under the new system on the basis of performance during the year (20 students, or 4.5%, compared with 4, or 1.1%, under the previous system; p < 0.01).
Conclusions The new system detects more students in difficulty and has resulted in less "failure to fail". The requirement to state the conditions required to pass has contributed to a paper trail that should improve defensibility. Most importantly, it has helped detect and act on some of the more difficult areas to assess, such as professionalism.
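For readers who want to see how odds ratios with 95% confidence intervals like those reported above are typically derived, the following is a minimal sketch in Python. The 2x2 counts are hypothetical, not the study data (the abstract does not report the underlying tables), and the Wald-style log-odds interval shown is one standard method, not necessarily the one used in the study.

```python
# Illustrative sketch only: odds ratio and 95% CI from a 2x2 table.
# Counts below are hypothetical, not taken from the paper.
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a = flagged & failed, b = flagged & passed,
    c = not flagged & failed, d = not flagged & passed."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical example: students flagged for professionalism concerns vs not.
print(odds_ratio_ci(a=12, b=30, c=8, d=397))
```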
We combined classroom training with observation of teaching in the clinical area, and by doing so were better able to translate classroom theory into authentic workplace practice.
Students had insight but less foresight. Experience and knowledge altered measures of both. This response system identified students who appeared to lack both.
Unlike previous work showing poorer self-assessment accuracy among those with less experience or ability, we postulate that our finding of similar increases in correctness with increasing certainty, even in the less experienced and lower-performing groups, relates to the certainty descriptors being worded in a way that is authentic to clinical practice and, in turn, to reflection-in-action.
Background Critically ill patients are at high risk for pressure ulcers. Successful prevention of pressure ulcers requires that caregivers have adequate knowledge of this complication.
Objective To assess intensive care nurses’ knowledge of pressure ulcers and the impact of an educational program on knowledge levels.
Methods A knowledge assessment test was developed. A cohort of registered nurses in a tertiary referral hospital in New Zealand had knowledge assessed 3 times: before an educational program, within 2 weeks after the program, and 20 weeks later. Multivariate analysis was performed to determine if attributes such as length of time since qualifying or level of intensive care unit experience were associated with test scores. The content and results of the assessment test were evaluated.
Results Completion of the educational program resulted in improved levels of knowledge. Mean scores on the assessment test were 84% at baseline and 89% following the educational program. The mean baseline score did not differ significantly from the mean 20-week follow-up score of 85%. No association was detected between demographic data and test scores. Content validity and standard setting were verified by using a variety of methods.
Conclusion Knowledge of how to prevent and manage pressure ulcers was good initially and improved with the educational program, but soon returned to baseline.
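As an illustration of how the baseline, post-program, and 20-week follow-up mean scores might be compared, here is a minimal sketch using paired t-tests. The per-nurse scores are hypothetical, and the abstract does not state which statistical test was actually used; this is just one common approach for paired before-and-after data.

```python
# Illustrative sketch only: paired comparison of knowledge-test scores
# at baseline, post-program, and 20-week follow-up. Data are hypothetical.
from scipy import stats

baseline  = [80, 85, 78, 90, 82, 88, 84, 86]   # hypothetical per-nurse % scores
post      = [88, 90, 85, 93, 87, 92, 89, 90]
follow_up = [81, 86, 79, 91, 83, 87, 85, 85]

# Baseline vs post tests for improvement; baseline vs follow-up tests for retention.
print(stats.ttest_rel(baseline, post))
print(stats.ttest_rel(baseline, follow_up))
```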