Background: Researchers developing personal health tools employ a range of approaches to involve prospective users in design and development. Objective: The aim of this paper was to develop a validated measure of the human- or user-centeredness of design and development processes for personal health tools. Methods: We conducted a psychometric analysis of data from a previous systematic review of the design and development processes of 348 personal health tools. Using a conceptual framework of user-centered design, our team of patients, caregivers, health professionals, tool developers, and researchers analyzed how specific practices in tool design and development might be combined and used as a measure. We prioritized variables according to their importance within the conceptual framework and validated the resultant measure using principal component analysis with varimax rotation, classical item analysis, and confirmatory factor analysis. Results: We retained 11 items in a 3-factor structure explaining 68% of the variance in the data. The Cronbach alpha was .72. Confirmatory factor analysis supported our hypothesis of a latent construct of user-centeredness. The items assessed whether patient, family, caregiver, or surrogate users were (1) involved in steps that help tool developers understand users, (2) involved in developing a prototype, (3) asked their opinions, (4) observed using the tool, or (5) involved in steps intended to evaluate the tool; whether (6) the process included 3 or more iterative cycles and (7) changes between cycles were explicitly reported; whether health professionals were (8) asked their opinion, (9) consulted before the first prototype was developed, or (10) consulted between initial and final prototypes; and whether (11) a panel of other experts was involved. Conclusions: The User-Centered Design 11-item measure (UCD-11) may be used to quantitatively document the user/human-centeredness of design and development processes of patient-centered tools. By building an evidence base about such processes, we can help ensure that tools are adapted to the people who will use them, rather than requiring people to adapt to tools.
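For readers who want to reproduce this style of validation, the sketch below shows roughly how principal component extraction with varimax rotation and Cronbach's alpha could be run in Python. It is a minimal illustration, not the authors' code: the factor_analyzer package is assumed, and the simulated 348 × 11 binary item matrix merely stands in for the real review data.

```python
# Minimal sketch (not the authors' code) of the validation analyses:
# principal component extraction with varimax rotation plus Cronbach's
# alpha. The simulated binary item matrix is a stand-in for real data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

def cronbach_alpha(df: pd.DataFrame) -> float:
    """Internal consistency of the item set (classical item analysis)."""
    k = df.shape[1]
    item_vars = df.var(axis=0, ddof=1)
    total_var = df.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated stand-in: one row per tool, one 0/1 column per candidate item.
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(0, 2, size=(348, 11)),
                     columns=[f"item_{i}" for i in range(1, 12)])

# 3-factor principal extraction with varimax rotation.
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)
print("Rotated loadings:\n", fa.loadings_)
print("Cumulative variance explained:", fa.get_factor_variance()[2][-1])
print("Cronbach's alpha:", round(cronbach_alpha(items), 2))
```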
Background: Given the widespread use of Multiple Mini Interviews (MMIs), their impact on the selection of candidates, and the considerable resources invested in preparing and administering them, it is essential to ensure their quality. Station format varies widely and lies largely within the control of training programmes, yet little is known about its effect on MMI quality; this is a considerable oversight. This study assessed the effect of two popular station formats (interview vs. role-play) on the psychometric properties of MMIs. Methods: We analysed candidate data from the first 8 years of the Integrated French MMIs (IF-MMI) (2010-2017, n = 11 761 applicants), an MMI organised yearly by three francophone universities and administered at four testing sites located in two Canadian provinces. There were 84 role-play and 96 interview stations administered, totalling 180 stations. Mixed-design analyses of variance (ANOVAs) were used to test the effect of station format on candidates' scores and stations' discrimination. Cronbach's alpha coefficients for interview and role-play stations were also compared. The predictive validity of both station formats was estimated with a mixed multiple linear regression model testing the relation between interview and role-play scores and average clerkship performance for those who gained entry to medical school (n = 462). Results: Role-play stations (M = 20.67, standard deviation [SD] = 3.38) had a slightly lower mean score than interview stations (M = 21.36, SD = 3.08), p < 0.01, Cohen's d = 0.2. The correlation between role-play and interview station scores was r = 0.5 (p < 0.01). Discrimination coefficients, Cronbach's alpha, and predictive validity statistics did not vary by station format. Conclusion: Interview and role-play stations have comparable psychometric properties, suggesting that the two formats are interchangeable. Programmes should select a station format according to the personal qualities they are trying to select for.
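The following is a hedged sketch of how a mixed-design ANOVA on station scores could be run with the pingouin package. It is not the study's code: the long-format columns (candidate, cohort, format, score) and the choice of cohort as the between-subjects factor are illustrative assumptions.

```python
# Hedged sketch (not the study's code) of a mixed-design ANOVA on MMI
# scores by station format, using pingouin. Column names and the
# between-subjects factor are illustrative assumptions.
import pandas as pd
import pingouin as pg

scores = pd.read_csv("mmi_scores_long.csv")  # hypothetical data file

# Within-candidate factor: station format; between factor: cohort.
aov = pg.mixed_anova(data=scores, dv="score", within="format",
                     subject="candidate", between="cohort")
print(aov)

# Effect size of the format difference (interview vs. role-play).
interview = scores.loc[scores["format"] == "interview", "score"]
role_play = scores.loc[scores["format"] == "role_play", "score"]
print("Cohen's d:", pg.compute_effsize(interview, role_play, eftype="cohen"))
```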
Background: Multiple mini interviews (MMIs) are a selection tool developed by a team at McMaster University to assess the non-cognitive skills of medical school applicants. The ability of MMIs to predict performance in medical school in a francophone context remains to be demonstrated. Objective: The aim of this study was to assess the predictive validity of MMIs in a francophone context using the integrated francophone multiple mini interviews (MEMFI), developed jointly by Quebec's three francophone faculties of medicine. Methods: We used a sample of 893 students enrolled in the MD programme at Université Laval. The independent variables were MEMFI scores and prior academic results. The dependent variables were performance in medical school in (1) integrative courses, (2) systems courses, (3) the annual longitudinal examination (ELA), and (4) clerkship. Stepwise linear regression analyses were conducted to determine the predictive value of the two selection criteria for the dependent variables. Results: MEMFI scores were most strongly associated with clerkship performance (β = 0.268, p < 0.001). They also significantly predicted results in integrative courses (β = 0.086, p = 0.020) and on the ELA (β = 0.104, p = 0.019), but their predictive power was weak and lower than that of prior academic results. Conclusion: MMIs demonstrate predictive validity in a francophone context.
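As an illustration of the regression reported here, the sketch below fits an OLS model on standardized variables so that the coefficients are comparable to the reported β weights. It is an assumption-laden sketch, not the authors' code: the file and column names (memfi, prior_gpa, clerkship) are hypothetical, and the full stepwise procedure is omitted since only two predictors are involved.

```python
# Assumption-laden sketch (not the authors' code): OLS on standardized
# variables so coefficients match reported beta weights. File and
# column names (memfi, prior_gpa, clerkship) are hypothetical.
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("laval_cohort.csv")  # hypothetical file, n = 893

# Standardize predictors and outcome to obtain beta coefficients.
cols = ["memfi", "prior_gpa", "clerkship"]
z = (df[cols] - df[cols].mean()) / df[cols].std(ddof=0)

X = sm.add_constant(z[["memfi", "prior_gpa"]])
model = sm.OLS(z["clerkship"], X).fit()
print(model.summary())  # standardized betas and p-values per predictor
```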
Background: Medical students on clinical rotations must be assessed on several competencies at the end of each rotation, pointing to the need for short, reliable, and valid assessment instruments for each competency. Doctor-patient communication is a central competency targeted by medical schools; however, there are no published short (i.e., fewer than 10 items), reliable, and valid instruments to assess doctor-patient communication competency. The Faculty of Medicine of Laval University recently developed a 5-item Doctor-Patient Communication Competency instrument for Medical Students (DPCC-MS), based on the Patient-Centered Clinical Method conceptual framework, which provides a global summative end-of-rotation assessment of doctor-patient communication. We conducted a psychometric validation of this instrument and present validity evidence based on response process, internal structure, and relations to other variables, using two years of assessment data. Methods: We conducted the study in two phases. In phase 1, we drew on 4991 student DPCC-MS assessments (two years). We computed descriptive statistics, conducted a confirmatory factor analysis (CFA), and tested the correlation between DPCC-MS and Multiple Mini Interview (MMI) scores. In phase 2, eleven clinical teachers assessed the performance of 35 medical students in an objective structured clinical examination station using the DPCC-MS, a 15-item instrument developed by Côté et al. (published in 2001), and a 2-item global assessment. We compared the DPCC-MS with the longer Côté et al. instrument on internal consistency, coefficient of variation, convergent validity, and inter-rater reliability. Results: Phase 1: Cronbach's alpha was acceptable (.75 and .83). Inter-item correlations were positive, and the discrimination index was above .30 for all items. The CFA supported a unidimensional structure. DPCC-MS and MMI scores were correlated. Phase 2: The DPCC-MS and the Côté et al. instrument had similar internal consistency and convergent validity, but the DPCC-MS had better inter-rater reliability (mean ICC = .61). Conclusions: The DPCC-MS provides an internally consistent and valid assessment of medical students' communication with patients.
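A minimal sketch of the phase-2 inter-rater reliability analysis follows, assuming long-format ratings data. It is not the study's code: the file and column names (student, rater, total_score) are hypothetical.

```python
# Minimal sketch (not the study's code) of the phase-2 inter-rater
# reliability analysis: intraclass correlation of DPCC-MS total scores
# across teachers rating the same OSCE performances. Column names are
# hypothetical.
import pandas as pd
import pingouin as pg

ratings = pd.read_csv("osce_ratings_long.csv")  # hypothetical long format

icc = pg.intraclass_corr(data=ratings, targets="student",
                         raters="rater", ratings="total_score")
print(icc[["Type", "ICC", "CI95%"]])  # report the variant matching the design
```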