The American Board of Pediatrics (ABP) certifies that general and subspecialty pediatricians meet standards of excellence established by their peers. Certification helps demonstrate that a general pediatrician or pediatric subspecialist has successfully completed accredited training and fulfills continuous certification requirements (Maintenance of Certification [MOC]). One current component of the MOC program is a closed-book examination administered at a secure testing center (ie, the MOC Part 3 examination). In this article, we describe the development during 2015–2016 of an alternative to this examination, termed the "Maintenance of Certification Assessment for Pediatrics" (MOCA-Peds). MOCA-Peds was conceptualized as an online, summative (ie, pass/fail), continuous assessment of a pediatrician's knowledge that would also promote learning. The system would consist of a set number of multiple-choice questions delivered each quarter, with immediate feedback on each question, rationales clarifying correct and incorrect answers, references for further learning, and peer benchmarking. Questions could be answered at any time within the quarter, in any setting with Internet connectivity, and on any device. As part of the development process in 2015–2016, the ABP actively recruited pediatricians to serve as members of a yearlong user panel or single-session focus groups, and refinements to MOCA-Peds were made on the basis of their feedback. MOCA-Peds is being actively piloted with pediatricians in 2017–2018. The ABP anticipates launching MOCA-Peds in January 2019 for General Pediatrics, Pediatric Gastroenterology, Child Abuse Pediatrics, and Pediatric Infectious Diseases, with launch dates for the remaining pediatric subspecialties between 2020 and 2022.
This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for systematic differences between standard setting panels) has received almost no attention in the literature. Identity equating was also examined to provide context. Data from a standard setting form of a large national certification test (N examinees = 4,397; N panelists = 13) were split into content-equivalent subforms with common items, and resampling methodology was used to investigate the error introduced by each approach. Common-item equating (circle-arc and nominal weights mean) was evaluated at samples of size 10, 25, 50, and 100. The standard setting approaches (resetting and rescaling the standard) were evaluated by resampling (N = 8) and by simulating panelists (N = 8, 13, and 20). Results were inconclusive regarding the relative effectiveness of resetting and rescaling the standard. Small-sample equating, however, consistently produced new form cut scores that were less biased and less prone to random error than new form cut scores based on resetting or rescaling the standard.

When an organization introduces a new test form, it is critical that test scores and classification decisions based on those scores maintain equivalence with previous forms to ensure the fair treatment of examinees. Because it is often impossible to guarantee that newly developed forms will be equal to previous forms with respect to difficulty level, it may be necessary to make slight adjustments to examinee scores (or the cut score) on the new form. Ideally, appropriate adjustments to scores would be identified through test equating, which was designed for that purpose. When a testing program has a limited number of examinees available for equating purposes, however, the decision regarding how best to maintain the equivalence of the cut score becomes a difficult one.

In practice there are several approaches that can be used to establish the performance standard (i.e., cut score) for a newly developed form. This study examines the effectiveness of three of those approaches for maintaining equivalent performance standards and classification decisions across test forms with small samples: (1) common-item test equating, (2) resetting the standard (i.e., conducting a standard setting for each form), and (3) rescaling the standard (i.e., conducting a standard setting for each form and applying common-item equating methodology to the panels' standard setting ratings).
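To make the small-sample equating step concrete, the following is a minimal Python sketch of nominal weights mean equating, one of the two small-sample methods evaluated in the study. The function name, the equal synthetic-population weights, and the fabricated scores are illustrative assumptions, not the study's actual implementation or data.

```python
import numpy as np

def nominal_weights_mean_shift(x_total, x_anchor, y_total, y_anchor,
                               k_x, k_y, k_anchor, w_x=0.5, w_y=0.5):
    """Return the additive shift that places new-form (X) scores on the
    old-form (Y) scale under nominal weights mean equating.

    Nominal weights mean equating is Tucker mean equating with the
    regression slopes (gamma) replaced by item-count ratios, which keeps
    the estimate stable when only a handful of examinees are available.
    """
    gamma_x = k_x / k_anchor  # slope proxy for the new-form group
    gamma_y = k_y / k_anchor  # slope proxy for the old-form group

    # Difference in common-item (anchor) performance between the groups.
    anchor_gap = np.mean(x_anchor) - np.mean(y_anchor)

    # Synthetic-population means; w_x and w_y weight the two groups.
    mu_sx = np.mean(x_total) - w_y * gamma_x * anchor_gap
    mu_sy = np.mean(y_total) + w_x * gamma_y * anchor_gap

    return mu_sy - mu_sx  # equated X score = x + shift

# Illustrative use with fabricated scores (100-item forms, 20 common items):
rng = np.random.default_rng(42)
x_total = rng.integers(60, 95, size=10)     # 10 new-form examinees
x_anchor = rng.integers(12, 20, size=10)    # their common-item scores
y_total = rng.integers(55, 95, size=4397)   # old-form examinees
y_anchor = rng.integers(10, 20, size=4397)  # old-form common-item scores

shift = nominal_weights_mean_shift(x_total, x_anchor, y_total, y_anchor,
                                   k_x=100, k_y=100, k_anchor=20)
old_cut = 70
new_cut = old_cut - shift  # old-form cut score carried to the new form
print(round(new_cut, 2))
```

Rescaling the standard, as the abstract describes it, applies the same common-item logic to panelists' ratings rather than examinee scores: systematic differences between standard setting panels are estimated from items rated by both panels, and the resulting adjustment is applied to the new panel's recommended cut score.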
Objectives: To describe the practice analysis undertaken by a task force convened by the American Board of Pediatrics Pediatric Critical Care Medicine Sub-board to create a comprehensive document to guide learning and assessment within Pediatric Critical Care Medicine. Design: An in-depth practice analysis with a mixed-methods design involving a descriptive review of practice, a modified Delphi process, and a survey. Setting: Not applicable. Subjects: Seventy-five Pediatric Critical Care Medicine program directors and 2,535 American Board of Pediatrics Pediatric Critical Care Medicine diplomates. Interventions: A practice analysis document, which identifies the full breadth of knowledge and skill required for the practice of Pediatric Critical Care Medicine, was developed by a task force made up of seven pediatric intensivists and a psychometrician. The document was circulated to all 75 Pediatric Critical Care Medicine fellowship program directors for review and comment, and their feedback informed modifications to the draft document. Concurrently, data from creation of the practice analysis draft document were also used to update the Pediatric Critical Care Medicine content outline, which was sent to all 2,535 American Board of Pediatrics Pediatric Critical Care Medicine diplomates for review during an open-comment period between January 2019 and February 2019, and diplomate feedback was used to make updates to both the content outline and the practice analysis document. Measurements and Main Results: After review and comment by 25 Pediatric Critical Care Medicine program directors (33.3%) and 619 board-certified diplomates (24.4%), a comprehensive practice analysis document was created through a two-stage process. The final practice analysis includes 10 performance domains, which parallel previously published Entrustable Professional Activities in Pediatric Critical Care Medicine. These performance domains are made up of between three and eight specific tasks, with each task including the critical knowledge and skills necessary for successful completion. The final practice analysis document was also used by the American Board of Pediatrics Pediatric Critical Care Medicine Sub-board to update the Pediatric Critical Care Medicine content outline. Conclusions: A systematic approach to practice analysis, with stakeholder engagement, is essential for an accurate definition of Pediatric Critical Care Medicine practice in its totality. This collaborative process resulted in a dynamic document useful in guiding curriculum development for training programs, maintenance of certification, and lifetime professional development to enable safe and efficient patient care.
OBJECTIVES: The American Board of Pediatrics (ABP) and the Pediatric Hospital Medicine (PHM) subboard developed a content outline through practice analysis to serve as a blueprint for the inaugural certification examination. This study describes the systematic approach of the practice analysis process. METHODS: A diverse, representative panel of 12 pediatric hospitalists developed the draft content outline using multiple resources (publications, textbooks, the PHM Core Competencies, the PHM fellowship curriculum, etc). The panel categorized practice knowledge into 13 domains and 202 subdomains. Self-identified practicing pediatric hospitalists were identified by using the ABP database. Participants rated the frequency and criticality of content domains and subdomains and provided open-ended comments. RESULTS: In total, 1449 (12.1%) generalists in the ABP database self-identified as pediatric hospitalists, and 800 full-time pediatric hospitalists responded. Content domains rated as highly critical and frequently required in practice were weighted more heavily (ie, assigned a larger percentage of examination questions) than those rated less critical and less frequent. Community and noncommunity pediatric hospitalists rated domains similarly (P = .943). Subdomain ratings and preliminary weights had similar means and SDs for the majority of topics. CONCLUSIONS: There was concordance in the rating of domains and universal tasks among both community and noncommunity hospitalists. The areas of significant difference, although minor, could be explained by differences in practice settings. The practice analysis approach was structured, engaged the PHM community, reflected the breadth and depth of knowledge required for PHM practice, and used an iterative process to refine the final product.
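As a concrete illustration of how frequency and criticality ratings can translate into examination weights, the sketch below multiplies mean ratings per domain and normalizes the products to a share of exam questions. The rating scales, the multiplicative combination rule, and the simulated responses are assumptions chosen for illustration; the abstract does not specify the subboard's actual weighting formula.

```python
import numpy as np

def domain_weights(freq, crit):
    """Turn per-respondent frequency and criticality ratings
    (rows = respondents, columns = content domains) into the fraction
    of examination questions allocated to each domain."""
    importance = freq.mean(axis=0) * crit.mean(axis=0)  # per-domain importance
    return importance / importance.sum()                # normalize to sum to 1

# Fabricated survey data: 800 respondents rating 13 content domains,
# frequency on a 1-5 scale and criticality on a 1-4 scale (assumed scales).
rng = np.random.default_rng(0)
freq = rng.integers(1, 6, size=(800, 13))
crit = rng.integers(1, 5, size=(800, 13))

weights = domain_weights(freq, crit)
print((weights * 100).round(1))  # percent of exam questions per domain
```

A multiplicative rule like this gives extra weight to domains that are both common and high-stakes; a testing program might instead sum the ratings or apply panel judgment on top of the survey data.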