Objectives: The American College of Emergency Physicians' geriatric emergency department (GED) guidelines recommend additional staff and geriatric equipment, which may not be financially feasible for every ED. Data from an accredited Level 1 GED were used to report equipment costs and to develop a business model for the financial sustainability of a GED.

Methods: Staff salaries, including the cost of fringe benefits, were obtained from a Midwestern hospital with an academic ED of 80,000 annual visits. Reimbursement assumptions included a 100% Medicare/Medicaid payor mix and 8-hour workdays with 4.5 weeks of leave annually. Equipment costs were collated from hospital invoices. Operational and patient safety metrics were compared before and after GED implementation.

Results: A geriatric nurse practitioner in the ED is financially self-sustaining at 7.1 consultations per workday, a pharmacist at 7.7 medication reconciliation consultations per workday, and physical and occupational therapists at 5.7 and 4.6 evaluations per workday, respectively. The total annual equipment cost for mobility aids, delirium aids, sensory aids, and personal care items for the GED was $4,513. Comparing the 2 years before and after GED implementation, the proportions of patients with lengths of stay > 8 hours and of patients placed in observation did not change.
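The self-sustainability figures above follow from a break-even calculation: the fully loaded daily cost of a staff member divided by the reimbursement collected per consultation. The sketch below illustrates that arithmetic under the abstract's scheduling assumptions (8-hour workdays, 4.5 weeks of annual leave); the salary, fringe rate, and reimbursement values are hypothetical placeholders, not figures from the study.

```python
# Break-even sketch for GED staffing: consults per workday needed for a
# clinician's reimbursement to cover salary plus fringe benefits.
# Dollar figures are hypothetical; only the scheduling assumptions
# (8-hour workdays, 4.5 weeks of annual leave) follow the abstract.

WEEKS_PER_YEAR = 52
LEAVE_WEEKS = 4.5           # per the abstract's assumptions
WORKDAYS_PER_WEEK = 5
WORKDAYS_PER_YEAR = (WEEKS_PER_YEAR - LEAVE_WEEKS) * WORKDAYS_PER_WEEK  # 237.5


def breakeven_consults_per_day(annual_salary: float,
                               fringe_rate: float,
                               reimbursement_per_consult: float) -> float:
    """Consults per workday at which reimbursement equals fully loaded cost."""
    fully_loaded_cost = annual_salary * (1 + fringe_rate)
    daily_cost = fully_loaded_cost / WORKDAYS_PER_YEAR
    return daily_cost / reimbursement_per_consult


# Hypothetical inputs for a geriatric nurse practitioner (illustrative only).
print(round(breakeven_consults_per_day(annual_salary=115_000,
                                       fringe_rate=0.30,
                                       reimbursement_per_consult=88.0), 1))
```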
Objectives: Emergency department (ED) patient satisfaction metrics are highly valued by hospitals, health systems, and payers, yet these metrics are challenging to analyze and interpret. Accurate interpretation involves selecting the most appropriate peer group for benchmark comparisons. We hypothesized that the selection of different benchmark peer groups would yield different interpretations of Press Ganey (PG) patient satisfaction scores.

Methods: ED PG summary ratings of “doctors section” and “likelihood-to-recommend” raw scores and corresponding percentiles were derived for three benchmark peer groups from three academic years (2016, 2017, and 2018). The three benchmarks were: 1) the PG Large database; 2) the PG University HealthSystem Consortium (UHC) database; and 3) the Academy of Administrators in Academic Emergency Medicine (AAAEM) database, which is composed only of EDs from academic health centers with emergency medicine residency training programs. Raw scores were converted to percentile ranks for each distribution and then compared using Welch's ANOVA and Games-Howell pairwise comparisons.

Results: For both patient satisfaction raw scores evaluated, the AAAEM database had significantly higher percentile ranks than the PG Large and PG UHC databases. These results were consistent across all three time frames assessed.

Conclusions: Benchmarking with different peer groups yields different results, with similar patient satisfaction raw scores producing higher percentile ranks in the AAAEM database than in the two PG databases. The AAAEM database should be considered the most appropriate peer group for benchmarking academic EDs.
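A minimal sketch of the analysis named in the Methods: raw scores are converted to percentile ranks within each benchmark distribution, and the resulting percentile-rank groups are compared with Welch's ANOVA and Games-Howell pairwise tests. The benchmark distributions and ED scores below are simulated placeholders, not the study's PG, UHC, or AAAEM data, and the pingouin package is assumed to be available for the two tests.

```python
import numpy as np
import pandas as pd
import pingouin as pg
from scipy.stats import percentileofscore

rng = np.random.default_rng(0)

# Simulated raw-score distributions for three hypothetical benchmark peer groups.
benchmarks = {
    "PG_Large": rng.normal(86, 4, 500),
    "PG_UHC":   rng.normal(85, 4, 200),
    "AAAEM":    rng.normal(81, 4, 80),
}

# Hypothetical raw scores for a set of academic EDs.
ed_raw_scores = rng.normal(83, 3, 60)

# Convert each ED's raw score to a percentile rank within each benchmark
# distribution: the same raw score can land at very different ranks.
long = pd.DataFrame(
    [{"benchmark": name, "percentile": percentileofscore(dist, s)}
     for name, dist in benchmarks.items() for s in ed_raw_scores]
)

# Welch's ANOVA (does not assume equal variances) across benchmark groups,
# followed by Games-Howell pairwise comparisons.
print(pg.welch_anova(data=long, dv="percentile", between="benchmark"))
print(pg.pairwise_gameshowell(data=long, dv="percentile", between="benchmark"))
```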