Background
Medical applications (apps) for smartphones and tablet computers are growing in number and are commonly used in healthcare. In this context, there is a need for a diverse community of app users, medical researchers, and app developers to better understand the app landscape.

Methods
In mid-2012, we undertook an environmental scan and classification of the medical app landscape on the two dominant platforms by searching the medical category of the Apple iTunes and Google Play app download sites. We identified target audiences, functions, costs, and content themes from app descriptions and captured these data in a database. We included only apps released or updated between October 1, 2011 and May 31, 2012, with a primary “medical” app store categorization, in English, that contained health or medical content. Our sample of Android apps was limited to the most popular apps in the medical category.

Results
Our final sample of Apple iOS (n = 4561) and Android (n = 293) apps illustrates a diverse medical app landscape. The proportions of Apple iOS apps for the public (35%) and for physicians (36%) are similar. Few Apple iOS apps specifically target nurses (3%). Among the Android apps, those targeting the public dominated our sample (51%). The distribution of app functions is similar on both platforms, with reference being the most common function. Most app functions and content themes vary considerably by target audience. Social media apps are more common for patients and the public, while conference apps target physicians.

Conclusions
We characterized existing medical apps and illustrated their diversity in terms of target audience, main functions, cost, and healthcare topic. The resulting app database is a resource for app users, app developers, and health informatics researchers.

Electronic supplementary material
The online version of this article (doi:10.1186/1756-0500-7-573) contains supplementary material, which is available to authorized users.
Context
Checklists are commonly used in the assessment of procedural competence. However, on most checklists, high scores are often unable to rule out incompetence, as the commission of a few serious procedural errors typically results in only a minimal reduction in performance score. We hypothesised that checklists constructed based on procedural errors may be better at identifying incompetence.

Objectives
This study sought to compare the efficacy of an error-focused checklist and a conventionally constructed checklist in identifying procedural incompetence.

Methods
We constructed a 15-item error-focused checklist for lumbar puncture (LP) based on input from 13 experts in four Canadian academic centres, using a modified Delphi approach over three rounds of survey. Ratings of 18 video-recorded performances of LP on simulators using the error-focused tool were compared with ratings obtained using a published conventional 21-item checklist. Competence/incompetence decisions were based on global assessment. Diagnostic accuracy was estimated using the area under the curve (AUC) in receiver operating characteristic analyses.

Results
The accuracy of the conventional checklist in identifying incompetence was low (AUC 0.11, 95% confidence interval [CI] 0.00–0.28) in comparison with that of the error-focused checklist (AUC 0.85, 95% CI 0.67–1.00). The internal consistency of the error-focused checklist was lower than that of the conventional checklist (α = 0.35 and α = 0.79, respectively). The inter-rater reliability of both tools was high (conventional checklist: intraclass correlation coefficient [ICC] 0.99, 95% CI 0.98–1.00; error-focused checklist: ICC 0.92, 95% CI 0.68–0.98).

Conclusions
Despite its higher internal consistency and inter-rater reliability, the conventional checklist was less accurate at identifying procedural incompetence. For assessments in which it is important to identify procedural incompetence, we recommend the use of an error-focused checklist.
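The diagnostic-accuracy analysis above rests on the area under the ROC curve, which can be computed directly from checklist scores and competence judgements via the Mann-Whitney interpretation of the AUC. The sketch below illustrates that calculation; the scores and labels are invented for illustration and are not the study's data.

```python
# Minimal sketch: AUC of a checklist score as a diagnostic test for competence.
# Data are hypothetical, not from the study described above.

def roc_auc(scores, labels):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen competent performance outscores a randomly chosen incompetent
    one (ties count as 0.5)."""
    pos = [s for s, l in zip(scores, labels) if l == 1]  # judged competent
    neg = [s for s, l in zip(scores, labels) if l == 0]  # judged incompetent
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical checklist totals (higher = better) and global competence ratings.
scores = [14, 13, 12, 12, 9, 8, 7, 5]
labels = [1, 1, 1, 0, 1, 0, 0, 0]  # 1 = competent, 0 = incompetent

print(round(roc_auc(scores, labels), 2))  # → 0.91
```

An AUC near 1.0 means the score separates competent from incompetent performances well; an AUC below 0.5, as reported for the conventional checklist, means the score ranks incompetent performances higher than competent ones.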
The authors have created a software system called the CAVEman, for the visual integration and exploration of heterogeneous anatomical and biomedical data. The CAVEman can be applied for both education and research tasks. The main component of the system is a three-dimensional digital atlas of the adult male human anatomy, structured according to the nomenclature of Terminologia Anatomica. The underlying data-indexing mechanism uses standard ontologies to map a range of biomedical data types onto the atlas. The CAVEman system is now used to visualize genetic processes in the context of the human anatomy and to facilitate visual exploration of the data. Through the use of Java™ software, the atlas-based system is portable to virtually any computer environment, including personal computers and workstations. Existing Java tools for biomedical data analysis have been incorporated into the system. The affordability of virtual-reality installations has increased dramatically over the last several years. This creates new opportunities for educational scenarios that model important processes in a patient's body, including gene expression patterns, metabolic activity, the effects of interventions such as drug treatments, and eventually surgical simulations.
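The ontology-based indexing described above amounts to keying heterogeneous data records and atlas structures to a shared ontology identifier, so that everything attached to a structure can be looked up and rendered together. The sketch below illustrates that idea only; the identifiers and records are invented for illustration (CAVEman itself is a Java system, and its actual schema is not given in this abstract).

```python
# Minimal sketch of atlas indexing by ontology ID. IDs and records are
# hypothetical; they only mimic the Terminologia Anatomica-style keying
# described in the abstract.
from collections import defaultdict

# Atlas structures keyed by a (hypothetical) ontology identifier.
atlas = {
    "TA:A05.8.01.001": "liver",
    "TA:A12.1.00.001": "heart",
}

# Heterogeneous biomedical data records, each tagged with the same kind of ID.
records = [
    {"id": "TA:A05.8.01.001", "type": "gene_expression", "value": "ALB high"},
    {"id": "TA:A12.1.00.001", "type": "metabolic", "value": "O2 uptake"},
    {"id": "TA:A05.8.01.001", "type": "drug_effect", "value": "statin target"},
]

# Build the index: ontology ID -> all records mapped onto that structure.
index = defaultdict(list)
for rec in records:
    index[rec["id"]].append(rec)

# Everything mapped onto the liver can now be retrieved in one lookup.
liver_data = [r["type"] for r in index["TA:A05.8.01.001"]]
print(liver_data)  # → ['gene_expression', 'drug_effect']
```

Keying both the atlas and the data to one standard ontology is what lets new data types be attached without changing the atlas itself, which is the portability the abstract emphasizes.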