Background: In 2020, more than 250 eHealth solutions were added to app stores each day (roughly 90,000 over the year); however, the vast majority of these solutions have not undergone clinical validation, their quality is unknown, and users cannot tell whether they are effective and safe. We sought to develop a simple prescreening scoring method to assess the quality and clinical relevance of each app. We designed this tool with 3 health care stakeholder groups in mind: eHealth solution designers seeking to evaluate a potential competitor or their own tool, investors considering a fundraising candidate, and hospital clinicians or IT departments wishing to evaluate a current or potential eHealth solution.

Objective: We built and tested a novel prescreening scoring tool (the Medical Digital Solution scoring tool). The tool, which consists of 26 questions enabling quick assessment and comparison of the clinical relevance and quality of eHealth apps, was tested on 68 eHealth solutions.

Methods: The Medical Digital Solution scoring tool is based on the 2021 evaluation criteria of the French National Health Authority, the 2022 European Society for Medical Oncology recommendations, and other existing scores. We built the scoring tool with patient associations and eHealth experts and submitted it to eHealth app creators, who evaluated their apps via a web-based form in January 2022. After completing the evaluation criteria, each app obtained an overall score and 4 category subscores. These criteria evaluated the type of solution and its domain, the size of the solution's targeted population, the level of clinical assessment, and information about the provider.

Results: In total, 68 eHealth solutions were evaluated with the scoring tool. Oncology apps (22%, 20/90) and general health solutions (23%, 21/90) were the most represented. Of the 68 apps, 32 (47%) involved remote monitoring by health professionals. Regarding clinical outcomes, 5% (9/169) of the apps assessed overall survival. Randomized studies had been conducted for 21% (23/110) of the apps to assess their benefit. Of the 68 providers, 38 (56%) declared the objective of obtaining reimbursement, and 7 (18%) of the 38 solutions seeking reimbursement were assessed as having a high probability of reimbursement. The median global score was 11.2 (range 4.7-17.4) out of 20, and the distribution of scores followed a normal distribution pattern (Shapiro-Wilk test: P=.33).

Conclusions: This multidomain prescreening scoring tool is simple and fast and can be deployed on a large scale to initiate an assessment of the clinical relevance and quality of a clinical eHealth app. It can help a decision-maker determine which aspects of an app require further analysis and improvement.
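The tool's structure described in the Methods (four subscore categories aggregated into a global score out of 20) can be sketched as follows. This is a hypothetical illustration only: the abstract does not publish the 26-item questionnaire, the subscore formulas, or the weighting, so the category keys, 0-5 scales, and equal weighting below are assumptions.

```python
# Hypothetical sketch only: the 26-item questionnaire, its weights, and the
# exact subscore formulas are not given in the abstract; the category keys,
# 0-5 scales, and equal weighting below are assumptions for illustration.
SUBSCORE_CATEGORIES = (
    "solution_type_and_domain",   # type of solution and medical domain
    "targeted_population_size",   # size of the solution's targeted population
    "clinical_assessment_level",  # level of clinical assessment
    "provider_information",       # information about the provider
)

def global_score(subscores: dict[str, float]) -> float:
    """Sum the four assumed 0-5 subscores into a global score out of 20."""
    return sum(subscores[category] for category in SUBSCORE_CATEGORIES)

# One hypothetical app whose subscores add up to the reported median of 11.2/20.
app = {
    "solution_type_and_domain": 3.5,
    "targeted_population_size": 2.0,
    "clinical_assessment_level": 4.0,
    "provider_information": 1.7,
}
print(f"{global_score(app):.1f}/20")  # → 11.2/20
```

A per-category breakdown like this would let a decision-maker see at a glance which of the four dimensions (e.g., the level of clinical assessment) drags an app's global score down and merits further analysis.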
Introduction: Segmentation of organs at risk (OARs) and target volumes requires time and precision but is a highly repetitive task. Radiation oncology has seen tremendous technological advances in recent years, the latest brought by artificial intelligence (AI). Despite the advantages AI brings to segmentation, academics have raised concerns about its impact on young radiation oncologists' training. A survey was therefore conducted among young French radiation oncologists (ROs) by the SFjRO (Société Française des jeunes Radiothérapeutes Oncologues).

Methodology: The SFjRO organizes regular webinars focusing on an anatomical localization, discussing either segmentation or dosimetry. Completion of the survey was mandatory for registration to a dosimetry webinar dedicated to head and neck (H&N) cancers. The survey was generated in accordance with the CHERRIES guidelines. Quantitative data (e.g., time savings and correction needs) were not measured directly but selected from predefined response options.

Results: 117 young ROs from 35 different, mostly academic centers participated. Most centers were either already equipped with such solutions or planned to be equipped within the next two years. AI segmentation software was considered most useful for H&N cases. For the definition of OARs, participants experienced a significant time gain with AI-proposed delineations, with almost 35% of participants saving 50-100% of the segmentation time; the time gained for target volumes was significantly lower, with only 8.6% reporting a 50-100% gain. Contours still needed to be thoroughly checked, especially target volumes for some participants, and edited. The majority of participants suggested that these tools be integrated into training so that future radiation oncologists do not neglect the importance of radioanatomy. Fully aware of this risk, up to one-third even suggested that AI tools be reserved for senior physicians only.

Conclusions: We believe this survey on automatic segmentation is the first to focus on the perception of young radiation oncologists. Software developers should focus on enhancing the quality of proposed segmentations, while young radiation oncologists should become more acquainted with these tools.