With this novel tool, we offer a potentially scalable approach to supporting the pain-management clinical workflow, integrating technologies, and promoting patient and/or parent engagement in the inpatient setting.
A series of mitigation efforts were implemented in response to the COVID-19 pandemic in Saudi Arabia, including the development of mobile health applications (mHealth apps) for the public. Assessing the acceptability of mHealth apps among the public is crucial. This study aimed to use Twitter to understand public perceptions around the use of six Saudi mHealth apps used during COVID-19: “Sehha”, “Mawid”, “Sehhaty”, “Tetamman”, “Tawakkalna”, and “Tabaud”. We used two methodological approaches: network analysis and sentiment analysis. We retrieved Twitter data using specific mHealth app-related keywords. After including relevant tweets, our final mHealth app networks consisted of a total of 4995 Twitter users and 8666 conversational relationships. The largest networks by size (i.e., the number of users) and volume (i.e., the number of conversational relationships) were “Tawakkalna”, followed by “Tabaud”, and their conversations were led by diverse governmental accounts. In contrast, the four remaining mHealth networks were led mainly by the health sector and the media. Our sentiment analysis approach included five classes and showed that most conversations were neutral, consisting of facts, pieces of information, and general inquiries. For the automated sentiment classifier, we used a Support Vector Machine with AraVec embeddings, as it outperformed the other tested classifiers; the sentiment classifier showed an accuracy, precision, recall, and F1-score of 85%. Future studies can use social media and real-time analytics to improve mHealth apps’ services and user experience, especially during health crises.
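The classification step described in this abstract — a Support Vector Machine over AraVec word embeddings — can be sketched as below. This is a minimal illustration, not the study's pipeline: the pretrained AraVec (Arabic word2vec) lookup is replaced by fixed random vectors so the example is self-contained, and the toy English tweets and three labels (a subset of the paper's five classes) are invented for demonstration.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(42)
DIM = 100  # AraVec models are commonly distributed with 100- or 300-dim vectors

# Placeholder embedding table: one fixed random vector per token (assumption;
# the study instead looked tokens up in pretrained AraVec embeddings).
vocab: dict = {}

def embed(tokens):
    """Average the token vectors into a single feature row per tweet."""
    return np.mean([vocab.setdefault(t, rng.normal(size=DIM)) for t in tokens], axis=0)

# Toy labelled, tokenized tweets; 0 = neutral, 1 = positive, 2 = negative.
train = [
    (["app", "update", "released"], 0),
    (["ministry", "announces", "service"], 0),
    (["great", "easy", "helpful"], 1),
    (["love", "fast", "useful"], 1),
    (["crash", "slow", "broken"], 2),
    (["error", "fails", "annoying"], 2),
]
X = np.vstack([embed(tokens) for tokens, _ in train])
y = np.array([label for _, label in train])

# Linear-kernel SVM, one averaged-embedding row per tweet.
clf = SVC(kernel="linear").fit(X, y)
preds = clf.predict(X)
acc = float((preds == y).mean())
```

In practice the embedding lookup would load an AraVec model (e.g. via gensim's `KeyedVectors`) and the labels would come from the manually annotated five-class tweet set the study reports.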
The coronavirus disease 2019 (COVID-19) pandemic has driven a sharp rise in the use of telemedicine applications (apps). This study evaluated the usability of the user interface design of telemedicine apps deployed during the COVID-19 pandemic in Saudi Arabia. It also explored changes in the apps’ usability over the pandemic timeline. Methods: We screened ten mHealth apps published by the National Digital Transformation Unit and selected three telemedicine apps: (1) the governmental “Seha”® app, (2) the stand-alone “Cura”® app, and (3) the private “Dr. Sulaiman Alhabib”® app. We conducted the evaluations in April 2020 and in June 2021 by identifying positive app features, applying Nielsen’s ten usability heuristics with a five-point severity rating scale, and documenting redesign recommendations. Results: We identified 54 user interface usability issues across both evaluation periods: 18 issues in “Seha”, 14 issues in “Cura”, and 22 issues in “Dr. Sulaiman Alhabib”. The two most frequently violated heuristics in “Seha” were “user control and freedom” and “recognition rather than recall”. In “Cura”, the three most frequently violated heuristics were “consistency and adherence to standards”, “esthetic and minimalist design”, and “help and documentation”. In “Dr. Sulaiman Alhabib”, the most frequently violated heuristic was “error prevention”. Ten of the thirty usability issues identified in our first evaluation were no longer present during our second evaluation. Conclusions: Our findings indicate that all three apps have room to improve their user interface designs to enhance the overall user experience and to ensure the continuity of these services beyond the pandemic.
Despite the importance of electronic health record (EHR) data, less attention has been given to their quality. This study aimed to evaluate the quality of COVID-19 patients’ records and their readiness for secondary use. We conducted a retrospective chart review of all COVID-19 inpatients in an academic hospital during 2020, identified using ICD-10 codes and case definition guidelines. COVID-19 signs and symptoms were documented more often in unstructured clinical notes than in structured coded data. COVID-19 cases were categorized as 218 (66.46%) “confirmed cases”, 10 (3.05%) “probable cases”, 9 (2.74%) “suspected cases”, and 91 (27.74%) “no sufficient evidence”. Identifying “probable cases” and “suspected cases” was more challenging than identifying “confirmed cases”, for which laboratory confirmation was sufficient. COVID-19 case identification was more accurate from laboratory tests than from ICD-10 codes. When validating against laboratory results, we found that ICD-10 codes were inaccurately assigned to 238 (72.56%) patients’ records. “No sufficient evidence” records may indicate inaccurate or incomplete EHR data. Data quality evaluation should be incorporated to ensure patient safety and data readiness for secondary use research and predictive analytics. We encourage educational and training efforts to motivate healthcare providers regarding the importance of accurate documentation at the point of care.
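The validation step this abstract describes — cross-checking assigned ICD-10 COVID-19 codes against laboratory results — can be sketched as below. The record fields and sample values are illustrative assumptions, not the study's actual schema; the only grounded detail is the WHO convention that U07.1 denotes laboratory-confirmed COVID-19 and U07.2 a clinical or epidemiological diagnosis without lab confirmation.

```python
# Minimal sketch: flag records whose ICD-10 code U07.1 (COVID-19, virus
# identified) is not backed by a positive PCR result. Schema is hypothetical.
records = [
    {"id": 1, "icd10": "U07.1", "pcr": "positive"},  # code and lab agree
    {"id": 2, "icd10": "U07.1", "pcr": "negative"},  # code without lab support
    {"id": 3, "icd10": "U07.2", "pcr": None},        # clinical diagnosis, no test
]

def inconsistent(rec):
    """U07.1 requires virus identification, so any non-positive PCR is a mismatch."""
    return rec["icd10"] == "U07.1" and rec["pcr"] != "positive"

flagged = [r["id"] for r in records if inconsistent(r)]
print(flagged)  # record IDs needing manual chart review
```

A check of this shape, run across all inpatient records, yields the kind of miscoding rate the study reports (ICD-10 codes inaccurate in 72.56% of records when validated against laboratory results).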