Background: In recent years, there has been rapid growth in the availability and use of mobile health (mHealth) apps around the world. A consensus regarding an accepted standard to assess the quality of such apps has yet to be reached. A factor that exacerbates the challenge of mHealth app quality assessment is variations in the interpretation of quality and its subdimensions. Consequently, it has become increasingly difficult for health care professionals worldwide to distinguish apps of high quality from those of lower quality. This exposes both patients and health care professionals to unnecessary risks. Despite progress, limited understanding of the contributions of researchers in low- and middle-income countries (LMICs) exists on this topic. Furthermore, the applicability of quality assessment methodologies in LMIC settings remains relatively unexplored.

Objective: This rapid review aims to identify current methodologies in the literature to assess the quality of mHealth apps, understand what aspects of quality these methodologies address, determine what input has been made by authors from LMICs, and examine the applicability of such methodologies in LMICs.

Methods: This review was registered with PROSPERO (International Prospective Register of Systematic Reviews). A search of PubMed, EMBASE, Web of Science, and Scopus was performed for papers related to mHealth app quality assessment methodologies, which were published in English between 2005 and 2020. By taking a rapid review approach, a thematic and descriptive analysis of the papers was performed.

Results: Electronic database searches identified 841 papers. After the screening process, 52 papers remained for inclusion. Of the 52 papers, 5 (10%) proposed novel methodologies that could be used to evaluate mHealth apps of diverse medical areas of interest, 8 (15%) proposed methodologies that could be used to assess apps concerned with a specific medical focus, and 39 (75%) used methodologies developed by other published authors to evaluate the quality of various groups of mHealth apps. The authors in 6% (3/52) of papers were solely affiliated to institutes in LMICs. A further 15% (8/52) of papers had at least one coauthor affiliated to an institute in an LMIC.

Conclusions: Quality assessment of mHealth apps is complex in nature and at times subjective. Despite growing research on this topic, to date, an all-encompassing appropriate means for evaluating the quality of mHealth apps does not exist. There has been engagement with authors affiliated to institutes across LMICs; however, limited consideration of current generic methodologies for application in LMIC settings has been identified.

Trial Registration: PROSPERO CRD42020205149; https://www.crd.york.ac.uk/prospero/display_record.php?RecordID=205149
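The three category counts reported above partition the 52 included papers exactly, and the percentages follow directly from those counts. A trivial, purely illustrative Python check of that arithmetic (the category labels are paraphrases, not terms from the review):

```python
# Illustrative check: the three categories partition the 52 included papers.
counts = {
    "novel generic methodologies": 5,
    "novel condition-specific methodologies": 8,
    "applied existing methodologies": 39,
}
total = sum(counts.values())
assert total == 52
for label, n in counts.items():
    print(f"{label}: {n}/{total} = {100 * n / total:.0f}%")  # 10%, 15%, 75%
```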
Background: Over 325,000 mobile health (mHealth) apps are available to download across various app stores. However, quality assurance in this field of medicine remains relatively undefined. Globally, around 84% of the population have access to mobile broadband networks. Given the potential for mHealth app use in health promotion and disease prevention, their role in patient care worldwide is ever apparent. Quality assurance regulations, both nationally and internationally, will take time to develop. Frameworks such as the Mobile App Rating Scale and the Enlight Suite have demonstrated potential for use in the interim. However, these frameworks require adaptation to be suitable for international use.

Objective: This study aims to modify the Enlight Suite, a comprehensive app quality assessment methodology, to improve its applicability internationally and to assess the preliminary validity and reliability of this modified tool in practice.

Methods: A two-round Delphi study involving 7 international mHealth experts with varied backgrounds in health, technology, and clinical psychology was conducted to modify the Enlight Suite for international use and to improve its content validity. The Modified Enlight Suite (MES) was then used by 800 health care professionals and health care students in Ireland to assess a COVID-19 tracker app in an online survey. The reliability of the MES was assessed using Cronbach alpha, while the construct validity was evaluated using confirmatory factor analysis.

Results: The final version of the MES has 7 sections with 32 evaluating items. Of these items, 5 were novel and based on consensus for inclusion by Delphi panel members. The MES has satisfactory reliability with a Cronbach alpha score of .925. The subscales also demonstrated acceptable internal consistency. Similarly, the confirmatory factor analysis demonstrated a positive and significant factor loading for all 32 items in the MES with a modestly acceptable model fit, thus indicating the construct validity of the MES.

Conclusions: The Enlight Suite was modified to improve its international relevance to app quality assessment by introducing new items relating to cultural appropriateness, accessibility, and readability of mHealth app content. This study indicates both the reliability and validity of the MES for assessing the quality of mHealth apps in a high-income country, with further studies being planned to extrapolate these findings to low- and middle-income countries.
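Cronbach alpha, used above as the reliability measure, is a function of the per-item variances and the variance of the summed score: alpha = k/(k-1) * (1 - sum of item variances / variance of total score), for k items. A minimal Python sketch of that computation (illustrative only; the function name and the random example data are assumptions, not the study's actual analysis script):

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Cronbach alpha for a 2-D array of shape (respondents, items)."""
    k = responses.shape[1]                         # number of items (32 for the MES)
    item_vars = responses.var(axis=0, ddof=1)      # per-item sample variance
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of each respondent's summed score
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 800 respondents rating 32 items on a 1-5 scale.
# Random, uncorrelated ratings like these yield alpha near 0; real questionnaire
# data with correlated items is what pushes alpha toward values such as .925.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(800, 32)).astype(float)
print(f"alpha = {cronbach_alpha(scores):.3f}")
```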
Introduction: Healthcare professionals (HCPs) often recommend that their patients use a specific mobile health (mHealth) app as part of health promotion, disease prevention, and patient self-management, and there has been significant growth in the number of HCPs downloading and using such apps. Most mHealth apps available in app stores employ a 'star rating' system, which is based on user feedback but is highly subjective. The identification of quality mHealth apps that are fit for purpose can therefore be a difficult task for HCPs. Currently, there are no unified, validated standard guidelines for the assessment of mHealth apps for patient safety that HCPs can use. The Modified Enlight Suite (MES) is a quality assessment framework designed to provide a means for HCPs to evaluate mHealth apps before recommending them to patients. The MES was adapted from the original Enlight Suite for international use through a Delphi method, followed by a preliminary validation process among a population consisting predominantly of medical students. This study aims to evaluate the applicability and validity of the MES, by HCPs, in low-, middle- and high-income country settings.

Methods and analysis: The MES will be evaluated through a mixed-methods study, consisting of qualitative (focus group) and quantitative (survey instrument) research, in three target countries: Malaŵi (low income), South Africa (middle income), and Ireland (high income). The focus groups will be conducted through Microsoft Teams (Microsoft, Redmond, Washington, USA), and the surveys will be conducted online using Qualtrics (Qualtrics International, Seattle, Washington, USA). Participants will be recruited by email invitation with the help of national representatives in Malaŵi (Mzuzu University), South Africa (University of Fort Hare), and Ireland (University College Cork). Focus group data will be analyzed by means of thematic analysis. Survey data will be analyzed using descriptive statistics, with Cronbach alpha as an indicator of the internal consistency of the MES. The construct validity of the MES will be assessed by confirmatory factor analysis using Amos.

Ethics and dissemination: The study has received ethical approval from the Social Research Ethics Committee (SREC) at University College Cork, Ireland (SREC/SOM/03092021/1), the Malaŵi Research Ethics Committee (MREC), Malaŵi (MZUNIREC/DOR/21/59), and the Inter-Faculty Research Ethics Committee (IFREC) of the University of Fort Hare (REC-2 70 710-028-RA). The results of the study will be disseminated through the internet, peer-reviewed journals, and conference presentations.
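The protocol names Amos for the confirmatory factor analysis. As a rough open-source sketch of the same kind of measurement-model check, here is how it might look with the semopy package in Python; the single-factor structure and item names below are hypothetical illustrations, not the MES's actual factor model:

```python
import pandas as pd
import semopy  # open-source SEM library; the protocol itself specifies Amos

# Hypothetical measurement model: one latent MES subscale measured by three
# of its items, written in lavaan-style syntax ("=~" defines a latent factor).
MODEL_DESC = """
usability =~ item1 + item2 + item3
"""

def fit_cfa(responses: pd.DataFrame) -> pd.DataFrame:
    """Fit the CFA and return parameter estimates (factor loadings, p-values)."""
    model = semopy.Model(MODEL_DESC)
    model.fit(responses)             # maximum-likelihood estimation by default
    print(semopy.calc_stats(model))  # fit indices such as CFI and RMSEA
    return model.inspect()           # loadings with standard errors and p-values

# `responses` would be a DataFrame with columns item1..item3, one row per participant.
```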
BACKGROUND: There has been rapid growth in the availability and use of mobile health (mHealth) apps around the world in recent years. However, no consensus regarding an accepted standard to assess the quality of such apps exists, and differing interpretations of quality add to this problem. Consequently, it has become increasingly difficult for healthcare professionals to distinguish apps of high quality from those of lower quality, exposing both patients and healthcare professionals to unnecessary risk. Despite progress, understanding of contributions on this topic by those in low- and middle-income countries (LMICs) remains limited, and the applicability of quality assessment methodologies in LMIC settings remains unexplored.

OBJECTIVE: The objectives of this rapid review are to (1) identify current methodologies in the literature to assess the quality of mHealth apps, (2) understand what aspects of quality these methodologies address, (3) determine what input has been made by authors from LMICs, and (4) examine the applicability of such methodologies in low- and middle-income settings.

METHODS: The review is registered with PROSPERO (CRD42020205149). A search of PubMed, EMBASE, Web of Science, and Scopus was performed for papers relating to mHealth app quality assessment methodologies, published in English between 2005 and December 28, 2020. A thematic and descriptive analysis of the methodologies and papers was performed.

RESULTS: Electronic database searches identified 841 papers. After the screening process, 53 papers remained for inclusion: 6 proposed novel methodologies that could be used to evaluate mHealth apps of diverse medical areas of interest, 8 proposed methodologies that could be used to assess apps concerned with a specific medical focus, and 39 used methodologies developed by other published authors to evaluate the quality of various groups of mHealth apps. The authors of 3 papers were solely affiliated to institutes in LMICs. A further 8 papers had at least one co-author affiliated to an institute in an LMIC.

CONCLUSIONS: Quality assessment of mHealth apps is complex in nature and at times subjective. Despite growing research on this topic, to date, an all-encompassing, appropriate means for evaluating the quality of mHealth apps does not exist. There has been engagement with authors affiliated to institutes in LMICs; however, limited consideration of current generic methodologies for application in LMIC settings has been identified.
BACKGROUND: Over 325,000 mobile health (mHealth) applications (apps) are available to download across various app stores. However, quality assurance in this field of medicine remains relatively undefined. Globally, around 84% of the population have access to mobile broadband networks. Given the potential for mHealth app use in health promotion and disease prevention, their role in medicine worldwide is ever apparent. Quality assurance regulations, both nationally and internationally, will take time to develop. Frameworks such as the Mobile App Rating Scale (MARS) and the Enlight Suite have demonstrated potential for use in the interim; however, these frameworks require adaptation to be suitable for use in low- and middle-income countries (LMICs).

OBJECTIVE: The objectives of this study are to (1) modify the Enlight Suite, an mHealth app quality assessment methodology, to improve its applicability internationally, and (2) assess the preliminary validity and reliability of this modified tool in practice.

METHODS: A two-round Delphi study involving 7 mHealth experts with varied backgrounds in medicine, health, and technology was conducted to modify and adapt the Enlight Suite for international use and to improve its content validity. The Modified Enlight Suite (MES) was then used by 800 healthcare professionals and healthcare students to assess a COVID-19 tracker app in an online survey. The reliability of the MES was assessed using Cronbach alpha, while the construct validity was evaluated using confirmatory factor analysis.

RESULTS: The final version of the MES has 7 sections with 32 evaluating items. Of these items, 5 were novel and based on consensus for inclusion by Delphi panel members. The MES has satisfactory reliability, with an internal consistency Cronbach alpha score of 0.925. The subscales also demonstrated acceptable internal consistency. Similarly, the confirmatory factor analysis demonstrated a positive and significant factor loading for all 32 items in the MES with a modestly acceptable model fit, thus indicating the construct validity of the MES.

CONCLUSIONS: Despite increasing use of, access to, and reliance on mHealth apps internationally, previous studies have failed to identify a quality assessment methodology that includes factors known to hinder the use and uptake of apps in LMICs. This study indicates both the validity and initial reliability of the MES for assessing the quality of mHealth apps internationally. Further reliability assessments are required in LMICs to extrapolate these findings and establish its true potential.
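The two-round Delphi process described above reduces, at each round, to tallying panelists' votes per candidate item and retaining items that reach a consensus threshold. A minimal sketch of that tallying step; the 70% threshold, the second item name, and the vote pattern are assumed conventions for illustration only, as the abstract does not state the study's actual consensus criteria:

```python
from typing import Dict, List

CONSENSUS_THRESHOLD = 0.7  # assumed convention; the study's actual threshold is not stated

def items_reaching_consensus(votes_by_item: Dict[str, List[bool]]) -> List[str]:
    """Return candidate items that enough panelists voted to include.

    votes_by_item maps an item name to one include/exclude vote per panelist.
    """
    return [
        item
        for item, votes in votes_by_item.items()
        if sum(votes) / len(votes) >= CONSENSUS_THRESHOLD
    ]

# Hypothetical round-one votes from the 7 panelists on two candidate items
# ("cultural_appropriateness" echoes a theme the MES actually added;
# "offline_functionality" is invented for the example).
round_one = {
    "cultural_appropriateness": [True, True, True, True, True, False, True],
    "offline_functionality":    [True, False, False, True, False, True, False],
}
print(items_reaching_consensus(round_one))  # -> ['cultural_appropriateness']
```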