Background
Increasing adoption of sensor-based digital health technologies (sDHTs) in recent years has highlighted the many challenges of implementing these tools in clinical trials and patient care at scale and across diverse patient populations; however, the methodological approaches taken toward sDHT usability evaluation have varied markedly.
Objective
This review aims to explore the current landscape of studies reporting data related to sDHT human factors, human-centered design, and usability, to inform our concurrent work on developing an evaluation framework for sDHT usability.
Methods
We conducted a scoping review of studies published between 2013 and 2023 and indexed in PubMed that reported data related to sDHT human factors, human-centered design, and usability. Following a systematic screening process, we extracted the study design, participant sample, sDHT or sDHTs used, methods of data capture, and types of usability-related data captured.
Results
Our literature search returned 442 papers, of which 85 were found to be eligible; 83 of these were available for data extraction and not under embargo. In total, 164 sDHTs were evaluated: 141 (86%) were wearable tools and the remaining 23 (14%) were ambient tools. The majority of studies (55/83, 66%) reported summative evaluations of final-design sDHTs. Almost all studies (82/83, 99%) captured data from targeted end users, but only 18 (22%) of 83 studies captured data from additional users such as care partners or clinicians. User satisfaction and ease of use were evaluated for 136 (83%) and 150 (91%) of the 164 sDHTs, respectively; however, learnability, efficiency, and memorability were reported for only 11 (7%), 4 (2%), and 2 (1%) sDHTs, respectively. A total of 14 (9%) of the 164 sDHTs were evaluated for the extent to which users could understand the clinical data or other information presented to them (understandability) or the actions or tasks they should complete in response (actionability). Notable gaps in reporting included the absence of a sample size rationale (a rationale was reported in only 21/83, 25% of all studies and 17/55, 31% of summative studies) and incomplete sociodemographic descriptive data (complete age, sex/gender, and race/ethnicity were reported in only 14/83, 17% of studies).
Conclusions
Based on our findings, we offer four actionable recommendations for future studies to help advance the implementation of sDHTs: (1) consider in-depth assessment of technology usability beyond user satisfaction and ease of use; (2) expand recruitment to include important user groups such as clinicians and care partners; (3) report the rationale for key study design considerations, including the sample size; and (4) provide rich descriptive statistics regarding the study sample to allow a complete understanding of generalizability to other patient populations and contexts of use.