Purpose
Ethical and legal requirements for healthcare providers in the United States stipulate that patients sign a consent form prior to undergoing medical treatment or participating in a research study. Currently, the majority of hospitals obtain these consents using paper-based forms, which makes patient preference data cumbersome to store, search, and retrieve. To address these issues, Health Sciences South Carolina (HSSC), a collaborative of academic medical institutions and research universities in South Carolina, is developing an electronic consenting system, the Research Permissions Management System (RPMS). This article reports the findings of a study investigating the efficacy of two proposed interfaces for this system, one iPad-based and one touchscreen-based, by comparing them to the paper-based and Topaz-based systems currently in use.
Methods
This study involved 50 participants: 10 hospital admission staff and 40 patients. The four systems were compared with respect to the time taken to complete the consenting process, the number of errors made by patients, the workload experienced by hospital staff, and the subjective ratings of both patients and staff on post-test questionnaires.
Results
The results of the empirical study indicated no significant differences among the systems in the time taken to complete the tasks. More importantly, participants found the new systems more usable than the conventional methods: registration staff experienced the least workload in the iPad and touchscreen conditions, and patients reported greater privacy and control during the consenting process with the proposed electronic systems. Patients also indicated better comprehension and awareness of what they were signing when using the new interfaces.
Discussion
The results indicate that the two proposed methods for capturing patient consents are at least as effective as the conventional methods, and superior in several important respects. While more research is needed, these findings support the cautious adoption of electronic consenting systems, especially because such systems appear to address the challenge of identifying eligible participants for the increasingly complex research enabled by advances in the biomedical sciences.
A major challenge in converting paper-based consent forms to electronic versions is ensuring that the level of comprehension offered by electronic consenting systems is not reduced. A randomized between-subjects trial comparing patient comprehension across four electronic formats of the same consent information, each presented on an Apple iPad, was conducted with a non-clinical sample of 32 participants. The formats were Text-Based, Text-Based with the Text Read Aloud, Video-Based, and Video-Based with Subtitles. Participants were asked to read and complete a consent form in one of the formats and were subsequently asked to complete a semantic comprehension quiz, the NASA Task Load Index (NASA-TLX), and the Computer System Usability Questionnaire (CSUQ). Upon completing the questionnaires, participants took part in a retrospective think-aloud session to identify any difficulties they had using the consent forms. Statistically significant differences among the formats were found for task completion time, for the mental demand and frustration subscales of the NASA-TLX, and for the comprehension quiz. Video with subtitles appears to be the best of the tested formats for presenting electronic consent information.
The usability of text-based CAPTCHAs, featuring distorted letters, and image-based CAPTCHAs, featuring pictures, was explored on an Apple iPad. Five conditions were examined: Confident CAPTCHA with either voice or touch input, ESP-PIX with either voice or touch input, and Google's CAPTCHA with touch input. Usability was analyzed in terms of performance, perceived usability, workload, and preference rankings. Results showed that CAPTCHAs with touch input scored better on almost every measure than CAPTCHAs with voice input. In particular, Confident Touch is recommended on the basis of preference and perceived performance, whereas ESP-PIX Touch is recommended for its short completion time. When image-based CAPTCHAs are not feasible, Google's CAPTCHA is a satisfactory alternative based on its usability ratings.
User-created passwords are typically higher in usability but lower in security than computer-generated passwords. This study evaluates the usability and security of a password scheme in which users are assigned computer-generated random passwords along with corresponding computer-generated mnemonic phrases. Users created images that matched the phrases to cue password recall at login. Three picture creation methods (drawing, online images, and a combination of both) and a control condition (no picture) were used to examine the effect of picture creation method on password memorability. Login success was measured on the day of account creation and after approximately one week. To measure security during the second session, participants were also asked to simulate attackers by attempting to derive others' passwords from viewing their images. Analysis revealed that pictures enhanced password memorability but that picture creation method had no effect on memorability. There were no significant differences in security across the four conditions.