Background: Respondent engagement with questionnaires in health care is fundamental to ensuring adequate response rates for the evaluation of services and quality of care. Conventional survey designs are often perceived as dull and unengaging, resulting in negative respondent behavior; completing a questionnaire should therefore be made attractive and motivating. Objective: The aim of this study was to compare the user experience of a chatbot questionnaire, which mimics intelligent conversation, with that of a regular computer questionnaire. Methods: The research took place at a preoperative outpatient clinic. Patients completed both the standard computer questionnaire and the new chatbot questionnaire. Afterward, patients gave feedback on both questionnaires via the User Experience Questionnaire, which consists of 26 terms to score. Results: The mean age of the 40 included patients (25 [63%] women) was 49 (range 18-79) years. Of all terms, 46.73% (486/1040) were scored positive for the chatbot; patients preferred the computer for 7.98% (83/1040) of the terms, and for 47.88% (498/1040) of the terms there was no difference. Mean completion time of the computer questionnaire was 9.00 minutes for men (SD 2.72) and 7.72 minutes for women (SD 2.60; P=.148). For the chatbot, mean completion time was 8.33 minutes for men (SD 2.99) and 7.36 minutes for women (SD 2.61; P=.287). Conclusions: Patients preferred the chatbot questionnaire over the computer questionnaire. Time to completion did not differ between the questionnaires, though the chatbot questionnaire on a tablet felt faster than the computer questionnaire. This is an important finding because it could lead to higher response rates and qualitatively better responses in future questionnaires.
Cognitive impairment predisposes patients to the development of delirium and postoperative cognitive dysfunction. In older patients in particular, the adverse sequelae of cognitive decline in the perioperative period may contribute to adverse outcomes after surgical procedures. Subtle signs of cognitive impairment often go undiagnosed. Therefore, the aim of this review is to describe the available cognitive screeners suitable for preoperative screening and their psychometric properties for identifying mild cognitive impairment (MCI), as a preoperative workup may improve perioperative care for patients at risk of postoperative cognitive dysfunction. Electronic systematic and snowball searches of PubMed, PsycInfo, ClinicalKey, and ScienceDirect were conducted for the period 2015–2020. Articles were included if they discussed a screener that covered the cognitive domain ‘memory’, took less than 15 min to administer, and reported sensitivity and specificity for detecting MCI. Studies of informant-based screeners were excluded. We provide an overview of the characteristics of each cognitive screener, such as interrater and test-retest reliability correlations, sensitivity and specificity for MCI and cognitive impairment, administration time, and cutoff points. Of the 4775 identified titles, 3222 were excluded from further analysis because they were published before 2015, and 1448 did not fulfill the inclusion criteria. The abstracts of 52 studies, covering 45 screeners, were examined, of which 10 screeners met the inclusion criteria. For these 10 screeners, a further snowball search for related studies yielded 20 articles. The screeners included in this review were the Mini-Cog, MoCA, O3DY, AD8, SAGE, SLUMS, TICS(-M), QMCI, MMSE2, and Mini-ACE. The sensitivity and specificity ranges for detecting MCI in an older population were highest for the MoCA, with a sensitivity range of 81–93% and a specificity range of 74–89%. With this highest combination of sensitivity and specificity, the MoCA is a feasible and valid instrument for routine screening of presurgical cognitive function. This warrants further implementation and validation studies in surgical pathways with a large proportion of older patients.