2021
DOI: 10.1891/j-pe-d-20-00027

The Perspectives of Women and Their Health-Care Providers Regarding Using an ECA to Support Mode of Birth Decisions

Abstract: This study used focus groups to assess the feasibility and acceptability of adapting an Embodied Conversational Agent (ECA) to support decision-making about mode of birth after previous cesarean. Twelve women with previous cesareans and eight prenatal providers at an academic tertiary-care medical center viewed a prototype ECA and were asked to share feedback on its potential role in helping women prepare for decision-making. Both groups felt that although it was somewhat “robot-like,” the ECA could provide…

Cited by 5 publications (7 citation statements)
References 28 publications
“…Sharing information in this way may be valued over using search engines as users do not have to search, appraise sources, or pick out answers from longer passages of text [ 17 , 18 , 29 , 30 , 35 , 49 , 57 ]. Chatbots may also check user understanding and well-being at various points in the conversation [ 18 , 30 , 35 , 38 , 50 , 58 ]. This allows users to evaluate whether their needs are being met by the chatbot and may feel like a more authentic conversational flow.…”
Section: Results
Citation type: mentioning
Confidence: 99%
“…Incorrect answers could generate health risks where users act on inappropriate clinical advice or signposting [59,60]. The CMOCs (context-mechanism-outcome configurations) and the studies supporting them were:

- [17, 18, 29, 30, 35, 48, 49, 57, 61] When chatbots provide access to accurate information in digestible form (C), chatbots may be preferred to search engines (O), as the chatbot can eliminate steps to search and filter web-based health information (M).
- [18, 31, 58, 62] When the language cues used make chatbots feel uncanny (not quite human), such as replying too quickly, misunderstanding, or overly formal language (C), users can disengage from connecting with the chatbot (O), as humans are sensitive to language cues that do not "feel right" (M).
- [18, 30, 35, 38, 50, 53, 58] When chatbots interact with users by prompting further questions and checking in with them (C), users engage for longer with the chatbot (O), because interaction drives the "conversation" between the user and chatbot forward and feels more human (M).
- [28, 30] Where chatbots repeat information, either during a single session or over repeated sessions (C), users may engage with the information provided (O), because repetition reinforces understanding (M).
- [30, 52, 58] Where chatbots use language that validates users' feelings and needs (C), this may engage users in chatbot use (O), because the chatbot offers a feeling of being understood (M).
- [54, 56, 62, 63] Where chatbots give complex information on SRH topics (C), users may be able to understand the information more easily (O), because the information is given in a dialogical structure that shares information in short segments or "chunks" (M).…”
Section: Chatbots Could Provide Complex Information in a Responsive A…
Citation type: mentioning
Confidence: 99%