“…Incorrect answers could generate health risks where users act on inappropriate clinical advice or signposting [59,60].

Studies and CMOCs

Studies [17,18,29,30,35,48,49,57,61]: When chatbots provide access to accurate information in digestible form (C), chatbots may be preferred to search engines (O), as the chatbot can eliminate steps to search and filter web-based health information (M).

Studies [18,31,58,62]: When the language cues used make chatbots feel uncanny (not quite human), such as replying too quickly, misunderstanding, or overly formal language (C), users can disengage from connecting with the chatbot (O), as humans are sensitive to language cues that do not "feel right" (M).

Studies [18,30,35,38,50,53,58]: When chatbots interact with users by prompting further questions and checking in with them (C), users engage for longer with the chatbot (O), because interaction drives the "conversation" between the user and chatbot forward and feels more human (M).

Studies [28,30]: Where chatbots repeat information, either during a single session or over repeated sessions (C), users may engage with the information provided (O), because repetition reinforces understanding (M).

Studies [30,52,58]: Where chatbots use language that validates users' feelings and needs (C), this may engage users in chatbot use (O), because the chatbot offers a feeling of being understood (M).

Studies [54,56,62,63]: Where chatbots give complex information on SRH topics (C), users may be able to understand the information more easily (O), because the information is given in a dialogical structure that shares information in short segments or "chunks" (M)…”
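The CMOCs above describe delivery patterns (chunked dialogical messages, check-in prompts, in-session repetition, validating language) rather than any particular implementation. Purely as an illustration, the minimal Python sketch below shows one hypothetical way a text chatbot could combine these patterns; the class, function, topic, and wording are invented for this example and are not drawn from any of the reviewed interventions.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class ChunkedTopic:
    """A topic broken into short message segments ('chunks')."""
    name: str
    chunks: List[str] = field(default_factory=list)


def deliver(topic: ChunkedTopic) -> None:
    """Send one chunk at a time, checking in with the user between chunks."""
    for i, chunk in enumerate(topic.chunks, start=1):
        print(f"Bot: {chunk}")
        if i < len(topic.chunks):
            # Check-in prompt keeps the "conversation" moving (prompting CMOC).
            reply = input("Bot: Does that make sense so far? (yes / repeat) ").strip().lower()
            if reply == "repeat":
                # In-session repetition can reinforce understanding (repetition CMOC).
                print(f"Bot: {chunk}")
            # A brief validating cue (validation CMOC).
            print("Bot: Thanks for letting me know. Let's keep going.")
    print("Bot: That's everything on this topic. Would you like to go over any part again?")


if __name__ == "__main__":
    # Hypothetical example topic and wording, for illustration only.
    topic = ChunkedTopic(
        name="contraception-options",
        chunks=[
            "There are several contraception options, and they work in different ways.",
            "Some methods are used daily; others last for months or years.",
            "A clinician can help you choose the method that best fits your situation.",
        ],
    )
    deliver(topic)
```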