BACKGROUND
It is well established in the literature that people experiencing homelessness have worse oral health outcomes and a marked health information asymmetry compared with the general population. Screening programs present a viable option for this population; however, barriers to access, such as lower levels of health literacy, lack of information, and mistrust, narrow their chances to participate.
OBJECTIVE
To design an adequate health information guide that presents the oral cancer screening program as acceptable, available, and effective for this vulnerable population, the applicability of OpenAI's generative artificial intelligence (AI) tool, ChatGPT, was investigated using co-design principles.
METHODS
Six text variants of a health information leaflet were created with the open-access version of ChatGPT 3.5 for a future oral cancer screening program targeting people experiencing homelessness in Budapest, Hungary. Prompts were written in English, while the outputs were requested in Hungarian. Clients of homeless social services (N=23) were invited to three semi-structured focus group discussions between May and July 2024. General opinions regarding generative AI technology and direct feedback on the AI-generated text variants were collected using qualitative and quantitative methods, including a short questionnaire developed by the research team.
RESULTS
Almost three-quarters of participants (17/23) stated that they had previously heard about AI; however, their self-assessed knowledge of it averaged 2.38 (N=16) on a 5-point Likert scale. Their trust in medical applications of AI averaged 3.06 (N=16) on a similar scale. During the first focus group discussion with experts by experience, all six variants received high scores (between 4.63 and 4.92, N=6, on a 5-point Likert scale). In the next two focus groups, when the pool was narrowed to four versions, participants remained positive but scored the texts lower (between 3.50, N=12, and 3.77, N=13). During open discussions, the text variants were considered understandable; at the same time, participants reported difficulties with medical expressions, overly long sentences, and the stereotypical portrayal of one subgroup of people experiencing homelessness (rough sleepers).
CONCLUSIONS
The co-design process revealed that the focus group participants wanted to actively shape the draft health information leaflet for the oral cancer screening program. They shared their ideas and insights on how to finalize the draft so that it would appeal most to the target audience. Moreover, involving generative AI technology in the co-design process showed that the participants had heard of the concept of artificial intelligence and of text generation as one of its functions, and they did not reject its use in healthcare settings. They actively suggested changes to the original text versions to achieve an appropriate level of equitable use, targeting, comprehensibility, and clarity.