BACKGROUND
Artificial intelligence (AI) tools are increasingly being introduced into the healthcare system. Not only healthcare professionals but also patients and citizens are increasingly asked to use digital health technologies to perform preventative care tasks, such as health monitoring. Previous research has shown that digital health technologies do not always fit the needs and wishes of end users.
OBJECTIVE
Therefore, in this paper we investigated how prospective users, in our case people in vulnerable situations (for example, people with a migration background, people living in poverty, people with low literacy, or people at the intersections of these), envision using, or not using, Check@Home, a home-based AI screening technology currently under development for the early detection of heart conditions, kidney conditions, and type 2 diabetes.
METHODS
Data were collected through five in-depth focus groups. Participants were recruited at places where people in vulnerable situations spend time, such as local community centres. To validate the results from the focus groups, our findings were discussed in two reflection sessions with professionals (representatives of patient organisations and social workers).
RESULTS
We found three common themes. First, our results show that participants believed they could reduce the burden on the healthcare system by engaging in healthy behaviour, such as preventative digital testing. Second, we found that people in vulnerable situations feared the digitalisation of the healthcare system. They expect to have less access to healthcare, for example because the skills needed to use digital health applications cannot always be applied, such as when financial worries leave little mental space. Losing access to healthcare that would otherwise be available to them, simply because it was digitalised, reinforced their existing sense that the healthcare system was not taking care of them. Third, we found that people living in vulnerable situations must make continuous trade-offs about what takes priority at a given moment because of challenging everyday life circumstances. For some participants, this meant using the screening technology in ways other than intended, or not at all.
CONCLUSIONS
Our results show that while respondents accept the digital screening technology, they cannot always use it because of everyday life situations in which they lack the opportunity or capability to do so. Our paper adds to the understanding of when, how and why people in vulnerable situations expect to use, or not use, digital AI screening technologies. Articulating the needs and wishes of prospective users in the early stages of technology development is useful, as it can provide building blocks for technology developers and policymakers to create more accessible and inclusive healthcare systems in the future.