In 2012 the United Kingdom's General Medical Council (GMC) commissioned research to develop guidance for medical schools on how best to support students with mental illness. One of the key findings from the medical student focus groups in that study was students' strong belief that medical schools excluded students on mental health grounds. Students believed mental illness was a fitness to practice matter that led to eventual dismissal, although neither personal experience nor empirical evidence supported this belief. The objective of the present study was to explore this belief and its underlying social mechanisms in greater depth, including other beliefs that influenced medical students' reluctance to disclose a mental health problem, the factors that reinforced these beliefs, and the feared consequences of revealing a mental illness. The study involved a secondary analysis of qualitative data from seven focus groups with 40 student participants across five UK medical schools in England, Scotland, and Wales. Student beliefs clustered around (1) the unacceptability of mental illness in medicine, (2) punitive medical school support systems, and (3) the view that becoming a doctor is the only successful career outcome. Reinforcing mechanisms included pressure from senior clinicians, a culture of "presenteeism," distrust of medical school staff, and expectations about conduct. Feared consequences centered on regulatory "fitness to practice" proceedings leading to expulsion, reputational damage, and failure to meet parents' expectations. The findings provide useful information for veterinary medical educators interested in creating a culture that encourages the disclosure of mental illness, and they contribute to the debate about mental illness within the veterinary profession.
Artificial intelligence (AI) and machine learning (ML) techniques play a prominent role in medical research, particularly in the innovation and development of new technologies. However, while many perceive AI as a technology of promise and hope, one that allows earlier and more accurate diagnosis, the acceptance of AI and ML technologies in hospitals remains low. A major reason for this is the lack of transparency associated with these technologies, in particular epistemic transparency, which results in AI disturbing or troubling established knowledge practices in clinical contexts. In this article, we describe the development process of one AI application for a clinical setting. We show how epistemic transparency is negotiated and co-produced in close collaboration among AI developers, clinicians, and biomedical scientists, forming the context in which AI is accepted as an epistemic operator. Drawing on qualitative research with collaborative researchers developing an AI technology for the early diagnosis of a rare respiratory disease (pulmonary hypertension, or PH), this paper examines how including clinicians and clinical scientists in the collaborative practices of AI developers de-troubles transparency. Our research shows how de-troubling transparency occurs in three dimensions of AI development relating to PH: querying data sets, building software, and training the model. The close collaboration results in an AI application that is at once social and technological: it integrates and inscribes into the technology the knowledge processes of the different participants in its development. We suggest that it is a misnomer to call these applications ‘artificial’ intelligence, and that they would be better developed and implemented if they were reframed as forms of sociotechnical intelligence.
The staged model, derived from an analysis of existing interventions, provides a framework for evaluating current provision and comparing different methods of delivery; it also provides a framework for future research.
The role of Artificial Intelligence (AI) in clinical decision-making raises issues of trust. One issue concerns the conditions for trusting the AI, which tend to be based on validation. However, little attention has been given to how validation is formed, how comparisons come to be accepted, and how AI algorithms are trusted in decision-making. Drawing on interviews with collaborative researchers developing three AI technologies for the early diagnosis of pulmonary hypertension (PH), we show how validation of the AI is jointly produced, so that trust in the algorithm is built up through the negotiation of criteria and terms of comparison during interactions. These processes build up interpretability and interrogation, and co-constitute trust in the technology. As they do so, it becomes difficult to sustain a strict distinction between artificial and human/social intelligence.