Introduction
Whilst a theoretical basis for implementation research is seen as advantageous, there is little clarity over whether and how the application of theories, models or frameworks (TMF) impacts implementation outcomes. Clinical artificial intelligence (AI) continues to receive multi-stakeholder interest and investment, yet a significant implementation gap remains. This bibliometric study aims to measure and characterize TMF application in qualitative clinical AI research, to identify opportunities to improve research practice and its impact on clinical AI implementation.

Methods
Qualitative research of stakeholder perspectives on clinical AI published between January 2014 and October 2022 was systematically identified. Eligible studies were characterized by their publication type, clinical and geographical context, type of clinical AI studied, data collection method, participants and application of any TMF. For each TMF applied by eligible studies, its justification and mode of application were characterized.

Results
Of 202 eligible studies, 70 (34.7%) applied a TMF. There was an 8-fold increase in the number of publications between 2014 and 2022 but no significant increase in the proportion applying a TMF. Of the 50 TMFs applied, 40 (80%) were applied only once, with the Technology Acceptance Model applied most frequently (n = 9). Seven TMFs were novel contributions embedded within an eligible study. TMF application was justified in 51 cases (58.6%), and it was uncommon to discuss an alternative TMF or the limitations of the one selected (n = 11, 12.6%). The most common mode of TMF application in eligible studies was data analysis (n = 44, 50.6%). Implementation guidelines or tools were explicitly referenced by 2 reports (1.0%).

Conclusion
TMFs have not been commonly applied in qualitative research of clinical AI. Where TMFs have been applied, there has been (i) little consensus on TMF selection, (ii) limited description of the selection rationale and (iii) a lack of clarity over how TMFs inform research. We consider this an opportunity to improve both the translation of implementation science into clinical AI research and the translation of clinical AI into practice by promoting the rigor and frequency of TMF application. We recommend that the finite resources of the implementation science community be directed toward increasing the accessibility of, and engagement with, theory-informed practices.

The considered application of theories, models and frameworks (TMF) is thought to contribute to the impact of implementation science on the translation of innovations into real-world care. The frequency and nature of TMF use have yet to be described within digital health innovations, including the prominent field of clinical AI. A well-known implementation gap, coined the "AI chasm", continues to limit the impact of clinical AI on real-world care. In this bibliometric study of the frequency and quality of TMF use within qualitative clinical AI research, we found that TMFs are usually not applied, that their selection varies widely between studies, and that a convincing rationale for their selection is often absent. Promoting the rigor and frequency of TMF use appears to present an opportunity to improve the translation of clinical AI into practice.