In an era dominated by artificial intelligence (AI), establishing customer trust is crucial for the integration and acceptance of AI technologies. This interdisciplinary study examines the factors influencing customer trust in AI systems through a mixed-methods approach, blending quantitative analysis with qualitative insights to build a comprehensive conceptual framework. Quantitatively, the study analyzes responses from 1,248 participants using structural equation modeling (SEM), exploring the interactions between technological factors such as perceived usefulness and transparency, psychological factors including perceived risk and domain expertise, and organizational factors such as leadership support and ethical accountability. The results support the proposed model, showing that these factors significantly influence customer trust and attitudes toward AI adoption. Qualitatively, the study draws on 35 semi-structured interviews and five case studies to provide deeper insight into the dynamics that shape trust. Key themes include the necessity of explainability, domain competence, corporate culture, and stakeholder engagement in fostering trust. The qualitative findings complement the quantitative results, highlighting the complex interplay among technological capabilities, human perceptions, and organizational practices in establishing trust in AI. Integrating these findings, the study proposes a novel conceptual model that explains how these elements collectively shape customer trust in AI. The model advances theoretical understanding and offers practical guidance for businesses and policymakers, contributing to the broader discourse on trust in AI.