In 2008, Chris Anderson, then editor-in-chief of Wired magazine, proclaimed that the scientific method was obsolete. Instead of verifying hypotheses through scientific experiments, future researchers would use computerized pattern recognition to find new scientific relationships. According to Anderson, ‘Correlation supersedes causation, and science can advance even without coherent models, unified theories, or really any mechanistic explanation at all’ (Anderson 2008).

Today, science produces ever larger amounts of data. New digital tools, methods and infrastructures create a growing flood of ‘big data’ that biomedicine wants to benefit from and analyze. To manage this flood, many data-driven research projects in biomedicine are turning to new methods developed in AI research, and we are consequently seeing an explosive introduction of AI techniques into the sciences. AI thus seems to promise a whole new way of producing knowledge about the world.

But what are the consequences of introducing AI analyses for biomedical knowledge production? What happens to biomedicine when human judgment and the traditional scientific method are supplemented with, and sometimes replaced by, AI and the analysis of large amounts of data? This paper explores these epistemic questions from the point of view of agency and human judgment. What happens to human judgment in scientific experiments with the introduction of AI? How is theory developed in the science of AI? The paper discusses these changes through the concepts of hybrid agency and onto-epistemology, asking how the AI and data revolutions are reshaping how science is done.