Technologies based on “artificial intelligence” (AI) are transforming every part of our society, including healthcare and medical institutions. One example of this trend is radiomics, a novel field in oncology and radiology concerned with the extraction and mining of large-scale quantitative features from medical imaging by machine-learning (ML) algorithms. This paper explores situated work with a radiomics software platform, QuantImage (v2), and interaction around it, in educationally framed hands-on trial sessions in which pairs of novice users (physicians and medical radiology technicians) work, with a co-present tutor, on a radiomics task consisting of developing a predictive ML model. Informed by ethnomethodology and conversation analysis (EM/CA), the results show that learning about radiomics more generally and learning how to use this platform specifically are deeply intertwined. Common-sense knowledge (e.g., about the meanings of colors) can interfere with the visual representation standards established in the professional domain. Participants' skills in using the platform and their knowledge of radiomics are routinely displayed in how they assess the performance measures of the resulting ML models, monitor the platform's pace of operation for possible problems, and ascribe independent actions (e.g., related to algorithms) to the platform. The findings are relevant to current discussions about the explainability of AI in medicine as well as to issues of machinic agency.