This article makes two key contributions to methodological debates in automation research. First, we argue that methods in this field must account for intersections of social difference, such as race, class, ethnicity, culture, and disability, in more nuanced ways, and we demonstrate how this can be done. Second, we consider the complexities of bringing computational and qualitative methods together in an intersectional methodological approach, while arguing that their respective subjects of study (machines and human participants) and conceptual scope enable a specific dialogue on intersectionality and automation to be articulated. We draw on field reflections from a project that combines an analysis of intersectional bias in language models with findings from a community workshop on the frustrations and aspirations produced through engagement with everyday artificial intelligence (AI)–driven technologies in the context of care.