The current AI ethics discourse focuses on developing computational interpretations of ethical concerns, normative frameworks, and concepts for socio-technical innovation. There is less emphasis on understanding how AI practitioners themselves understand ethics and socially organize to operationalize ethical concerns. This is particularly true for AI start-ups, despite their significance as a conduit for the cultural production of innovation and progress, especially in the US and European contexts.

This gap in empirical research intensifies the risk of a disconnect between scholarly research, innovation, and application. The risk materializes acutely as mounting pressures to identify and mitigate the potential harms of AI systems have created an urgent need to rapidly assess and implement socio-technical innovation focused on fairness, accountability, and transparency.

In this paper, we address this need. Building on social practice theory, we propose a framework that allows AI researchers, practitioners, and regulators to systematically analyze existing cultural understandings, histories, and social practices of "ethical AI" in order to define appropriate strategies for effectively implementing socio-technical innovations. We argue that this approach is needed because socio-technical innovation "sticks" better when it sustains, rather than breaks, the cultural meaning of socially shared (ethical) AI practices. By doing so, it creates pathways for technical and socio-technical innovations to be integrated into already existing routines.

Against that backdrop, our contributions are threefold: (1) we introduce a practice-based approach to understanding "ethical AI"; (2) we present empirical findings from our study on the operationalization of "ethics" in German AI start-ups, underlining that AI ethics and social practices must be understood in their specific cultural and historical contexts; and (3) based on our empirical findings,