As machine learning (ML) models are increasingly integrated into critical systems, questions about their ethical use intensify. This paper advocates a loss-driven engineering approach, incorporating concepts from systems-theoretic process analysis (STPA), to identify the external systems, technologies, and processes that are essential for ethical ML deployment and therefore critical to assessing AI ethics. STPA facilitates a deep analysis of potential hazards and system-level vulnerabilities, generating actionable insights for designing support systems and safeguards. Resilience engineering principles can then convert these insights into testable requirements for assessing AI ethics. This multi-disciplinary approach addresses a critical gap in current ML practice by extending ethical evaluation beyond the model itself, offering a robust framework for the responsible development and deployment of AI technologies.