This paper does not call into question the existing ethical guidelines for artificial intelligence, but suggests rethinking their priorities at a time when humanity itself is under threat. In December 2020, the United Nations Secretary-General described the anthropogenic degradation of the planet, declaring that "the planet is broken." Adopting the approach of disaster risk reduction, we assert that the ongoing destruction of the planet is a disaster, which leads us to think in terms of resilience. We have examined existing works on ethical guidelines for AI through a philosophical lens, namely the imperative of responsibility toward the distant future of nature, including humanity, and an ethics of care articulated around maintenance, continuance, and repair. We have identified five ethical principles: respect for nature, respect for human rights, AI usefulness, AI transparency, and AI trustworthiness, which are elaborated through 19 subprinciples. We conclude by discussing the difficulty of being concretely nature-friendly in this era of humanity under threat.