The popular conception of artificial intelligence (AI) holds that machines will eventually develop conscious thought like that of human beings and that, as computing advances, such machine thinking will advance until it surpasses human intelligence, so that the progress of AI poses ethical risks for the future. In reality, this belief conceals a cognitive assumption: that computational engineering explains human intelligence through the mind-computer metaphor. On this assumption, technology explains cognition, while philosophy, through ethics, merely reflects on the impact of that technology. In this article, I challenge this assumption and argue that philosophy's role in AI is not reduced to an ethics that arrives only after AI has been deployed and has had an impact on the world. I aim to show that a sound ethics of AI is one that reflects on the genuine risks posed by AI, and that to do so philosophy must first undertake a cognitive analysis of whether computing can in fact create intelligent machines, that is, of whether or not the mind-computer metaphor makes sense. My thesis is that the philosophical analysis of AI must be carried out at both a cognitive and an ethical level, but that the cognitive analysis takes philosophical priority over the ethical one, since the ethical risks of AI depend on what the technology can actually do, and only the cognitive approach can account for this.