Given the development of artificial intelligence (AI) and the vulnerability of large sectors of the population, a question emerges: what are the ethical limits of technologies in patient care? This paper examines this question in the light of the “language of nature” and of Aristotelian causal analysis, in particular the distinction between means and ends. On this basis, it is possible to identify the root of the distinction between the identity of the person and the entity of any technology. Nature indicates that the person is always an end in itself; technology, by contrast, should only be a means in the service of the person. The difference in their natures also explains why their respective agencies have different scopes. Technological operations (artificial agency, artificial intelligence) find their meaning in the results obtained through them (poiesis). The person, by contrast, is capable of actions whose purpose is the action itself (praxis), in which personal agency and, ultimately, the person are irreplaceable. To forget the distinction between what is, by nature, an end and what can only be a means is to lose sight of the instrumental nature of AI and, therefore, of its specific purpose: the greatest good of the patient. It is concluded that the language of nature serves as a filter that supports the effective subordination of the use of AI to its specific purpose, the human good. The main contribution of this work is to draw attention to the nature of the person and of technology, and to their respective agencies; in other words, to listen to the language of nature, attending to the distinct natures of the person and technology, of personal agency and artificial agency.