Artificial intelligence (AI) is gradually becoming an important force driving the development of biomedicine. However, as AI increasingly relies on complex and opaque machine learning algorithms, a critical ethical issue known as the "algorithmic black-box" problem has emerged. Although several explainability tools have been developed, they have not been widely adopted in the medical field because they fail to provide satisfactory explanations in clinical practice. Furthermore, different stakeholders, including algorithm experts, medical professionals, patients, and the general public, have varying requirements for explainability and transparency. As a result, a series of ethical issues has arisen at the internal, internal-external interaction, and external levels, spanning the data, algorithmic, and social dimensions.

Amid the ethical challenges posed by the increasing complexity and opacity of medical AI, constructing medical artificial moral agents has been proposed as a viable solution. Three approaches to implementing ethical frameworks in this domain have been identified: the top-down approach, the bottom-up approach, and the hybrid approach. The top-down approach prioritizes moral design based on specific ethical principles. However, it struggles to respond appropriately to complex ethical situations because of the lack of consensus among ethical experts, contradictions between ethical principles and practical goals, and the abstract nature of moral principles. The bottom-up approach, by contrast, requires medical AI to develop operating methods that align with human moral intuition through a series of case-based reinforcement learning scenarios. Nonetheless, this approach is effective only as retrospective regulation, and converging moral reasoning to a stable pattern remains challenging.

Given the current state of AI development, it is imperative to adopt a "hybrid approach" that integrates top-down and bottom-up methods throughout the process of developing medical AI. This involves establishing, top-down, a flexible ethical framework that takes contextual factors into account to enhance algorithmic transparency, and leveraging, bottom-up, the strengths of medical AI to develop diverse models of moral reasoning that incorporate multiple kinds of contextual information. While some scholars may argue that the hybrid approach is redundant, given the contemporary demand for moral pluralism and contextualism, this path toward reflective equilibrium can better address moral disagreements in the real world and ensure that the ethical behavior of medical AI aligns with the value judgments of relevant stakeholders.

From an internal perspective, the hybrid approach involves algorithm engineers developing explainability tools that are independent of the underlying machine learning models, assessing ethical risks, or constructing algor...
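To make the idea of explainability tools "independent of the underlying machine learning models" concrete, the sketch below implements permutation importance, one common model-agnostic technique that queries a model only through its predictions. The text does not name a specific method, so this is an illustrative sketch rather than the authors' tool; `model`, `metric`, and the clinical feature matrix `X` are generic placeholders.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=10, seed=0):
    """Model-agnostic feature importance: shuffle one feature at a time
    and measure how much the model's score degrades. X is a NumPy array;
    model only needs a .predict() method."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])      # break the feature-outcome link
            drops.append(baseline - metric(y, model.predict(X_perm)))
        importances[j] = np.mean(drops)    # mean score drop = importance
    return importances

# usage (hypothetical names):
# imps = permutation_importance(clinical_model, X_val, y_val, accuracy_score)
```

Because the routine never inspects the model's internals, it applies equally to a logistic regression or a deep network, which is what model-agnostic explainability amounts to in practice.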
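The bottom-up, case-based learning described above can be illustrated with a toy value-update loop in which retrospective human judgments on clinical cases gradually shape the agent's preferences. The case encodings, feedback scale, and update rule here are hypothetical; the text does not commit to any particular reinforcement-learning algorithm.

```python
from collections import defaultdict

# Toy bottom-up learner: (situation, action) values are nudged toward
# human moral-intuition feedback on reviewed cases.
q_values = defaultdict(float)
LEARNING_RATE = 0.1

def update_from_case(situation, action, human_feedback):
    """human_feedback in [-1, 1]: clinicians' retrospective moral judgment."""
    key = (situation, action)
    q_values[key] += LEARNING_RATE * (human_feedback - q_values[key])

def choose_action(situation, candidate_actions):
    """Prefer the action most consistent with accumulated moral intuitions."""
    return max(candidate_actions, key=lambda a: q_values[(situation, a)])

# Feedback arrives only after cases are reviewed, which is why the text
# characterizes this approach as retrospective regulation.
update_from_case("terminal_patient_requests_disclosure", "disclose_fully", 0.8)
update_from_case("terminal_patient_requests_disclosure", "withhold_prognosis", -0.6)
print(choose_action("terminal_patient_requests_disclosure",
                    ["disclose_fully", "withhold_prognosis"]))
```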
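Architecturally, the hybrid approach can be read as a top-down rule layer constraining a bottom-up learned ranking. The self-contained sketch below is one possible reading under that assumption; both the rules and the learned scores are illustrative stand-ins, not actual clinical or ethical guidelines.

```python
# Hybrid sketch: top-down principles veto candidate actions before a
# bottom-up learned scorer ranks whatever remains.

learned_scores = {            # bottom-up: scores learned from reviewed cases
    "disclose_fully": 0.8,
    "soften_prognosis": 0.3,
    "withhold_prognosis": -0.6,
}

def violates_principles(action, context):
    """Top-down: hard ethical constraints checked before ranking."""
    if action == "withhold_prognosis" and context.get("patient_requested_disclosure"):
        return True   # e.g., a respect-for-autonomy rule
    if not context.get("consent_given", True):
        return True   # e.g., an informed-consent rule
    return False

def hybrid_decision(candidate_actions, context):
    """Filter candidates top-down, then rank the survivors bottom-up."""
    permissible = [a for a in candidate_actions
                   if not violates_principles(a, context)]
    if not permissible:
        return "defer_to_human"   # escalate when nothing passes the rules
    return max(permissible, key=lambda a: learned_scores.get(a, 0.0))

print(hybrid_decision(list(learned_scores),
                      {"patient_requested_disclosure": True, "consent_given": True}))
```

Keeping the veto rules separate from the learned scores also localizes transparency: the top-down layer can be audited directly against stated principles, while the bottom-up scores remain revisable as new cases are reviewed.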