“…Given that health care providers may hold different perceptions toward different AI systems of varying performance and reliability, it would be helpful if studies provided a transparent description of the AI system’s architecture, its accuracy or reliability performance, and its possible risks. Unfortunately, in our review, 21 studies did not provide adequate information about the architecture of the AI applications [25,29,32,34,37,44,46,50,52,54,56-62,65,68,70,75], and 22 studies did not reveal the performance and possible risks of the AI under evaluation [26,29,34,37,39,46,48-50,52,54,56-62,64,65,68,69]. Further, considering that some self-evolving, adaptive clinical AI applications continuously incorporate the latest clinical practice data and published evidence, it is important to undertake periodic monitoring and recalibration of AI applications to ensure that they are working as expected.…”
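As a minimal sketch of what such periodic monitoring might look like in practice, the Python snippet below compares calibration metrics on a recent window of predictions against baselines recorded at validation and flags the model for recalibration when they degrade. The function names, thresholds, and simulated data are illustrative assumptions, not part of the cited studies or any specific deployment.

```python
"""Illustrative sketch of periodic monitoring of a deployed clinical AI model.
All names and thresholds here are hypothetical."""
import numpy as np


def expected_calibration_error(probs, labels, n_bins=10):
    """Gap between predicted probability and observed event rate, averaged over bins."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for i, (lo, hi) in enumerate(zip(edges[:-1], edges[1:])):
        # Include the upper edge only in the final bin so probs == 1.0 are counted.
        mask = (probs >= lo) & ((probs <= hi) if i == n_bins - 1 else (probs < hi))
        if mask.any():
            ece += mask.mean() * abs(probs[mask].mean() - labels[mask].mean())
    return ece


def brier_score(probs, labels):
    """Mean squared difference between predicted risk and observed outcome."""
    return float(np.mean((probs - labels) ** 2))


def check_recalibration_needed(probs, labels, baseline_ece, baseline_brier,
                               ece_margin=0.05, brier_margin=0.02):
    """Flag the model if either metric degrades past its margin relative to baseline."""
    ece = expected_calibration_error(probs, labels)
    brier = brier_score(probs, labels)
    needs_recalibration = (ece > baseline_ece + ece_margin
                           or brier > baseline_brier + brier_margin)
    return needs_recalibration, {"ece": ece, "brier": brier}


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Simulated recent window: predicted risks have drifted upward relative to the
    # true event rate, as might happen after a change in clinical practice.
    labels = rng.binomial(1, 0.15, size=2000)
    probs = np.clip(labels * 0.5 + rng.normal(0.30, 0.10, size=2000), 0.0, 1.0)
    flag, metrics = check_recalibration_needed(
        probs, labels, baseline_ece=0.03, baseline_brier=0.12)
    print(f"metrics={metrics}, recalibration needed: {flag}")
```

In a real deployment this check would run on a schedule against logged predictions and adjudicated outcomes, and a positive flag would trigger review and, where warranted, recalibration (eg, refitting the model's probability mapping) before continued clinical use.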