One team explained that “a desire to ensure we had an interpretable model further influenced our choice to pursue regression rather than tree-based models” (Engstrom et al.). The other team explained that “most AI models that operate as ‘black-box models’ are unsuitable for mission-critical domains, such as healthcare, because they pose risk scenarios where problems that occur can remain masked and therefore undetectable and unfixable” (Harris et al.). This perspective contrasts with prior work examining the use of “black-box models” in clinical care (17), the limitations of current explainability methods (18), and the approach of regulators at the U.S. Food and Drug Administration (19). These findings underscore the urgent need for research and policies that help organizations decide whether to prioritize AI software interpretability and explainability.