“…As resource-granting stakeholders seek to understand how and why people are influenced and affected by AI-made predictions and the resulting machine behavior or decision making, they assess whether these predictions are meaningful in the context of prevalent beliefs, logics, and categories (Suchman, 1995). Given the "black box" nature of many AI models, which makes it difficult, if not impossible, for humans to understand exactly how machine learning algorithms arrive at their predictions, decisions, recommendations, or behaviors (Coglianese & Lehr, 2019), making such predictions explainable is extremely difficult in some cases (Mayenberger, 2019; Preece, 2018). However, only if the explainability of AI-made predictions is achieved can stakeholders assess their meaningfulness and renew the trust and commitment needed to grant the critical resources that sustain a strong relationship between platform AI capability and perceived user value (Rossi, 2018).…”