“…

Expectation: …
Challenge: Data experts design algorithms purely as technical problems, resulting in unusable and unexplainable recommendations in forest decision-making.
References: Wagstaff, 2012; Padarian et al., 2020

Expectation: Inclusion of social and technical context while designing algorithms.
Challenge: Predictive algorithms fail to capture social and technical contexts and make simplistic assumptions about social actors, institutions, and their interactions.
References: Wagstaff, 2012; Dutta et al., 2016; Mueller et al., 2019; Selbst et al., 2019

Expectation: Interpretation of ML results in specific contexts to support decision-making.
Challenge: There is little scholarly tradition within the ML community of interpreting results in their specific socio-economic and political contexts, which narrows model interpretability.
References: Aertsen et al., 2010; Wagstaff, 2012; Mueller et al., 2019

Expectation: Uniform model-based predictions to support a given decision.
Challenge: Predictive models lack uniformity in their predictions; for the same set of input features and prediction task, complex learning procedures can produce multiple, equally accurate models with differing explanations (see the sketch after this table).
References: Adadi and Berrada, 2018; Hall and Gill, 2018

Expectation: Robust and verified unique causal solutions to a given problem.
Challenge: Predictive algorithms are evaluated only by their predictive success and are not optimized to answer causal questions.
References: Drake et al., 2006; Aertsen et al., 2010; Nunes and Görgens, 2016; Pearl and Mackenzie, 2018

Expectation: Full understanding of how a predictive algorithm makes decisions.
Challenge: The black-box nature of many ML algorithms makes it difficult for humans to understand their decisions.
References: Naidoo et al., 2012; Mascaro et al., 2014; Kar et al., 2017; Mueller et al., 2019

Expectation: Big, accurate, and appropriate data to support interpretable decisions.
Challenge: Lack of data, class imbalance, data sparsity, noisy data quality, and the presence of spatial and temporal correlation further limit the development of interpretable ML models in forest management.
References: Lippitt et al., 2008; Ali et al., 2015; Curtis et al., 2018; Franklin and Ahmed, 2018; Gurumurthy et al., 2018; Gholami et al., 2019; Hethcoat et al., 2019; …”
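The row on prediction multiplicity can be made concrete with a short sketch. The snippet below is not from the cited works; it uses scikit-learn on hypothetical synthetic data (a stand-in for forest inventory or remote-sensing predictors) to fit two models of comparable accuracy and then compares their permutation-based feature importances, which often rank features differently even though the predictions agree in accuracy.

```python
# Sketch (assumed setup, not from the source article): two models with similar
# accuracy on the same data can attribute importance to different features,
# so one accurate model does not imply one explanation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical synthetic predictors and a binary outcome.
X, y = make_classification(n_samples=2000, n_features=8, n_informative=4,
                           n_redundant=2, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "random_forest": RandomForestClassifier(n_estimators=200, random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    acc = model.score(X_test, y_test)
    # Model-agnostic permutation importance computed on held-out data.
    imp = permutation_importance(model, X_test, y_test, n_repeats=10,
                                 random_state=0).importances_mean
    ranking = np.argsort(imp)[::-1]
    print(f"{name}: accuracy={acc:.3f}, top features={ranking[:3]}")
```

Comparing the printed rankings across the two models illustrates the point attributed to Adadi and Berrada (2018) and Hall and Gill (2018): equally accurate models need not offer the same explanation, which complicates their use as a single authoritative basis for a forest-management decision.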