Proceedings of the 1st ACM SIGCAS Conference on Computing and Sustainable Societies 2018
DOI: 10.1145/3209811.3209879
Exploiting Data and Human Knowledge for Predicting Wildlife Poaching

Abstract: Poaching continues to be a significant threat to the conservation of wildlife and the associated ecosystem. Estimating and predicting where the poachers have committed or would commit crimes is essential to more effective allocation of patrolling resources. The real-world data in this domain is often sparse, noisy and incomplete, consisting of a small number of positive data (poaching signs), a large number of negative data with label uncertainty, and an even larger number of unlabeled data. Fortunately, domai…
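The abstract describes training data with a few confirmed positives, many negatives whose labels are uncertain (a patrol finding no sign does not prove no poaching occurred), and unlabeled areas. One simple way to encode such label uncertainty is to weight each example's contribution to the training loss by a confidence score. The sketch below is illustrative only — the two features, the confidence values, and the plain logistic-regression trainer are assumptions for demonstration, not the paper's actual model.

```python
import math

def train_weighted_logreg(data, lr=0.5, epochs=2000):
    """Fit a logistic model w.x + b by gradient descent, scaling each
    example's gradient by its label-confidence weight.

    data: list of (features, label, confidence) triples, where
    confidence in (0, 1] reflects how much we trust the label.
    """
    n_features = len(data[0][0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        for x, y, conf in data:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = (p - y) * conf  # confidence-weighted gradient of log-loss
            for i in range(n_features):
                w[i] -= lr * g * x[i]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Predicted probability that a cell contains poaching activity."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical grid cells: (features, label, label confidence).
# Positives (signs found) are certain; negatives are down-weighted
# because an uneventful patrol is only weak evidence of no poaching.
data = [
    ([1.0, 0.2], 1, 1.0),  # snare found: certain positive
    ([0.9, 0.3], 1, 1.0),  # snare found: certain positive
    ([0.1, 0.8], 0, 0.9),  # thoroughly patrolled, nothing found
    ([0.2, 0.9], 0, 0.9),  # thoroughly patrolled, nothing found
    ([0.8, 0.4], 0, 0.3),  # briefly patrolled: weak negative
]
w, b = train_weighted_logreg(data)
```

The weak negative at `[0.8, 0.4]` pulls the decision boundary far less than the confident examples, so cells resembling the confirmed positives still score high.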

Cited by 13 publications (14 citation statements). References 12 publications.
“…
- Expectation: usable and explainable recommendations in forest decision-making. Limitation: data experts design algorithms purely as technical problems, resulting in unusable and unexplainable recommendations (Wagstaff, 2012; Padarian et al., 2020).
- Expectation: inclusion of social and technical context while designing algorithms. Limitation: predictive algorithms fail to capture social and technical contexts and make simplistic assumptions about social actors, institutions, and their interactions (Wagstaff, 2012; Dutta et al., 2016; Mueller et al., 2019; Selbst et al., 2019).
- Expectation: interpretation of ML results in specific contexts to support decision-making. Limitation: little scholarly tradition within the ML community of interpreting results in their specific socio-economic and political contexts, which narrows model interpretability (Aertsen et al., 2010; Wagstaff, 2012; Mueller et al., 2019).
- Expectation: uniform model-based predictions to support a given decision. Limitation: predictive models lack uniformity in their predictions; for the same set of input features and prediction tasks, complex models can generate multiple accurate models with varying details of explanation (Adadi and Berrada, 2018; Hall and Gill, 2018).
- Expectation: robust and verified unique causal solutions to a given problem. Limitation: predictive algorithms are evaluated only by their predictive success and are not optimized to answer causal questions (Drake et al., 2006; Aertsen et al., 2010; Nunes and Görgens, 2016; Pearl and Mackenzie, 2018).
- Expectation: full understanding of how a predictive algorithm makes decisions. Limitation: the black-box nature of many ML algorithms makes it difficult for humans to understand their decisions (Naidoo et al., 2012; Mascaro et al., 2014; Kar et al., 2017; Mueller et al., 2019).
- Expectation: big, accurate, and appropriate data to support interpretable decisions. Limitation: lack of data, class imbalance, data sparsity, noise in data quality, and the presence of spatial and temporal correlation further limit the development of interpretable ML models in forest management (Lippitt et al., 2008; Ali et al., 2015; Curtis et al., 2018; Franklin and Ahmed, 2018; Gurumurthy et al., 2018; Gholami et al., 2019; Hethcoat et al., 2019).
…”
Section: Usable and Explainable Recommendations
Citation type: mentioning (confidence: 99%)
“…The lack of accurate and adequate data in forestry further limits the development of interpretable models (Lippitt et al., 2008; Kar et al., 2017; O'Connor et al., 2017; Curtis et al., 2018; Franklin and Ahmed, 2018; Gurumurthy et al., 2018; Gholami et al., 2019; Hethcoat et al., 2019). Scholars have noticed significant class imbalance, sparsity, and noise in the patrolling datasets they use in predicting wildlife poaching (Bland et al., 2015; Kar et al., 2017; Gurumurthy et al., 2018; Gholami et al., 2019). They also identified geographic and language barriers in collecting and synthesizing data for forest conservation decisions (Gurumurthy et al., 2018).…”
Section: Models Often Lack Transparency Restricting
Citation type: mentioning (confidence: 99%)