Following the large-scale 2015–2016 migration crisis that shook Europe, big data and social media harvesting methods have become increasingly popular in monitoring mass forced migration. These methods focus on producing ‘real-time’ inferences and predictions about the individual and social behavioral, preferential, and cognitive patterns of human mobility. Although the volume of such data has grown rapidly thanks to social media and remote sensing technologies, these methods have also produced biased, flawed, or otherwise invasive results that have made migrants’ lives in transit more difficult. This review article explores the recent debate on the use of social media data to train machine learning classifiers and adjust thresholds in algorithmic systems that monitor and predict violence and forced migration. It identifies and dissects five prevalent explanations in the literature for the limitations of such data in A.I. forecasting: ‘policy-engineering mismatch’, ‘accessibility/comprehensibility’, ‘legal/legislative legitimacy’, ‘poor data cleaning’, and ‘difficulty of troubleshooting’. From this review, the article suggests anonymization, distributed responsibility, and the ‘right to reasonable inferences’ debate as potential remedies for these problems and as next steps for research.