Algorithms have become ubiquitous in our day-to-day activities. Their presence ranges from low-stakes decisions, such as what music we should listen to, to high-stakes decisions, such as whom to promote. Despite the frequency and range with which we interact with algorithms, academic research, as well as public outcries, suggests that humans often express distrust of algorithms and their decisions. Previous research suggests that there may be ways to mitigate this distrust through educational efforts that help explain how algorithms operate. With a sample of 1,841 participants across 19 countries, the present study investigates this notion by examining variations in algorithmic trust across low- and high-stakes decisions, and the role of explainability and statistical literacy in shaping reactions. Broadly, the results suggest a relationship between how much one understands about algorithms and one's propensity to trust them. Implications of these findings are discussed.