We investigate the problem of determining the predictive confidence (or, conversely, uncertainty) of a neural classifier through the lens of low-resource languages. By training models on sub-sampled datasets in three different languages, we assess the quality of estimates from a wide array of approaches and their dependence on the amount of available data. We find that while approaches based on pre-trained models and ensembles achieve the best results overall, the quality of uncertainty estimates can surprisingly suffer with more data. We also perform a qualitative analysis of uncertainties on sequences, discovering that a model's total uncertainty seems to be influenced to a large degree by its data uncertainty, not its model uncertainty. All model implementations are open-sourced in a software package.1

1 The model zoo is available at https://github.com/Kaleidophon/nlp-uncertainty-zoo, with the code for the experiments available at https://github.com/Kaleidophon/nlp-low-resource-uncertainty.
2 That is, unless the model class we chose is too restrictive.