Humans are born with very low contrast sensitivity, meaning that developing infants experience the world "in a blur". Is this solely a byproduct of maturational processes, or is there some functional advantage to beginning life with poor vision? We explore whether reduced visual acuity, a consequence of low contrast sensitivity, facilitates the acquisition of basic-level visual categories and, if so, whether this advantage also enhances subordinate-level category learning as visual acuity improves. Using convolutional neural networks (CNNs) and the ecoset dataset to simulate basic-level category learning, we manipulated model training curricula along three dimensions: the presence of blurred inputs early in training, the rate of blur removal over time, and grayscale versus color inputs. We found that a training regimen in which blur starts high and is gradually reduced over time, as in human development, improves basic-level categorization performance relative to a regimen in which non-blurred inputs are used throughout. However, this pattern was observed only when grayscale images were used (analogous to the low sensitivity to color that infants experience during early development). Importantly, the observed improvements in basic-level performance generalized to subordinate-level categorization as well: when models were fine-tuned on a dataset including subordinate-level categories (ImageNet), models initially trained with blurred inputs showed a greater performance benefit than models trained solely on non-blurred inputs. Consistent with several other recent studies, we conclude that poor visual acuity in human newborns confers multiple advantages, including, as demonstrated here, more rapid and accurate acquisition of visual object categories at multiple hierarchical levels.
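The blur curriculum described above can be sketched minimally as a schedule that starts with heavy Gaussian blur and anneals it away over training. The sketch below is illustrative only: the linear decay, the maximum sigma, and the function names are assumptions, not the paper's exact schedule (the rate of blur removal is itself one of the manipulated variables), and the blur is applied to grayscale inputs to match the grayscale condition.

```python
import numpy as np

def blur_sigma(epoch, n_epochs, sigma_max=4.0):
    """Hypothetical linear schedule: Gaussian-blur sigma decays from
    sigma_max at epoch 0 to 0 by the final epoch (blur removed)."""
    frac = min(epoch / max(n_epochs - 1, 1), 1.0)
    return sigma_max * (1.0 - frac)

def gaussian_blur_gray(img, sigma):
    """Separable Gaussian blur of a 2-D grayscale image.
    sigma <= 0 returns the image unchanged (full-acuity input)."""
    if sigma <= 0:
        return img
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    # Convolve rows, then columns, with the 1-D kernel.
    out = np.apply_along_axis(lambda r: np.convolve(r, kernel, mode="same"), 1, img)
    out = np.apply_along_axis(lambda c: np.convolve(c, kernel, mode="same"), 0, out)
    return out

# Usage: blur each training batch according to the current epoch's sigma.
rng = np.random.default_rng(0)
image = rng.random((32, 32))           # stand-in grayscale image
early = gaussian_blur_gray(image, blur_sigma(0, 10))   # heavily blurred
late = gaussian_blur_gray(image, blur_sigma(9, 10))    # no blur remains
```

In a real training loop this transform would sit in the data pipeline, so that the same network sees progressively sharper versions of the dataset as epochs advance, while the comparison regimen simply holds sigma at 0 throughout.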