Syntactic annotation of corpora in the form of part-of-speech (POS) tags is a key requirement for both linguistic research and subsequent automated natural language processing (NLP) tasks. This problem is commonly tackled using machine learning methods, i.e., by training a tagger on a sufficiently large corpus of labeled data. While the problem of tagging can essentially be considered solved for modern languages, historical corpora turn out to be much more difficult, especially due to the lack of native speakers and the sparsity of training data. Moreover, most historical texts lack sentence boundaries as we know them today, as well as a common orthography. These irregularities render the task of automated tagging more difficult and error-prone. Under these circumstances, instead of forcing the tagger to predict and commit to a single tag, it should be enabled to express its uncertainty. In this paper, we consider tagging within the framework of set-valued prediction, which allows the tagger to express its uncertainty by predicting a set of candidate tags instead of guessing a single one. The goal is to guarantee with high confidence that the correct tag is included while keeping the number of candidates small. In our experimental study, we find that extending state-of-the-art taggers to set-valued prediction yields more precise and robust taggings, especially for unknown words, i.e., words not occurring in the training data.
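To make the idea concrete, the following is a minimal sketch of one simple way to instantiate set-valued prediction: accumulating a tagger's softmax probabilities over tags, sorted in descending order, until a target confidence level is reached. The function name, tag set, probabilities, and threshold are illustrative assumptions, not necessarily the construction used in the paper.

```python
import numpy as np

def set_valued_prediction(tag_probs, tags, confidence=0.95):
    """Return the smallest set of candidate tags whose cumulative
    probability reaches the desired confidence level.

    tag_probs : 1-D array of per-tag probabilities (e.g., a tagger's softmax output)
    tags      : list of tag names aligned with tag_probs
    confidence: target probability that the set contains the correct tag
    """
    order = np.argsort(tag_probs)[::-1]   # tag indices, most probable first
    cumulative, prediction_set = 0.0, []
    for i in order:
        prediction_set.append(tags[i])
        cumulative += tag_probs[i]
        if cumulative >= confidence:      # stop once the set is confident enough
            break
    return prediction_set

# Example: an uncertain tagging, as might arise for an unknown word
tags = ["NOUN", "VERB", "ADJ", "ADV"]
probs = np.array([0.45, 0.35, 0.15, 0.05])
print(set_valued_prediction(probs, tags, confidence=0.9))
# -> ['NOUN', 'VERB', 'ADJ']  (a single-tag prediction would have to guess 'NOUN')
```

Note the trade-off this illustrates: a higher confidence threshold enlarges the prediction set, while a confident tagger (probability mass concentrated on one tag) still yields a singleton.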