In this paper, we present two Czech datasets for automated fact-checking, a task commonly modeled as classifying the veracity of a textual claim with respect to a corpus of trusted ground truths. We consider three classes: SUPPORTS and REFUTES, each accompanied by evidence documents, and NEI (Not Enough Info), which stands alone. Our first dataset, CsFEVER, has 127,328 claims and is an automatically generated Czech version of the large-scale FEVER dataset built on top of the Wikipedia corpus. We take a hybrid approach combining machine translation and document alignment; the approach, and the tools we provide, can be easily applied to other languages. Our second dataset, CTKFacts, consists of 3,097 claims annotated using a corpus of 2.2M articles from the Czech News Agency. We present its extended annotation methodology based on the FEVER approach. We analyze both datasets for spurious cues, i.e., annotation patterns that lead to model overfitting. CTKFacts is further examined for inter-annotator agreement, thoroughly cleaned, and a typology of common annotator errors is established.
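For illustration, the following minimal Python sketch shows how a claim record under this task formulation might be represented; the field names, class definition, and example data are hypothetical and do not reflect the datasets' actual schemas.

```python
# Illustrative sketch of a FEVER-style claim record; field names are
# assumptions for demonstration, not the datasets' published schemas.
from dataclasses import dataclass, field
from typing import List

LABELS = {"SUPPORTS", "REFUTES", "NOT ENOUGH INFO"}

@dataclass
class ClaimRecord:
    claim: str                  # textual claim to be verified
    label: str                  # one of the three veracity classes
    evidence: List[str] = field(default_factory=list)  # evidence document IDs

    def __post_init__(self):
        assert self.label in LABELS
        # SUPPORTS/REFUTES verdicts must be grounded in evidence documents;
        # NEI (Not Enough Info) stands alone without evidence.
        assert (self.label == "NOT ENOUGH INFO") == (not self.evidence)

# Hypothetical usage:
record = ClaimRecord(
    claim="Prague is the capital of the Czech Republic.",
    label="SUPPORTS",
    evidence=["Praha"],  # e.g., a Wikipedia page ID serving as ground truth
)
```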