Prediction of transcription factor (TF) activities from the gene expression of their targets (i.e., the TF regulon) is becoming a widely used approach to characterize the functional status of transcriptional regulatory circuits. Several strategies and datasets have been proposed to link a TF to the target genes it likely regulates, each providing a different level of evidence. The most established are: (i) manually curated repositories, (ii) interactions derived from ChIP-seq binding data, (iii) in silico prediction of TF binding on gene promoters, and (iv) reverse-engineered regulons from large gene expression datasets. However, it is not known how the choice of regulon source affects TF activity estimates and, thereby, downstream analyses and their interpretation. Here, we compared the accuracy and biases of these strategies for defining human TF regulons by assessing their ability to predict changes in TF activities in three reference benchmark datasets. We assembled a collection of TF-target interactions covering 1,541 TFs and evaluated how different molecular and regulatory properties of the TFs, such as the type of DNA-binding domain, binding specificity, or mode of interaction with chromatin, affect the predictions of TF activity changes. We assessed the coverage of each strategy and found little overlap between the regulons they yield; regulons based on literature-curated information performed best, followed by those derived from ChIP-seq data. Finally, we provide an integrated collection of all TF-target interactions derived through these strategies, each assigned a confidence score, as a resource for enhanced prediction of TF activities.
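To illustrate the underlying idea (and not the specific statistical method benchmarked in this study), a TF's activity can be summarized from the expression of its regulon, for example as a sign-adjusted mean z-score over its target genes. The gene names, regulon contents, and helper function below are hypothetical and serve only as a minimal sketch.

```python
import numpy as np

# Hypothetical regulon: target gene -> sign of regulation (+1 activation, -1 repression)
regulon = {"GENE_A": +1, "GENE_B": +1, "GENE_C": -1}

# Hypothetical per-gene expression z-scores for one contrast (e.g., treatment vs. control)
expression_z = {"GENE_A": 2.1, "GENE_B": 1.4, "GENE_C": -0.8, "GENE_D": 0.1}

def tf_activity(regulon: dict, expression_z: dict) -> float:
    """Sign-adjusted mean z-score over the TF's targets present in the data."""
    scores = [sign * expression_z[gene]
              for gene, sign in regulon.items() if gene in expression_z]
    return float(np.mean(scores)) if scores else float("nan")

# A higher score is read as higher inferred TF activity in this contrast
print(tf_activity(regulon, expression_z))
```

In this sketch, the quality of the inferred activity depends entirely on which TF-target interactions populate the regulon, which is the comparison at the center of this study.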