Causal Bayesian networks have become a powerful technology for reasoning under uncertainty in areas that require transparency and explainability, by relying on causal assumptions that enable us to simulate hypothetical interventions. The graphical structure of such models can be determined by structure learning algorithms, elicited from domain knowledge, or obtained through a combination of both. Various knowledge-based approaches have been proposed in the literature that enable us to specify prior knowledge that constrains or guides these algorithms. This paper introduces some novel, and also describes some existing, knowledge-based approaches that enable us to combine structure learning with knowledge obtained from heterogeneous sources. We investigate the impact of these approaches on structure learning across different algorithms, case studies and settings that we might encounter in practice. Each approach is assessed in terms of effectiveness and efficiency, including graphical accuracy, model fitting, complexity and runtime, making this the first paper to provide a comparative evaluation of a wide range of knowledge approaches for structure learning. Because the value of knowledge depends on what data are available, we illustrate the results with both limited and big data. While the overall results show that knowledge becomes less important with big data, since higher learning accuracy reduces the benefit of knowledge, some of the knowledge approaches are found to be more important with big data. Amongst the main conclusions is the observation that a reduced search space obtained from knowledge does not always imply reduced computational complexity, perhaps because the relationships implied by the data and the knowledge are in tension.
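To make the idea of knowledge constraining a structure learning algorithm concrete, the minimal Python sketch below shows how hard constraints such as forbidden edges, required edges and temporal tiers prune the directed-edge space that a score-based search would otherwise explore. It is an illustration only, not the paper's implementation or any specific library's API; the variables, tiers and edge sets are assumed for the example.

```python
# Illustrative sketch (hypothetical variables and constraints): applying hard
# knowledge constraints to prune the candidate directed edges available to a
# score-based structure learning algorithm.

from itertools import permutations

variables = ["Smoking", "Genetics", "Cancer", "Symptoms"]

# Knowledge expressed as hard constraints:
# - temporal tiers: an edge may not point from a later tier to an earlier one
# - forbidden edges: edges the expert rules out
# - required edges: edges fixed in the graph before the search begins
temporal_tier = {"Smoking": 0, "Genetics": 0, "Cancer": 1, "Symptoms": 2}
forbidden_edges = {("Symptoms", "Smoking")}
required_edges = {("Smoking", "Cancer")}

def allowed(edge):
    """An edge X -> Y is admissible if it is not forbidden and does not
    point from a later temporal tier to an earlier one."""
    x, y = edge
    if edge in forbidden_edges:
        return False
    return temporal_tier[x] <= temporal_tier[y]

all_edges = list(permutations(variables, 2))           # every possible directed edge
candidate_edges = [e for e in all_edges if allowed(e)] # edges the search may still consider

print(f"Unconstrained candidate edges: {len(all_edges)}")
print(f"Candidate edges after applying knowledge: {len(candidate_edges)}")
print(f"Edges fixed in advance by required-edge knowledge: {sorted(required_edges)}")
```

As the paper's results caution, this kind of reduction in the candidate-edge space does not necessarily translate into a proportional reduction in runtime, since the search dynamics also depend on how well the constraints agree with the relationships implied by the data.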