Transparency is one of the "Ethical Principles in the Context of AI Systems" described in the Ethics Guidelines for Trustworthy Artificial Intelligence (TAI). It is closely linked to four other principles, namely respect for human autonomy, prevention of harm, traceability, and explainability. Opaqueness, conversely, can have undesirable impacts in numerous ways, such as discrimination, inequality, segregation, marginalisation, and manipulation. The opaqueness of many AI tools, and the impossibility of understanding the black boxes underpinning them, contradicts these principles and prevents people from fully trusting such tools. In this paper we discuss the PSyKE technology, a platform providing general-purpose support for symbolic knowledge extraction from different sorts of black-box predictors via many extraction algorithms. The extracted knowledge is easily injectable into existing AI assets, making them meet the transparency requirement of TAI.
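To make the idea of symbolic knowledge extraction concrete, the following is a minimal, generic sketch (deliberately not PSyKE's actual API) of the pedagogical extraction scheme: a transparent surrogate model is fitted to mimic the predictions of an opaque black-box predictor, and human-readable rules are then read off the surrogate. The choice of a random forest as the black box and a shallow decision tree as the surrogate is purely illustrative.

```python
# Illustrative sketch of pedagogical symbolic knowledge extraction.
# NOTE: this is NOT PSyKE's API; it uses scikit-learn to convey the idea.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# An opaque black-box predictor.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# A transparent surrogate trained on the black box's *predictions*,
# not on the ground-truth labels: it learns to mimic the black box.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Fidelity: how closely the readable model reproduces the opaque one.
fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()

# Human-readable rules extracted from the surrogate.
rules = export_text(surrogate)
print(f"fidelity to black box: {fidelity:.2f}")
print(rules)
```

The surrogate's rules can then be inspected, audited, or injected into other knowledge-based AI assets, which is the kind of workflow the paper attributes to PSyKE.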