Colourisation is the process of synthesising colour in black-and-white images without altering the image’s structural content or semantics. The authors explore colourisation for multi-modal medical data, applying it to X-rays. Colourised X-ray images can portray anatomical information better than their conventional monochromatic counterparts: the anatomical detail they contain, once colourised, becomes more visible and potentially more informative for clinical diagnosis. This can improve the interpretability of X-rays and contribute significantly to the field of medical image analysis. The authors implement three models: a basic autoencoder architecture, and two hybrid models that combine the autoencoder module with transfer learning from pre-trained neural networks. A distinctive feature of the proposed framework is that it can colourise any modality in the medical imaging domain. The framework’s performance is evaluated on a chest X-ray image dataset, where it produces benchmark results with high-quality colourisation. The main challenge is that the mapping between intensity and colour has no single correct solution, which makes human interaction and external input from medical professionals crucial for interpreting the results.
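The abstract does not give the models' internals, but the core idea, an encoder-decoder that learns a mapping from single-channel greyscale intensities to three colour channels, can be illustrated with a toy sketch. The sketch below is an assumption for illustration only: it uses small fully connected layers and NumPy gradient descent in place of the authors' (presumably convolutional) autoencoder, and a synthetic intensity-to-colour target in place of real X-ray data.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

class ColourAutoencoder:
    """Toy dense autoencoder: flattened 8x8 greyscale patch (64 values)
    -> hidden code -> flattened 8x8 RGB patch (192 values)."""

    def __init__(self, in_dim=64, hidden=32, out_dim=192, lr=0.01):
        self.W1 = rng.normal(0.0, 0.1, (in_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0.0, 0.1, (hidden, out_dim))
        self.b2 = np.zeros(out_dim)
        self.lr = lr

    def forward(self, x):
        self.h = relu(x @ self.W1 + self.b1)   # encoder
        return self.h @ self.W2 + self.b2      # decoder to RGB

    def train_step(self, x, y):
        pred = self.forward(x)
        err = pred - y                          # gradient of 0.5*MSE wrt pred
        n = x.shape[0]
        gW2 = self.h.T @ err / n
        gb2 = err.mean(axis=0)
        dh = (err @ self.W2.T) * (self.h > 0)   # backprop through ReLU
        gW1 = x.T @ dh / n
        gb1 = dh.mean(axis=0)
        self.W2 -= self.lr * gW2; self.b2 -= self.lr * gb2
        self.W1 -= self.lr * gW1; self.b1 -= self.lr * gb1
        return float(np.mean(err ** 2))         # loss before this update

# Synthetic stand-in for paired (greyscale, colour) training patches:
# a fixed pseudo-colour mapping of each intensity to R, G, B channels.
x = rng.uniform(0.0, 1.0, (256, 64))
y = np.concatenate([x * 0.9, x * 0.5, x * 0.2], axis=1)

model = ColourAutoencoder()
losses = [model.train_step(x, y) for _ in range(200)]
pred = model.forward(x)
```

The one-to-many nature of the real problem (the same intensity can correspond to many plausible colours) is exactly why the abstract stresses that intensity-to-colour mapping has no single correct solution; a plain MSE objective like the one above averages over plausible colours, which is one reason the authors add transfer learning from pre-trained networks.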