Re-colorization of images or movies is a challenging problem because infinitely many RGB solutions exist for any monochrome object. The process is typically assisted by humans, who either provide colorization hints or supply relevant training data for ML/AI algorithms. Our intention is to develop a mechanism for fully unguided colorization of movies that uses no training data. In other words, we aim to create acceptable colored counterparts of movies in domains where only monochrome visualizations physically exist (e.g., IR, UV, or MRI data). Following our past approach to image colorization, the method assumes arbitrary rgb2gray models and utilizes a few probabilistic heuristics. Additionally, we maintain the temporal stability of colorization by locally applying structural similarity (SSIM) between adjacent frames. The paper explains the details of the method, presents exemplary results, and compares them to state-of-the-art solutions.

NOTE: All figures are best viewed in color and high resolution.
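To illustrate the general idea of enforcing temporal stability via local SSIM, the sketch below computes a per-pixel SSIM map between the luminance of adjacent frames and reuses the previous frame's chroma wherever the scene has barely changed. This is a minimal illustration under our own assumptions, not the authors' implementation: the function name, the chroma representation (a/b channels), and the threshold are hypothetical.

```python
# Sketch only (not the paper's algorithm): gate color propagation between
# adjacent frames by a local SSIM map of their luminance.
import numpy as np
from skimage.metrics import structural_similarity


def propagate_stable_colors(prev_gray, curr_gray, prev_ab, curr_ab, ssim_thresh=0.9):
    """Reuse chroma from the previous frame where the local SSIM is high.

    prev_gray, curr_gray : (H, W) float arrays in [0, 1], luminance of adjacent frames
    prev_ab, curr_ab     : (H, W, 2) float arrays, independently colorized chroma
    ssim_thresh          : hypothetical threshold above which colors are reused
    """
    # full=True returns the per-pixel SSIM map in addition to the mean score
    _, ssim_map = structural_similarity(
        prev_gray, curr_gray, data_range=1.0, full=True
    )
    stable = ssim_map > ssim_thresh      # regions that barely changed between frames
    out_ab = curr_ab.copy()
    out_ab[stable] = prev_ab[stable]     # keep the previous frame's colors there
    return out_ab


if __name__ == "__main__":
    # Random data just to demonstrate the expected shapes and dtypes
    rng = np.random.default_rng(0)
    g0 = rng.random((120, 160))
    g1 = np.clip(g0 + 0.01 * rng.standard_normal((120, 160)), 0.0, 1.0)
    ab0 = rng.random((120, 160, 2))
    ab1 = rng.random((120, 160, 2))
    print(propagate_stable_colors(g0, g1, ab0, ab1).shape)
```

In this toy setup, a hard threshold decides where colors are carried over; a soft blend weighted by the SSIM map would be an equally plausible variant.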