In the image signal processing (ISP) pipeline of digital imaging devices, removing undesirable illuminant effects to obtain color invariance, commonly known as computational color constancy, is of high practical importance. Achieving computational color constancy involves two phases: illumination estimation, the primary focus of this work, and chromatic adaptation based on human visual perception. In the first phase, illumination estimation predicts an RGB triplet, the numeric representation of the incident illuminant color, from the values of the image pixels. The accuracy of this estimate is key to realizing computational color constancy. With recent advances in deep learning (DL), many deep learning-based approaches have been proposed and have brought higher accuracy to computer vision applications, but obstacles such as learning instability still remain. To address this ill-posed problem in illumination estimation, this article presents a novel deep learning-based approach, the Cascading Residual Network Architecture (CRNA), which incorporates ResNet-style residual connections and a cascading mechanism into a deep convolutional neural network (DCNN). The cascading mechanism keeps the network's size from varying abruptly, mitigates learning instability, and thereby reduces quality degradation; this is attributed to the ability of the cascading mechanism to fine-tune the pre-trained DCNN. Extensive experiments on large datasets and comparisons with existing methods show that the proposed approach delivers more stable and robust results and suggest its potential to generalize across deep learning applications.

INDEX TERMS image signal processing pipeline, computational color constancy, learning instability, cascading mechanism, illumination estimation.
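To make the cascading-residual idea concrete, the following is a minimal PyTorch sketch of a residual block whose outputs are cascaded through 1x1 fusion convolutions and regressed to an RGB illuminant triplet. The layer widths, the concatenation-based cascading, and the global-pooling regression head are illustrative assumptions for this sketch, not the authors' exact CRNA design.

```python
# Minimal sketch of a cascading residual illuminant estimator.
# All architectural details here (widths, fusion scheme, head) are assumptions,
# not the CRNA specification from the paper.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        # ResNet-style skip connection: input and output keep the same size.
        return torch.relu(x + self.body(x))

class CascadingEstimator(nn.Module):
    def __init__(self, channels=64, num_blocks=3):
        super().__init__()
        self.entry = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            [ResidualBlock(channels) for _ in range(num_blocks)]
        )
        # 1x1 convolutions fuse the concatenated outputs of all earlier stages,
        # so the feature width stays fixed instead of changing abruptly.
        self.fuse = nn.ModuleList(
            [nn.Conv2d(channels * (i + 2), channels, 1) for i in range(num_blocks)]
        )
        self.head = nn.Linear(channels, 3)  # regresses the RGB illuminant triplet

    def forward(self, x):
        feat = self.entry(x)
        cascade = [feat]
        for block, fuse in zip(self.blocks, self.fuse):
            cascade.append(block(feat))
            feat = fuse(torch.cat(cascade, dim=1))
        pooled = feat.mean(dim=(2, 3))            # global average pooling
        rgb = torch.relu(self.head(pooled))
        return rgb / (rgb.norm(dim=1, keepdim=True) + 1e-8)  # unit-norm illuminant
```

For example, `CascadingEstimator()(torch.rand(4, 3, 64, 64))` returns a `(4, 3)` tensor of unit-norm illuminant estimates; concatenation plus 1x1 fusion is one common way to realize a cascading mechanism while holding feature dimensions constant.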