Single-image colorization is an inherently ill-posed problem that has recently attracted increasing research interest. It requires learning a luminance-to-chroma mapping while managing multi-modal uncertainty. However, directly learning such a mapping often leads to semantic mismatch and color bleeding when reconstructing the chroma channels. Furthermore, the problem of output diversity remains largely unaddressed. To this end, we propose CodeColorist, a novel multi-stage network for colorization. To improve chroma reconstruction and overcome issues such as semantic mismatch and color bleeding, in the first stage we learn an expressive, context-aware codebook by leveraging the interplay between features extracted from the luminance and chroma channels. In the second stage, we reframe colorization as a code prediction task and learn to predict the code sequence for accurate codebook lookup. Finally, we address the need for stochastic, diverse outputs by navigating an expanded hidden representation space in the final stage, which allows us to produce a wide range of diverse and visually appealing colorized images. Thanks to our tailored feature extraction architecture, expressive codebook, and dedicated lookup scheme, we produce vibrant and realistic colorizations that outperform previous works. Source code will be released at https://github.com/OHaiYo-lzy/CodeColorist.