Deep learning-based image signal processor (ISP) models for mobile cameras can generate high-quality images that rival those of professional DSLR cameras. However, their computational demands often make them unsuitable for mobile settings. Additionally, modern mobile cameras employ non-Bayer color filter arrays (CFA) such as Quad Bayer, Nona Bayer, and Q×Q Bayer to enhance image quality, yet most existing deep learning-based ISP (or demosaicing) models focus primarily on standard Bayer CFAs. In this study, we present PyNET-Q×Q, a lightweight demosaicing model specifically designed for Q×Q Bayer CFA patterns, which is derived from the original PyNET. We also propose a knowledge distillation method called progressive distillation to train the reduced network more effectively. Consequently, PyNET-Q×Q contains less than 2.5% of the parameters of the original PyNET while preserving its performance. Experiments using Q×Q images captured by a prototype Q×Q camera sensor show that PyNET-Q×Q outperforms existing conventional algorithms in terms of texture and edge reconstruction, despite its significantly reduced parameter count. Code and partial datasets can be found at https://github.com/Minhyeok01/PyNET-QxQ.

INDEX TERMS Bayer filter, color filter array (CFA), demosaicing, image signal processor (ISP), knowledge distillation, non-Bayer CFA, Q×Q Bayer CFA.