Invertible neural network-based architectures (Cai et al. 2022; Helminger et al. 2021; Ho et al. 2021; Ma et al. 2019, 2022a; Xie, Cheng, and Chen 2021) and transformer-based architectures (Qian et al. 2022; Zhu, Yang, and Cohen 2022; Zou, Song, and Zhang 2022; Liu, Sun, and Katto 2023) have also been utilized to enhance the modeling capacity of the transforms. Other works aim to improve the efficiency of entropy coding, e.g., the scale hyperprior entropy model (Ballé et al. 2018), channel-wise entropy model (Minnen and Singh 2020), context model (Lee, Cho, and Beack 2019; Mentzer et al. 2018; Minnen, Ballé, and Toderici 2018), 3D-context model (Guo et al. 2020b), multi-scale hyperprior entropy model (Hu et al. 2022), discretized Gaussian mixture model (Cheng et al. 2020), checkerboard context model (He et al. 2021), split hierarchical variational compression (SHVC) (Ryder et al. 2022), information transformer (Informer) entropy model (Kim, Heo, and Lee 2022), bi-directional conditional entropy model (Lei et al. 2022), unevenly grouped space-channel context model (ELIC) (He et al. 2022), neural data-dependent transform (Wang et al. 2022a), multi-level cross-channel entropy model (Guo et al. 2022), and multivariate Gaussian mixture model. By constructing more accurate entropy models, these methods have achieved greater compression efficiency.
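For intuition, these entropy-model designs all target the same quantity: the bitrate is the cross-entropy between the true distribution of the quantized latents and the distribution predicted by the entropy model, so a more accurate prediction directly yields fewer bits. As an illustrative sketch (following the standard discretized Gaussian formulation of Ballé et al. 2018 and Cheng et al. 2020, with $\hat{y}$ a quantized latent, $\Phi$ the standard normal CDF, and $\mu$, $\sigma$ the predicted mean and scale; the notation here is ours, not taken from any one cited paper):
\begin{equation}
R = \mathbb{E}\!\left[-\log_2 p_{\hat{y}}(\hat{y})\right],
\qquad
p_{\hat{y}}(\hat{y}) = \Phi\!\left(\frac{\hat{y} + \tfrac{1}{2} - \mu}{\sigma}\right) - \Phi\!\left(\frac{\hat{y} - \tfrac{1}{2} - \mu}{\sigma}\right).
\end{equation}
Richer conditioning (hyperpriors, spatial or channel context, mixtures) sharpens the predicted $(\mu, \sigma)$, increasing $p_{\hat{y}}(\hat{y})$ at the true symbol and thus lowering $R$.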