“…To facilitate feature extraction, most studies utilize only high-quality images to establish deep learning systems (Cheung et al., 2021; Esteva et al., 2017; Li et al., 2020a, 2020b, 2020c, 2020d, 2021a, 2021b, 2021c, 2021d; Luo et al., 2019; Xie et al., 2020; Zhang et al., 2020). Although deep learning achieves good performance on high-quality images, its performance is poor on low-quality images, which are inevitable in real clinical scenarios owing to factors such as patient noncompliance, hardware imperfections, and operator errors (Li et al., 2020a, 2020b, 2020c, 2020d, 2021a, 2021b, 2021c, 2021d; Trucco et al., 2013). For instance, in screening for lattice degeneration/retinal breaks, glaucomatous optic neuropathy, and retinal exudation/drusen, deep learning systems achieved areas under the receiver operating characteristic curve (AUCs) of 0.990, 0.995, and 0.982, respectively, on high-quality fundus images, but only 0.635, 0.853, and 0.779, respectively, on low-quality fundus images (Li et al., 2020a, 2020b, 2020c, 2020d).…”