In vivo imaging of human retinal pigment epithelial (RPE) cells has been demonstrated through multiple adaptive optics (AO)-based modalities. However, whether consistent and complete information regarding the cellular structure of the RPE mosaic is obtained across these modalities remains uncertain due to limited comparisons performed in the same eye. Here, an imaging platform combining multimodal AO-scanning light ophthalmoscopy (AO-SLO) with AO-optical coherence tomography (AO-OCT) is developed to make a side-by-side comparison of the same RPE cells imaged across four modalities: AO-darkfield, AO-enhanced indocyanine green (AO-ICG), AO-infrared autofluorescence (AO-IRAF), and AO-OCT. Co-registered images were acquired in five subjects, including one patient with choroideremia. Multimodal imaging provided multiple perspectives of the RPE mosaic that were used to explore variations in RPE cell contrast in a subject-, location-, and even cell-dependent manner. Estimated cell-to-cell spacing and density were found to be consistent both across modalities and with normative data. Multimodal images from a patient with choroideremia illustrate the benefit of using multiple modalities to infer the cellular structure of the RPE mosaic in an affected eye, in which disruptions to the RPE mosaic may locally alter the signal strength, the visibility of individual RPE cells, or even the source of contrast in unpredictable ways.
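The cell-to-cell spacing and density estimates compared across modalities above can be computed once cell centers have been identified in a co-registered image. A minimal sketch, assuming cell centers have already been segmented (the coordinates, function name, and the use of mean nearest-neighbor distance as the spacing metric are illustrative assumptions, not the paper's exact analysis):

```python
import numpy as np

def rpe_mosaic_metrics(centers_um, field_area_um2):
    """Estimate mean cell-to-cell spacing (um) and density (cells/mm^2)
    from an array of RPE cell-center coordinates.

    centers_um: (N, 2) array of cell centers in micrometers.
    field_area_um2: imaged field area in square micrometers.
    """
    n = len(centers_um)
    # Pairwise Euclidean distances; mask out self-distances on the diagonal.
    d = np.linalg.norm(centers_um[:, None, :] - centers_um[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn_spacing = d.min(axis=1).mean()      # mean nearest-neighbor spacing (um)
    density = n / field_area_um2 * 1e6     # convert cells/um^2 to cells/mm^2
    return nn_spacing, density

# Hypothetical regular mosaic: 10 x 10 grid of cells with 14 um pitch.
xs, ys = np.meshgrid(np.arange(10) * 14.0, np.arange(10) * 14.0)
centers = np.stack([xs.ravel(), ys.ravel()], axis=1)
spacing, density = rpe_mosaic_metrics(centers, field_area_um2=(10 * 14.0) ** 2)
```

Agreement of such metrics across modalities, as reported above, supports the view that the different contrast mechanisms are sampling the same underlying mosaic.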
Adaptive optics scanning light ophthalmoscopy (AOSLO) allows non-invasive visualization of the living human eye at the microscopic scale, but even with correction of the ocular wavefront aberrations over a large pupil, the smallest cells in the photoreceptor mosaic cannot always be resolved. Here, we synergistically combine annular pupil illumination with sub-Airy disk confocal detection to demonstrate a 33% improvement in transverse resolution (from 2.36 to 1.58 µm) and a 13% axial resolution enhancement (from 37 to 32 µm), an important step towards the study of the complete photoreceptor mosaic in health and disease. Interestingly, annular pupil illumination also enhanced the visualization of the photoreceptor mosaic in non-confocal detection schemes such as split detection AOSLO, providing a strategy for enhanced multimodal imaging of the cone and rod photoreceptor mosaic.
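The quoted percentage gains follow directly from the before/after resolution values; a quick arithmetic check (values taken from the abstract, with the axial figure rounded down to 13% in the text):

```python
def pct_improvement(before, after):
    """Fractional reduction from `before` to `after`, as a percentage."""
    return 100.0 * (before - after) / before

transverse = pct_improvement(2.36, 1.58)  # transverse resolution, um -> ~33%
axial = pct_improvement(37.0, 32.0)       # axial resolution, um -> ~13.5%
```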
Data augmentation is an important strategy for enlarging training datasets in deep learning-based medical image analysis. This is because large, annotated medical datasets are not only difficult and costly to generate, but also quickly become obsolete due to rapid advances in imaging technology. Image-to-image conditional generative adversarial networks (C-GAN) provide a potential solution for data augmentation. However, the annotations used as inputs to C-GAN are typically based only on shape information, which can result in undesirable intensity distributions in the artificially-created images. In this paper, we introduce an active cell appearance model (ACAM) that measures statistical distributions of both shape and intensity, and we use this model to guide C-GAN in generating more realistic images, an approach we call A-GAN. Traditional approaches to data augmentation based on arbitrary transformations can produce unrealistic shape variations that are not representative of real data; A-GAN ameliorates this by using the statistical model (ACAM) to determine how transformations are applied, and in doing so provides an effective means of conveying anisotropic intensity information to C-GAN. To validate the effectiveness of A-GAN for data augmentation, we assessed its performance on cell analysis in adaptive optics retinal imaging, a rapidly-changing medical imaging modality. Compared to C-GAN, A-GAN achieved stability in fewer training iterations, and cell detection and segmentation accuracy was higher when assisted by A-GAN augmentation than with C-GAN. These findings demonstrate the potential for A-GAN to substantially improve existing data augmentation methods in medical image analysis.
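The core idea of sampling from a learned statistical model, rather than applying arbitrary transformations, can be illustrated with a point-distribution (PCA) shape model, the classic building block behind active shape/appearance models. A minimal sketch on synthetic landmark data (the training shapes, the two-mode truncation, and the ±3σ sampling bound are illustrative assumptions, not the ACAM formulation from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic training set: 200 "cell boundary" shapes, each with 8 (x, y)
# landmarks, generated as noisy ellipses purely for illustration.
t = np.linspace(0, 2 * np.pi, 8, endpoint=False)
shapes = np.stack([
    np.concatenate([(1.0 + 0.1 * rng.standard_normal()) * np.cos(t),
                    (0.7 + 0.1 * rng.standard_normal()) * np.sin(t)])
    for _ in range(200)
])  # (200, 16) flattened landmark vectors

# Point-distribution model: mean shape plus principal modes of variation.
mean = shapes.mean(axis=0)
centered = shapes - mean
u, s, vt = np.linalg.svd(centered, full_matrices=False)
modes = vt[:2]                          # top-2 modes of shape variation
stdevs = s[:2] / np.sqrt(len(shapes) - 1)

# Generate a new plausible shape by sampling mode weights within +/-3 sigma,
# so augmented shapes stay representative of the training distribution.
b = rng.uniform(-3, 3, size=2) * stdevs
new_shape = mean + b @ modes
```

Constraining the sampled weights to the observed modes of variation is what keeps model-guided augmentation from producing the unrealistic shapes that arbitrary transformations can introduce.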