Purpose: The purpose of this study was to investigate the utility of automated focal-plane merging in gonio-photographs collected with different depths of field (DOF), using an established focus-stacking algorithm.

Methods: A cross-sectional study was conducted at Shimane University Hospital, Izumo, Japan. Sixteen eyes of 16 subjects from the glaucoma clinic were included. Following successful gonio-photography, image processing was performed on images of 16 angle sectors from each of the 16 eyes. The resulting 256 sets of focus-stacked and best-focused images were presented in random order, and masked observers compared each pair for DOF and for informativeness in diagnosing angle pathology (subjective assessment). In addition, the energy of the Laplacian (average |ΔI|), an indicator of image sharpness, was computed with a Laplacian filter for the photographs with and without focus-stacking (objective assessment).

Results: Automated image processing succeeded for all image stacks. Significant deepening of DOF and improvement in informativeness were achieved in 255 (99.6%) and 216 (84.4%) images, respectively (P < 0.0001 for both, sign test), and the energy of the Laplacian also increased significantly in 243 (94.9%) images (P < 0.0001, sign test).

Conclusions: Focal-plane merging by the automated algorithm yields gonio-images with deeper focus than the paired best-focused images, both subjectively and objectively, which would be useful for assessing angle pathology in clinical practice.

Translational Relevance: A focal-plane merging algorithm for automated gonio-photography can facilitate angle assessment by providing informative deep-focus images, which would be useful in glaucoma care.
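The objective sharpness measure described above, the energy of the Laplacian (average |ΔI|), can be sketched in a few lines. This is a minimal illustration of the general metric, not the authors' implementation; the discrete 4-neighbor Laplacian and periodic boundary handling are assumptions.

```python
import numpy as np

def laplacian_energy(image: np.ndarray) -> float:
    """Energy of the Laplacian: mean absolute response of a discrete
    Laplacian filter, used as a simple image-sharpness score."""
    img = image.astype(np.float64)
    # 4-neighbor discrete Laplacian via array shifts (periodic boundary,
    # which is a negligible approximation on large images).
    lap = (np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1)
           - 4.0 * img)
    return float(np.mean(np.abs(lap)))
```

A focus-stacked image should score higher than a defocused one, since blurring suppresses the high-frequency content the Laplacian responds to.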
SUMMARY: We previously proposed a query-by-sketch image retrieval system that uses an edge relation histogram (ERH). However, this method has difficulty retrieving partial objects from an image, because the ERH is a feature of the entire image rather than of each object. We therefore propose an object-extraction method that uses edge-based features to enable the query-by-sketch system to retrieve partial images. The method was applied to 20,000 images from the Corel Photo Gallery. We confirm that retrieval accuracy improves when edge-based features are used to extract objects, enabling the query-by-sketch system to retrieve partial images.

Key words: query-by-sketch image retrieval, object extraction, partial image retrieval, edge-based feature
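The abstract does not define the ERH itself, but the general family of edge-based image features it belongs to can be illustrated with a simple magnitude-weighted edge-orientation histogram. This sketch is only a generic example of an edge-based feature, not the authors' ERH; the bin count and gradient scheme are assumptions.

```python
import numpy as np

def edge_orientation_histogram(image: np.ndarray, n_bins: int = 8) -> np.ndarray:
    """Generic edge-based feature: histogram of gradient orientations,
    weighted by gradient magnitude and normalized to sum to 1."""
    img = image.astype(np.float64)
    gy, gx = np.gradient(img)              # central-difference gradients
    mag = np.hypot(gx, gy)                 # edge strength per pixel
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # orientation folded into [0, pi)
    bins = np.minimum((ang / np.pi * n_bins).astype(int), n_bins - 1)
    hist = np.bincount(bins.ravel(), weights=mag.ravel(), minlength=n_bins)
    total = hist.sum()
    return hist / total if total > 0 else hist
```

Because it is computed over the whole image, a feature like this cannot localize individual objects — which is exactly the limitation that motivates the object-extraction step described in the abstract.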
Aim/Background: The aim of this study is to develop an artificial intelligence (AI) that aids the thought process by providing retinal clinicians with clinically meaningful or abnormal findings rather than only a final diagnosis, i.e., a "wayfinding AI."

Methods: Spectral-domain optical coherence tomography (OCT) B-scan images were classified into 189 normal and 111 diseased eyes. These were automatically segmented using a deep-learning-based boundary-layer detection model. During segmentation, the AI model calculates the probability of the layer's boundary surface for each A-scan. If this probability distribution is not concentrated at a single point, layer detection is defined as ambiguous. This ambiguity was quantified using entropy, and the resulting value, referred to as the ambiguity index, was calculated for each OCT image. The ability of the ambiguity index to classify normal and diseased images, and to detect the presence or absence of abnormalities in each retinal layer, was evaluated using the area under the curve (AUC). A heatmap of each layer, termed an ambiguity map, whose color changes according to the ambiguity index value, was also created.

Results: The ambiguity indices of the overall retina in normal and disease-affected images (mean ± SD) were 1.76 ± 0.10 and 2.06 ± 0.22, respectively, a significant difference (p < 0.05). The AUC for distinguishing normal from disease-affected images using the ambiguity index was 0.93; by layer, it was 0.588 for the internal limiting membrane boundary, 0.902 for the nerve fiber layer/ganglion cell layer boundary, 0.920 for the inner plexiform layer/inner nuclear layer boundary, 0.882 for the outer plexiform layer/outer nuclear layer boundary, 0.926 for the ellipsoid zone line, and 0.866 for the retinal pigment epithelium/Bruch's membrane boundary. Three representative cases illustrate the usefulness of the ambiguity map.

Conclusions: The present AI algorithm can pinpoint abnormal retinal lesions in OCT images, and their localization is apparent at a glance with an ambiguity map. This will support clinicians' diagnostic processes as a wayfinding tool.