An optical setup for holographic projection onto scattering centers in fluorescent liquids is presented. Such media can serve as volumetric screens for near-eye holographic displays, mitigating the speckle noise and very small exit pupils of existing setups. Three different oils (canola, olive, and engine oil) illuminated with a 532 nm laser, and tonic water illuminated with a 405 nm laser, are used for projecting holographic fields, and the quality of the resulting images is investigated. The laser wavelength is blocked from the camera so that only the filtered fluorescent light is recorded. The best and brightest results are obtained with engine oil.
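Holographic fields such as those projected here are commonly computed with an iterative Fourier-transform algorithm (Gerchberg–Saxton type). The following is a minimal numpy sketch of that general technique, not the authors' implementation; the function name and parameters are illustrative:

```python
import numpy as np

def ifta_phase_hologram(target, iterations=100, seed=0):
    """Iterative Fourier-transform algorithm (Gerchberg-Saxton style):
    find a phase-only hologram whose far-field intensity approximates
    the target image. Returns the hologram phase in radians."""
    rng = np.random.default_rng(seed)
    amp = np.sqrt(np.asarray(target, dtype=float))   # target amplitude
    # start from the target amplitude with a random phase
    field = amp * np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, amp.shape))
    for _ in range(iterations):
        holo = np.fft.ifft2(field)                   # back to hologram plane
        holo = np.exp(1j * np.angle(holo))           # phase-only constraint
        field = np.fft.fft2(holo)                    # forward to image plane
        field = amp * np.exp(1j * np.angle(field))   # enforce target amplitude
    return np.angle(holo)
```

The reconstruction is obtained by propagating the unit-amplitude, phase-modulated field back to the image plane, e.g. `np.abs(np.fft.fft2(np.exp(1j * phase)))**2`.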
1549 Background: The presence of genetic mutations is a vital prognostic factor in many types of cancer. However, genomic testing is expensive and challenging to perform. In contrast, hematoxylin and eosin (H&E) staining is relatively inexpensive and straightforward. Thus, in this study, we propose a method of predicting the presence of genetic mutations using H&E-stained whole-slide images (WSIs). Methods: We divided each H&E-stained WSI into small pieces, or "patches." We used a deep learning model to classify each patch based on the presence of tumor-containing regions. We then extracted image features from each tumor-containing patch using a deep learning-based feature extractor and created image features for the entire WSI by concatenating the features of the patches. We then trained genetic mutation classification models using the WSI features as input and the presence or absence of genetic mutations as output. Finally, we evaluated the performance of these models using the area under the receiver operating characteristic curve (AUC). Results: First, we evaluated our methods using The Cancer Genome Atlas (TCGA) colorectal cancer dataset. We used H&E-stained WSIs and data on microsatellite instability (MSI) and BRAF gene mutations, which are directly relevant to therapeutic strategies, obtained from a clinical cohort of 566 patients with TCGA colon and rectum adenocarcinoma. We divided the data into training, validation, and test splits comprising 367, 90, and 109 patients, respectively. We used the training and validation splits for model training and selection, and the test split for model evaluation. The AUC values of the classification models, with associated 95% confidence intervals (CIs), were 0.721 (CI = 0.572-0.870) for MSI and 0.712 (CI = 0.547-0.877) for BRAF gene mutations. We also applied our approach to MUC16, KRAS, and ALK mutations using the TCGA lung cancer dataset.
We divided 909 TCGA lung adenocarcinoma and lung squamous cell carcinoma patients into training, validation, and test splits comprising 582, 146, and 181 patients, respectively. In contrast to the colorectal experiments, WSI image features were generated using all patches. The AUC values on the test split were 0.897 (CI = 0.85-0.95) for MUC16, 0.845 (CI = 0.75-0.94) for KRAS, and 0.756 (CI = 0.57-0.94) for ALK mutations. Conclusions: We proposed an approach to predict the presence of genetic mutations using only H&E-stained WSIs and evaluated its performance on colorectal and lung cancer datasets. Our model has the potential to predict the presence of certain genetic mutations from histology alone, and these predictions can be used to improve the accuracy of prognostic prediction using WSIs alone.
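The aggregation and evaluation steps described above can be sketched in a few lines. The sketch below is illustrative only (function names, the patch cap `k`, and the mock data are assumptions, not the authors' code): it builds a fixed-length slide descriptor by concatenating patch feature vectors, and scores a mutation classifier with the rank-sum (Mann-Whitney) identity for the AUC:

```python
import numpy as np

def slide_vector(patch_feats, k=64):
    """Concatenate up to k per-patch feature vectors into one
    fixed-length slide-level descriptor (zero-padded when a slide
    has fewer than k patches). k is an illustrative cap."""
    d = patch_feats.shape[1]
    out = np.zeros(k * d)
    m = min(k, len(patch_feats))
    out[: m * d] = patch_feats[:m].ravel()
    return out

def roc_auc(labels, scores):
    """AUC via the rank-sum identity: the fraction of
    (positive, negative) pairs ranked correctly, ties counted 0.5."""
    labels = np.asarray(labels, dtype=bool)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[labels], scores[~labels]
    diff = pos[:, None] - neg[None, :]
    return ((diff > 0).sum() + 0.5 * (diff == 0).sum()) / (len(pos) * len(neg))
```

In practice the slide descriptors would feed a trained classifier whose per-patient scores are then passed to `roc_auc` together with the mutation labels.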
3119 Background: Previous studies have shown that the presence or absence of genetic mutations is critical for colorectal cancer prognosis. However, genomic testing can be expensive and difficult to perform on all samples. In contrast, hematoxylin and eosin (H&E) staining is relatively inexpensive and can be performed on all tissue specimens. In this study, we designed a novel prognostic method using spatial image features extracted from H&E-stained whole-slide images (WSIs) and genetic mutation prediction neural networks. Methods: We obtained H&E-stained WSIs and data on microsatellite instability (MSI) and BRAF, TTN, and APC gene mutations from a clinical cohort of 548 patients with The Cancer Genome Atlas (TCGA) colon adenocarcinoma and rectum adenocarcinoma. We divided them into training (n = 361), validation (n = 90), and test (n = 115) groups. Classification models were trained to predict the presence or absence of MSI and BRAF, TTN, and APC mutations. The model input comprised features of the H&E-stained WSIs, obtained via a deep learning-based feature extractor. All resultant models were incorporated into a prognostic model (overall survival: > 60 months (low risk) / < 60 months (high risk)). Our prognostic model's performance was evaluated on the TCGA colorectal dataset, and a survival analysis was performed using the Kaplan-Meier method. Finally, we compared our model's performance with the end-to-end prognostic prediction of a convolutional neural network (CNN) that also used H&E-stained WSIs as input and produced prognostic predictions as output. Results: Our deep learning-based prognostic prediction model achieved an AUC score of 0.834 (95% confidence interval (CI): 0.734-1.000) on the TCGA dataset. The survival analysis compared the survival distributions of the low-risk and high-risk groups predicted by our model, yielding a p-value < 0.01.
The model could classify low- and high-risk patients and accurately predict patient status as alive (low risk) or deceased (high risk) at 60 months. In contrast, the CNN-based model achieved an AUC score of only 0.502 (95% CI: 0.315-0.690) on the same TCGA dataset, and the p-value obtained for it under the Kaplan-Meier log-rank test was greater than 0.5. The CNN-based method was unable to distinguish between low- and high-risk patients, confirming that our method using spatial image features extracted from WSIs is the more effective approach. Conclusions: We developed a novel prognostic prediction method using spatial image features extracted from WSIs and genetic mutation prediction neural networks. Our results demonstrate the advantage of using image features over gene mutation data for prognostic prediction in colorectal cancer patients.
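The survival analysis above relies on the Kaplan-Meier estimator, which steps the survival probability down at each observed death while letting censored patients drop out of the risk set. A minimal pure-numpy sketch of that standard estimator (not the study's analysis code, which would typically use a statistics package):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.
    times:  follow-up time for each patient
    events: 1 = death observed, 0 = censored
    Returns (distinct event times, survival probability after each)."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    surv, out_t, out_s = 1.0, [], []
    i = 0
    while i < n:
        t = times[i]
        at_risk = n - i          # everyone still under follow-up at t
        deaths = 0
        j = i
        while j < n and times[j] == t:
            deaths += events[j]  # censored subjects add no step
            j += 1
        if deaths:
            surv *= 1.0 - deaths / at_risk
            out_t.append(t)
            out_s.append(surv)
        i = j
    return np.array(out_t), np.array(out_s)
```

Running the estimator separately on the predicted low-risk and high-risk groups, as in the abstract, yields two curves whose separation is then tested with the log-rank statistic.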