Fourier ptychography is a recently developed imaging approach for large field-of-view and high-resolution microscopy. Here we model the Fourier ptychographic forward imaging process using a convolutional neural network (CNN) and recover the complex object information in a network training process. In this approach, the input of the network is the point spread function in the spatial domain or the coherent transfer function in the Fourier domain. The object is treated as the 2D learnable weights of a convolutional or a multiplication layer. The output of the network is modeled as the loss function we aim to minimize. The batch size of the network corresponds to the number of captured low-resolution images in one forward/backward pass. We use a popular open-source machine learning library, TensorFlow, to set up the network and conduct the optimization process. We analyze the performance of different learning rates, solvers, and batch sizes, and show that a large batch size with the Adam optimizer achieves the best performance in general. To accelerate the phase retrieval process, we also discuss a strategy to implement Fourier-magnitude projection using a multiplication neural network model. Since convolution and multiplication are the two most common operations in imaging modeling, the reported approach may provide a new perspective for examining many coherent and incoherent systems. As a demonstration, we discuss extensions of the reported networks for modeling single-pixel imaging and structured illumination microscopy (SIM). Resolution doubling with four frames is demonstrated using a neural network for SIM. The link between imaging systems and neural network modeling may enable the use of machine-learning hardware, such as the neural engine and the tensor processing unit, to accelerate the image reconstruction process. We have made our implementation code open-source for researchers.
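A minimal NumPy sketch of the two operations this abstract builds on: the forward model (object spectrum filtered by a coherent transfer function, then intensity detection) and the Fourier-magnitude projection that replaces the modeled amplitude with the measured one. All function names and the toy geometry are illustrative; the paper's implementation expresses these steps as TensorFlow layers.

```python
import numpy as np

def fp_forward(obj, ctf):
    """Forward pass (illustrative): filter the object spectrum with a
    (shifted) coherent transfer function, then detect the intensity."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    exit_field = np.fft.ifft2(np.fft.ifftshift(spectrum * ctf))
    return np.abs(exit_field) ** 2

def magnitude_projection(obj, ctf, measured_intensity):
    """Fourier-magnitude projection: keep the modeled phase but replace
    the modeled amplitude with the square root of the measurement."""
    spectrum = np.fft.fftshift(np.fft.fft2(obj))
    exit_field = np.fft.ifft2(np.fft.ifftshift(spectrum * ctf))
    return np.sqrt(measured_intensity) * np.exp(1j * np.angle(exit_field))
```

In the network formulation, the same multiplication by the CTF becomes a layer whose loss compares `fp_forward` output against the captured low-resolution images, and the object array is the layer's trainable weight.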
A whole slide imaging (WSI) system has recently been approved for primary diagnostic use in the US. The image quality and system throughput of WSI are largely determined by the autofocusing process. Traditional approaches acquire multiple images along the optical axis and maximize a figure of merit for autofocusing. Here we explore the use of deep convolutional neural networks (CNNs) to predict the focal position of the acquired image without axial scanning. We investigate the autofocusing performance with three illumination settings: incoherent Köhler illumination, partially coherent illumination with two plane waves, and one-plane-wave illumination. We acquire ~130,000 images with different defocus distances as the training data set. Different defocus distances lead to different spatial features of the captured images. However, relying solely on this spatial information leads to relatively poor autofocusing performance; it is better to extract defocus features from transform domains of the acquired image. For incoherent illumination, the Fourier cutoff frequency is directly related to the defocus distance. Similarly, the autocorrelation peaks are directly related to the defocus distance for two-plane-wave illumination. In our implementation, we use the spatial image, the Fourier spectrum, the autocorrelation of the spatial image, and combinations thereof as the inputs for the CNNs. We show that the information from the transform domains can improve the performance and robustness of the autofocusing process. The resulting focusing error is ~0.5 µm, which is within the 0.8-µm depth-of-field range. The reported approach requires little hardware modification for conventional WSI systems, and the images can be captured on the fly without focus map surveying. It may find applications in WSI and time-lapse microscopy. The transform- and multi-domain approaches may also provide new insights for developing microscopy-related deep-learning networks.
We have made our training and testing data set (~12 GB) open-source for the broad research community.
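The multi-domain inputs described above can be assembled as stacked channels before being fed to a CNN. The sketch below (function name hypothetical) computes the three domains named in the abstract: the spatial image, a log-magnitude Fourier spectrum, and the autocorrelation, the latter via the Wiener–Khinchin relation.

```python
import numpy as np

def multi_domain_inputs(image):
    """Stack the three CNN input channels described in the text:
    spatial image, log-magnitude Fourier spectrum, and autocorrelation.
    The autocorrelation is computed via the Wiener-Khinchin theorem."""
    spectrum = np.fft.fftshift(np.fft.fft2(image))
    log_spectrum = np.log1p(np.abs(spectrum))
    # Autocorrelation = inverse FFT of the power spectrum, centered.
    autocorr = np.abs(np.fft.fftshift(np.fft.ifft2(np.abs(np.fft.fft2(image)) ** 2)))
    return np.stack([image, log_spectrum, autocorr], axis=-1)
```

For defocused images, the ring structure of the spectrum (incoherent case) and the displacement of the autocorrelation side peaks (two-plane-wave case) carry the defocus-distance cues the network learns from.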
Achieving high spatial resolution is the goal of many imaging systems. Designing a high-resolution lens with diffraction-limited performance over a large field of view remains a difficult task in imaging system design. On the other hand, creating a complex speckle pattern with wavelength-limited spatial features is effortless and can be implemented via a simple random diffuser. With this observation, and inspired by the concept of near-field ptychography, we report a new imaging modality, termed near-field Fourier ptychography, for tackling high-resolution imaging challenges in both microscopic and macroscopic imaging settings. Here, 'near-field' refers to placing the object at a short defocus distance with a large Fresnel number. In our implementations, we project a speckle pattern with fine spatial features on the object instead of directly resolving the spatial features via a high-resolution lens. We then translate the object (or the speckle) to different positions and acquire the corresponding images using a low-resolution lens. A ptychographic phase retrieval process is used to recover the complex object, the unknown speckle pattern, and the coherent transfer function at the same time. In a microscopic imaging setup, we use a 0.12 numerical aperture (NA) lens to achieve an NA of 0.85 in the reconstruction process. In a macroscale photographic imaging setup, we achieve ~7-fold resolution gain using a photographic lens. The final achievable resolution is not determined by the collection optics. Instead, it is determined by the feature size of the speckle pattern, similar to our recent demonstration in fluorescence imaging settings (Guo et al., Biomed. Opt. Express, 9(1), 2018). The reported imaging modality can be employed in light, coherent X-ray, and transmission electron imaging systems to increase resolution and provide quantitative absorption and phase contrast of the object.
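The joint recovery of the object and the unknown speckle pattern follows the usual ptychographic alternating-update structure. The sketch below shows one ePIE-style iteration as a simplified stand-in (the paper additionally recovers the coherent transfer function, which is omitted here; all names and step sizes are illustrative).

```python
import numpy as np

def epie_update(obj, probe, measured_amp, alpha=1.0, beta=1.0):
    """One ePIE-style update (simplified): enforce the measured Fourier
    amplitude, then refine both the object and the unknown speckle
    pattern (treated as the 'probe') from the residual exit wave."""
    exit_wave = obj * probe
    spectrum = np.fft.fft2(exit_wave)
    # Fourier-magnitude constraint: keep phase, impose measured amplitude.
    corrected = np.fft.ifft2(measured_amp * np.exp(1j * np.angle(spectrum)))
    diff = corrected - exit_wave
    obj_new = obj + alpha * np.conj(probe) / (np.abs(probe).max() ** 2) * diff
    probe_new = probe + beta * np.conj(obj) / (np.abs(obj).max() ** 2) * diff
    return obj_new, probe_new
```

Iterating this update over all translated positions ties each measurement to an overlapping region of the object, which is what makes the simultaneous recovery of object and speckle well posed.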
We report the development of a high-throughput whole slide imaging (WSI) system by adapting a cost-effective optomechanical add-on kit to existing microscopes. Inspired by the phase-detection concept in professional photography, we attached two pinhole-modulated cameras at the eyepiece ports for instant focal plane detection. By adjusting the positions of the pinholes, we can effectively change the view angle for the sample, and as such, we can use the translational shift between the two pinhole-modulated images to identify the optimal focal position. By using a small pinhole size, the focal-plane-detection range is on the order of a millimeter, orders of magnitude longer than the objective's depth of field. We also show that, by analyzing the phase correlation of the pinhole-modulated images, we can determine whether the sample contains one thin section, folded sections, or multiple layers separated by certain distances - an important piece of information prior to a detailed z scan. To achieve system automation, we deployed a low-cost programmable robotic arm to perform sample loading and $14 stepper motors to drive the microscope stage for x-y scanning. Using a 20X objective lens, we can acquire a 2-gigapixel image with a 14 mm by 8 mm field of view in 90 seconds. The reported platform may find applications in biomedical research, telemedicine, and digital pathology. It may also provide new insights for the development of high-content screening instruments.
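The translational shift between the two pinhole-modulated images is the quantity that maps to defocus, and it can be estimated with standard phase correlation. A minimal sketch (function name hypothetical, independent of the paper's code):

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer-pixel translation between two images from
    the phase-correlation peak. In the add-on kit described above, this
    shift between the two pinhole-modulated views encodes defocus."""
    fa, fb = np.fft.fft2(img_a), np.fft.fft2(img_b)
    cross = fa * np.conj(fb)
    # Normalize to unit magnitude so only the phase (i.e. shift) remains.
    corr = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Wrap peak coordinates to signed shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

A single dominant peak indicates one thin section; multiple peaks, as the abstract notes, reveal folded sections or layers at separated depths.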
Ptychography is an enabling coherent diffraction imaging technique for both fundamental and applied sciences. Its applications in optical microscopy, however, fall short owing to low imaging throughput and limited resolution. Here, we report a resolution-enhanced parallel coded ptychography technique that achieves the highest numerical aperture and an imaging throughput orders of magnitude greater than previous demonstrations. In this platform, we translate the samples across disorder-engineered surfaces for lensless diffraction data acquisition. The engineered surface consists of chemically etched micron-level phase scatterers and printed subwavelength intensity absorbers. It is designed to unlock an optical space with spatial extent (x, y) and frequency content (k_x, k_y) that is inaccessible using conventional lens-based optics. To achieve the best resolution performance, we also report a new coherent diffraction imaging model that considers both the spatial and angular responses of the pixel readouts. Our low-cost prototype can directly resolve a 308 nm line width on the resolution target without aperture synthesizing. Gigapixel high-resolution microscopic images with a 240 mm² effective field of view can be acquired in 15 s. For demonstrations, we recover slowly varying 3D phase objects with many 2π wraps, including an optical prism and a convex lens. The low-frequency phase contents of these objects are challenging to obtain using other existing lensless techniques. For digital pathology applications, we perform accurate virtual staining by using the recovered phase as attention guidance in a deep neural network. Parallel optical processing using the reported technique enables novel optical instruments with inherent quantitative nature and metrological versatility.
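In a lensless geometry like this, the short propagation from the coded surface to the sensor is commonly modeled with the angular spectrum method. The sketch below shows that standard propagator as background context (the parameter values are illustrative, and this omits the paper's pixel spatial/angular response model):

```python
import numpy as np

def angular_spectrum_propagate(field, dist, wavelength, pixel_size):
    """Free-space propagation via the angular spectrum method, a common
    forward model for short coded-surface-to-sensor distances in
    lensless imaging. Units: meters; evanescent components are cut."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel_size)
    fx2, fy2 = np.meshgrid(fx, fx, indexing="ij")
    arg = 1.0 / wavelength ** 2 - fx2 ** 2 - fy2 ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * dist)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

The raw measurement at each scan position is then the detected intensity of the propagated product of the sample field and the engineered surface profile.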