At the foundation of the problem of light propagation through optical turbulence is the classical Obukhov-Kolmogorov theory. It rests on the requirement that the refractive-index fluctuations be homogeneous and isotropic. Together with other necessary assumptions, this leads to the well-known -11/3 power-law spectrum in the inertial range; although departures have been found, they are usually associated with partially developed turbulence or its intrinsic intermittency. Recently, anisotropic fluctuations of the refractive index have gained attention in optics. These studies are mostly theoretical and reduce anisotropic effects to a dilatation along a coordinate direction in three-dimensional wavenumber space. Few experimental works exist, and all of them employ simulated turbulence. In this Letter, we describe an experiment that produces anisotropic turbulence under controlled conditions; moreover, we observe the anisotropy by studying the spectral power exponent of a time series of laser beam wandering.
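For context, the spectra at issue can be sketched as follows; the anisotropy factors below are illustrative notation for the dilatation of the wavenumber components, not necessarily the notation used in the Letter.

```latex
% Isotropic Obukhov-Kolmogorov spectrum in the inertial range
\Phi_n(\kappa) = 0.033\, C_n^2\, \kappa^{-11/3},
\qquad 1/L_0 \ll \kappa \ll 1/l_0 .

% A common anisotropic generalization: a dilatation of the wavenumber
% components by factors \mu_x and \mu_y (illustrative notation)
\Phi_n(\boldsymbol{\kappa}) \propto C_n^2
\left( \mu_x^2\kappa_x^2 + \mu_y^2\kappa_y^2 + \kappa_z^2 \right)^{-11/6}.
```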
This work presents the design and training of a convolutional neural network that improves the linear response of a modulated pyramid wavefront sensor, allowing the optical gain to be estimated and compensated for in real time.
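As a rough illustration of the idea (not the authors' architecture; the layer sizes, names, and per-mode gain parameterization are assumptions), a CNN that regresses one gain value per controlled mode from a PyWFS frame might look like the sketch below; the predicted gains would then rescale the output of the usual linear reconstructor.

```python
# Illustrative sketch: a small CNN that regresses per-mode optical gains from a
# modulated PyWFS pupil image (architecture and names are assumptions, not the
# network described above).
import torch
import torch.nn as nn

class OpticalGainNet(nn.Module):
    def __init__(self, n_modes: int = 50):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(32 * 4 * 4, n_modes)  # one gain per mode

    def forward(self, x):            # x: (batch, 1, H, W) PyWFS frame
        return self.head(self.features(x).flatten(1))

# The predicted gains g could then correct the linear modal estimate:
#   c_corrected = c_linear / g   (element-wise, per mode)
```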
The deep learning wavefront sensor (DLWFS) allows the direct estimation of Zernike coefficients of aberrated wavefronts from intensity images. The main drawback of this approach is its reliance on massive convolutional neural networks (CNNs) that are slow to train and to evaluate. In this paper, we explore several options to reduce both the training and estimation time. First, we develop a CNN that can be rapidly trained without compromising accuracy. Second, we explore the effects of smaller input image sizes and of different numbers of Zernike modes to be estimated. Our simulation results demonstrate that the proposed network, using images of either 8 × 8, 16 × 16, or 32 × 32 pixels, dramatically reduces training time and can even boost the estimation accuracy of the Zernike coefficients. Our experimental results confirm that a 16 × 16 DLWFS can be quickly trained and is able to estimate the first 12 Zernike coefficients (skipping piston, tip, and tilt) without sacrificing accuracy, while significantly speeding up the prediction time to facilitate low-cost, real-time adaptive optics systems.
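As a minimal sketch of what such a compact regressor could look like (layer sizes and names are assumptions, not the paper's exact architecture), the network below maps a 16 × 16 intensity image to 12 Zernike coefficients:

```python
# Illustrative compact DLWFS-style regressor: 16x16 intensity image in,
# 12 Zernike coefficients out (piston, tip, and tilt excluded).
# Architecture details are assumptions, not the paper's network.
import torch
import torch.nn as nn

class TinyDLWFS(nn.Module):
    def __init__(self, n_zernike: int = 12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),    # 16x16 -> 16x16
            nn.MaxPool2d(2),                             # -> 8x8
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                             # -> 4x4
            nn.Flatten(),
            nn.Linear(16 * 4 * 4, 64), nn.ReLU(),
            nn.Linear(64, n_zernike),                    # 12 coefficients
        )

    def forward(self, x):       # x: (batch, 1, 16, 16) intensity image
        return self.net(x)

model = TinyDLWFS()
coeffs = model(torch.randn(1, 1, 16, 16))  # tensor of shape (1, 12)
```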
In this work, we evaluate a specially crafted deep convolutional neural network that estimates the wavefront aberration modes directly from pyramidal wavefront sensor (PyWFS) images. Overall, the use of deep neural networks improves both the estimation performance and the operational range of the PyWFS, especially in cases of strong turbulence or large D/r0 ratios (bad seeing). Our preliminary results provide evidence that, by using neural networks instead of the classic linear estimation methods, we can obtain the sensitivity response of a low-modulation PyWFS while extending its linearity range, reducing the residual variance by a factor of 1.6 when r0 is as low as a few centimeters.
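For reference, the quantities quoted above can be written compactly as follows; D denotes the aperture diameter and r0 the Fried parameter (this notation is assumed), and the ratio simply restates the reported factor of 1.6.

```latex
% Seeing ratio: aperture diameter over Fried parameter
D / r_0 \quad \text{(large values correspond to bad seeing)}

% Reported improvement of the CNN estimator over the linear reconstructor
\frac{\sigma^2_{\mathrm{res,\;linear}}}{\sigma^2_{\mathrm{res,\;CNN}}} \approx 1.6,
\qquad r_0 \sim \text{a few centimeters}.
```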