The emerging field of nanoscale infrared (nano-IR) imaging offers
label-free molecular contrast, yet its imaging speed is limited by
point-by-point acquisition of a three-dimensional (3D) data cube.
Here, we develop a spatial–spectral network (SS-Net), a miniaturized
deep-learning model, together with compressive sampling to accelerate
nano-IR imaging. The compressive sampling is performed in both the
spatial and spectral domains to shorten acquisition.
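As an illustration only, the sketch below subsamples a synthetic 3D data cube in both domains with NumPy; the cube dimensions, random masks, and subsampling ratios are placeholder assumptions, not the acquisition settings used in this work.

import numpy as np

rng = np.random.default_rng(0)

# Placeholder full measurement: a 3D nano-IR data cube (height x width x wavenumbers).
cube = rng.random((128, 128, 400)).astype(np.float32)
H, W, K = cube.shape
spatial_ratio, spectral_ratio = 0.25, 0.4  # assumed subsampling ratios

# Spatial subsampling: record full spectra only at a random subset of pixels.
n_pix = int(spatial_ratio * H * W)
pix_idx = rng.choice(H * W, size=n_pix, replace=False)
rows, cols = np.unravel_index(pix_idx, (H, W))
sparse_spectra = cube[rows, cols, :]       # shape (n_pix, K)

# Spectral subsampling: record full images only at a random subset of wavenumbers.
n_bands = int(spectral_ratio * K)
band_idx = np.sort(rng.choice(K, size=n_bands, replace=False))
sparse_images = cube[:, :, band_idx]       # shape (H, W, n_bands)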
The SS-Net is trained to learn the mapping from small nano-IR image
patches to the corresponding spectra. With this patch-to-spectrum
mapping strategy, training can be completed within several minutes
using only the subsampled data, eliminating the need for the large
labeled datasets required by common deep-learning methods.
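A minimal sketch of such a patch-to-spectrum mapping is given below, assuming a small fully connected PyTorch model; the names and sizes (PatchToSpectrum, patch_size, n_bands, n_wavenumbers, hidden) are illustrative assumptions rather than the actual SS-Net configuration.

import torch
import torch.nn as nn

class PatchToSpectrum(nn.Module):
    # Maps a small image patch sampled at a few bands to a full spectrum.
    def __init__(self, patch_size=3, n_bands=160, n_wavenumbers=400, hidden=256):
        super().__init__()
        in_dim = patch_size * patch_size * n_bands
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_wavenumbers),
        )

    def forward(self, patch):
        # patch: (batch, patch_size, patch_size, n_bands) -> (batch, n_wavenumbers)
        return self.net(patch.flatten(start_dim=1))

model = PatchToSpectrum()
dummy_patches = torch.randn(8, 3, 3, 160)
predicted_spectra = model(dummy_patches)   # shape (8, 400)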
We also design an efficient loss function that incorporates both
image and spectral similarity to enhance training.
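One possible form of such a combined objective is sketched below: a pointwise intensity term (image fidelity) plus a cosine term penalizing mismatched spectral shapes; the weighting alpha and the choice of similarity measures are assumptions, not the formulation used in this work.

import torch
import torch.nn.functional as F

def combined_loss(pred_spectra, true_spectra, alpha=0.5):
    # Pointwise term: per-band intensity error, reflecting image-domain fidelity.
    mse = F.mse_loss(pred_spectra, true_spectra)
    # Spectral term: 1 - cosine similarity penalizes differences in spectral shape.
    cos = F.cosine_similarity(pred_spectra, true_spectra, dim=-1).mean()
    return alpha * mse + (1.0 - alpha) * (1.0 - cos)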
We first validate the SS-Net on an open stimulated Raman scattering
dataset; the results show the potential for a 10-fold imaging speed
improvement with state-of-the-art performance. We then demonstrate
the versatility of this approach on atomic force microscopy-based
infrared (AFM-IR) imaging with a 7-fold imaging speed improvement,
and even on nanoscale Fourier transform infrared (nano-FTIR)
microscopy with up to 261.6-fold faster imaging. We further showcase
the generalizability of this method on AFM force-volume-based
multiparametric nanoimaging. This method establishes a paradigm for
rapid nano-IR
imaging, opening new possibilities for cutting-edge research in materials,
photonics, and beyond.