Controllably tunable infrared emissivity has attracted great interest for potential applications in adaptive thermal camouflage. In this work, we report a flexible multilayer-graphene-based infrared device on a porous polyethylene membrane, where the infrared emissivity can be tuned by ionic liquid intercalation. Ionic liquid intercalation shifts the Fermi level of the surface multilayer graphene to a higher energy, which blocks electronic transitions below the Fermi level. Thus, the optical absorptivity/emissivity of graphene can be controlled by intercalation. Experimentally, the infrared emissivity of the surface graphene was tuned from 0.57 to 0.41 after ionic liquid intercalation. Meanwhile, the relative reflectivity Rv/R0 of the surface graphene increased from 1.0 to 1.15. The strong fluorescence background of the Raman spectra, the upshift of the G peak (~23 cm−1), and the decrease in sheet resistance confirmed the successful intercalation of the ionic liquid into the graphene layers. This intercalation-based control of the infrared emissivity of graphene points to a new way of building an effective thermal camouflage system.
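To make the blocking mechanism concrete, the sketch below estimates the longest wavelength at which interband absorption is suppressed under the usual Pauli-blocking picture (photon energies below 2|E_F| cannot drive interband transitions). The Fermi-level value used is an illustrative assumption, not a quantity reported in the abstract.

```python
# Back-of-the-envelope check of the Pauli-blocking picture described above.
# The Fermi-level shift used in the example is an assumed illustration.
H_EV_S = 4.1357e-15   # Planck constant, eV*s
C = 2.998e8           # speed of light, m/s

def blocked_cutoff_wavelength_um(fermi_level_ev: float) -> float:
    """Longest wavelength whose interband absorption is Pauli-blocked.

    Interband transitions are suppressed for photon energies below 2|E_F|,
    i.e. for wavelengths longer than lambda = h*c / (2*|E_F|).
    """
    photon_energy_ev = 2.0 * abs(fermi_level_ev)
    return H_EV_S * C / photon_energy_ev * 1e6  # metres -> micrometres

# Example: an assumed intercalation-induced shift of E_F to ~0.5 eV blocks
# interband absorption beyond ~1.2 um, i.e. across the thermal-infrared band,
# which is consistent with the reduced emissivity after intercalation.
print(blocked_cutoff_wavelength_um(0.5))
```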
Recently, deep learning has achieved breakthroughs in hyperspectral image (HSI) classification. Deep-learning-based classifiers require a large number of labeled samples for training to achieve excellent performance. However, the availability of labeled data is limited because labeling hyperspectral data demands significant human effort and time. Unsupervised learning for hyperspectral image classification has thus received increasing attention. In this paper, we propose a novel unsupervised framework based on a contrastive learning method and a transformer model for hyperspectral image classification. The experimental results demonstrate that our model can efficiently extract hyperspectral image features in unsupervised settings.
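As a rough illustration of the contrastive component, the sketch below implements a generic SimCLR-style (NT-Xent) loss over two augmented views of each hyperspectral pixel or patch. The abstract does not specify the exact objective, temperature, or augmentations, so all of those choices here are assumptions.

```python
# Minimal sketch of an NT-Xent contrastive loss such a framework might use
# on two views of the same hyperspectral sample (illustrative, not the
# authors' exact formulation).
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same samples."""
    batch = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2B, D), unit norm
    sim = z @ z.t() / temperature                         # cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude self-pairs
    # Positive pairs: sample i in view 1 matches sample i in view 2.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Usage: embed two spectrally perturbed views of each pixel with the
# transformer encoder, then minimise nt_xent_loss(encoder(v1), encoder(v2)).
```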
Hyperspectral image (HSI) classification is the subject of intense research in remote sensing. The tremendous success of deep learning in computer vision has recently sparked interest in applying deep learning to hyperspectral image classification. However, most deep learning methods for hyperspectral image classification are based on convolutional neural networks (CNNs), which require substantial GPU memory and long running times. Recently, another deep learning model, the transformer, has been applied to image recognition, and the results demonstrate the great potential of transformer networks for computer vision tasks. In this paper, we propose a model for hyperspectral image classification based on the transformer, which is widely used in natural language processing. In addition, to the best of our knowledge, we are the first to combine metric learning and the transformer model in hyperspectral image classification. Moreover, to improve the classification performance when the available training samples are limited, we use 1-D convolution and the Mish activation function. The experimental results on three widely used hyperspectral image data sets demonstrate the proposed model's advantages in accuracy, GPU memory cost, and running time.
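The sketch below shows the general shape of such an architecture: a 1-D convolutional stem with Mish activation over the spectral bands feeding a transformer encoder for per-pixel classification. All layer sizes, the pooling choice, and the omission of the metric-learning branch are illustrative assumptions rather than the authors' exact configuration.

```python
# Illustrative 1-D conv + Mish + transformer encoder for spectral classification
# (a sketch under assumed hyperparameters, not the paper's reference model).
import torch
import torch.nn as nn

class SpectralTransformer(nn.Module):
    def __init__(self, n_classes: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        # 1-D convolution over the spectral dimension groups neighbouring bands
        # into tokens; Mish is the activation named in the abstract.
        self.stem = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=7, stride=3, padding=3),
            nn.Mish(),
        )
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=128, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, spectra: torch.Tensor) -> torch.Tensor:
        # spectra: (batch, n_bands) reflectance values for each pixel
        x = self.stem(spectra.unsqueeze(1))      # (batch, d_model, tokens)
        x = self.encoder(x.transpose(1, 2))      # (batch, tokens, d_model)
        return self.head(x.mean(dim=1))          # average-pool tokens, classify

# Usage: logits = SpectralTransformer(n_classes=16)(torch.randn(8, 200))
```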