RF fingerprinting is an emerging technology for identifying hardware-specific features of wireless transmitters and may find important applications in wireless security. In this study, the authors present a new RF fingerprinting scheme using deep neural networks. In particular, a long short-term memory (LSTM) based recurrent neural network is proposed and used to automatically identify hardware-specific features and classify transmitters. Experimental studies using identical RF transmitters showed very high detection accuracy in the presence of strong noise (signal-to-noise ratio as low as −12 dB) and demonstrated the effectiveness of the proposed scheme.

Introduction: 'RF fingerprinting' generally refers to the process of identifying the unique characteristics that a wireless transmitter's hardware imposes on the transmitted signals. RF fingerprinting can be used to effectively prevent node impersonation, in which legitimate security credentials are obtained by an adversary to compromise security [1].

Many hardware-dependent features have been explored for RF fingerprinting. These features exist due to variations in the manufacturing process of wireless transmitters. The variations are small enough to meet the requirements of communication standards but allow unique device-dependent features to be identified. Examples of such features include the turn-on transient phase of the signals [2, 3], power amplifier imperfections [4, 5], magnitude and phase errors, I/Q dc offset [6], carrier frequency differences, phase offset, second-order cyclostationary features [7], and clock offset [8].

Existing fingerprinting algorithms include white-list-based algorithms and unsupervised-learning-based algorithms. The former requires legitimate devices to register and be trained a priori to set up a database of their feature space. The latter does not require such prior knowledge and, as such, does not differentiate legitimate features from illegitimate ones. Both methods are useful for detecting and identifying spoofing. However, all existing work on RF fingerprinting depends on a set of human-engineered features drawn from various layers of the protocol stack [1]. In this work, we demonstrate that deep neural networks can effectively perform device identification with high accuracy through automatic learning of device-dependent RF fingerprints. In contrast to existing work, the proposed approach does not require human intervention to define which features should be used in the RF fingerprinting process.
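The abstract does not fix an architecture, so the following is a minimal sketch, assuming a PyTorch LSTM that reads raw I/Q sample sequences and outputs a class score per known transmitter; the sequence length, hidden size, and number of devices are illustrative choices, not the authors' exact configuration.

```python
# A minimal sketch (assumed setup, not the paper's exact architecture) of an
# LSTM-based classifier that maps raw I/Q sample sequences to transmitter IDs.
import torch
import torch.nn as nn

class RFFingerprintLSTM(nn.Module):
    def __init__(self, num_devices: int, hidden_size: int = 128, num_layers: int = 2):
        super().__init__()
        # Each time step is one complex sample represented as a (I, Q) pair.
        self.lstm = nn.LSTM(input_size=2, hidden_size=hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, num_devices)

    def forward(self, iq_sequence: torch.Tensor) -> torch.Tensor:
        # iq_sequence: (batch, seq_len, 2) tensor of noisy I/Q samples.
        _, (h_n, _) = self.lstm(iq_sequence)
        # Use the final hidden state of the last layer as the learned fingerprint.
        return self.classifier(h_n[-1])

# Example: classify a batch of 16 bursts of 1024 I/Q samples from 8 devices.
model = RFFingerprintLSTM(num_devices=8)
logits = model(torch.randn(16, 1024, 2))  # (16, 8) class scores
```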
Existing dialog system models require extensive human annotations and are difficult to generalize to different tasks. The recent success of large pre-trained language models has suggested the effectiveness of incorporating language priors in downstream NLP tasks. However, how much pre-trained language models can help dialog response generation is still under exploration. In this paper, we propose a simple, general, and effective framework: the Alternating Recurrent Dialog Model (ARDM). ARDM models each speaker separately and takes advantage of large pre-trained language models. It requires no supervision from human annotations such as belief states or dialog acts to achieve effective conversations. ARDM outperforms or is on par with state-of-the-art methods on two popular task-oriented dialog datasets: CamRest676 and MultiWOZ. Moreover, we can generalize ARDM to more challenging, non-collaborative tasks such as persuasion. In the PersuasionForGood task, ARDM is capable of generating human-like responses to persuade people to donate to a charity.
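As a rough illustration of the alternating-speaker idea, the sketch below keeps one pre-trained language model per speaker and lets each take turns conditioning on the shared dialog history; the model names, turn format, and decoding settings are assumptions, not the paper's exact setup.

```python
# A hedged sketch: two pre-trained language models, one per speaker, alternate
# in generating turns conditioned on the shared dialog history.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
speaker_models = {
    "user": GPT2LMHeadModel.from_pretrained("gpt2"),
    "system": GPT2LMHeadModel.from_pretrained("gpt2"),
}

def generate_turn(history: str, speaker: str, max_new_tokens: int = 40) -> str:
    """Generate the next utterance with the model belonging to `speaker`."""
    inputs = tokenizer(history, return_tensors="pt")
    output = speaker_models[speaker].generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        pad_token_id=tokenizer.eos_token_id,
    )
    # Return only the newly generated continuation, not the prompt.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

history = "User: I need a cheap restaurant in the city centre.\nSystem:"
reply = generate_turn(history, speaker="system")
```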
With training data of insufficient information, soft sensor models inevitably show some inaccurate predictions in industrial applications. This work develops an active learning method that sequentially selects a data set with significant information to enhance latent variable model (LVM)-based soft sensors. Using a Gaussian process model to link the score variables of the LVM with the input process variables, the prediction variance can be formulated, and an uncertainty index for the LVM is presented. The index combines the variances of the predicted outputs with the changes in the predicted outputs per unit change in the designed inputs. Without any prior knowledge of the process, the index is used sequentially to determine from which regions new informative data should be collected to enhance the model quality. Additionally, an evaluation criterion is proposed to monitor the active learning procedure. Consequently, the exploration and exploitation analysis of the current model can effectively discover meaningful data to include in the soft sensor model. The proposed strategy can be applied to any type of LVM. Its effectiveness and promising results are demonstrated through a numerical example and a real industrial plant in Taiwan with multiple outputs.
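The uncertainty index is only described qualitatively here, so the following is a minimal sketch, assuming a scikit-learn Gaussian process regressor that supplies the predictive variance and a finite-difference term that stands in for the change of the predicted output per unit change in the input; the weighting factor and the candidate pool are illustrative assumptions.

```python
# A hedged sketch of ranking candidate samples by predictive variance plus a
# local-sensitivity term, then picking the most informative one.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def uncertainty_index(gp, candidates, eps=1e-3, weight=1.0):
    mean, std = gp.predict(candidates, return_std=True)
    # Finite-difference estimate of how much the prediction changes per unit
    # change in each input dimension (a proxy for the exploitation term).
    sensitivity = np.zeros(len(candidates))
    for j in range(candidates.shape[1]):
        shifted = candidates.copy()
        shifted[:, j] += eps
        sensitivity += np.abs(gp.predict(shifted) - mean) / eps
    return std ** 2 + weight * sensitivity

# Fit on the currently labeled data, then select the next sample to query.
rng = np.random.default_rng(0)
X_train = rng.uniform(0, 1, size=(20, 3))
y_train = X_train @ np.array([1.0, -2.0, 0.5]) + 0.05 * rng.standard_normal(20)
gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_train, y_train)

X_candidates = rng.uniform(0, 1, size=(200, 3))
next_sample = X_candidates[np.argmax(uncertainty_index(gp, X_candidates))]
```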
Many social media news writers are not professionally trained, so social media platforms have to hire professional editors to adjust amateur headlines to attract more readers. We propose to automate this headline editing process with neural network models to provide more immediate writing support for these writers. To train such a neural headline editing model, we collected a dataset that contains articles with both original headlines and professionally edited headlines. However, it is expensive to collect a large number of professionally edited headlines. To address this low-resource problem, we design an encoder-decoder model that leverages large-scale pre-trained language models. We further improve the pre-trained model by introducing a headline generation task as an intermediate task before the headline editing task. We also propose a Self Importance-Aware (SIA) loss to address the different levels of editing in the dataset by down-weighting the importance of easily classified tokens and sentences. With the help of Pre-training, Adaptation, and SIA, the model learns to generate headlines in the professional editor's style. Experimental results show that our method significantly improves the quality of headline editing compared with previous methods.
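The SIA loss is described only at a high level, so the following is a hedged sketch of the token-level down-weighting idea (in the spirit of focal-style weighting); the exponent gamma and the exact weighting scheme are assumptions rather than the paper's formulation.

```python
# A hedged sketch: down-weight tokens the model already predicts confidently,
# so training focuses on the parts of the headline that were actually edited.
import torch
import torch.nn.functional as F

def sia_token_loss(logits: torch.Tensor, targets: torch.Tensor,
                   gamma: float = 2.0, ignore_index: int = -100) -> torch.Tensor:
    # logits: (batch, seq_len, vocab); targets: (batch, seq_len) token ids.
    log_probs = F.log_softmax(logits, dim=-1)
    token_log_p = log_probs.gather(-1, targets.clamp_min(0).unsqueeze(-1)).squeeze(-1)
    token_p = token_log_p.exp()
    # Easily classified tokens (probability close to 1) get near-zero weight.
    weights = (1.0 - token_p) ** gamma
    mask = (targets != ignore_index).float()
    return -(weights * token_log_p * mask).sum() / mask.sum().clamp_min(1.0)

# Example with random logits for a batch of 4 sequences of length 12.
logits = torch.randn(4, 12, 50257)
targets = torch.randint(0, 50257, (4, 12))
print(sia_token_loss(logits, targets))
```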