This paper studies decentralized, fountain- and network-coding-based strategies for facilitating data collection in circular wireless sensor networks, which rely on the stochastic diversity of data storage. The goal is to allow reduced-delay collection by a data collector who accesses the network at a random position and a random time. Data dissemination is performed by a set of relays that form a circular route to exchange source packets. The storage nodes within the transmission range of the route's relays linearly combine and store overheard relay transmissions using random decentralized strategies. An intelligent data collector first gathers a minimum set of coded packets from a subset of storage nodes in its proximity, which may be sufficient for recovering the original packets, and then attempts to recover all original source packets from this set using a message-passing decoder. Whenever the decoder stalls, the source packet that restarts decoding is polled (doped) from its original source node. A random-walk analysis of the decoding/doping process furnishes the collection-delay analysis with a prediction of the number of required doped packets. This number can be surprisingly small when an Ideal Soliton code degree distribution is employed; hence, the doping strategy may achieve the least collection delay when the density of source nodes is sufficiently large. Furthermore, we demonstrate that network coding makes dissemination more efficient at the expense of a larger collection delay. Not surprisingly, a circular network allows for significantly more tractable strategies, analytically and otherwise, than a network modeled as a random geometric graph.
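The Ideal Soliton distribution mentioned above assigns probability 1/k to degree 1 and 1/(d(d-1)) to degrees d = 2..k. A minimal sketch of the distribution and a degree sampler (function names here are illustrative, not from the paper):

```python
import random

def ideal_soliton_pmf(k):
    """Ideal Soliton degree distribution over {1, ..., k}:
    p(1) = 1/k, p(d) = 1/(d*(d-1)) for d = 2..k."""
    pmf = {1: 1.0 / k}
    for d in range(2, k + 1):
        pmf[d] = 1.0 / (d * (d - 1))
    return pmf

def sample_degree(k, rng=random):
    """Draw an encoding degree by inverting the cumulative distribution."""
    u, acc = rng.random(), 0.0
    for d, p in ideal_soliton_pmf(k).items():
        acc += p
        if u < acc:
            return d
    return k

# The telescoping sum 1/k + sum_{d=2}^{k} (1/(d-1) - 1/d) equals 1
print(round(sum(ideal_soliton_pmf(100).values()), 6))
```

The telescoping structure of the tail is what makes the probabilities sum to one exactly.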
While research on adversarial examples in machine learning for images has been prolific, similar attacks on deep learning (DL) for radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work, with only one recent publication in the RF domain [1]. RF adversarial examples (AdExs) can cause drastic, targeted misclassification results, mostly in spectrum sensing/survey applications (e.g., BPSK mistaken for OFDM), with minimal waveform perturbation. It is not clear whether RF AdExs maintain their effects in the physical world, i.e., when AdExs are delivered over-the-air (OTA). Our research on deep learning AdExs and proposed defense mechanisms is RF-centric and incorporates physical-world, OTA effects. We here present defense mechanisms based on statistical tests. One test to detect AdExs utilizes the Peak-to-Average-Power Ratio (PAPR) of the DL data points delivered OTA, while another statistical test uses the Softmax outputs of the DL classifier, which correspond to the probabilities the classifier assigns to each of the trained classes. The former test leverages the RF nature of the data, and the latter is universally applicable to AdExs regardless of their origin. Both solutions are shown to be viable mitigation methods to subvert adversarial attacks against communications and radar sensing systems.
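The PAPR test described above exploits the fact that an additive adversarial perturbation disturbs the envelope statistics of the received waveform. A minimal sketch of such a detector (the threshold and the frame construction are illustrative assumptions, not values from the paper):

```python
import numpy as np

def papr_db(iq):
    """Peak-to-Average-Power Ratio of a complex baseband frame, in dB."""
    power = np.abs(iq) ** 2
    return 10 * np.log10(power.max() / power.mean())

def flag_adversarial(frames, threshold_db):
    """Flag frames whose PAPR exceeds a threshold calibrated on clean
    training data (threshold_db is an assumed operating point)."""
    return [papr_db(f) > threshold_db for f in frames]

rng = np.random.default_rng(0)
# Constant-envelope (BPSK-like) frame vs. a noise-like perturbed copy
clean = np.exp(1j * np.pi * rng.integers(0, 2, 1024))
perturbed = clean + 0.3 * (rng.standard_normal(1024) + 1j * rng.standard_normal(1024))
print(papr_db(clean) < papr_db(perturbed))  # perturbation raises PAPR
```

A constant-envelope modulation such as BPSK has 0 dB PAPR, so even a small noise-like perturbation is detectable in this statistic.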
Adversarial examples in machine learning for images are widely publicized and explored. Illustrations of misclassifications caused by slightly perturbed inputs are abundant and commonly known (e.g., a picture of a panda imperceptibly perturbed to fool the classifier into incorrectly labeling it as a gibbon). Similar attacks on deep learning (DL) for radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work. Yet, RF adversarial examples (AdExs) with minimal waveform perturbations can cause drastic, targeted misclassification results, particularly against spectrum sensing/survey applications (e.g., BPSK mistaken for 8-PSK). Our research on deep learning AdExs and proposed defense mechanisms is RF-centric and incorporates physical-world, over-the-air (OTA) effects. We herein present defense mechanisms based on pre-training the target classifier using an autoencoder. Our results validate this approach as a viable mitigation method to subvert adversarial attacks against deep learning-based communications and radar sensing systems.
I. INTRODUCTION
A new research direction is emerging in the field of wireless communications, aiming to develop and evaluate deep learning (DL) approaches against classical detection and estimation methods in the radio frequency (RF) realm. Spectrum sensing, especially in the context of cognitive radio, encompasses most of the radio signal detection problems that are being addressed. The approach to DL in the RF domain differs greatly from the common current DL applications (e.g., image recognition, natural language processing) and requires special knowledge of RF signal processing and wireless communications and/or radar, depending on the signal utilization.
While research on adversarial examples in machine learning for images has been prolific, similar attacks on deep learning of radio frequency (RF) signals and their mitigation strategies are scarcely addressed in the published work, with only a couple of recent publications on RF [1], [2]. Adversarial examples (AdExs) are slightly perturbed inputs that are classified incorrectly by the Machine Learning (ML) model [3]. This perturbation is achieved by mathematical processing of the signal, e.g., by adding an incremental value in the direction of the classifier's gradient with respect to the inputs (as in the FGSM attack illustrated in Fig. 3 A), or by solving a constrained optimization problem. Popular deep learning (DL) models are even more vulnerable to AdExs, as DL networks learn input-output mappings that are fairly discontinuous. Consider the images in Figure 1 [4]. The image on the left is the original image of a panda from the ImageNet dataset [5], while the one on the right is derived from it by applying an FGSM attack of very low intensity.
Fig. 1. Famous panda illustration of an adversarial image example against a DL classifier, where a visually imperceptible, noise-like perturbation can fool the classifier into labeling it as a gibbon.
The perturbation of 0.007 added in the direction of the loss gradient…
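The FGSM step described above perturbs the input by a small amount in the direction of the sign of the loss gradient with respect to the input. A minimal sketch on a toy logistic classifier (a stand-in for the DL models discussed here, so the gradient can be written in closed form; all names and values are illustrative):

```python
import numpy as np

def fgsm_perturb(x, w, b, y, eps):
    """FGSM on a toy logistic classifier p(y=1|x) = sigmoid(w.x + b):
    step of size eps along the sign of the input-gradient of the
    cross-entropy loss, which for this model is (p - y) * w."""
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(1)
w = rng.standard_normal(16)
x = rng.standard_normal(16)
x_adv = fgsm_perturb(x, w, b=0.0, y=1.0, eps=0.007)
print((w @ x_adv) < (w @ x))  # the attack lowers the true-class score
```

Even with eps as small as the 0.007 cited for the panda example, every coordinate moves adversarially at once, which is why the loss change can be large while the perturbation stays imperceptible.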
Abstract: We analyze the benefits of an automatic repeat-request (ARQ) scheme employed in doped belief-propagation decoding for increasing the throughput of fountain-encoded data transmissions. The proposed doping mechanism selects doping symbols randomly from the set of input symbols contributing to degree-two output symbols. Here, an output symbol is an encoded symbol whose code-graph links to the input symbols decoded thus far have been severed. This doping approach always releases at least one output symbol, thus increasing the number of degree-one output symbols (the ripple). Using a random-walk analysis, we study belief-propagation decoding with degree-two random doping for a fountain code with symbols drawn from an Ideal Soliton distribution. We show that the decoding process is a renewal process, in that it restarts afresh after each doping. The approximate inter-doping process analysis revolves around a random-walk model for the ripple size. We model the sequence of ripple-size increments (due to doping and/or decoding) as an iid sequence of shifted and truncated Poisson random variables. This model furnishes a prediction of the number of required doping symbols and, furthermore, the ARQ throughput cost analysis. We also find that the Ideal Soliton significantly outperforms the Robust Soliton distribution in our ARQ-doping scheme.
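The decoding schedule described above can be sketched as a peeling decoder over the code graph: decode from the ripple while it is nonempty, and when it empties, dope one input symbol chosen from the neighbors of degree-two output symbols (so that at least one output symbol is released). This is a minimal illustration of the mechanism, not the paper's implementation; symbol values are omitted since only the graph determines the schedule:

```python
import random

def peel_decode_with_doping(k, coded, rng=random):
    """Peeling (message-passing) decoder with degree-two doping.
    `coded` is a list of sets of source-symbol indices, one per
    output symbol. Returns the number of doped symbols needed to
    decode all k source symbols."""
    decoded, doped = set(), 0
    while len(decoded) < k:
        ripple = [s for s in coded if len(s - decoded) == 1]
        if ripple:
            # Ripple nonempty: peel one degree-one output symbol
            sym = next(iter(ripple[0] - decoded))
        else:
            # Ripple empty: dope a neighbor of a degree-two output
            # symbol (falling back to any undecoded symbol if none)
            two = [s for s in coded if len(s - decoded) == 2]
            pool = (set().union(*two) if two else set(range(k))) - decoded
            sym = rng.choice(sorted(pool))
            doped += 1
        decoded.add(sym)
    return doped

rng = random.Random(2)
k = 50
degrees = list(range(1, k + 1))
weights = [1.0 / k] + [1.0 / (d * (d - 1)) for d in range(2, k + 1)]
coded = [set(rng.sample(range(k), rng.choices(degrees, weights)[0]))
         for _ in range(int(1.1 * k))]
print(0 <= peel_decode_with_doping(k, coded, rng) <= k)
```

Doping a neighbor of a degree-two output symbol reduces that symbol's residual degree to one, which is why each doping is guaranteed to refill the ripple.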