In this paper, we look at making backscatter practical for ultra-low-power on-body sensors by leveraging the radios on existing smartphones and wearables (e.g., WiFi and Bluetooth). The difficulty lies in the fact that, in order to extract the weak backscattered signal, the system needs to deal with self-interference from the wireless carrier (WiFi or Bluetooth) without relying on built-in capability to cancel or reject the carrier interference. Frequency-shifted backscatter (FS-Backscatter) is based on a novel idea: the backscatter tag shifts the carrier signal to an adjacent non-overlapping frequency band (i.e., an adjacent WiFi or Bluetooth band), isolating the spectrum of the backscattered signal from that of the primary signal to enable more robust decoding. We show that this enables communication at ranges of up to 4.8 meters using commercial WiFi and Bluetooth radios as the carrier generator and receiver. We also show that we can support a range of bitrates using packet-level and bit-level decoding methods. Building on this idea, we show that we can also leverage the multiple radios typically present on mobile and wearable devices to construct multi-carrier or multi-receiver scenarios that improve robustness. Finally, we address the problem of designing an ultra-low-power tag that can frequency-shift by 20 MHz while consuming only tens of microwatts. Our results show that FS-Backscatter is practical in typical mobile and static on-body sensing scenarios while using only commodity radios and antennas.
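The core mechanism can be illustrated with a small spectral simulation: a tag toggling its antenna impedance at 20 MHz effectively multiplies the incident carrier by a square wave, which relocates the backscattered energy to the carrier frequency plus or minus 20 MHz. The sketch below is not the paper's implementation; the carrier frequency is scaled down so the signal can be sampled directly in software, and all parameter values are illustrative.

```python
import numpy as np

fs = 1e9                       # 1 GHz simulation sample rate (illustrative)
t = np.arange(0, 20e-6, 1 / fs)
f_carrier = 100e6              # scaled-down stand-in for a 2.4 GHz carrier
f_shift = 20e6                 # tag's square-wave toggling frequency

carrier = np.cos(2 * np.pi * f_carrier * t)
# On/off modulation of the tag's reflection coefficient.
square = np.sign(np.cos(2 * np.pi * f_shift * t))
backscatter = carrier * square

spectrum = np.abs(np.fft.rfft(backscatter))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
peak = freqs[np.argmax(spectrum)]
# The dominant energy lands at f_carrier +/- f_shift (80 MHz and 120 MHz),
# i.e., shifted out of the carrier's own band.
```

Because the square wave has zero mean, essentially no energy remains at the carrier frequency itself, which is what lets the receiver decode the tag's signal in the adjacent band free of self-interference.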
The reemergence of Deep Neural Networks (DNNs) has led to high-performance supervised learning algorithms for classification and detection problems in the Electro-Optical (EO) domain. This success is possible because generating huge labeled datasets has become feasible using modern crowdsourcing platforms such as Amazon's Mechanical Turk, which recruit ordinary people to label data. Unlike the EO domain, labeling Synthetic Aperture Radar (SAR) data is much more challenging, and for various reasons using crowdsourcing platforms is not feasible for labeling SAR data. As a result, training deep networks via supervised learning is more challenging in the SAR domain. In this paper, we present a new framework for training a deep neural network to classify SAR images that eliminates the need for a huge labeled dataset. Our idea is based on transferring knowledge from a related EO-domain problem, where labeled data are easy to obtain. We transfer knowledge from the EO domain by learning a shared, invariant cross-domain embedding space that is also discriminative for classification. To this end, we train two deep encoders, coupled through their last layer, that map data points from the EO and SAR domains into the shared embedding space such that the distance between the distributions of the two domains is minimized in the latent space. We use the Sliced Wasserstein Distance (SWD) to measure and minimize this distance, and we use a limited number of labeled SAR data points to match the distributions class-conditionally. As a result of this training procedure, a classifier trained from the embedding space to the label space using mostly EO data generalizes well to the SAR domain.
We provide a theoretical analysis to demonstrate why our approach is effective and validate our algorithm on the problem of ship classification in the SAR domain by comparing against several other competing learning approaches.
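The SWD objective mentioned above reduces the multidimensional distribution-matching problem to many one-dimensional ones: each random projection of the two sample sets admits a closed-form 1-D Wasserstein distance between sorted samples. The following is a minimal NumPy sketch of this computation, not the paper's training code; the function name and parameters are illustrative, and it assumes equally sized sample sets.

```python
import numpy as np

def sliced_wasserstein_distance(X, Y, num_projections=128, seed=0):
    """Approximate the (squared) Sliced Wasserstein Distance between two
    empirical distributions X and Y, given as (n_samples, dim) arrays
    with the same number of samples."""
    rng = np.random.default_rng(seed)
    dim = X.shape[1]
    # Draw random projection directions uniformly on the unit sphere.
    thetas = rng.normal(size=(num_projections, dim))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Project both sample sets onto every direction.
    X_proj = X @ thetas.T   # shape (n_samples, num_projections)
    Y_proj = Y @ thetas.T
    # In 1-D, the optimal transport plan matches sorted samples.
    X_sorted = np.sort(X_proj, axis=0)
    Y_sorted = np.sort(Y_proj, axis=0)
    return np.mean((X_sorted - Y_sorted) ** 2)
```

Because sorting is the only nontrivial step, SWD is cheap to evaluate and differentiable almost everywhere, which is what makes it attractive as a training loss for aligning the EO and SAR embedding distributions.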
Abstract—Reconstruction of multidimensional signals from samples of their partial derivatives is a standard problem in inverse theory. Such problems routinely arise in numerous areas of applied science, including optical imaging, laser interferometry, computer vision, remote sensing, and control. Though ill-posed in nature, the above problem can be solved in a unique and stable manner, provided proper regularization and relevant boundary conditions. In this paper, however, a more challenging setup is addressed, in which one has to recover an image of interest from its noisy and blurry version, while the only information available about the imaging system at hand is the amplitude of the generalized pupil function (GPF) along with partial observations of the gradient of the GPF's phase. In this case, the phase-related information is collected using a simplified version of the Shack-Hartmann interferometer, and the entire phase is then recovered by means of derivative compressed sensing. Subsequently, the estimated phase can be combined with the amplitude of the GPF to produce an estimate of the point spread function (PSF), whose knowledge is essential for subsequent image deconvolution. In summary, the principal contribution of this work is twofold. First, we demonstrate how to simplify the construction of the Shack-Hartmann interferometer so as to make it less expensive and hence more accessible. Second, we show by means of numerical experiments that the above simplification and its associated solution scheme produce image reconstructions of quality comparable to those obtained using dense sampling of the GPF phase.
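As a simplified illustration of the underlying reconstruction step (recovering a phase map from samples of its gradient), the sketch below solves a dense least-squares problem that recovers a small phase grid, up to an additive constant, from its full forward-difference gradients. This is only a toy stand-in for the paper's derivative compressed sensing scheme, which works from partial, noisy gradient observations; the function names are illustrative.

```python
import numpy as np

def reconstruct_phase(gx, gy):
    """Recover an n x n phase map (up to an additive constant) from its
    forward differences gx = diff(phi, axis=1) and gy = diff(phi, axis=0)
    by solving a least-squares problem."""
    n = gx.shape[0]
    D = np.diff(np.eye(n), axis=0)          # (n-1) x n forward-difference matrix
    # Stack the horizontal and vertical difference operators acting on
    # the row-major flattened image.
    A = np.vstack([np.kron(np.eye(n), D),   # differences along each row
                   np.kron(D, np.eye(n))])  # differences along each column
    b = np.concatenate([gx.ravel(), gy.ravel()])
    phi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return phi.reshape(n, n)
```

The gradient operator has a one-dimensional null space (constant shifts), so `lstsq` returns the minimum-norm solution; comparing mean-removed maps confirms the recovery. Practical solvers replace the dense operator with sparse matrices or FFT-based Poisson solvers, and compressed-sensing variants add a sparsity prior to handle subsampled gradients.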