Sampling is a fundamental aspect of any implementation of compressive sensing. Typically, the choice of sampling method is guided by the reconstruction basis. However, this approach can be problematic with respect to certain hardware constraints and is not responsive to domain-specific context. We propose a method for defining an ordering of a sampling basis that is optimal with respect to capturing variance in data, thus allowing for meaningful sensing at any desired level of compression. We focus on the Walsh-Hadamard sampling basis for its relevance to hardware constraints, but our approach applies to any sampling basis of interest. We illustrate the effectiveness of our method on the Physical Sciences Inc. Fabry-Pérot interferometer sensor multispectral dataset, the Johns Hopkins Applied Physics Lab FTIR-based longwave infrared sensor hyperspectral dataset, and a Colorado State University Swiss Ranger depth image dataset. The spectral datasets consist of simulant experiments, including releases of chemicals such as glacial acetic acid (GAA) and sulfur hexafluoride (SF6). We combine our sampling and reconstruction with the adaptive coherence estimator (ACE) and bulk coherence for chemical detection, and we incorporate an algorithmic threshold on ACE values to determine the presence or absence of a chemical. We compare results across sampling methods in this context and achieve successful chemical detection at a compression rate of 90%. For all three datasets, we compare our sampling approach to standard orderings of the sampling basis, such as random, sequency, and an analog of sequency that we term 'frequency.' In one instance, the peak signal-to-noise ratio was improved by over 30% across a test set of depth images.
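The variance-ordered sampling idea can be illustrated with a short sketch. The code below is a minimal illustration, not the authors' implementation: it builds an orthonormal Walsh-Hadamard basis with SciPy, ranks the basis rows by the variance of their coefficients over a set of training signals, and keeps only the highest-variance rows at a chosen compression rate. The function names and the 90% default are ours, introduced for illustration.

```python
# Minimal sketch (assumed formulation): order Walsh-Hadamard basis rows by the
# variance of their coefficients over training data, then sense with the top rows.
import numpy as np
from scipy.linalg import hadamard  # naturally ordered Hadamard matrix, n a power of 2

def variance_ordered_indices(train, n):
    """train: (num_signals, n) array of flattened training signals.
    Returns basis-row indices sorted by decreasing coefficient variance."""
    H = hadamard(n) / np.sqrt(n)          # orthonormal Walsh-Hadamard basis (rows)
    coeffs = train @ H.T                  # WH coefficients of each training signal
    var = coeffs.var(axis=0)              # variance captured by each basis row
    return np.argsort(var)[::-1]

def sensing_matrix(train, n, compression=0.9):
    """Keep the top (1 - compression) fraction of rows, e.g. 10% at 90% compression."""
    m = max(1, int(round((1.0 - compression) * n)))
    H = hadamard(n) / np.sqrt(n)
    idx = variance_ordered_indices(train, n)[:m]
    return H[idx], idx                    # (m, n) measurement matrix and kept indices

# Usage: Phi, idx = sensing_matrix(train, n); measurements y = Phi @ x for a signal x.
```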
One of the fundamental assumptions of compressive sensing (CS) is that a signal can be reconstructed from a small number of samples by solving an optimization problem with an appropriate regularization term. Two standard regularization terms are the L1 norm and the total variation (TV) norm. We present a comparison of CS reconstruction results based on these two approaches in the context of chemical detection, and we demonstrate that optimization based on the L1 norm outperforms optimization based on the TV norm. Our comparison is driven by CS sampling, reconstruction, and chemical detection on two real-world datasets: the Physical Sciences Inc. Fabry-Pérot interferometer sensor multispectral dataset and the Johns Hopkins Applied Physics Lab FTIR-based longwave infrared sensor hyperspectral dataset. Both datasets contain releases of chemical simulants such as glacial acetic acid, triethyl phosphate, and sulfur hexafluoride. For chemical detection we use the adaptive coherence estimator (ACE) and bulk coherence, and we propose algorithmic ACE thresholds to define the presence or absence of a chemical of interest in both uncompressed data cubes and reconstructed data cubes. The uncompressed data cubes provide an approximate ground truth. We demonstrate that optimization based on either the L1 norm or the TV norm results in successful chemical detection at a compression rate of 90%, but we show that L1 optimization is preferable. We present quantitative comparisons of chemical detection on reconstructions from the two methods, with an emphasis on the number of pixels with an ACE value above the threshold.
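The two reconstruction objectives can be sketched as follows. This is an assumed CVXPY formulation rather than the papers' exact solver: sparsity is taken directly in the signal domain for the L1 case (the actual sparsifying basis is not specified here), the measurement matrix Phi is assumed to act on the column-major vectorization of a single image or band, and the noise tolerance eps is ours.

```python
# Minimal sketch (assumed formulation): L1 vs. TV reconstruction of one image/band
# from compressive measurements y = Phi @ vec(X), using CVXPY.
import cvxpy as cp

def reconstruct(Phi, y, shape, method="l1", eps=1e-3):
    """Phi: (m, n) measurement matrix; y: (m,) measurements;
    shape: (rows, cols) of the image, with rows * cols == n."""
    X = cp.Variable(shape)                             # image to recover
    x = cp.vec(X)                                      # column-major vectorization
    objective = cp.norm1(x) if method == "l1" else cp.tv(X)  # sparsity vs. TV prior
    constraints = [cp.norm(Phi @ x - y, 2) <= eps]     # data-fidelity constraint
    cp.Problem(cp.Minimize(objective), constraints).solve()
    return X.value

# Usage: X_l1 = reconstruct(Phi, y, (64, 64), method="l1")
#        X_tv = reconstruct(Phi, y, (64, 64), method="tv")
```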