When monitoring spatial phenomena with wireless sensor networks, selecting the best sensor placements is a fundamental task. Not only should the sensors be informative, but they should also be able to communicate efficiently. In this paper, we present a data-driven approach that addresses the three central aspects of this problem: measuring the predictive quality of a set of sensor locations (regardless of whether sensors were ever placed at these locations), predicting the communication cost involved with these placements, and designing an algorithm with provable quality guarantees that optimizes the NP-hard tradeoff. Specifically, we use data from a pilot deployment to build non-parametric probabilistic models called Gaussian Processes (GPs) both for the spatial phenomena of interest and for the spatial variability of link qualities, which allows us to estimate the predictive power and communication cost of unsensed locations. Surprisingly, uncertainty in the representation of link qualities plays an important role in estimating communication costs. Using these models, we present a novel, polynomial-time, data-driven algorithm, pSPIEL, which selects Sensor Placements at Informative and cost-Effective Locations. Our approach exploits two important properties of this problem: submodularity, formalizing the intuition that adding a node to a small deployment can help more than adding a node to a large deployment; and locality, under which nodes that are far from each other provide almost independent information. Exploiting these properties, we prove strong approximation guarantees for our pSPIEL approach. We also provide extensive experimental validation of this practical approach on several real-world placement problems, and build a complete system implementation on 46 Tmote Sky motes, demonstrating significant advantages over existing methods.
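To make the submodularity intuition concrete, here is a minimal Python sketch of greedy selection under a submodular objective. This is an illustration only, not the authors' pSPIEL algorithm, which additionally trades off communication cost and exploits locality; the names `candidates`, `objective`, and `k` are hypothetical.

```python
# Minimal sketch: greedy placement under a submodular objective.
# Not the pSPIEL algorithm itself; illustrates only the
# diminishing-returns property the abstract describes.

def greedy_placement(candidates, objective, k):
    """Pick k locations by repeatedly adding the candidate with the
    largest marginal gain. For monotone submodular objectives, this
    greedy rule achieves at least a (1 - 1/e) fraction of the optimum."""
    selected = []
    for _ in range(k):
        best = max(
            (c for c in candidates if c not in selected),
            key=lambda c: objective(selected + [c]) - objective(selected),
        )
        selected.append(best)
    return selected
```

Because marginal gains shrink as the deployment grows, early picks dominate the objective value, which is exactly the intuition the abstract formalizes.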
This paper presents a technique to select a representative set of test cases from a test suite that provides the same coverage as the entire test suite. This selection is performed by identifying, and then eliminating, the redundant and obsolete test cases in the test suite. The representative set replaces the original test suite and is therefore potentially smaller. The representative set can also be used to identify those test cases that should be rerun to test the program after it has been changed. Our technique is independent of the testing methodology and requires only an association between a testing requirement and the test cases that satisfy the requirement. We illustrate the technique using the data flow testing methodology. Experimental results illustrate the reduction our technique can achieve.
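As an illustration of how the requirement-to-test-case association can be exploited, the following Python sketch computes a representative set with a greedy, set-cover-style heuristic. This is an assumption for illustration, not necessarily the paper's exact algorithm; `requirements_to_tests` is a hypothetical name.

```python
# Hedged sketch: representative-set selection as greedy set cover.
# Assumes every requirement is satisfied by at least one test case.

def representative_set(requirements_to_tests):
    """requirements_to_tests: dict mapping each testing requirement
    (e.g., a def-use pair) to the set of test cases satisfying it.
    Returns a subset of tests that covers every requirement."""
    uncovered = set(requirements_to_tests)
    chosen = set()
    while uncovered:
        # Count how many uncovered requirements each test satisfies.
        coverage = {}
        for req in uncovered:
            for t in requirements_to_tests[req]:
                coverage[t] = coverage.get(t, 0) + 1
        # Pick the test satisfying the most uncovered requirements.
        best = max(coverage, key=coverage.get)
        chosen.add(best)
        uncovered -= {r for r in uncovered
                      if best in requirements_to_tests[r]}
    return chosen
```

Tests never chosen by this loop are redundant with respect to the given requirements and can be eliminated without losing coverage.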
After changes are made to a previously tested program, a goal of regression testing is to retest based only on those modifications while maintaining the same testing coverage as completely retesting the program. We present a novel approach to data-flow-based regression testing that uses slicing-type algorithms to explicitly detect definition-use pairs that are affected by a program change. An important benefit of our slicing technique is that, unlike previous techniques, it requires neither a data flow history nor recomputation of data flow for the entire program to detect affected definition-use pairs; the program changes drive the recomputation of the required partial data flow through slicing. Another advantage is that our technique achieves the same testing coverage as a complete retest of the program without maintaining a test suite. Thus, the overhead of maintaining and updating a test suite is eliminated.
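The sketch below illustrates the general idea of detecting affected definition-use pairs by propagating forward from the changed statements over a def-use graph. It is a heavy simplification of the paper's slicing algorithms, which recompute only the required partial data flow; the graph representation and all names here are hypothetical.

```python
# Hedged sketch: forward propagation from changed statements to find
# definition-use pairs that may be affected and so must be retested.

from collections import deque

def affected_du_pairs(du_graph, changed_stmts):
    """du_graph: dict mapping a definition site to the use sites it
    reaches (each edge is a def-use pair). Returns the pairs reachable
    from the changed statements."""
    affected = set()
    seen = set(changed_stmts)
    worklist = deque(changed_stmts)
    while worklist:
        d = worklist.popleft()
        for u in du_graph.get(d, ()):
            affected.add((d, u))  # this def-use pair may be affected
            if u not in seen:     # keep propagating through the use site
                seen.add(u)
                worklist.append(u)
    return affected
```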
By analyzing the behavior of a set of benchmarks, we demonstrate that a small number of distinct values tend to occur very frequently in memory. On average, only eight such frequent values were found to occupy 48% of memory locations for the benchmarks studied. In addition, we demonstrate that the identity of the frequent values remains stable over the entire execution of a program, and that these values are scattered fairly uniformly across the allocated memory. We present three different algorithms for finding frequent values, each designed to suit a different application scenario, and experimentally demonstrate their effectiveness. Since the contents of memory exhibit frequent value locality, frequent values can be expected to appear in the data streams that flow across different points in the memory hierarchy. We exploit this observation to develop two low-power designs: a low-power level-one data cache and a low-power external data bus. Each application employs a different encoding of frequent values to obtain a low-power design, and we experimentally demonstrate the effectiveness of these designs as well.
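As a concrete illustration of frequent-value profiling, the Python sketch below exhaustively counts word values in a memory snapshot and reports the fraction of locations occupied by the top k values. The paper presents three different algorithms suited to different scenarios; exhaustive counting as shown here is only the simplest possibility, and `memory_words` is a hypothetical input.

```python
# Hedged sketch: measuring frequent-value locality in a memory snapshot
# by exhaustive counting (the simplest profiling strategy).

from collections import Counter

def frequent_values(memory_words, k=8):
    """memory_words: iterable of word-sized values read from memory.
    Returns the k most frequent values and the fraction of memory
    locations they occupy."""
    words = list(memory_words)
    counts = Counter(words)
    top = counts.most_common(k)
    fraction = sum(n for _, n in top) / len(words)
    return [v for v, _ in top], fraction
```

A result in the spirit of the abstract would be `fraction` near 0.48 with `k=8`, i.e., eight values covering nearly half of all locations.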