We are motivated by the need, in some applications, for impromptu or as-you-go deployment of wireless sensor networks. A person walks along a line, starting from a sink node (e.g., a base station), and proceeds towards a source node (e.g., a sensor) whose location is a priori unknown. At equally spaced locations, he makes link-quality measurements to the previous relay, and deploys relays at some of these locations, with the aim of connecting the source to the sink by a multihop wireless path. In this paper, we consider two approaches for impromptu deployment: (i) the deployment agent can only move forward (which we call the pure as-you-go approach), and (ii) the deployment agent can make measurements over several consecutive steps before selecting a placement location among them (the explore-forward approach). We consider a very light traffic regime, and formulate the problem as a Markov decision process, where the trade-off is among the power used by the nodes, the outage probabilities of the links, and the number of relays placed per unit distance. We obtain the structures of the optimal policies for the pure as-you-go approach as well as for the explore-forward approach. We also consider natural heuristic algorithms for comparison. Numerical examples show that the explore-forward approach significantly outperforms the pure as-you-go approach in terms of network cost. Next, we propose two learning algorithms for the explore-forward approach, based on stochastic approximation, which asymptotically converge to the set of optimal policies without using any knowledge of the radio propagation model. We demonstrate numerically that the learning algorithms converge (as deployment progresses) to the set of optimal policies reasonably fast and, hence, can serve as practical model-free algorithms for deployment over large regions. Finally, we demonstrate the end-to-end traffic-carrying capability of such networks via field deployment.
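The learning algorithms mentioned above are based on the standard stochastic-approximation template: an iterate is nudged toward its target by noisy measurements with diminishing step sizes. As a minimal, self-contained illustration (the target value, step sizes, and noise model below are illustrative choices, not the paper's deployment model), here is a Robbins-Monro iteration tracking the unknown mean of a noisy observation stream:

```python
import random

def robbins_monro(sample_fn, theta0=0.0, n_iters=5000):
    """Robbins-Monro stochastic approximation: theta is nudged toward
    the root of E[sample - theta] using noisy samples and diminishing
    step sizes a_k satisfying sum a_k = inf and sum a_k^2 < inf."""
    theta = theta0
    for k in range(1, n_iters + 1):
        a_k = 1.0 / k                      # diminishing step size
        theta += a_k * (sample_fn() - theta)
    return theta

# Track the unknown mean (2.5 here) of Gaussian-corrupted observations.
random.seed(0)
est = robbins_monro(lambda: 2.5 + random.gauss(0, 1))
```

The step-size conditions are what drive the asymptotic convergence claimed in the abstract; in the deployment setting, the noisy samples would be the link-quality measurements gathered as the agent walks.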
In this paper, we develop Gibbs-sampling-based techniques for learning the optimal placement of contents in a cellular network. We consider a finite collection of base stations scattered on the plane, each covering a cell (possibly overlapping with other cells). Mobile users request downloads from a finite set of contents according to some popularity distribution, which may be known or unknown to the base stations. Each base station has a fixed memory space that can store only a strict subset of the contents at a time; hence, if a user requests a content that is not stored at any of its serving base stations, the content has to be downloaded from the backhaul. We therefore consider the problem of optimal content placement that minimizes the rate of download from the backhaul, or equivalently maximizes the cache hit rate. It is known that, when multiple cells can overlap with one another (e.g., under dense deployment of base stations in small-cell networks), it is not optimal to place the most popular contents at each base station. However, the optimal content placement problem is NP-complete. Using ideas from Gibbs sampling, we propose simple sequential content update rules that decide whether to store a content at a base station (if it is requested from that base station) and which content has to be removed from the corresponding cache, based on the knowledge of the contents stored at its neighbouring base stations. The update rule is shown to converge asymptotically to the optimal content placement for all nodes when content popularities are known. Next, we extend the algorithm to the situation where content popularities and cell topology are initially unknown but are estimated as new requests arrive at the base stations; we show that our algorithm, working with running estimates of content popularities and cell topology, also converges asymptotically to the optimal content placement.
Finally, we demonstrate, via numerical exploration, the improvement in cache hit rate over the most-popular and independent content placement strategies.
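The sequential update rule can be sketched as a Gibbs sampler over cache configurations: a base station resamples its own cache from a distribution that exponentially favours higher hit rates, given its neighbours' caches. The sketch below uses a toy setup that is entirely our own (two overlapping cells, one shared user, three contents, one cache slot each, and a temperature parameter T); it illustrates the flavour of the update, not the paper's exact rule:

```python
import itertools, math, random

def gibbs_cache_update(caches, bs, popularity, coverage, cache_size, T=0.02):
    """One Gibbs step: resample base station bs's cache from a
    distribution proportional to exp(hit_rate / T), holding all other
    caches fixed. As T -> 0 this concentrates on better placements."""
    candidates = list(itertools.combinations(range(len(popularity)), cache_size))

    def hit_rate(cache):
        # A request is a hit if some covering base station stores the
        # content; each user requests content c w.p. popularity[c].
        total = 0.0
        for _user, servers in coverage.items():
            stored = set().union(*(set(cache) if b == bs else set(caches[b])
                                   for b in servers))
            total += sum(popularity[c] for c in stored)
        return total / len(coverage)

    weights = [math.exp(hit_rate(c) / T) for c in candidates]
    caches[bs] = list(random.choices(candidates, weights=weights)[0])

# Toy topology: two overlapping cells, one user served by both,
# contents with popularities 0.5, 0.3, 0.2, one cache slot per station.
random.seed(1)
caches = {0: [0], 1: [0]}      # start with the most popular content everywhere
coverage = {"u": [0, 1]}
for _ in range(50):
    for b in (0, 1):
        gibbs_cache_update(caches, b, [0.5, 0.3, 0.2], coverage, cache_size=1)
print(caches)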
In this paper, secure, remote estimation of a linear Gaussian process via observations at multiple sensors is considered. Such a framework is relevant to many cyber-physical systems and Internet-of-Things applications. Sensors make sequential measurements that are shared with a fusion center; the fusion center applies a certain filtering algorithm to make its estimates. The challenge is the presence of a few unknown malicious sensors which can inject anomalous observations to skew the estimates at the fusion center. The set of malicious sensors may be time-varying. The problems of malicious sensor detection and secure estimation are considered. First, an algorithm for secure estimation is proposed. The proposed scheme uses a novel filtering and learning algorithm that learns an optimal filter over time from the sensor observations, so as to filter out malicious sensor observations while retaining the other sensor measurements. Next, a novel detector for injection attacks on an unknown sensor subset is developed. Numerical results demonstrate up to 3 dB gain in mean squared error and up to 75% higher attack detection probability under a small false-alarm-rate constraint, against a competing algorithm that requires additional side information.

Index Terms: secure remote estimation, CPS security, false data injection attack, Kalman filter, stochastic approximation.
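The paper's learned filter is not reproduced here; as a simplified stand-in for the underlying idea (discounting anomalous sensor readings before fusing, so a few injected observations cannot skew the estimate), here is a trimmed-mean fusion step on a hypothetical scalar measurement set:

```python
def trimmed_fusion(observations, trim=1):
    """Fuse scalar sensor readings while discarding the `trim` smallest
    and `trim` largest values, so a small number of arbitrarily skewed
    (malicious) readings cannot drag the fused estimate.
    A simplified illustration, not the paper's filtering algorithm."""
    s = sorted(observations)
    kept = s[trim:len(s) - trim]
    return sum(kept) / len(kept)

honest = [1.9, 2.0, 2.1, 2.05]
attacked = honest + [50.0]        # one injected anomalous reading
fused = trimmed_fusion(attacked)  # stays near 2.0; the raw mean would be ~11.6
```

A fusion center could apply such a robustified fusion before a standard Kalman update; the paper's contribution, by contrast, is to learn over time which sensor subsets to trust.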
Our work is motivated by the need for impromptu (or "as-you-go") deployment of relay nodes (for establishing a packet communication path with a control centre) by firemen/commandos while operating in an unknown environment. We consider a model where a deployment operative steps along a random lattice path whose evolution is Markov. At each step, the path can randomly either continue in the same direction, take a turn "North" or "East," or come to an end, at which point a data source (e.g., a temperature sensor) has to be placed that will send packets to a control centre at the origin of the path. A decision has to be made at each step whether or not to place a wireless relay node. Assuming that the packet generation rate at the source is very low, and simple link-by-link scheduling, we consider the problem of relay placement so as to minimize the expectation of an end-to-end cost metric (a linear combination of the sum of convex hop costs and the number of relays placed). This impromptu relay placement problem is formulated as a total-cost Markov decision process. First, we derive the optimal policy in terms of an optimal placement set, and show that this set is characterized by a boundary beyond which it is optimal to place. Next, based on a simpler, alternative one-step-look-ahead characterization of the optimal policy, we propose an algorithm that is proved to converge to the optimal placement set in a finite number of steps and that is faster than traditional value iteration. We show by simulations that the distance-based heuristic, usually assumed in the literature, is close to optimal provided that the threshold distance is carefully chosen.
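To convey the flavour of a one-step-look-ahead placement rule, here is a hypothetical simplified test (our own construction, not the paper's exact characterization): place a relay at distance r from the previous node iff paying the relay cost now, plus bridging one further step, is no costlier than stretching the current hop by one step. With a convex hop cost, the marginal hop cost hop_cost(r+1) - hop_cost(r) is nondecreasing in r, so this rule reduces to a distance threshold:

```python
def place_now(r, relay_cost, hop_cost):
    """Hypothetical one-step-look-ahead test (a simplification, not the
    paper's exact rule): place a relay at distance r from the previous
    node iff placing now and bridging one further step is no costlier
    than stretching the current hop by one step."""
    return relay_cost + hop_cost(r) + hop_cost(1) <= hop_cost(r + 1)

d = lambda r: r ** 2              # an example convex hop cost
# Convexity makes the rule a threshold rule: find the smallest r at
# which placing becomes worthwhile (relay_cost = 6 here).
threshold = min(r for r in range(1, 50) if place_now(r, 6, d))
print(threshold)                  # -> 3
```

This mirrors the distance-based heuristic discussed in the abstract: the quality of such a rule hinges entirely on choosing the threshold well.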