Datalink layer framing in wireless sensor networks usually faces a trade-off between large frame sizes for high channel bandwidth utilization and small frame sizes for effective error recovery. Given the high error rates of inter-mote communication, TinyOS opts for small frame sizes at the cost of very low channel bandwidth utilization. In this paper, we describe Seda, a streaming datalink layer that resolves this dilemma by decoupling framing from error recovery. Seda treats the packets from the upper layer as a continuous stream of bytes. It breaks the data stream into blocks and retransmits only the erroneous blocks (as opposed to the entire erroneous frame). Consequently, the frame error rate (FER), the main factor that bounds the frame size in the current design, becomes irrelevant to error recovery. A frame can therefore be made large enough to achieve high utilization of the wireless channel bandwidth without compromising the effectiveness of error recovery. Meanwhile, the size of each block is configured according to the error characteristics of the wireless channel to optimize the performance of error recovery. Seda has been implemented as a new datalink layer in TinyOS and evaluated through both simulations and experiments on a testbed of 48 MicaZ motes. Our results show that, by increasing the TinyOS frame size from the default 29 bytes to 100 bytes (limited by the buffer space in the MicaZ firmware), Seda improves throughput by around 25% under typical wireless channel conditions. Seda also reduces the retransmission traffic volume by more than 50% compared to a frame-based retransmission scheme. Our analysis further suggests that future sensor motes should be equipped with radios that have more packet buffer space in the firmware to achieve optimal utilization of the channel capacity.
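To illustrate the idea of block-level (rather than frame-level) retransmission described in this abstract, the following is a minimal sketch, not the authors' implementation: a frame is split into fixed-size blocks, each block carries its own checksum, and only blocks that arrive corrupted are retransmitted. The block size, CRC choice, and lossy-channel model are illustrative assumptions.

```python
import random
import zlib

BLOCK_SIZE = 20          # bytes per block (assumed; tuned to the channel error rate)
BYTE_ERROR_RATE = 0.01   # assumed per-byte corruption probability

def split_into_blocks(frame: bytes, block_size: int = BLOCK_SIZE):
    return [frame[i:i + block_size] for i in range(0, len(frame), block_size)]

def send_over_lossy_channel(block: bytes) -> bytes:
    """Flip random bytes to emulate channel errors."""
    out = bytearray(block)
    for i in range(len(out)):
        if random.random() < BYTE_ERROR_RATE:
            out[i] ^= 0xFF
    return bytes(out)

def transmit_frame(frame: bytes):
    """Deliver a frame using block-level retransmission.

    Returns the reassembled frame and the number of block transmissions used.
    """
    blocks = split_into_blocks(frame)
    checksums = [zlib.crc32(b) for b in blocks]     # assumed to travel with each block
    received = [None] * len(blocks)
    transmissions = 0

    while any(r is None for r in received):
        for i, block in enumerate(blocks):
            if received[i] is not None:
                continue                            # already delivered intact
            candidate = send_over_lossy_channel(block)
            transmissions += 1
            if zlib.crc32(candidate) == checksums[i]:
                received[i] = candidate             # only bad blocks are resent

    return b"".join(received), transmissions

if __name__ == "__main__":
    payload = bytes(range(100))                     # a 100-byte frame, as in the evaluation
    data, cost = transmit_frame(payload)
    assert data == payload
    print(f"delivered 100-byte frame in {cost} block transmissions")
```

In this sketch a single corrupted block costs one extra block transmission instead of a full-frame retransmission, which is the mechanism that lets the frame size grow without inflating retransmission traffic.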
The explosive growth in social network content suggests that the largest "sensor network" yet might be human. Extending the participatory sensing model, this paper explores the prospect of utilizing social networks as sensor networks, which gives rise to an interesting reliable sensing problem. In this problem, individuals are represented by sensors (data sources) who occasionally make observations about the physical world. These observations may be true or false, and hence are viewed as binary claims. The reliable sensing problem is to determine the correctness of reported observations. From a networked sensing standpoint, what makes this problem formulation different is that, in the case of human participants, not only is the reliability of sources usually unknown, but the original data provenance may also be uncertain: individuals may report observations made by others as their own. The contribution of this paper lies in developing a model that considers the impact of such information sharing on the analytical foundations of reliable sensing, and embedding it into a tool called Apollo that uses Twitter as a "sensor network" for observing events in the physical world. Evaluation, using Twitter-based case studies, shows good correspondence between observations deemed correct by Apollo and ground truth.
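As a hypothetical sketch of the underlying estimation problem (not Apollo itself), the following shows a plain EM-style truth-discovery iteration: given binary claims from sources of unknown reliability, it jointly estimates which claims are true and how reliable each source is. The paper's contribution additionally models information sharing (observations relayed from others), which is omitted here; the initial reliability, prior, and clamping bounds are assumptions.

```python
from collections import defaultdict

def truth_discovery(reports, n_iters=20):
    """reports: iterable of (source_id, claim_id) pairs, meaning the source
    asserted the claim. Returns (claim_belief, source_reliability) dicts."""
    claims = defaultdict(set)      # claim  -> sources asserting it
    sources = defaultdict(set)     # source -> claims it asserted
    for s, c in reports:
        claims[c].add(s)
        sources[s].add(c)

    reliability = {s: 0.8 for s in sources}   # assumed initial reliability
    prior_true = 0.5                          # assumed prior that a claim is true
    belief = {}

    for _ in range(n_iters):
        # E-step: probability each claim is true given current reliabilities.
        for c, srcs in claims.items():
            odds = prior_true / (1 - prior_true)
            for s in srcs:
                # A reliable source asserting the claim raises the odds.
                odds *= reliability[s] / (1 - reliability[s])
            belief[c] = odds / (1 + odds)
        # M-step: a source's reliability is the average belief of its claims.
        for s, cs in sources.items():
            avg = sum(belief[c] for c in cs) / len(cs)
            reliability[s] = min(0.95, max(0.05, avg))  # clamp to avoid degenerate odds

    return belief, reliability

if __name__ == "__main__":
    reports = [("alice", "road_closed"), ("bob", "road_closed"),
               ("carol", "aliens_landed")]
    beliefs, rel = truth_discovery(reports)
    print(beliefs, rel)
```

Corroborated claims and the sources asserting them reinforce each other across iterations; the provenance issue raised in the abstract arises because a relayed observation should not count as independent corroboration, which is exactly what this naive iteration would get wrong.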
The continued explosion of Twitter data has opened doors for many applications, such as location-based advertisement and entertainment using smartphones. Unfortunately, only about 0.58 percent of tweets are geo-tagged to date. To tackle this location sparseness problem, this paper presents a methodical approach to increasing the number of geo-tagged tweets by predicting the fine-grained location of those tweets whose location can be inferred with high confidence. To predict the fine-grained location of tweets, we first build probabilistic models for locations using unstructured short messages tightly coupled with semantic locations. Based on these probabilistic models, we propose a three-step technique (Filtering-Ranking-Validating) for tweet location prediction. In the filtering step, we introduce text analysis techniques to filter out location-neutral tweets, which may not be related to any location at all. In the ranking step, we utilize ranking techniques to select the best candidate location for a tweet. Finally, in the validating step, we develop a classification-based prediction validation method to verify the location where the tweet was actually written. We conduct extensive experiments using three months of tweets, and the results show that our approach increases the number of geo-tagged tweets 4.8-fold compared to the original Twitter data and places 34% of predicted tweets within 250 m of their actual location.
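The following is an illustrative sketch of a Filtering-Ranking-Validating pipeline of the kind described above. The per-location language models, the filtering rule, and the margin-based validation check are placeholder assumptions standing in for the paper's probabilistic models and classification-based validator, not the actual method.

```python
import math
from collections import Counter

# Assumed per-location unigram models built from messages already tied to
# those semantic locations (hypothetical example locations and vocabulary).
LOCATION_MODELS = {
    "times_square": Counter("broadway show billboard crowd tickets".split()),
    "central_park": Counter("picnic jogging lake trees bench".split()),
}

def tokenize(text: str):
    return text.lower().split()

def is_location_neutral(tokens, models) -> bool:
    """Filtering: drop tweets that share no vocabulary with any location model."""
    vocab = set().union(*models.values())
    return not any(t in vocab for t in tokens)

def score(tokens, model: Counter) -> float:
    """Ranking: add-one-smoothed log-likelihood of the tweet under a location model."""
    total = sum(model.values())
    vocab_size = len(set().union(*LOCATION_MODELS.values()))
    return sum(math.log((model[t] + 1) / (total + vocab_size)) for t in tokens)

def predict_location(text: str, margin: float = 1.0):
    """Return a location only if the prediction survives validation, else None."""
    tokens = tokenize(text)
    if is_location_neutral(tokens, LOCATION_MODELS):
        return None                                  # filtered out
    ranked = sorted(((score(tokens, m), loc)
                     for loc, m in LOCATION_MODELS.items()), reverse=True)
    best, runner_up = ranked[0], ranked[1]
    # Validating: a stand-in for the classification-based check; here we simply
    # require the top score to beat the runner-up by an assumed margin.
    if best[0] - runner_up[0] < margin:
        return None
    return best[1]

if __name__ == "__main__":
    print(predict_location("Caught a broadway show with amazing billboard lights"))
    print(predict_location("Just had lunch"))        # location-neutral -> None
```

The validation stage is what keeps only the high-confidence predictions, which is how the approach can grow the geo-tagged set while keeping most predictions close to the true location.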