2006 IEEE International Conference on Mobile Ad Hoc and Sensor Systems
DOI: 10.1109/mobhoc.2006.278578

Apples, Oranges, and Testbeds

Abstract: Research into wireless sensor networks is rapidly moving from simulations to realistic testbeds. The widely varying characteristics (e.g., radio hardware, #nodes, topology) of various testbeds raise concerns about the validity of results across different testbeds. This paper presents empirical data of an experiment involving one application (Surge), two routing protocols (MultiHop and MintRoute), and two testbeds (MoteLab and MistLab). The outcome is somewhat mixed. When increasing the data rate, con…

Cited by 14 publications (6 citation statements)
References 12 publications

“…In general, failure in uncontrolled deployments is rarely visible in fine detail to protocol developers: "We frequently failed to understand [..] performance results and could not determine who was to blame (i.e., the testbed characteristics, or the routing layer?)" [11]. The performance of certain deployments of precursors of CTP has been found to be puzzling and counter to intuition, and different deployments were found not to paint a unified picture of the protocol performance: "comparing [evaluation] results from different testbeds is much like comparing apples and oranges" [11].…”
Section: Verification of Topological Metrics for DDR
confidence: 99%
“…[11]. The performance of certain deployments of precursors of CTP has been found to be puzzling and counter to intuition, and different deployments were found not to paint a unified picture of the protocol performance: "comparing [evaluation] results from different testbeds is much like comparing apples and oranges" [11]. A problem was found in a deployment related to the management of the routing table in unexpectedly dense networks, similar to our topological problem T1: "On the field about 70 nodes form a single cell around the gateway, which forces MintRoute to make a selection since it has room for only 16 nodes in its neighbor list.…”
Section: Verification of Topological Metrics for DDR
confidence: 99%
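
The neighbor-table pressure described in the quote above is easy to make concrete. The C sketch below is hypothetical: MintRoute is TinyOS/nesC code with its own table-management policy, and only the 16-entry table and the roughly 70-node cell are taken from the quote; the data structure, the quality-based eviction, and all names are illustrative assumptions.

/* Hypothetical sketch of the failure mode quoted above: a fixed-capacity
 * neighbor table (16 entries, as cited for MintRoute) under pressure from
 * a dense cell of ~70 candidates. Eviction policy and names are
 * illustrative assumptions, not MintRoute's actual implementation. */
#include <stdint.h>
#include <stdio.h>

#define NEIGHBOR_TABLE_SIZE 16   /* table limit cited in the quote         */
#define CELL_SIZE           70   /* nodes heard in one cell, per the quote */

struct neighbor {
    uint16_t addr;
    uint8_t  link_quality;       /* 0..255, higher is better */
    uint8_t  valid;
};

static struct neighbor table[NEIGHBOR_TABLE_SIZE];

/* Record a heard neighbor; when the table is full, evict the entry with
 * the worst link quality. With 70 candidates competing for 16 slots,
 * most nodes are never retained, so parent selection depends on arrival
 * order and on which quality estimates happened to be sampled. */
static void neighbor_heard(uint16_t addr, uint8_t quality)
{
    int worst = 0;
    for (int i = 0; i < NEIGHBOR_TABLE_SIZE; i++) {
        if (!table[i].valid) {                 /* free slot: take it */
            table[i] = (struct neighbor){ addr, quality, 1 };
            return;
        }
        if (table[i].addr == addr) {           /* known: refresh estimate */
            table[i].link_quality = quality;
            return;
        }
        if (table[i].link_quality < table[worst].link_quality)
            worst = i;
    }
    if (quality > table[worst].link_quality)   /* full: evict the weakest */
        table[worst] = (struct neighbor){ addr, quality, 1 };
}

int main(void)
{
    /* Simulate one dense cell: every node is heard once with a fake quality. */
    for (uint16_t n = 1; n <= CELL_SIZE; n++)
        neighbor_heard(n, (uint8_t)((n * 37u) % 200u));

    int kept = 0;
    for (int i = 0; i < NEIGHBOR_TABLE_SIZE; i++)
        kept += table[i].valid;
    printf("%d of %d candidates retained\n", kept, CELL_SIZE);
    return 0;
}

Because which 16 of the 70 candidates survive depends on arrival order and on sampled link estimates, two runs of the same binary, or the same protocol on two testbeds of different density, can legitimately build different routing topologies.
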
“…We expect our approach to be more beneficial if integrated with routing protocols supporting high traffic rates. Moreover, the room for throughput improvement in a bandwidth-limited system, like a sensornet, is very limited: Langendoen [23] reports a maximum link throughput of 3 KB/s for CC2420 without routing in TinyOS. Therefore, in addition to our primary goal of reducing the number of transmissions, the throughput increase revealed in Figure 12 is a welcome improvement in a multihop sensornet.…”
Section: 3.3
confidence: 99%
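
To see why a 3 KB/s link budget leaves so little multihop headroom, here is a back-of-envelope C sketch; the divide-by-hop-count reasoning (all hops in one collision domain, each forwarded packet consuming airtime at every hop) is our own simplification, not a result from [23] or from the paper.

/* Back-of-envelope arithmetic (our assumption, not from [23]): when all
 * hops share one collision domain, as in a dense cell, a packet crossing
 * h hops is transmitted h times on the same medium, so end-to-end
 * throughput is at best the link throughput divided by h. */
#include <stdio.h>

int main(void)
{
    const double link_kBps = 3.0;   /* CC2420 link figure cited above */
    for (int hops = 1; hops <= 5; hops++)
        printf("%d hop(s): <= %.2f KB/s end-to-end\n",
               hops, link_kBps / hops);
    return 0;
}

Even before accounting for contention between neighboring flows, four hops already push the ceiling below 1 KB/s, which is why the quoted authors treat any throughput gain as a welcome side effect.
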
“…[17]. Some WSN protocols which performed well in controlled environments had as low as 2% data delivery in the field [9,26].…”
Section: Introduction
confidence: 99%
“…WSN testing is often done on indoor WSN testbeds [4,14], which form well-connected networks unlikely to reproduce the type of communication interruptions encountered later in environmental deployments. Furthermore, "experimental results obtained on a single testbed are very difficult to generalize" [17]. Since worst-case scenarios are statistically rare events in the state-space of the problem, non-exhaustive methods, such as testbed analysis [14,22] or random testing, are likely to miss them.…”
Section: Introduction
confidence: 99%