Many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work) are based upon a group communications model, i.e., they require packet delivery from one or more authorized senders to a very large number of authorized receivers. As a result, securing group communications (i.e., providing confidentiality, integrity, and authenticity of messages delivered between group members) will become a critical networking issue. In this paper, we present a novel solution to the scalability problem of group/multicast key management. We formalize the notion of a secure group as a triple (U, K, R), where U denotes a set of users, K a set of keys held by the users, and R a user-key relation. We then introduce key graphs to specify secure groups. For a special class of key graphs, we present three strategies for securely distributing rekey messages after a join/leave, and specify protocols for joining and leaving a secure group. The rekeying strategies and join/leave protocols are implemented in a prototype group key server we have built. We present measurement results from experiments and discuss performance comparisons. We show that our group key management service, using any of the three rekeying strategies, is scalable to large groups with frequent joins and leaves. In particular, the average measured processing time per join/leave increases linearly with the logarithm of group size.

1 Introduction

Most network applications are based upon the client-server paradigm and make use of unicast (or point-to-point) packet delivery. Many emerging applications (e.g., teleconference, real-time information services, pay per view, distributed interactive simulation, and collaborative work), on the other hand, are based upon a group communications model. That is, they require packet delivery from one or more authorized sender(s) to a large number of authorized receivers. In the Internet, multicast has been used successfully to provide an efficient, best-effort delivery service to large groups.
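To make the logarithmic rekeying cost concrete, the following is a minimal sketch of one instance of a key graph: a balanced binary key tree in which each user holds the keys on its leaf-to-root path, so a leave forces only the keys on that one path to be replaced. The names (KeyTree, rekey_on_leave) and the 128-bit random keys are ours for illustration, not the paper's.

```python
import math
import os

class KeyTree:
    """Binary key tree: each user holds the keys on its leaf-to-root path."""
    def __init__(self, n_users):
        self.n = n_users
        self.height = math.ceil(math.log2(max(n_users, 2)))
        # One 128-bit key per tree node, heap-indexed (node 1 is the group key).
        self.keys = {i: os.urandom(16) for i in range(1, 2 ** (self.height + 1))}

    def path(self, user):
        """Indices of the keys held by `user`, from its leaf up to the root."""
        node = 2 ** self.height + user
        nodes = []
        while node >= 1:
            nodes.append(node)
            node //= 2
        return nodes

    def rekey_on_leave(self, user):
        """Replace every key the departed user knew; return the fresh keys.
        Only the keys on one leaf-to-root path change, so the server's work
        per leave grows with the logarithm of the group size."""
        fresh = {}
        for node in self.path(user):
            self.keys[node] = os.urandom(16)
            fresh[node] = self.keys[node]
        return fresh

tree = KeyTree(1024)
print(len(tree.rekey_on_leave(user=17)), "keys replaced in a group of", tree.n)
# -> 11 keys replaced in a group of 1024, i.e., log2(1024) + 1
```

Distributing the fresh keys securely (encrypted under keys the remaining users still hold) is what the paper's three rekeying strategies vary; the sketch shows only which keys must change.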
Abstract. Intrusion detection is a surveillance problem of practical import that is well suited to wireless sensor networks. In this paper, we study the application of sensor networks to the intrusion detection problem and the related problems of classifying and tracking targets. Our approach is based on a dense, distributed, wireless network of multi-modal resource-poor sensors combined into loosely coherent sensor arrays that perform in situ detection, estimation, compression, and exfiltration. We ground our study in the context of a security scenario called "A Line in the Sand" and accordingly define the target, system, environment, and fault models. Based on the performance requirements of the scenario and the sensing, communication, energy, and computation ability of the sensor network, we explore the design space of sensors, signal processing algorithms, communications, networking, and middleware services. We introduce the influence field, which can be estimated from a network of binary sensors, as the basis for a novel classifier. A contribution of our work is that we do not assume a reliable network; on the contrary, we quantitatively analyze the effects of network unreliability on application performance. Our work includes multiple experimental deployments of over 90 sensor nodes at MacDill Air Force Base in Tampa, Florida, as well as other field experiments of comparable scale. Based on these experiences, we identify a set of key lessons and articulate a few of the challenges facing extreme scaling to tens or hundreds of thousands of sensor nodes.
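As a rough illustration of the influence-field idea, the sketch below (our own simplification; the threshold is illustrative, not a calibrated value from the paper) counts how many binary sensors detect a target at one instant and classifies on that count, since a vehicle perturbs a larger region of the sensor field than a person.

```python
def classify(detections, person_max=6):
    """Classify a target from one snapshot of binary sensor readings.

    The number of sensors concurrently detecting the target estimates the
    size of its influence field; larger targets (vehicles) disturb more of
    the field than smaller ones (people).  person_max is a hypothetical
    cutoff, not the paper's calibrated parameter.
    """
    size = sum(detections)          # influence-field estimate
    if size == 0:
        return "no target"
    return "person" if size <= person_max else "vehicle"

print(classify([1, 1, 1, 0, 0, 0, 0, 0, 0, 0]))  # 3 sensors fire -> person
print(classify([1, 1, 1, 1, 1, 1, 1, 1, 1, 0]))  # 9 sensors fire -> vehicle
```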
We give a formal definition of what it means for a system to "tolerate" a class of "faults". The definition consists of two conditions. First, if a fault occurs when the system state is within a set of "legal" states, the resulting state is within some larger set, and, if faults continue occurring, the system state remains within that larger set (Closure). Second, if faults stop occurring, the system eventually reaches a state within the legal set (Convergence). We demonstrate the applicability of our definition for specifying and verifying the fault-tolerance properties of a variety of digital and computer systems. Further, using the definition, we obtain a simple classification of fault-tolerant systems and discuss methods for their systematic design.
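The two conditions admit a compact statement; the following is a sketch in transition-system notation of our choosing (P for the program's transitions, F for the fault transitions, S for the legal states; the symbols are not the paper's).

```latex
% A system P tolerates fault class F iff there is a superset T of the
% legal states S such that
\exists\, T \supseteq S \ \text{such that}
\begin{aligned}
&\text{(Closure)}\quad s \in T \,\wedge\, (s, s') \in P \cup F
    \;\Longrightarrow\; s' \in T,\\
&\text{(Convergence)}\quad \text{every computation of } P \text{ alone,}
    \text{ started in } T, \text{ eventually reaches } S.
\end{aligned}
```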