The mass production of microphones for consumer electronics, together with the shift from dedicated processing hardware to PC-based systems, opens the way to affordable, extensive noise measurement networks. Applications include noise limit monitoring, urban soundscape monitoring, and validation of calculated noise maps. Microphones are the critical components of such a network. Therefore, in a first step, some basic characteristics of 8 microphones, spanning a wide range of price classes, were measured in a standardized way in an anechoic chamber. In a next step, the suitability of these microphones for environmental noise monitoring was thoroughly evaluated during a continuous, six-month outdoor experiment covering a wide variety of meteorological conditions. Although some microphones failed during the course of this test, it proved possible to identify cheap microphones that correlate highly with the reference microphone over the full test period. Expressed in total A-weighted (road traffic) noise levels, their deviations remain below 1 dBA beyond the deviation among the reference microphones themselves.
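The deviation figure above compares the total A-weighted equivalent level measured by a candidate microphone against that of a reference. A minimal sketch of this comparison is shown below; the sample values and per-reading offsets are invented for illustration and do not reproduce the paper's data.

```python
import numpy as np

def leq_dba(levels_dba):
    """Energetic (logarithmic) average of a series of A-weighted levels, in dBA."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10.0 * np.log10(np.mean(10.0 ** (levels / 10.0)))

# Hypothetical simultaneous short-term LAeq readings from a candidate
# and a reference microphone at the same outdoor position.
reference = np.array([62.1, 65.4, 60.8, 70.2, 63.5])
candidate = reference + np.array([0.4, -0.3, 0.6, 0.2, -0.5])  # assumed sensor deviations

# Deviation expressed on the total A-weighted level, as in the abstract.
deviation = leq_dba(candidate) - leq_dba(reference)
print(f"Total LAeq deviation: {deviation:.2f} dBA")
```

Because the energetic average is dominated by the loudest readings, small per-reading offsets largely cancel out in the total level, which is why a sub-1 dBA total deviation is achievable even with imperfect sensors.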
This paper investigates requirements for static (prediction of L(den) and the diurnal averaged noise pattern) and dynamic (prediction of the 15 min and 60 min evolution of L(Aeq) and of the statistical levels L(A90), L(A50) and L(A10)) noise level monitoring. Noise levels were measured for 72 consecutive days at 5 neighboring streets in an inner-city noise measurement network in Gent, Flanders, Belgium. We present a method for making predictions based on a fixed monitoring station combined with short-term sampling at temporary stations. Relying on a fixed station improves the estimation of L(den) at other locations and reduces the number and duration of the samples needed; for 90% of the 3 × 15 min samples, L(den) is estimated with an error that does not exceed 1.5 dB(A) to 3.4 dB(A), depending on the location. The diurnal averaged noise pattern can also be estimated with good accuracy in this way. There is an optimal location for the fixed station, which can be found by short-term measurements only. Short-term level prediction proved more difficult: 7 day samples were needed to build models able to estimate the evolution of L(Aeq,60min) with an RMSE ranging between 1.4 dB(A) and 3.7 dB(A). These higher values are explained by the very pronounced short-term variations in typical streets, which are not correlated between locations. Nevertheless, moderately accurate predictions can be achieved even from short-term sampling (a 3 × 15 min sampling duration seems sufficient for many of the accuracy goals set for static and dynamic monitoring). Finally, the proposed method also allows prediction of the evolution of the statistical indicators.
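The core idea of combining a fixed station with short-term sampling can be sketched as a difference-based transfer: the level offset observed between the two locations during the simultaneous samples is applied to the fixed station's long-term level. This is a minimal sketch of that idea only, not the paper's full model; all numbers are hypothetical.

```python
import numpy as np

def leq(levels):
    """Energetic average of A-weighted levels, in dB(A)."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels, float) / 10.0)))

def estimate_long_term(fixed_long_term, fixed_sample, temp_sample):
    """Transfer the long-term level at the fixed station to a temporary
    location, using the level difference observed during simultaneous
    short-term sampling at both stations."""
    return fixed_long_term + (leq(temp_sample) - leq(fixed_sample))

# Hypothetical data: a long-term Lden at the fixed station, plus 3 x 15 min
# simultaneous LAeq samples at the fixed and the temporary station.
fixed_lden = 68.2
fixed_15min = [67.5, 69.1, 66.8]   # assumed LAeq,15min at the fixed station
temp_15min = [64.2, 65.9, 63.1]    # assumed LAeq,15min at the temporary station

est = estimate_long_term(fixed_lden, fixed_15min, temp_15min)
print(f"Estimated Lden at temporary station: {est:.1f} dB(A)")
```

The transfer works because long-term traffic patterns at nearby streets are correlated; the short-term variations that make L(Aeq,60min) prediction hard mostly average out in L(den).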
Geosensor networks and sensor webs are two technologies widely used for determining our exposure to pollution levels and ensuring that this information is publicly available. However, most of these networks are independent from each other and often designed for specific domains, hindering the integration of sensor data from different sources. We contributed to the integration of several environmental sensor networks in the context of the IDEA project. The objective of this project was to measure noise and air pollution levels in urban areas in Belgium using low-cost sensors. This paper presents the IDEA Environmental Measurement Cloud as a proof-of-concept Data-as-a-Service (DaaS) cloud platform that integrates environmental sensor networks with a sensor web. Our DaaS platform implements a federated two-layer architecture to loosely couple sensor networks deployed over a wide geographical area with web services. It offers several data access, discovery, and visualization services to the public while serving as a scientific tool for noise pollution research. After one year of operation, it hosts approximately 6.5 TB of environmental data and offers to the public near real-time noise pollution measurements from over 40 locations in Belgium.
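A federated architecture of this kind can be illustrated with one adapter per sensor network, loosely coupled behind a single query interface. The sketch below is purely illustrative: the class names, record layout, and network names are assumptions, not the IDEA platform's actual APIs.

```python
class NetworkAdapter:
    """Wraps one domain-specific sensor network behind a uniform interface."""
    def __init__(self, name, records):
        self.name = name
        self._records = records  # stand-in for a network-local data store

    def query(self, quantity):
        return [r for r in self._records if r["quantity"] == quantity]

class FederationLayer:
    """Upper layer: fans a query out to every registered network adapter
    and merges the results, tagging each record with its origin."""
    def __init__(self, adapters):
        self.adapters = adapters

    def query(self, quantity):
        results = []
        for adapter in self.adapters:
            for r in adapter.query(quantity):
                results.append({**r, "network": adapter.name})
        return results

# Two hypothetical networks with different measurement domains.
noise = NetworkAdapter("gent-noise", [{"quantity": "LAeq", "value": 63.2}])
air = NetworkAdapter("antwerp-air", [{"quantity": "PM10", "value": 21.0}])
cloud = FederationLayer([noise, air])
print(cloud.query("LAeq"))
```

The loose coupling means a new sensor network joins the federation by supplying one adapter, without changes to the web-facing services.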
Advances in embedded systems and mobile communication have led to the emergence of smaller, cheaper, and more intelligent sensing units. To date, these devices have been used in many sensor network applications focused on monitoring environmental parameters in areas with relatively large geographical extent. However, in many of these applications, management is often centralized and hierarchical. This approach imposes some major challenges in the context of large-scale and highly distributed sensor networks. In this paper, we present a multilayered middleware platform for sensor networks offering transparent data aggregation, control, and management mechanisms to the application developer. Furthermore, we propose the use of multiagent systems (MASs) to create a computing environment capable of managing and optimizing tasks autonomously. In order to ensure the scalability of the distributed data fusion, we propose a three-step procedure to balance the workload among machines using mobile agent technology.
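The balancing goal behind such a procedure can be illustrated with a simple greedy assignment of data-fusion tasks to machines. This is a simplified stand-in, not the paper's three-step mobile-agent procedure; the task names and costs are invented.

```python
import heapq

def balance(tasks, n_machines):
    """Greedy longest-processing-time assignment: repeatedly give the
    heaviest remaining task to the currently least-loaded machine."""
    heap = [(0.0, m) for m in range(n_machines)]   # (load, machine id)
    heapq.heapify(heap)
    assignment = {m: [] for m in range(n_machines)}
    for task, cost in sorted(tasks.items(), key=lambda kv: -kv[1]):
        load, m = heapq.heappop(heap)
        assignment[m].append(task)
        heapq.heappush(heap, (load + cost, m))
    return assignment

# Hypothetical data-fusion tasks with estimated CPU costs.
tasks = {"fuse-A": 4.0, "fuse-B": 3.0, "fuse-C": 3.0, "fuse-D": 2.0}
print(balance(tasks, 2))
```

In an agent-based setting, the same decision would be made decentrally: mobile agents carrying fusion tasks migrate toward the least-loaded machines instead of a central scheduler computing the assignment.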
In recent years, we have witnessed an exponential growth in available content, much of which is user generated (e.g. pictures, videos, blogs, reviews). The downside of this overwhelming amount of content is that it becomes increasingly difficult for users to identify the content they really need, which has prompted considerable research into personalised search and content retrieval. On the other hand, this enormous amount of content raises new possibilities: existing services can be enriched with this content, provided that the content items used match the user's personal interests. Ideally, these interests should be obtained in an automatic way, transparent to the user, for an optimal user experience. In this paper, two keyword-based models for representing user profiles are presented, both aimed at enriching real-time communication services. The first model is a light-weight keyword tree which is very fast, while the second is based on a keyword ontology containing extra temporal relationships that capture more details of the user's behavior, at the cost of lower performance. The profile models are supplemented with a set of algorithms for learning user interests and retrieving content from personal content repositories. To evaluate the performance, an enhanced instant messaging communication service was designed. Through simulations, the two models are assessed in terms of real-time behavior and extensibility, and user evaluations estimate the added value of the approach taken. The experiments conducted indicate that the algorithms succeed in retrieving content matching the user's interests, and that both models exhibit linear scaling behavior. The algorithms perform clearly better at finding content matching several user interests when benefiting from the extra temporal information in the ontology-based model.
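The light-weight keyword-tree profile can be sketched as keywords with reinforceable weights grouped under interest categories, matched against content items by keyword overlap. The class, method names, and scoring rule below are assumptions for illustration, not the paper's exact schema.

```python
from collections import defaultdict

class KeywordTree:
    """Light-weight user profile: weighted keywords grouped under
    top-level interest categories (hypothetical structure)."""
    def __init__(self):
        self.interests = defaultdict(lambda: defaultdict(float))

    def observe(self, category, keyword, weight=1.0):
        """Reinforce a keyword seen in the user's communication."""
        self.interests[category][keyword] += weight

    def score(self, item_keywords):
        """Match a content item (a set of keywords) against the profile
        by summing the weights of the keywords it shares with it."""
        return sum(w for kws in self.interests.values()
                   for kw, w in kws.items() if kw in item_keywords)

profile = KeywordTree()
profile.observe("sports", "cycling", 2.0)
profile.observe("music", "jazz")
print(profile.score({"cycling", "festival"}))  # matches 'cycling' only
```

Scoring is a flat dictionary scan, which is what makes the tree model fast; the ontology-based model would additionally weight matches by temporal relationships between keywords, at higher lookup cost.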