Abstract: We consider the problem of using Wireless Sensor Networks (WSNs) to measure the temporal-spatial field of some scalar physical quantity. Our goal is to obtain a sufficiently accurate approximation of the temporal-spatial field with as little energy as possible. We propose an adaptive algorithm, based on the recently developed theory of adaptive compressive sensing, to collect information from WSNs in an energy-efficient manner. The key idea of the algorithm is to perform "projections" iteratively so as to maximise the amount of information gained per unit of energy expended. We prove that this maximisation problem is NP-hard and propose a number of heuristics to solve it. We evaluate the performance of the proposed algorithms using data from both simulation and an outdoor WSN testbed. The results show that our algorithms give a more accurate approximation of the temporal-spatial field for a given energy expenditure.
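The abstract does not spell out the heuristics; as a rough illustration of the underlying idea only, the sketch below greedily selects projection vectors by information gain per unit of energy under an assumed linear-Gaussian field model. The Gaussian assumption, the rank-one covariance update, and all names are illustrative and not taken from the paper.

```python
import numpy as np

def greedy_projection_selection(Sigma, candidates, costs, budget, noise_var=0.01):
    """Greedily pick projection vectors that maximise information gain per unit energy.

    Sigma      : prior covariance of the (discretised) field, shape (n, n)
    candidates : list of candidate projection vectors, each of length n
    costs      : energy cost of performing each candidate projection
    budget     : total energy budget
    """
    chosen, spent = [], 0.0
    Sigma = Sigma.copy()
    remaining = list(range(len(candidates)))
    while remaining:
        best, best_ratio = None, -np.inf
        for i in remaining:
            phi = candidates[i]
            # Information gain of a linear Gaussian measurement y = phi^T x + noise:
            # 0.5 * log(1 + phi^T Sigma phi / noise_var)
            gain = 0.5 * np.log1p(phi @ Sigma @ phi / noise_var)
            ratio = gain / costs[i]
            if ratio > best_ratio and spent + costs[i] <= budget:
                best, best_ratio = i, ratio
        if best is None:
            break  # no affordable projection left
        phi = candidates[best]
        # Rank-one posterior covariance update after observing the chosen projection.
        s = phi @ Sigma @ phi + noise_var
        Sigma = Sigma - np.outer(Sigma @ phi, phi @ Sigma) / s
        spent += costs[best]
        chosen.append(best)
        remaining.remove(best)
    return chosen
```

The gain-per-cost ratio is the natural greedy criterion when the objective (information gain per energy) must be maximised under a budget, which is why it is used in this sketch.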
A noise map facilitates the monitoring of environmental noise pollution in urban areas. It can raise citizen awareness of noise pollution levels and aid in the development of mitigation strategies to cope with its adverse effects. However, state-of-the-art techniques for rendering noise maps in urban areas are expensive and rarely updated (for months or even years), as they rely on population and traffic models rather than on real data. Smart-phone-based urban sensing can be leveraged to create an open and inexpensive platform for rendering up-to-date noise maps. In this paper, we present the design, implementation and performance evaluation of an end-to-end, context-aware noise mapping system called Ear-Phone. Ear-Phone investigates the use of different interpolation and regularization methods to address the fundamental problem of recovering the noise map from incomplete and random samples obtained by crowdsourced data collection. Ear-Phone, implemented on Nokia N95, Nokia N97, HP iPAQ and HTC One mobile devices, also addresses the challenge of collecting accurate noise pollution readings on a mobile device. A major challenge of using smart phones as sensors is that, even at the same location, the sensor reading may vary depending on the phone orientation and user context (for example, whether the user is carrying the phone in a bag or holding it in her palm). To address this problem, Ear-Phone leverages context-aware sensing. We develop classifiers to accurately determine the phone sensing context; upon context discovery, Ear-Phone automatically decides whether or not to sense. Ear-Phone also implements in-situ calibration, a simple calibration procedure that requires no technical skill on the user's part. Extensive simulations and outdoor experiments demonstrate that Ear-Phone is a feasible platform for assessing noise pollution, incurring reasonable system resource consumption at mobile devices and providing high reconstruction accuracy of the noise map.
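One plausible instantiation of "recovering the noise map from incomplete and random samples" with regularization is l1-regularised reconstruction in a DCT basis; the sketch below shows that generic approach and is not claimed to be Ear-Phone's actual method. The grid size, the choice of sparsity basis and the Lasso solver are assumptions made for illustration.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import Lasso

def recover_noise_map(sample_idx, sample_vals, grid_size, alpha=0.01):
    """Recover a full noise map from sparse, randomly located samples.

    Assumes the flattened map is approximately sparse in a 1-D DCT basis,
    so the sampled values are a few rows of that basis times the coefficient
    vector (standard l1-regularised least-squares recovery).
    """
    n = grid_size[0] * grid_size[1]
    # Inverse-DCT basis: column k is the k-th DCT atom evaluated on the grid.
    basis = idct(np.eye(n), norm='ortho', axis=0)
    A = basis[sample_idx, :]            # measurement matrix restricted to sampled cells
    lasso = Lasso(alpha=alpha, max_iter=5000)
    lasso.fit(A, sample_vals)           # sparse DCT coefficients
    full = basis @ lasso.coef_ + lasso.intercept_
    return full.reshape(grid_size)
```

A simpler baseline would be plain spatial interpolation of the samples; the regularised variant is shown because it handles very sparse, irregular sampling more gracefully.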
Learning the latent representation of data in an unsupervised fashion provides relevant features for enhancing the performance of a classifier. For speech emotion recognition tasks, generating effective features is crucial. Currently, handcrafted features are mostly used for speech emotion recognition; however, features learned automatically with deep learning have shown strong success in many problems, especially in image processing. In particular, deep generative models such as Variational Autoencoders (VAEs) have achieved enormous success in generating features for natural images. Inspired by this, we propose VAEs for deriving the latent representation of speech signals and use this representation to classify emotions. To the best of our knowledge, we are the first to propose VAEs for speech emotion classification. Evaluations on the IEMOCAP dataset demonstrate that features learned by VAEs can produce state-of-the-art results for speech emotion classification.
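As a concrete illustration of the approach, not the paper's exact architecture, a minimal VAE over fixed-length speech feature vectors might look as follows; the input dimensionality, layer sizes, and the use of the latent mean as the input to a downstream emotion classifier are all assumptions.

```python
import torch
import torch.nn as nn

class SpeechVAE(nn.Module):
    """Small fully connected VAE; the latent mean serves as the learned emotion feature."""
    def __init__(self, input_dim=128, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.fc_mu = nn.Linear(256, latent_dim)
        self.fc_logvar = nn.Linear(256, latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                                     nn.Linear(256, input_dim))

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterisation trick: sample z = mu + sigma * eps with eps ~ N(0, I).
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    rec = nn.functional.mse_loss(recon, x, reduction='sum')
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kld
```

After unsupervised training, the encoder's mean vector mu for each utterance would be fed to any standard classifier (e.g. a small feed-forward network) to predict the emotion label.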