From ancient Europe through the Renaissance and industrialisation eras to modern times, urban planning paradigms have evolved in many ways, advancing the environments in which people live. Nevertheless, the recent development of wireless and wired communication network technologies and of low-power miniature sensors for various application domains provides another opportunity to revolutionise cities by making them smarter. Smart cities, propelled by a city-scale infrastructure in which information from different application systems is integrated, will initiate the development of new applications that can benefit our everyday lives. This article presents an overview of representative applications that constitute a smart city, together with their respective challenges and application requirements. Furthermore, we share our experiences from designing and deploying examples of such smart systems in multiple application domains and summarise the remaining challenges in making the vision of smart cities a reality.
Egocentric hand pose estimation is important for wearable cameras because hand interactions are captured from an egocentric viewpoint. Several studies on hand pose estimation based on RGBD or RGB sensors have recently been presented. Although these methods provide accurate hand pose estimates, they have several limitations: RGB-based techniques have intrinsic difficulty converting relative 3D poses into absolute 3D poses, and RGBD-based techniques only work in indoor environments. Recently, stereo-sensor-based techniques have gained increasing attention owing to their potential to overcome these limitations. However, to the best of our knowledge, there are few such techniques and no real datasets available for egocentric stereo vision. In this paper, we propose a top-down pipeline for estimating absolute 3D hand poses using stereo sensors, as well as a novel dataset for training. Our top-down pipeline consists of two steps: hand detection, which localises hand regions, followed by hand pose estimation, which estimates the positions of the hand joints. In particular, for hand pose estimation with a stereo camera, we propose an attention-based architecture called StereoNet, a geometry-based loss function called StereoLoss, and a novel 2D disparity map called StereoDMap for effective stereo feature learning. To collect the dataset, we propose a novel annotation method that reduces human annotation effort. Our dataset is publicly available at https://github.com/seo0914/SEH. We conducted comprehensive experiments to demonstrate the effectiveness of our approach compared with state-of-the-art methods. Index Terms: hand pose estimation, stereo vision, wearable sensors, egocentric view.
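The two-step, top-down structure of the pipeline can be illustrated with a minimal sketch. The class names, the placeholder detector, and the simple disparity cue below are assumptions for illustration only; the internals of StereoNet, StereoLoss, and StereoDMap are not specified in the abstract and are not reproduced here.

```python
# Hypothetical sketch of a two-step top-down stereo hand-pose pipeline.
# Names (HandDetector, StereoPoseEstimator) and the stubbed outputs are
# illustrative assumptions, not the paper's actual implementation.
import numpy as np


class HandDetector:
    """Step 1: locate hand regions (bounding boxes) in the left image."""

    def detect(self, left_image: np.ndarray) -> list[tuple[int, int, int, int]]:
        # Placeholder: a trained detector would return learned boxes.
        h, w = left_image.shape[:2]
        return [(w // 4, h // 4, w // 2, h // 2)]  # (x, y, width, height)


class StereoPoseEstimator:
    """Step 2: estimate absolute 3D joint positions inside a detected box."""

    def __init__(self, num_joints: int = 21):
        self.num_joints = num_joints

    def disparity_cue(self, left_crop: np.ndarray, right_crop: np.ndarray) -> np.ndarray:
        # Stand-in for a 2D disparity map fed to the network (cf. StereoDMap).
        return np.abs(left_crop.astype(np.float32) - right_crop.astype(np.float32)).mean(axis=-1)

    def estimate(self, left_crop: np.ndarray, right_crop: np.ndarray) -> np.ndarray:
        _ = self.disparity_cue(left_crop, right_crop)
        # A trained network would regress the joints; return zeros as a stub.
        return np.zeros((self.num_joints, 3), dtype=np.float32)  # (x, y, z) per joint


def run_pipeline(left_image: np.ndarray, right_image: np.ndarray):
    detector, estimator = HandDetector(), StereoPoseEstimator()
    poses = []
    for x, y, w, h in detector.detect(left_image):
        left_crop = left_image[y:y + h, x:x + w]
        right_crop = right_image[y:y + h, x:x + w]  # assumes rectified stereo pair
        poses.append(estimator.estimate(left_crop, right_crop))
    return poses


if __name__ == "__main__":
    left = np.zeros((480, 640, 3), dtype=np.uint8)
    right = np.zeros((480, 640, 3), dtype=np.uint8)
    print(len(run_pipeline(left, right)), "hand(s) estimated")
```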
A novel adaptive dual-prediction scheme is introduced for minimising the data communication load in wireless sensor networks as a way to maximise the lifetime of resource-limited sensor nodes. Specifically, the proposed scheme exploits the fact that when sensing-context prediction is used at both the sink node and the sensor nodes, the amount of data that needs to be transmitted can be minimised. Furthermore, the transmitted data volume is reduced even further by exploiting the spatial correlation among different sensor nodes. Evaluations show that this adaptive dual-prediction scheme can reduce the number of data transmissions by as much as 20% compared with a basic dual-prediction scheme, suggesting that the lifetime of sensor nodes can increase significantly in practical systems.
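The core idea of dual prediction, running the same predictor at both the sensor node and the sink and transmitting only when the prediction misses, can be sketched as below. The last-value predictor and the error threshold are illustrative assumptions; they stand in for the paper's adaptive model and its spatial-correlation extension, which are not detailed in the abstract.

```python
# Minimal sketch of a dual-prediction scheme: identical predictors run at the
# sensor node and the sink, and a reading is transmitted only when it deviates
# from the shared prediction by more than an error bound. The LastValuePredictor
# and the 0.5 threshold are illustrative assumptions.


class LastValuePredictor:
    """Shared model: predict the next reading as the last known value."""

    def __init__(self, initial: float):
        self.last = initial

    def predict(self) -> float:
        return self.last

    def update(self, value: float) -> None:
        self.last = value


def simulate(readings, threshold=0.5):
    sensor_model = LastValuePredictor(readings[0])
    sink_model = LastValuePredictor(readings[0])  # identical copy at the sink
    transmissions = 0
    reconstructed = [readings[0]]

    for actual in readings[1:]:
        predicted = sensor_model.predict()
        if abs(actual - predicted) > threshold:
            # Prediction missed: transmit the real value; both models resynchronise.
            transmissions += 1
            sensor_model.update(actual)
            sink_model.update(actual)
            reconstructed.append(actual)
        else:
            # Prediction accepted: no radio traffic, sink uses its own estimate.
            sensor_model.update(predicted)
            reconstructed.append(sink_model.predict())

    return transmissions, reconstructed


if __name__ == "__main__":
    temps = [20.0, 20.1, 20.2, 21.0, 21.1, 21.1, 22.5, 22.6]
    sent, recon = simulate(temps)
    print(f"transmitted {sent} of {len(temps) - 1} readings")
```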