The Internet of Things (IoT) is widely regarded as a key component of the future Internet and has thereby drawn significant interest in recent years. The IoT consists of billions of intelligent, communicating "things," which extend the borders of the world to encompass both physical and virtual entities. These ubiquitous smart things produce massive amounts of data every day, posing urgent demands for fast data analysis on various smart mobile devices. Fortunately, recent breakthroughs in deep learning enable us to address this problem in an elegant way: deep models can be deployed to process massive sensor data and learn underlying features quickly and efficiently for various IoT applications on smart devices. In this article, we survey the literature on applying deep learning to IoT applications. We aim to give insight into how deep learning tools can be applied from diverse perspectives to empower IoT applications in four representative domains: smart healthcare, smart home, smart transportation, and smart industry. A main thrust is to seamlessly merge the two disciplines of deep learning and IoT, resulting in a wide range of new designs for IoT applications, such as health monitoring, disease analysis, indoor localization, intelligent control, home robotics, traffic prediction, traffic monitoring, autonomous driving, and manufacturing inspection. We also discuss the issues, challenges, and future research directions in leveraging deep learning to empower IoT applications, which may motivate and inspire further developments in this promising field.
Social networked applications have become increasingly popular, bringing great challenges to network engineering, particularly the huge demands for bandwidth and storage imposed by social media. Recently emerged content clouds shed light on this dilemma. Toward the migration to clouds, partitioning social contents has drawn significant interest in the literature. Yet existing works focus on preserving social relationships only, while an important factor, the user access pattern, is largely overlooked. In this paper, by examining a large collection of YouTube video data, we first demonstrate that partitioning the network based entirely on social relationships leads to partitions that are unbalanced in terms of access load. We further analyze the role of social relationships in social media applications and conclude that user access patterns should be taken into account and social relationships should be dynamically preserved. We formulate the problem as a constrained k-medoids clustering problem and propose a novel Weighted Partitioning Around Medoids (wPAM) solution. We present a dissimilarity/similarity metric to facilitate the preservation of social relationships. We compare our solution with other state-of-the-art algorithms, and the preliminary results show that it significantly decreases the access deviation on each cloud server while flexibly preserving social relationships.
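The access-aware k-medoids idea described above can be illustrated with a toy sketch. This is not the paper's actual formulation: the feature vectors standing in for social-relationship similarity, the greedy balanced assignment, and the trade-off knob `alpha` are all illustrative assumptions. The intent is only to show how an access-load penalty can be folded into a Partitioning-Around-Medoids-style assignment so that no cloud server accumulates a disproportionate share of accesses.

```python
import numpy as np

def weighted_pam(points, access, k, alpha=0.5, n_iter=10, seed=0):
    """Toy sketch of a weighted PAM (wPAM-like) partitioner.

    points : (n, d) feature vectors, a stand-in for social-relationship
             dissimilarity between content items.
    access : (n,) per-item access counts (the user access pattern).
    alpha  : hypothetical knob trading similarity against load balance.
    """
    rng = np.random.default_rng(seed)
    n = len(points)
    # pairwise dissimilarity; the paper uses its own social metric instead
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    medoids = rng.choice(n, size=k, replace=False)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_iter):
        loads = np.zeros(k)
        # greedy assignment, heaviest items first:
        # cost = distance to medoid + penalty on the cluster's current load
        for i in np.argsort(-access):
            c = int(np.argmin(dist[i, medoids] + alpha * loads))
            labels[i] = c
            loads[c] += access[i]
        # re-pick each medoid as the member minimizing intra-cluster distance
        for c in range(k):
            members = np.flatnonzero(labels == c)
            if len(members):
                sub = dist[np.ix_(members, members)]
                medoids[c] = members[np.argmin(sub.sum(axis=1))]
    return labels, medoids
```

A real deployment would replace the Euclidean distance with the paper's social dissimilarity/similarity metric and re-run the assignment as access patterns drift.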
Abstract: Today's lightning-fast data generation from massive sources calls for efficient big data processing, which imposes unprecedented demands on computing and networking infrastructures. State-of-the-art tools, most notably MapReduce, are generally run on dedicated server clusters to exploit data parallelism. For grassroots users or non-computing professionals, the cost of deploying and maintaining a large-scale dedicated server cluster can be prohibitively high, not to mention the technical skills involved. Public clouds, on the other hand, allow general users to rent virtual machines (VMs) and run their applications in a pay-as-you-go manner with ultra-high scalability and minimal upfront costs. This new computing paradigm has gained tremendous success in recent years, becoming a highly attractive alternative to dedicated server clusters. This article discusses the critical challenges and opportunities that arise when big data meets the public cloud. We identify the key differences between running big data processing in a public cloud and in dedicated server clusters. We then present two important problems for efficient big data processing in the public cloud: resource provisioning, i.e., how to rent VMs, and MapReduce job/task scheduling, i.e., how to run MapReduce jobs after the VMs are provisioned. Each of these two questions entails a set of problems to solve. We present solution approaches for some of these problems and offer optimized design guidelines for others. Finally, we discuss our implementation experiences.
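The MapReduce model that the scheduling problem above targets can be summarized with a minimal single-process word-count sketch (the function names and the in-memory shuffle are illustrative; in a real deployment, mappers and reducers would run as tasks scheduled across the rented VMs):

```python
from collections import defaultdict
from itertools import chain

def map_phase(split):
    # mapper: emit (word, 1) pairs from one input split
    return [(word.lower(), 1) for word in split.split()]

def shuffle(pairs):
    # group intermediate pairs by key, as the framework's shuffle stage does
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    # reducer: aggregate all values seen for each key
    return {key: sum(values) for key, values in groups.items()}

# two input splits, each of which a scheduler could hand to a different VM
splits = ["big data meets the cloud", "big cloud"]
counts = reduce_phase(shuffle(chain.from_iterable(map_phase(s) for s in splits)))
```

Resource provisioning decides how many VMs to rent for the map and reduce stages; job/task scheduling decides which split or key group each VM processes and when.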