Recent studies have shown that the latency and energy consumption of deep neural network inference can be significantly reduced by splitting the network between the mobile device and the cloud. This paper introduces a new deep learning architecture, called BottleNet, for reducing the size of the intermediate features that need to be sent to the cloud. Furthermore, we propose a training method that compensates for the potential accuracy loss caused by the lossy compression of features before they are transmitted to the cloud. BottleNet achieves, on average, a 30× improvement in end-to-end latency and a 40× improvement in mobile energy consumption compared to the cloud-only approach, with negligible accuracy loss.
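One common way to realize the idea sketched in this abstract is to place a lossy compression step at the split point and fine-tune the whole network end-to-end so that the later layers learn to tolerate the compression error. The minimal PyTorch sketch below illustrates this with coarse quantization and a straight-through estimator; the layer sizes, the quantization scheme, and the `LossyFeatureChannel` name are illustrative assumptions, not the paper's exact method.

```python
# Illustrative sketch (not the paper's exact method): fine-tune a split model
# with a lossy feature-quantization step in the forward pass, using a
# straight-through estimator so gradients flow through the rounding op.
import torch
import torch.nn as nn

class LossyFeatureChannel(nn.Module):
    """Simulates lossy compression of offloaded features via coarse quantization."""
    def __init__(self, levels: int = 16):
        super().__init__()
        self.levels = levels

    def forward(self, x):
        # Normalize to [0, 1], quantize to a small number of levels, then rescale.
        x_min, x_max = x.min(), x.max()
        scale = (x_max - x_min).clamp(min=1e-8)
        x_norm = (x - x_min) / scale
        x_q = torch.round(x_norm * (self.levels - 1)) / (self.levels - 1)
        x_hat = x_q * scale + x_min
        # Straight-through estimator: identity gradient through the rounding.
        return x + (x_hat - x).detach()

# Usage: wrap the split point of a model and fine-tune end-to-end,
# so the cloud-side layers learn to compensate for the quantization error.
head = nn.Sequential(nn.Conv2d(3, 64, 3, padding=1), nn.ReLU())  # runs on mobile
tail = nn.Sequential(nn.Conv2d(64, 10, 3, padding=1))            # runs on cloud
model = nn.Sequential(head, LossyFeatureChannel(levels=16), tail)
out = model(torch.randn(1, 3, 32, 32))  # gradients still reach `head`
```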
This paper presents a novel blood pressure (BP) estimation method based on pulse transit time (PTT) and pulse arrival time (PAT) for estimating systolic blood pressure (SBP) and diastolic blood pressure (DBP). Data acquisition hardware is designed for high-resolution sampling of the phonocardiogram (PCG), photoplethysmogram (PPG), and electrocardiogram (ECG). The PCG and ECG serve as the proximal timing references for obtaining the PTT and PAT indices, respectively. To derive a BP estimator model, a calibration procedure including supervised physical exercise, which induces changes in BP, is conducted for each individual; a number of reference BP readings are then measured for each subject alongside the signal acquisition. A force-sensing resistor (FSR) placed under the cuff of the reference BP device is proposed to mark the exact moments of the reference BP measurements, which correspond to the inflations of the cuff. Additionally, a novel nonlinear BP estimator model, based on the theory of elastic tubes, is introduced to estimate BP precisely from the PTT/PAT values. The proposed method is evaluated on 32 subjects. Using the PTT index, the correlation coefficients for SBP and DBP estimation are 0.89 and 0.84, respectively; using the PAT index, they are 0.95 and 0.84, respectively. The results show that the proposed method, exploiting the introduced nonlinear model with either the PAT or the PTT index, provides a reliable estimation of SBP and DBP.
Index Terms: cuff-less blood pressure, mobile health (mHealth), pulse transit time (PTT), pulse arrival time (PAT), vital signals
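To make the per-subject calibration step concrete, the sketch below fits a nonlinear PTT-to-BP mapping to the reference readings collected during the exercise protocol. The functional form (BP = a/PTT² + b), the parameter names, and the data values are illustrative assumptions only, not the paper's elastic-tube model or its measurements.

```python
# Illustrative calibration sketch: fit a per-subject nonlinear model mapping
# PTT (or PAT) to SBP from exercise-induced reference measurements.
import numpy as np
from scipy.optimize import curve_fit

def bp_model(ptt, a, b):
    # Hypothetical nonlinear PTT-to-BP mapping; ptt in seconds, BP in mmHg.
    return a / ptt**2 + b

# Synthetic per-subject calibration data (reference SBP measured while PTT
# varies during the supervised exercise); values are placeholders.
ptt_s   = np.array([0.21, 0.20, 0.18, 0.17, 0.16, 0.15])
sbp_ref = np.array([112.0, 116.0, 124.0, 131.0, 138.0, 146.0])

params, _ = curve_fit(bp_model, ptt_s, sbp_ref, p0=[1.0, 100.0])
a_hat, b_hat = params

# Estimate SBP for a new beat from its measured PTT.
sbp_est = bp_model(0.19, a_hat, b_hat)
print(f"fitted a={a_hat:.2f}, b={b_hat:.2f}, SBP estimate={sbp_est:.1f} mmHg")
```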
Modern mobile devices are equipped with high-performance hardware resources such as graphics processing units (GPUs), making end-side intelligent services more feasible. More recently, specialized silicon such as neural engines has been adopted in mobile devices. However, most mobile devices are still not capable of performing real-time inference with very deep models, so the computations associated with deep models for today's intelligent applications are typically performed solely on the cloud. This cloud-only approach requires significant amounts of raw data to be uploaded to the cloud over the mobile wireless network and imposes a considerable computational and communication load on the cloud server. Recent studies have shown that the latency and energy consumption of deep neural networks in mobile applications can be notably reduced by splitting the workload between the mobile device and the cloud. In this approach, referred to as collaborative intelligence, intermediate features computed on the mobile device are offloaded to the cloud instead of the raw input data of the network, reducing the size of the data that needs to be sent to the cloud. In this paper, we design a new collaborative-intelligence-friendly architecture by introducing a unit, placed after a selected layer of a deep model, that further reduces the size of the feature data that needs to be offloaded to the cloud. This unit, referred to as the butterfly unit, consists of a reduction unit and a restoration unit. The output of the reduction unit is offloaded to the cloud server, where the computations associated with the restoration unit and the rest of the inference network are performed. Both the reduction and restoration units use a convolutional layer as their main component. The inference outcomes are sent back to the mobile device. The new network architecture, including the butterfly unit inserted after a selected layer of the underlying deep model, is trained end-to-end. Across different wireless networks, our proposed method achieves on average a 53× improvement in end-to-end latency and a 68× improvement in mobile energy consumption compared to the status quo cloud-only approach for ResNet-50, while the accuracy loss is less than 2%.
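The sketch below shows one way the described butterfly unit could be wired into ResNet-50: a convolutional reduction unit on the mobile side shrinks the channel dimension of the offloaded features, and a convolutional restoration unit on the cloud side expands them back before the remaining layers run. The split point, kernel sizes, and bottleneck width are illustrative assumptions, not the paper's reported configuration.

```python
# Sketch of the butterfly-unit idea: a reduction unit shrinks feature channels
# on the mobile device; a restoration unit expands them back on the cloud side.
import torch
import torch.nn as nn
import torchvision.models as models

class ReductionUnit(nn.Module):
    def __init__(self, in_ch: int, bottleneck_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, bottleneck_ch, kernel_size=1)  # shrink channels

    def forward(self, x):
        return self.conv(x)  # small tensor: this is what gets uploaded

class RestorationUnit(nn.Module):
    def __init__(self, bottleneck_ch: int, out_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(bottleneck_ch, out_ch, kernel_size=1)  # restore channels

    def forward(self, x):
        return self.conv(x)

# Insert the butterfly unit after an early stage of ResNet-50 (assumed split point).
resnet = models.resnet50(weights=None)
mobile_part = nn.Sequential(resnet.conv1, resnet.bn1, resnet.relu,
                            resnet.maxpool, resnet.layer1,
                            ReductionUnit(256, 16))
cloud_part = nn.Sequential(RestorationUnit(16, 256),
                           resnet.layer2, resnet.layer3, resnet.layer4,
                           resnet.avgpool, nn.Flatten(), resnet.fc)

features = mobile_part(torch.randn(1, 3, 224, 224))  # offloaded to the cloud
logits = cloud_part(features)                        # inference finishes remotely
# During training, mobile_part and cloud_part are chained and trained end-to-end.
```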
Energy efficiency is one of the most critical design criteria for modern embedded systems such as multiprocessor systems-on-chip (MPSoCs). Dynamic voltage and frequency scaling (DVFS) and dynamic power management (DPM) are two major techniques for reducing energy consumption in such embedded systems. Furthermore, MPSoCs are becoming increasingly popular for many real-time applications. One of the challenges of integrating DPM with DVFS and task scheduling of real-time applications on MPSoCs is the modeling of idle intervals on these platforms. In this paper, we present a novel approach for modeling idle intervals in MPSoC platforms, which leads to a mixed integer linear programming (MILP) formulation that integrates DPM, DVFS, and the task scheduling of periodic task graphs subject to a hard deadline. We also present a heuristic approach for solving the problem and compare its results with those obtained by solving the MILP exactly.
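For readers unfamiliar with this kind of formulation, the PuLP sketch below shows only the DVFS portion of such an MILP: each task is assigned one voltage/frequency level so that total energy is minimized while the summed execution time meets a hard deadline. The idle-interval/DPM modeling, multiprocessor mapping, and task-graph precedence constraints of the paper's formulation are deliberately omitted, and the task and level data are made-up placeholders.

```python
# Minimal MILP sketch (DVFS level selection under a hard deadline) using PuLP.
import pulp

tasks = ["t1", "t2", "t3"]
levels = ["low", "mid", "high"]
exec_time = {("t1", "low"): 8, ("t1", "mid"): 5, ("t1", "high"): 3,
             ("t2", "low"): 6, ("t2", "mid"): 4, ("t2", "high"): 2,
             ("t3", "low"): 9, ("t3", "mid"): 6, ("t3", "high"): 4}
energy    = {("t1", "low"): 2, ("t1", "mid"): 4, ("t1", "high"): 7,
             ("t2", "low"): 1, ("t2", "mid"): 3, ("t2", "high"): 6,
             ("t3", "low"): 2, ("t3", "mid"): 5, ("t3", "high"): 9}
deadline = 15

prob = pulp.LpProblem("dvfs_energy_min", pulp.LpMinimize)
# x[t][l] = 1 if task t runs at voltage/frequency level l.
x = pulp.LpVariable.dicts("x", (tasks, levels), cat="Binary")

# Objective: minimize total energy.
prob += pulp.lpSum(energy[t, l] * x[t][l] for t in tasks for l in levels)
# Each task is assigned exactly one level.
for t in tasks:
    prob += pulp.lpSum(x[t][l] for l in levels) == 1
# Hard deadline on the (sequential) schedule length.
prob += pulp.lpSum(exec_time[t, l] * x[t][l] for t in tasks for l in levels) <= deadline

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for t in tasks:
    chosen = [l for l in levels if x[t][l].value() == 1][0]
    print(t, "->", chosen)
print("total energy:", pulp.value(prob.objective))
```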