Approximate computing has received significant attention as a promising strategy for decreasing the power consumption of inherently error-tolerant applications. In this paper, we focus on hardware-level approximation by introducing the Partial Product Perforation technique for designing approximate multiplication circuits. We prove, in a mathematically rigorous manner, that the errors imposed by partial product perforation are bounded and predictable, depending only on the input distribution. Through extensive experimental evaluation, we apply the partial product perforation method to different multiplier architectures and expose the optimal architecture-perforation configuration pairs for different error constraints. We show that, compared with the respective exact design, partial product perforation delivers reductions of up to 50% in power consumption, 45% in area, and 35% in critical delay. The partial product perforation method is also compared with state-of-the-art approximation techniques, i.e., truncation, Voltage Over-Scaling, and logic approximation, and is shown to outperform them in terms of power dissipation and error.
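As an illustration of the technique this abstract names, the following is a minimal sketch of partial product perforation for an unsigned multiplier, written as a behavioral Python model rather than the paper's hardware design; the function name and the (j, k) perforation parameters are illustrative assumptions.

```python
def perforated_multiply(a, b, n_bits=8, j=0, k=2):
    """Approximate a * b by perforating (never generating) k successive
    partial products, starting from the j-th row, of an unsigned
    n_bits x n_bits multiplier."""
    product = 0
    for i in range(n_bits):
        if j <= i < j + k:
            continue  # perforated row: partial product a * b_i * 2^i is dropped
        if (b >> i) & 1:
            product += a << i  # accumulate the i-th partial product
    return product

# The omitted amount is a times the skipped weighted bits of b, so the error
# is bounded by a * (2**(j+k) - 2**j) and depends only on the operands,
# consistent with the bounded, input-dependent error the abstract claims.
print(173 * 219, perforated_multiply(173, 219))  # exact vs. approximate
```

In a real design, dropping whole partial-product rows removes the corresponding partial-product generation and accumulation hardware, which is where the reported power, area, and delay savings come from.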
With the proliferation of portable and mobile IoT devices and their increasing processing capability, the edge of the network is moving to the IoT gateways and smart devices. To avoid Big Data issues (e.g., the high latency of cloud-based IoT), processing of the captured data starts at the IoT edge node. However, the available processing capabilities and energy resources are still limited and do not allow the data to be fully processed on-board, which calls for offloading some portion of the computation to the gateway or to servers. Given the limited bandwidth of IoT gateways, choosing the offloading levels of the connected devices and allocating bandwidth to them is a challenging problem. This paper proposes a technique for managing computation offloading in a local IoT network under bandwidth constraints. Existing bandwidth allocation and computation offloading management techniques underutilize the gateway's resources (e.g., bandwidth) due to a fragmentation issue, which stems from the discrete, coarse-grained choices (i.e., offloading levels) available on the IoT end nodes. Our proposed technique addresses this issue and utilizes the available resources of the gateway effectively. Experimental results show an average improvement of 1 hour (up to 1.5 hours) in the battery life of edge devices, while the utilization of the gateway's bandwidth increases by 40%.
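The fragmentation issue can be made concrete with a small model: each device exposes a few discrete offloading levels, each with a bandwidth demand and an energy saving, and one level must be chosen per device under the gateway's bandwidth cap. The sketch below uses illustrative data and a plain multiple-choice-knapsack dynamic program, not the paper's algorithm, to show how discrete levels can leave gateway bandwidth unused.

```python
# Each device offers discrete offloading levels: (bandwidth demand, energy saved).
# Choosing one level per device under a total bandwidth cap is a
# multiple-choice knapsack; leftover capacity that no remaining level fits
# into is the "fragmentation" the abstract refers to.
devices = [
    [(0, 0.0), (2, 1.0), (5, 2.5)],   # device A: local, partial, full offload
    [(0, 0.0), (3, 1.8), (6, 3.0)],   # device B: local, partial, full offload
]
BANDWIDTH = 7

def best_allocation(devices, cap):
    # dp[b] = max total energy saving using exactly b bandwidth units
    dp = [0.0] + [float("-inf")] * cap
    for levels in devices:
        new = [float("-inf")] * (cap + 1)
        for b in range(cap + 1):
            if dp[b] == float("-inf"):
                continue
            for demand, saving in levels:
                if b + demand <= cap:
                    new[b + demand] = max(new[b + demand], dp[b] + saving)
        dp = new
    return max(dp)

print(best_allocation(devices, BANDWIDTH))
# -> 3.0 (A stays local, B fully offloads; 1 bandwidth unit is left unused)
```

In this toy instance the optimum uses only 6 of the 7 available bandwidth units, and no remaining level fits in the leftover unit: exactly the kind of underutilization the proposed technique targets.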
The Internet of Things (IoT) envisions an infrastructure of ubiquitous networked smart devices offering advanced monitoring and control services. The current art in IoT architectures utilizes gateways to enable application-specific connectivity to IoT devices. In typical configurations, an IoT gateway is shared among several IoT devices. However, given the limited available bandwidth and processing capabilities of an IoT gateway, the quality of service (QoS) of the IoT devices must be adjusted over time not only to fulfill the needs of individual IoT device users, but also to accommodate the QoS needs of the other IoT devices sharing the same gateway. In this paper, we address the problem of QoS management for IoT devices under bandwidth, battery, and processing constraints. We first formulate the problem of resource-aware QoS tailored to the IoT paradigm and then propose an efficient problem decomposition that enables the adoption of a recurrent dynamic programming approach with reduced execution time overhead. We evaluate the efficiency of the proposed approach with a case study and through extensive experimentation over different IoT system configurations regarding the number and type of the employed IoT devices. Experiments show that our solution improves the overall QoS by 50% compared to an unsupervised system, while both systems meet the constraints.
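The paper's exact formulation is not reproduced in the abstract; a generic resource-aware QoS program consistent with the constraints it names might read as follows, where all symbols are assumptions introduced for illustration:

```latex
\max_{q_1,\dots,q_N}\; \sum_{i=1}^{N} u_i(q_i)
\quad \text{s.t.} \quad
\sum_{i=1}^{N} b_i(q_i) \le B, \qquad
\sum_{i=1}^{N} p_i(q_i) \le P, \qquad
e_i(q_i) \le E_i \;\; \forall i, \qquad
q_i \in \mathcal{Q}_i,
```

with $q_i$ the QoS level chosen for device $i$, $u_i$ its utility, $b_i$ and $p_i$ its gateway bandwidth and processing demands, $e_i$ its battery drain, and $\mathcal{Q}_i$ its discrete set of supported levels. The separable, per-device structure of such a program is what makes a dynamic-programming decomposition over devices natural.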
Healthcare is one of the most rapidly expanding application areas of Internet of Things (IoT) technology. IoT devices can be used to enable remote health monitoring of patients with chronic diseases such as cardiovascular disease (CVD). In this paper we develop an algorithm for ECG analysis and classification for heartbeat diagnosis, and we implement it on an IoT-based embedded platform. This algorithm is our proposal for a wearable ECG diagnosis device suitable for 24-hour continuous monitoring of the patient. We use the Discrete Wavelet Transform (DWT) for the ECG analysis and a Support Vector Machine (SVM) classifier. The best classification accuracy achieved is 98.9%, for a feature vector of size 18 and 2493 support vectors. Different implementations of the algorithm on the Galileo board demonstrate that the computational cost is low enough for the ECG analysis and classification to be performed in real time.
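A hedged sketch of the DWT-plus-SVM pipeline the abstract describes is shown below; the wavelet, decomposition level, per-subband statistics, and synthetic data are all illustrative assumptions, not the paper's configuration (which uses an 18-element feature vector).

```python
import numpy as np
import pywt                      # PyWavelets: pip install PyWavelets
from sklearn.svm import SVC

def dwt_features(beat, wavelet="db4", level=4):
    """Decompose one fixed-length ECG beat with the DWT and summarize each
    subband by (mean, standard deviation, energy). With level=4 this yields
    5 subbands x 3 statistics = 15 features; these choices are illustrative,
    not the paper's feature-extraction scheme."""
    feats = []
    for band in pywt.wavedec(beat, wavelet, level=level):
        feats += [band.mean(), band.std(), np.sum(band ** 2)]
    return np.array(feats)

# Synthetic stand-in for segmented, labeled heartbeats; in practice these
# would come from an annotated ECG database.
rng = np.random.default_rng(0)
beats = rng.standard_normal((200, 256))   # 200 beats, 256 samples each
labels = rng.integers(0, 2, size=200)     # two illustrative beat classes

X = np.array([dwt_features(b) for b in beats])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:5]))
```

Keeping the feature vector small and the number of support vectors bounded is what makes such a classifier tractable on a constrained embedded board like the Galileo.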