The emergence and spread of Internet of Things (IoT) technologies, together with the edge computing paradigm, has increased the computational load on sensor end-devices. These devices are now expected to provide high-level information instead of just raw sensor measurements. As a result, processing tasks must share processor time with communication tasks, and both may have strict timing constraints. In this work, we present an empirical study, from the edge computing perspective, of the process management carried out by an IoT Operating System (OS), showing the cross-influence between processing and communication tasks in end-devices. We conducted multiple tests in two real scenarios with a specific OS and a set of wireless protocols, varying the timing parameters of the processing and communication tasks as well as their assigned priority levels. The results demonstrate a close relationship between the characteristics of the processing tasks and the communication performance, especially when the processing computational load is high. They also show that the computational load is not the only factor responsible for communication performance degradation: the interplay between the timing parameters of the processing tasks and those of the communication protocols also plays a role. These conclusions should be taken into account in future OS and protocol developments.
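Since the abstract does not name the specific OS or tasks, the following is a purely illustrative sketch (hypothetical task names and parameters, not taken from the paper) of the kind of cross-influence described: a toy fixed-priority, preemptive scheduler simulation in which a heavy periodic processing task, given a higher priority, repeatedly pushes a lower-priority communication task past its deadline.

```python
# Hypothetical sketch (not the paper's unnamed OS): a toy fixed-priority,
# preemptive scheduler simulation showing how a heavy periodic processing
# task can delay a lower-priority communication task past its deadline.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    period_ms: int      # release period (also the implicit deadline)
    exec_ms: int        # execution time per job
    priority: int       # higher number = higher priority

def simulate(tasks, horizon_ms):
    """Discrete 1 ms simulation; returns deadline misses per task."""
    remaining = {t.name: 0 for t in tasks}   # work left in the current job
    misses = {t.name: 0 for t in tasks}
    for now in range(horizon_ms):
        for t in tasks:                       # release new jobs
            if now % t.period_ms == 0:
                if remaining[t.name] > 0:     # previous job still unfinished
                    misses[t.name] += 1
                remaining[t.name] = t.exec_ms
        ready = [t for t in tasks if remaining[t.name] > 0]
        if ready:                             # run highest-priority ready task
            running = max(ready, key=lambda t: t.priority)
            remaining[running.name] -= 1
    return misses

# Processing task given higher priority than the radio/communication task:
tasks = [Task("processing", period_ms=20, exec_ms=15, priority=2),
         Task("comm",       period_ms=10, exec_ms=4,  priority=1)]
print(simulate(tasks, horizon_ms=1000))   # the comm task misses many deadlines
```

In this sketch the combined utilization exceeds the available processor time, so the communication task accumulates deadline misses even though its own workload is light, mirroring the degradation discussed above.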
In recent years, wireless sensor networks (WSNs) have experienced significant growth as a fundamental part of the Internet of Things (IoT). WSN nodes constitute part of the end-devices present in the IoT, and in many cases IoT applications expect location data from these devices. For this reason, many localization algorithms for WSNs have been developed in recent years, although in most cases the reported results come from simulations that do not consider the resource constraints of the end-devices. Therefore, in this work we present an experimental evaluation of a received signal strength indicator (RSSI)-based localization algorithm implemented on IoT end-devices, comparing its results with those obtained from simulations. We implemented the fuzzy ring-overlapping range-free (FRORF) algorithm with some modifications to make its operation feasible on resource-constrained devices. Multiple tests were carried out to obtain localization accuracy data in three different scenarios, showing the difference between simulated and real results. While the overall behaviour is similar in simulations and in real tests, important differences can be observed in the quantitative accuracy results. In addition, the execution time of the algorithm running on the nodes has been evaluated: it ranges from less than 10 ms to more than 300 ms depending on the fuzzification level, which demonstrates the importance of evaluating localization algorithms on real nodes to prevent the introduction of overheads that resource-constrained nodes may not be able to afford.
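As an illustration of the ring-overlapping idea behind FRORF, the sketch below omits the fuzzification step and uses an assumed log-distance path-loss model with placeholder parameters (none of these values are taken from the paper): each anchor's RSSI reading is mapped to a distance ring, and the position estimate is the centroid of the grid points covered by all rings.

```python
# Simplified, non-fuzzy sketch of ring-overlapping localization (illustrative
# only; the actual FRORF algorithm fuzzifies the rings).
import numpy as np

def rssi_to_ring(rssi_dbm, p0_dbm=-40.0, n=2.7, margin=0.25):
    """Map an RSSI reading to a distance ring [d_min, d_max] using a
    log-distance path-loss model (p0_dbm: RSSI at 1 m, n: path-loss exponent).
    The margin widens the ring to absorb RSSI variability."""
    d = 10 ** ((p0_dbm - rssi_dbm) / (10 * n))
    return (1 - margin) * d, (1 + margin) * d

def locate(anchors, rssi_readings, area=(0, 0, 20, 20), step=0.25):
    """Centroid of the grid points falling inside every anchor's ring."""
    xmin, ymin, xmax, ymax = area
    xs, ys = np.meshgrid(np.arange(xmin, xmax, step),
                         np.arange(ymin, ymax, step))
    inside = np.ones_like(xs, dtype=bool)
    for (ax, ay), rssi in zip(anchors, rssi_readings):
        d_min, d_max = rssi_to_ring(rssi)
        dist = np.hypot(xs - ax, ys - ay)
        inside &= (dist >= d_min) & (dist <= d_max)
    if not inside.any():          # no overlap: fall back to the anchor centroid
        return (np.mean([a[0] for a in anchors]),
                np.mean([a[1] for a in anchors]))
    return xs[inside].mean(), ys[inside].mean()

anchors = [(0, 0), (20, 0), (10, 20)]
print(locate(anchors, rssi_readings=[-62.0, -65.0, -58.0]))
```

The grid step and ring margin directly trade accuracy against computation, which is exactly the kind of cost that grows quickly on resource-constrained nodes and motivates evaluating such algorithms on real hardware.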
The continuous increase in the number of mobile and Internet of Things (IoT) devices, as well as in the wireless data traffic they generate, represents an essential challenge in terms of spectral coexistence. As a result, these devices are now expected to make efficient and dynamic use of the spectrum by employing Cognitive Radio (CR) techniques. In this work, we focus on Automatic Modulation Classification (AMC). AMC is essential for carrying out multiple CR techniques, such as dynamic spectrum access, link adaptation and interference detection, aimed at improving communication throughput and reliability and, in turn, spectral efficiency. In recent years, multiple Deep Learning (DL) techniques have been proposed to address the AMC problem, demonstrating better generalization, scalability and robustness than previous solutions. However, most of these techniques require high processing and storage capabilities that limit their applicability to energy- and computation-constrained end-devices. We propose a new gated recurrent unit neural network solution for AMC that has been specifically designed for resource-constrained IoT devices, trained and tested with over-the-air measurements of real radio signals. Our results show that the proposed solution has a memory footprint of 73.5 kBytes, 51.74% less than the reference model, and achieves a classification accuracy of 92.4%.
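The abstract does not detail the network architecture, so the following Keras sketch is only a generic example of a compact two-layer GRU classifier over raw I/Q sequences; the sequence length, layer sizes and number of modulation classes are placeholders, not the paper's values. As a side note, the reported figures imply a reference-model footprint of roughly 73.5 / (1 - 0.5174) ≈ 152 kBytes.

```python
# Illustrative-only sketch of a compact GRU classifier for AMC over raw I/Q
# samples (the paper's exact architecture, layer sizes and class count are
# not given in the abstract; all values below are placeholders).
import tensorflow as tf

def build_amc_gru(seq_len=128, num_classes=11, units=64):
    # Input: seq_len complex samples represented as (I, Q) pairs.
    inputs = tf.keras.Input(shape=(seq_len, 2))
    x = tf.keras.layers.GRU(units, return_sequences=True)(inputs)
    x = tf.keras.layers.GRU(units)(x)
    outputs = tf.keras.layers.Dense(num_classes, activation="softmax")(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_amc_gru()
model.summary()   # the parameter count drives the on-device memory footprint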
The increase in the number of mobile and Internet of Things (IoT) devices, along with the demands of new applications and services, represents an important challenge in terms of spectral coexistence. As a result, these devices are now expected to make efficient and dynamic use of the spectrum and to provide processed information instead of simple raw sensor measurements. These communication and processing requirements have direct implications for the architecture of the systems. In this work, we present MIGOU, a wireless experimental platform designed to address these challenges from the perspective of resource-constrained devices, such as wireless sensor nodes or IoT end-devices. At the radio level, the platform can operate both as a software-defined radio and as a traditional highly integrated radio transceiver, which demands fewer node resources. For processing tasks, it relies on a system-on-chip that integrates an ARM Cortex-M3 processor and a flash-based FPGA fabric to which high-speed processing tasks can be offloaded. The power consumption of the platform has been measured in the different modes of operation, and these hardware features and power measurements have been compared with those of other representative platforms. The results confirm that a state-of-the-art trade-off between hardware flexibility and energy efficiency has been achieved. These characteristics will allow appropriate solutions to current end-device challenges to be developed and tested in real scenarios.
Wireless Sensor Networks (WSNs) are a growing research area as a large number of portable devices are being developed. This makes operating systems (OSs) useful for homogenizing the development of these devices, reducing design times, and providing tools for developing complex applications. This work presents an operating system scheduler for resource-constrained wireless devices that adapts task scheduling in changing environments. The proposed adaptive scheduler allows the execution of low-priority tasks to be dynamically delayed while maintaining real-time capabilities for high-priority ones. The scheduler is therefore useful in nodes with rechargeable batteries, as it reduces their energy consumption when the battery level is low by delaying the least critical tasks. The adaptive scheduler has been implemented and tested in real nodes, and the results show that node lifetime can be increased by up to 70% in some scenarios, at the expense of increased latency for low-priority tasks.
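A minimal sketch of the adaptive policy described above, assuming a simple linear stretching of low-priority task periods with the battery level (the concrete scaling law, task set and OS API used in the paper are not given in the abstract, so all names and values here are hypothetical):

```python
# Hedged sketch of the adaptive policy: low-priority tasks get their periods
# stretched as the battery drains, while high-priority (real-time) tasks keep
# their nominal periods. Illustrative only; not the paper's actual scheduler.
HIGH, LOW = 1, 0

def effective_period(nominal_period_ms, priority, battery_level, max_stretch=4.0):
    """Return the period to use for the task's next release.
    battery_level is in [0.0, 1.0]; only LOW-priority tasks are stretched."""
    if priority == HIGH:
        return nominal_period_ms              # real-time behaviour preserved
    stretch = 1.0 + (max_stretch - 1.0) * (1.0 - battery_level)
    return nominal_period_ms * stretch

# Example: a low-priority logging task with a 1 s nominal period.
for level in (1.0, 0.5, 0.1):
    print(level, effective_period(1000, LOW, level))
# -> 1000 ms at full battery, 2500 ms at 50%, 3700 ms at 10%
```

Delaying only the least critical tasks in this way trades their latency for energy, which is consistent with the reported lifetime gains at the cost of increased low-priority latency.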