Computation offloading enables computationally intensive tasks in edge computing to be distributed across multiple server computing resources, overcoming hardware limitations. Deep learning derives inference models by training on large volumes of data with sufficient computing resources, and deploying domain-specific inference models in edge computing provides intelligent services close to the network edge. In this paper, we propose intelligent edge computing that provides a dynamic inference approach for building environment control. The dynamic inference approach is based on a rules engine deployed on the edge gateway, which selects an inference function according to the triggered rule. The edge gateway is deployed at the entry of the network edge and provides comprehensive functions, including device management, device proxy, client service, intelligent service, and a rules engine. These functions are provided by microservice provider modules that enable flexible, extensible, and lightweight offloading of domain-specific solutions to the edge gateway. Additionally, the intelligent services can be updated by offloading a microservice provider module together with its inference models. Using the rules engine, the edge gateway then operates an intelligent scenario based on the deployed rule profile by requesting an inference model from the intelligent service provider. The inference models are derived by training deep learning models on building-user data using the edge server, which provides a high-performance computing resource. The intelligent service provider packages the inference models and provides intelligent functions in the edge gateway on constrained hardware resources based on microservices. Moreover, to bridge the Internet of Things (IoT) device network to the Internet, the gateway provides device management and proxy functions that enable web clients to access the devices.
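As an illustration of the rules-engine-driven selection described above, the following Python sketch shows how a triggered rule could map to an inference function on the edge gateway. All names (RulesEngine, register_rule, the comfort_control rule, and the placeholder inference function) are hypothetical and not taken from the paper; a deployed system would invoke the actual inference models of the intelligent service provider instead of the stand-in lambda.

```python
# Minimal sketch (hypothetical names): a rules engine that selects an inference
# function on an edge gateway when a rule in the deployed rule profile is triggered.
from typing import Any, Callable, Dict


class RulesEngine:
    """Maps triggered rules from a rule profile to inference functions."""

    def __init__(self) -> None:
        self._rules: Dict[str, Dict[str, Any]] = {}

    def register_rule(self, rule_id: str,
                      condition: Callable[[Dict[str, float]], bool],
                      inference_fn: Callable[[Dict[str, float]], float]) -> None:
        # Each rule pairs a trigger condition with the inference function to invoke.
        self._rules[rule_id] = {"condition": condition, "inference": inference_fn}

    def evaluate(self, sensor_data: Dict[str, float]) -> Dict[str, float]:
        # Run every rule whose condition matches the current sensor readings
        # and collect the results of the selected inference functions.
        results: Dict[str, float] = {}
        for rule_id, rule in self._rules.items():
            if rule["condition"](sensor_data):
                results[rule_id] = rule["inference"](sensor_data)
        return results


# Example: a comfort rule that requests a (placeholder) temperature-control inference
# whenever the measured temperature drifts from the set point.
engine = RulesEngine()
engine.register_rule(
    "comfort_control",
    condition=lambda d: abs(d["temperature"] - d["setpoint"]) > 1.0,
    inference_fn=lambda d: d["setpoint"] - d["temperature"],  # stands in for a model call
)
print(engine.evaluate({"temperature": 18.5, "setpoint": 22.0}))
```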
Computation offloading enables computationally intensive tasks to be distributed across multiple computing resources to overcome hardware limitations. By leveraging cloud computing, edge computing can apply not only large-scale, personalized data but also intelligent algorithms, offloading the intelligent models to high-performance servers that work with huge volumes of data in the cloud. In this paper, we propose a gateway-centric Internet of Things (IoT) system that enables intelligent and autonomous operation of IoT devices in edge computing. In the proposed edge computing, IoT devices are operated by a decision-making model that selects an optimal control factor from multiple intelligent services and applies it to the device. The intelligent services are provided by offloading multiple intelligent and optimization approaches to the intelligent service engine in the cloud, so the decision-making model in the gateway can select the best solution from the candidates. The proposed IoT system also provides monitoring and visualization to users through device management based on resource virtualization in the gateway. Furthermore, the gateway interprets scenario profiles to interact with the intelligent services dynamically and applies the optimal control factor to the actual device through the virtual resource. To implement improved energy optimization with the proposed IoT system, we propose two intelligent models that learn the parameters of a user's residential environment using deep learning and derive inference models to deploy in the intelligent service engine. The inference models predict the heater's energy consumption, which is applied to the heater; the heater then updates the environment parameters to reach the user-desired values. Moreover, given the two predicted energy consumption values, the decision-making model selects the smaller one to operate the heater, reducing energy consumption while providing a user-desired environment.
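The decision step described above can be pictured with a small Python sketch that compares the energy consumption predicted by two candidate services and keeps the smaller value. The function names, the linear stand-in models, and the environment keys (current_temp, target_temp) are assumptions for illustration only; in the paper the candidates are deep-learning inference models hosted in the cloud-side intelligent service engine.

```python
# Minimal sketch (hypothetical interfaces): the gateway-side decision-making step
# that queries candidate intelligent services for predicted heater energy
# consumption and keeps the smallest predicted value.
from typing import Callable, Dict, List


def decide_heater_control(env: Dict[str, float],
                          services: List[Callable[[Dict[str, float]], float]]) -> float:
    """Return the lowest predicted energy consumption among candidate services."""
    predictions = [service(env) for service in services]  # one prediction per inference model
    return min(predictions)


# Stand-ins for two inference models deployed in the intelligent service engine;
# the real models would be deep-learning predictors trained on residential data.
model_a = lambda env: 1.8 * (env["target_temp"] - env["current_temp"])
model_b = lambda env: 2.1 * (env["target_temp"] - env["current_temp"]) - 0.5

env_state = {"current_temp": 19.0, "target_temp": 23.0}
print(decide_heater_control(env_state, [model_a, model_b]))  # smaller predicted consumption
```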
With the continuous growth in the development and use of artificial intelligence systems and applications, problems caused by unexpected operations and errors of artificial intelligence systems have emerged. In particular, trust analysis and management technology for artificial intelligence systems is becoming increasingly important so that users who wish to adopt artificial intelligence systems can predict their behavior and use the services safely. This study proposes trust management requirements for artificial intelligence and a trust management framework based on them. Furthermore, we present standardization challenges so that trust management technology can be applied to and spread across actual artificial intelligence systems. In this paper, we aim to stimulate related standardization activities toward a globally acceptable methodology for supporting trust management for artificial intelligence, while emphasizing the challenges to be addressed in the future from a standardization perspective.
The Internet of Things (IoT) rapidly increases the number of connected devices built on heterogeneous technologies such as platforms, frameworks, libraries, protocols, and standard specifications. On top of these connected devices, various applications can be developed by integrating domain-specific contents through service composition to provide improved services. Managing the information about devices, contents, and composite objects is necessary to represent physical objects on the Internet and to access IoT services transparently. In this paper, we propose an integrated service composition approach based on multiple service providers that delivers improved IoT services by combining various service objects across heterogeneous IoT networks. In the proposed IoT architecture, each service provider exposes web services through Representational State Transfer (REST) Application Programming Interfaces (APIs) that deliver information to clients as well as to other providers, which integrate the information to provide new services. Through these REST APIs, the integration management provider combines the service results of the IoT service provider with other contents to provide improved services. Moreover, an interworking proxy is proposed to bridge heterogeneous IoT networks and enable transparent access to the integrated services by providing protocol translation at the entry of the device networks. The interworking proxy is therefore deployed between the IoT service provider and the device networks, enabling clients to access heterogeneous IoT devices transparently through the composite services.
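As a rough sketch of the REST-based composition described above, the snippet below shows an integration management provider fetching a result from an IoT service provider and merging it with content from another provider. The endpoint URLs, JSON fields, and composition logic are hypothetical placeholders; only the pattern of issuing HTTP GET requests against provider REST APIs and combining the responses reflects the architecture in the abstract.

```python
# Minimal sketch (hypothetical endpoints): an integration management provider
# that fetches a result from an IoT service provider over its REST API and
# combines it with content from another provider into a composite response.
import requests

IOT_SERVICE_URL = "http://iot-provider.example/api/devices/42/temperature"   # assumed endpoint
CONTENT_SERVICE_URL = "http://content-provider.example/api/weather/current"  # assumed endpoint


def compose_service() -> dict:
    """Combine the IoT service result with external content into one response."""
    device_reading = requests.get(IOT_SERVICE_URL, timeout=5).json()
    weather = requests.get(CONTENT_SERVICE_URL, timeout=5).json()
    indoor = device_reading.get("value")
    outdoor = weather.get("temperature")
    # The composite object exposed to clients through the integration provider's own REST API.
    return {
        "indoor_temperature": indoor,
        "outdoor_temperature": outdoor,
        "recommendation": "ventilate" if indoor is not None and outdoor is not None and outdoor > indoor
                          else "keep_closed",
    }
```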