Abstract: Machine learning algorithms are designed to resolve unknown behaviours by extracting commonalities over massive datasets. Unfortunately, learning such global behaviours can be inaccurate and slow for systems composed of heterogeneous elements that behave very differently, as is the case, for instance, for cyber-physical systems and Internet of Things applications. Instead, to make smart decisions, such systems have to continuously refine their behaviour on a per-element basis and compose these small learning units together. However, combining and composing learned behaviours from different elements is challenging and requires domain knowledge. Therefore, there is a need to structure and combine the learned behaviours and the domain knowledge in a flexible way. In this paper, we propose to weave machine learning into domain modeling. More specifically, we suggest decomposing machine learning into reusable, chainable, and independently computable small learning units, which we refer to as micro learning units. These micro learning units are modeled together with, and at the same level as, the domain data. We show, based on a smart grid case study, that our approach can be significantly more accurate than learning a global behaviour, while its performance remains fast enough for live learning.
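To make the idea of per-element micro learning units more concrete, the following minimal Java sketch attaches a small online learner to each domain element instead of training one global model. All names (MicroLearningUnit, SmartMeter, onMeasurement) and the simple stochastic-gradient regression are illustrative assumptions, not the paper's actual API.

```java
// Hypothetical sketch: a per-element "micro learning unit" modeled next to the
// domain data, here an online linear regression for a single smart meter.
import java.util.HashMap;
import java.util.Map;

class MicroLearningUnit {
    // Online linear regression y = a*x + b, updated one sample at a time.
    private double a = 0.0, b = 0.0;
    private final double learningRate = 1e-3;

    void learn(double x, double y) {
        double error = predict(x) - y;
        a -= learningRate * error * x;
        b -= learningRate * error;
    }

    double predict(double x) { return a * x + b; }
}

class SmartMeter {
    final String id;
    // The learning unit lives together with, and at the same level as, the domain data.
    final MicroLearningUnit consumptionProfile = new MicroLearningUnit();

    SmartMeter(String id) { this.id = id; }

    void onMeasurement(double hourOfDay, double kw) {
        consumptionProfile.learn(hourOfDay, kw);
    }
}

public class MicroLearningDemo {
    public static void main(String[] args) {
        Map<String, SmartMeter> meters = new HashMap<>();
        meters.put("meter-1", new SmartMeter("meter-1"));
        // Each element refines its own behaviour instead of one global model.
        for (int day = 0; day < 200; day++) {
            for (int h = 0; h < 24; h++) {
                meters.get("meter-1").onMeasurement(h, 0.2 * h + 1.0);
            }
        }
        System.out.println(meters.get("meter-1").consumptionProfile.predict(12.0));
    }
}
```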
Over the last few years, data analytics has shifted from a descriptive era, confined to the explanation of past events, to the emergence of predictive techniques. Nonetheless, existing predictive techniques still fail to effectively explore alternative futures, which continuously diverge from the current situation when exploring the effects of what-if decisions. Enabling prescriptive analytics therefore calls for the design of scalable systems that can cope with the complexity and the diversity of the underlying data models. In this article, we address this challenge by combining graphs and time series within a scalable storage system that can organize massive amounts of unstructured and continuously changing data into multi-dimensional data models, called Many-Worlds Graphs. We demonstrate that our open-source implementation, GreyCat, can efficiently fork and update thousands of parallel worlds composed of millions of timestamped nodes, as required by use cases such as what-if exploration.
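The fork-and-update behaviour of parallel worlds can be illustrated with a minimal copy-on-write sketch in Java: a forked world stores only its own modifications and falls back to its parent for everything else. The World class and its methods are hypothetical simplifications and do not reflect GreyCat's real API.

```java
// Hypothetical sketch of a many-worlds graph: each world overlays its own
// changes on a parent world, so forking is cheap and reads fall back to the
// parent whenever a node was not modified in the child.
import java.util.HashMap;
import java.util.Map;

class World {
    private final World parent;
    // node id + timestamp -> value, only for nodes written in *this* world
    private final Map<String, Double> diff = new HashMap<>();

    World(World parent) { this.parent = parent; }

    World fork() { return new World(this); }

    void set(long nodeId, long time, double value) {
        diff.put(nodeId + "@" + time, value);
    }

    Double get(long nodeId, long time) {
        Double v = diff.get(nodeId + "@" + time);
        if (v != null) return v;
        return parent == null ? null : parent.get(nodeId, time);
    }
}

public class WhatIfDemo {
    public static void main(String[] args) {
        World base = new World(null);
        base.set(42L, 0L, 230.0);           // measured value in the base world

        World whatIf = base.fork();         // explore an alternative future
        whatIf.set(42L, 1L, 260.0);         // hypothetical update, base untouched

        System.out.println(base.get(42L, 0L));   // 230.0
        System.out.println(whatIf.get(42L, 0L)); // 230.0 (inherited from the parent)
        System.out.println(whatIf.get(42L, 1L)); // 260.0 (only visible in the fork)
        System.out.println(base.get(42L, 1L));   // null
    }
}
```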
Abstract: Internet of Things applications analyze our past habits through sensor measures to anticipate future trends. To yield accurate predictions, intelligent systems rely not only on single numerical values, but also on structured models aggregated from different sensors. The discretization of observable data into timed events can easily lead to millions of values. Time series and similar database structures can efficiently index the raw data, but quickly reach computation and storage limits when it comes to structuring and processing IoT data. We propose a concept of continuous models that can handle highly volatile IoT data by defining a new type of meta attribute, which represents the continuous nature of IoT data. On top of traditional discrete object-oriented modeling APIs, we enable models to represent very large sequences of sensor values using mathematical polynomials. We show on various IoT datasets that this significantly improves storage and reasoning efficiency.
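A minimal Java sketch of such a continuous attribute is given below: raw samples are folded into piecewise polynomial segments (degree 1 here for brevity, whereas the approach allows higher degrees), and a new segment is opened only when the current polynomial would exceed a given error tolerance. The class names and the greedy segmentation rule are illustrative assumptions.

```java
// Hypothetical sketch of a continuous attribute: sensor samples are compressed
// into piecewise polynomial segments instead of being stored individually.
import java.util.ArrayList;
import java.util.List;

class Segment {
    final long startTime;
    final double slope, offset;    // value(t) = offset + slope * (t - startTime)
    Segment(long startTime, double slope, double offset) {
        this.startTime = startTime;
        this.slope = slope;
        this.offset = offset;
    }
    double value(long t) { return offset + slope * (t - startTime); }
}

class ContinuousAttribute {
    private final double tolerance;
    final List<Segment> segments = new ArrayList<>();
    private Segment open = null;              // segment currently being extended
    private boolean hasSecondPoint = false;

    ContinuousAttribute(double tolerance) { this.tolerance = tolerance; }

    void insert(long t, double v) {
        if (open == null) {                   // start a fresh segment
            open = new Segment(t, 0.0, v);
            hasSecondPoint = false;
            segments.add(open);
        } else if (!hasSecondPoint) {         // second point fixes the slope
            open = new Segment(open.startTime, (v - open.offset) / (t - open.startTime), open.offset);
            segments.set(segments.size() - 1, open);
            hasSecondPoint = true;
        } else if (Math.abs(open.value(t) - v) > tolerance) {
            open = null;                      // close and start a new segment at t
            insert(t, v);
        }                                     // else: sample already represented
    }

    double read(long t) {                     // continuous read at any timestamp
        Segment best = segments.get(0);
        for (Segment s : segments) {
            if (s.startTime <= t) best = s;
        }
        return best.value(t);
    }
}

public class ContinuousDemo {
    public static void main(String[] args) {
        ContinuousAttribute temperature = new ContinuousAttribute(0.5);
        for (long t = 0; t < 1000; t++) {
            temperature.insert(t, 20.0 + 0.01 * t);   // slowly rising signal
        }
        // 1000 raw samples collapse into a handful of polynomial segments.
        System.out.println(temperature.segments.size() + " segment(s)");
        System.out.println(temperature.read(500));    // ~25.0
    }
}
```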
Abstract: The models@run.time paradigm promotes the use of models during the execution of cyber-physical systems to represent their context and to reason about their runtime behaviour. However, current modeling techniques cannot cope at the same time with the large-scale, distributed, and constantly changing nature of these systems. In this paper, we introduce a distributed models@run.time approach that combines ideas from reactive programming, peer-to-peer distribution, and large-scale models@run.time. We define distributed models as observable streams of chunks that are exchanged between nodes in a peer-to-peer manner. A lazy loading strategy provides transparent access to the complete virtual model from every node, although the chunks are actually distributed across nodes. Observers and automatic reloading of chunks enable a reactive programming style. We integrated our approach into the Kevoree Modeling Framework and demonstrate that it enables frequently changing, reactive distributed models that can scale to millions of elements and several thousand nodes.
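The chunk-based distribution can be sketched as follows in Java: a node resolves chunks lazily through a resolver that stands in for the peer-to-peer lookup, and observers are notified when a chunk is reloaded with a newer version. The types shown (Chunk, LazyChunkSpace, ChunkObserver) are hypothetical and not the Kevoree Modeling Framework API.

```java
// Hypothetical sketch: lazy loading of distributed model chunks plus observer
// notifications for a reactive programming style.
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

class Chunk {
    final long id;
    final String payload;
    Chunk(long id, String payload) { this.id = id; this.payload = payload; }
}

interface ChunkObserver {
    void onChunkChanged(Chunk chunk);
}

class LazyChunkSpace {
    private final Map<Long, Chunk> local = new HashMap<>();
    private final List<ChunkObserver> observers = new ArrayList<>();
    private final Function<Long, Chunk> remoteResolver;  // stands in for peer-to-peer lookup

    LazyChunkSpace(Function<Long, Chunk> remoteResolver) {
        this.remoteResolver = remoteResolver;
    }

    void observe(ChunkObserver o) { observers.add(o); }

    // Transparent access: fetch the chunk from a remote peer on first use only.
    Chunk resolve(long id) {
        return local.computeIfAbsent(id, remoteResolver);
    }

    // A peer pushed a newer version: replace it and trigger reactive updates.
    void reload(Chunk updated) {
        local.put(updated.id, updated);
        for (ChunkObserver o : observers) o.onChunkChanged(updated);
    }
}

public class DistributedModelDemo {
    public static void main(String[] args) {
        LazyChunkSpace space = new LazyChunkSpace(id -> new Chunk(id, "remote-" + id));
        space.observe(c -> System.out.println("changed: " + c.id + " -> " + c.payload));

        System.out.println(space.resolve(7L).payload);   // lazily fetched
        space.reload(new Chunk(7L, "v2"));               // reactive notification
    }
}
```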
Micro-generation and future grid usages, such as the charging of electric cars, raise major challenges for monitoring the electric load in low-voltage cables. Due to the highly interconnected nature of the grid, real-time measurements are problematic, both economically and technically. This entails an overload risk in electricity networks when cables must be disconnected for maintenance reasons or are accidentally damaged. Therefore, it is of great interest for electricity grid providers to anticipate the load in networks and to detect failures more quickly. However, computing the electric load in cables requires computationally intensive power-flow calculations and live consumption measurements. Today's view of the grid is usually based on on-field documentation of cables, fuses, and measurements taken by technicians, and is therefore often outdated. Thus, the electric load is usually only simulated in case of major topology variations. However, live measurements from smart meters provide new opportunities. In this paper, we present a novel approach for near-time electric load approximation that derives the current electric topology and cable loads live from smart meter data. We leverage the models@run.time paradigm to combine live measurements with topology characteristics of the grid. Our approach makes it possible to approximate the load in cables, not only for the current grid topology, but also to simulate topology changes for maintenance purposes. We show that this allows a near real-time approximation while remaining very accurate (average deviation of 1.89% compared to offline power-flow calculation tools). Developed with a grid operator, this approach will be integrated into a monitoring and warning system and as an embeddable solution for on-field simulation.
CCS Concepts: • Computer systems organization → Embedded and cyber-physical systems; • Software and its engineering → Model-driven software engineering; Real-time systems software; Abstraction, modeling and modularity.
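As a strongly simplified illustration of combining live smart-meter readings with the grid topology, the Java sketch below approximates the load on a cable by aggregating the consumption of everything reachable downstream of it, and lets a cable be disconnected for a what-if maintenance scenario. The real approach relies on power-flow characteristics; the classes and the aggregation rule here are illustrative assumptions only.

```java
// Hypothetical, simplified sketch: a live topology model in which cable load
// is approximated by summing downstream smart-meter readings.
import java.util.ArrayList;
import java.util.List;

class Meter {
    double currentKw;
    Meter(double currentKw) { this.currentKw = currentKw; }
}

class Cabinet {
    final List<Meter> meters = new ArrayList<>();
    final List<Cable> outgoing = new ArrayList<>();
}

class Cable {
    final String id;
    final Cabinet downstream;
    boolean connected = true;     // toggled for maintenance what-if scenarios
    Cable(String id, Cabinet downstream) { this.id = id; this.downstream = downstream; }

    // Approximate load: all downstream consumption reachable through connected cables.
    double approximateLoadKw() {
        if (!connected) return 0.0;
        double load = 0.0;
        for (Meter m : downstream.meters) load += m.currentKw;
        for (Cable c : downstream.outgoing) load += c.approximateLoadKw();
        return load;
    }
}

public class GridLoadDemo {
    public static void main(String[] args) {
        Cabinet street = new Cabinet();
        street.meters.add(new Meter(1.2));
        street.meters.add(new Meter(0.8));

        Cabinet substationSide = new Cabinet();
        substationSide.outgoing.add(new Cable("branch-1", street));
        Cable feeder = new Cable("feeder-1", substationSide);

        System.out.println(feeder.approximateLoadKw());  // 2.0 kW

        // Simulate a maintenance disconnection and re-evaluate the load.
        substationSide.outgoing.get(0).connected = false;
        System.out.println(feeder.approximateLoadKw());  // 0.0 kW
    }
}
```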