Originally developed for the replication of high-aspect-ratio LIGA structures, micro injection molding is on its way to becoming an established manufacturing process. Enhanced technological products such as micro-optical devices are entering the market. New developments, such as the various kinds of multi-component injection molding, open up opportunities both for increasing economic efficiency and for new fields of application. Software tools for simulating the thermal management of the molding tool and/or the mold-filling process itself can provide useful, though not wholly sufficient, assistance in optimizing micro injection molding.
Global machine learning force fields, with the capacity to capture collective interactions in molecular systems, currently scale up to a few dozen atoms due to the considerable growth of model complexity with system size. For larger molecules, locality assumptions are introduced, with the consequence that nonlocal interactions are not described. Here, we develop an exact iterative approach to train global symmetric gradient domain machine learning (sGDML) force fields (FFs) for several hundred atoms, without resorting to any potentially uncontrolled approximations. All atomic degrees of freedom remain correlated in the global sGDML FF, allowing the accurate description of complex molecules and materials that exhibit phenomena with far-reaching characteristic correlation lengths. We assess the accuracy and efficiency of sGDML on the newly developed MD22 benchmark dataset, containing molecules from 42 to 370 atoms. The robustness of our approach is demonstrated in nanosecond path-integral molecular dynamics simulations for supramolecular complexes in the MD22 dataset.
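The core computational step behind kernel-based FFs like sGDML is solving a linear system K @ alpha = y for the regression coefficients. The "exact iterative" idea in the abstract can be illustrated with a conjugate-gradient solve, which reaches the direct-solve answer to a chosen tolerance without ever factorizing the kernel matrix. This is a minimal sketch on synthetic data; the descriptors, kernel, and targets below are illustrative placeholders, not the paper's actual training pipeline.

```python
import numpy as np

def conjugate_gradient(K, y, tol=1e-8, max_iter=5000):
    """Iteratively solve K @ alpha = y for symmetric positive-definite K."""
    alpha = np.zeros_like(y)
    r = y - K @ alpha          # residual
    p = r.copy()               # search direction
    rs = r @ r
    for _ in range(max_iter):
        Kp = K @ p
        step = rs / (p @ Kp)
        alpha += step * p
        r -= step * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha

rng = np.random.default_rng(0)
X = rng.random((100, 3))                        # toy "descriptors"
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / 0.5) + 1e-2 * np.eye(len(X))   # RBF kernel + ridge
y = rng.random(len(X))                          # toy regression targets

alpha = conjugate_gradient(K, y)
assert np.allclose(K @ alpha, y, atol=1e-6)     # agrees with an exact solve
```

Because each iteration needs only matrix-vector products with K, this style of solver is what makes training on systems of several hundred atoms tractable compared to a dense O(n^3) factorization.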
Graph Neural Networks (GNNs) are a popular approach for predicting graph-structured data. As GNNs tightly entangle the input graph into the neural network structure, common explainable AI approaches are not applicable. To a large extent, GNNs have so far remained black boxes for the user. In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions, i.e. by identifying groups of edges that jointly contribute to the prediction. Practically, we find that such explanations can be extracted using a nested attribution scheme, where existing techniques such as layer-wise relevance propagation (LRP) can be applied at each step. The output is a collection of walks in the input graph that are relevant for the prediction. Our novel explanation method, which we denote by GNN-LRP, is applicable to a broad range of graph neural networks and lets us extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.

Index Terms: graph neural networks, higher-order explanations, layer-wise relevance propagation, explainable machine learning.

INTRODUCTION

Many interesting structures found in scientific and industrial applications can be expressed as graphs. Examples are lattices in fluid modeling, molecular geometry, biological interaction networks, or social/historical networks. Graph neural networks (GNNs) [1], [2] have been proposed as a method to learn from observations on general graph structures and have found use in an ever-growing number of applications [3]-[8]. While GNNs make useful predictions, they typically act as black boxes, and it has neither been directly possible (1) to extract novel insight from the learned model nor (2) to verify that the model has made the intended use of the graph structure, e.g.
that it has avoided Clever Hans phenomena [9].

Explainable AI (XAI) is an emerging research area that aims to extract interpretable insights from trained ML models [10], [11]. So far, research has focused, for example, on full black-box models [12], [13], self-explainable models [14], [15], or deep neural networks [16], where in all cases the prediction can be attributed to the input features. For a GNN, however, the graph being received as input is deeply
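The intuition behind walk-based explanations can be made concrete on a toy model. For a purely linear two-layer message-passing network, the prediction decomposes exactly into contributions of length-two walks (i, j, k) through the graph, which is the kind of higher-order attribution the abstract describes. This is a hedged sketch under strong simplifications (scalar node features, scalar weights, no nonlinearities); the function names are illustrative, not the paper's GNN-LRP implementation, which handles nonlinear layers via nested LRP rules.

```python
import numpy as np

def gnn_predict(A, x, w1, w2):
    """Toy linear 2-layer GNN with a sum readout over nodes."""
    h1 = A @ x * w1          # first message-passing layer
    h2 = A @ h1 * w2         # second message-passing layer
    return h2.sum()          # graph-level prediction

def walk_relevances(A, x, w1, w2):
    """Relevance of each length-2 walk i -> j -> k in the linear model."""
    n = len(x)
    R = np.zeros((n, n, n))
    for i in range(n):
        for j in range(n):
            for k in range(n):
                R[i, j, k] = w1 * w2 * A[j, i] * A[k, j] * x[i]
    return R

rng = np.random.default_rng(0)
n = 4
A = rng.random((n, n))       # toy weighted adjacency matrix
x = rng.random(n)            # toy scalar node features
w1, w2 = 0.7, -1.3

f = gnn_predict(A, x, w1, w2)
R = walk_relevances(A, x, w1, w2)
assert np.isclose(R.sum(), f)   # walk relevances sum to the prediction
```

The conservation property checked by the assertion (walk relevances summing exactly to the model output) is what makes such decompositions interpretable: sorting the entries of R surfaces the graph walks most responsible for the prediction.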