2020
DOI: 10.1016/j.cnsns.2020.105205

Learning time-stepping by nonlinear dimensionality reduction to predict magnetization dynamics

Abstract: We establish a time-stepping learning algorithm and apply it to predict the solution of the partial differential equation of motion in micromagnetism as a dynamical system depending on the external field as a parameter. The data-driven approach is based on nonlinear model order reduction using kernel methods for unsupervised learning, yielding a predictor for the magnetization dynamics without any need for field evaluations after a data generation and training phase carried out as precomputation. Magnetization states fr…

Cited by 12 publications (17 citation statements)
References 18 publications
“…The weights and biases are the learnable parameters of the networks and are determined during training by minimizing the sum of the squared residuals at collocation points. During training of the neural networks (17) and (18), the weights and biases are adjusted so that A^(in)_approx is an approximate solution of equation (8), A^(out)_approx is an approximate solution of equation (9), and both fulfill the interface conditions (12) and (13). The magnetic flux should decay to zero as |x| approaches infinity.…”
Section: Collocation Based Magnetostatics (mentioning)
confidence: 99%
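A minimal, self-contained sketch of this collocation-style training loop is given below. It uses a toy 1D Poisson problem in PyTorch; the networks (17)/(18), equations (8)/(9), and interface conditions (12)/(13) of the cited work are not reproduced here, so every name in the snippet is illustrative only.

# Hypothetical sketch of collocation-based training: a small network u(x) is
# fitted to u''(x) = f(x) on (0, 1) with u(0) = u(1) = 0 by minimizing the
# mean squared PDE residual at collocation points plus a boundary penalty.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)  # exact solution: u = sin(pi x)

x_col = torch.rand(200, 1, requires_grad=True)   # interior collocation points
x_bnd = torch.tensor([[0.0], [1.0]])             # boundary points

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    u = net(x_col)
    du = torch.autograd.grad(u.sum(), x_col, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x_col, create_graph=True)[0]
    loss_pde = ((d2u - f(x_col)) ** 2).mean()    # squared PDE residual at collocation points
    loss_bc = (net(x_bnd) ** 2).mean()           # squared boundary-condition residual
    loss = loss_pde + loss_bc
    opt.zero_grad()
    loss.backward()
    opt.step()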
“…Traditionally, the numerical solution of (inverse) magnetostatic and micromagnetic problems relies on the finite difference or finite element discretization of the underlying partial differential equations. For the fast estimation of magnetostatic fields in motors [11] or the magnetic response of magnetic sensor elements, neural networks [11,12] or kernel methods [13] have been applied. In order to train the machine learning models, conventional numerical solvers are used to generate the training data by varying geometry, external loads, or time.…”
Section: Introduction (mentioning)
confidence: 99%
“…With this approach they can predict the magnetization dynamics of thin film elements for previously unseen external fields. As an alternative to neural networks, kernel methods have been used to learn the solution of the Landau-Lifshitz-Gilbert equation in latent space [21,22]. In this work we present a neural network based methodology to predict the demagnetization curve of nanocrystalline permanent magnets from the microstructure.…”
Section: Introduction (mentioning)
confidence: 99%
“…However, in many applications, such as electronic circuit design and real-time process control, the response to a magnetic field needs to be computed quickly. Recently, data-driven nonlinear reduced-order approaches based on machine learning (ML) have been developed to predict the micromagnetic dynamics as a function of the external field [11,6,5]. The common idea is to transform the high-dimensional training magnetization states, obtained from simulation results for different field strengths and angles, into a feature space where lower-dimensional (approximate) representations exist, and then to learn the dynamics with respect to the few latent variables, see Fig.…”
Section: Introduction (mentioning)
confidence: 99%
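As a rough formalization of this reduce-then-predict idea (the symbols φ, g_θ, and h_ext below are illustrative and not taken from the cited works), the scheme can be summarized as

\[
z_t = \phi(m_t) \in \mathbb{R}^r \ (r \ll \dim m_t),
\qquad
z_{t+1} \approx g_\theta\bigl(z_t; h_{\mathrm{ext}}\bigr),
\qquad
m_{t+1} \approx \phi^{\dagger}(z_{t+1}),
\]

where \(\phi\) is the nonlinear reduction map, \(g_\theta\) the learned latent time stepper parameterized by the external field \(h_{\mathrm{ext}}\), and \(\phi^{\dagger}\) an approximate inverse (pre-image) map back to magnetization space.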
“…The main motivation for the kernel-based methods introduced in [6,5] is to construct a time-stepping predictor on the basis of a non-black-box nonlinear dimensionality reduction with explicit training solutions (in contrast to the extensive optimization needed in deep neural networks). Kernel principal component analysis (kPCA) [16] reduces the feature space dimension, while (kernel ridge) regression models the time evolution in a ν-step scheme on the level of the reduced magnetization representations.…”
Section: Introduction (mentioning)
confidence: 99%
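A minimal sketch of such a pipeline follows, assuming scikit-learn's KernelPCA and KernelRidge as stand-ins for the kPCA and kernel ridge components. The array names, kernel choices, and toy data are hypothetical, and the dependence on the external field (which would enter as additional regression inputs) is omitted.

# Illustrative sketch only: kernel PCA compresses the snapshots, kernel ridge
# regression advances the reduced coordinates in a nu-step scheme, and the
# kPCA pre-image map lifts the predictions back to full magnetization space.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
X = rng.standard_normal((500, 3000))   # placeholder for magnetization snapshots m_t
nu, r = 3, 10                          # history length (nu-step scheme) and latent dimension

# 1) Nonlinear dimensionality reduction with an approximate inverse (pre-image) map.
kpca = KernelPCA(n_components=r, kernel="rbf", gamma=1e-3,
                 fit_inverse_transform=True)
Z = kpca.fit_transform(X)              # latent trajectory z_t

# 2) Kernel ridge regression as a nu-step time stepper in latent space:
#    predict z_{t+1} from the concatenated history (z_{t-nu+1}, ..., z_t).
H = np.hstack([Z[i:len(Z) - nu + i] for i in range(nu)])   # stacked histories
Y = Z[nu:]                                                  # next latent states
krr = KernelRidge(kernel="rbf", alpha=1e-6, gamma=1e-2).fit(H, Y)

# 3) Roll the learned predictor forward and map back to magnetization space.
hist = list(Z[:nu])
for _ in range(20):
    z_next = krr.predict(np.hstack(hist[-nu:])[None, :])[0]
    hist.append(z_next)
m_pred = kpca.inverse_transform(np.asarray(hist))           # approximate pre-images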