“…Once the low-dimensional space has been found, the snapshots are projected onto this space, and the resulting reduced variables (either POD coefficients or latent variables of an autoencoder) can be used to train a neural network, which attempts to learn the evolution of the reduced variables in time (and/or their dependence on a set of parameters). From the references in this paper alone, many examples exist of machine-learning models having been used to learn the evolution of time-series data, for example, multi-layer perceptrons [12,13,40,41,43,54–60], Gaussian process regression [11,45,61–63] and long short-term memory networks [31,32,34,35,38,51,64]. When using these models to predict forward in time, if the reduced variables stray outside the range of values encountered during training, the model can produce unphysical, divergent results [39,51,52,64,65].…”
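To make the workflow concrete, the following is a minimal sketch (not drawn from any of the cited works) of the POD-plus-network approach described above: snapshots are compressed via a truncated SVD, a small multi-layer perceptron learns the one-step map between reduced states, and the network is then rolled forward in time. The synthetic snapshot data, mode count, and network size are all illustrative assumptions.

```python
# Minimal sketch: POD projection of snapshots + MLP time-stepping
# in the reduced space. All data and hyperparameters are illustrative.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic snapshot matrix: each column is the full-order state at one time.
n_dof, n_steps = 500, 200
t = np.linspace(0, 4 * np.pi, n_steps)
modes_true = rng.standard_normal((n_dof, 2))
snapshots = modes_true @ np.vstack([np.sin(t), np.cos(2 * t)])

# POD: truncated SVD of the mean-centred snapshot matrix.
mean = snapshots.mean(axis=1, keepdims=True)
U, S, _ = np.linalg.svd(snapshots - mean, full_matrices=False)
n_modes = 2
basis = U[:, :n_modes]                 # POD basis vectors (columns)
z = basis.T @ (snapshots - mean)       # reduced variables, shape (n_modes, n_steps)

# Train an MLP to advance the reduced variables by one time step: z_t -> z_{t+1}.
X, y = z[:, :-1].T, z[:, 1:].T
net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000, random_state=0)
net.fit(X, y)

# Roll the network forward from the last training state. If the predicted
# reduced variables drift outside the training range, the rollout can
# diverge -- the extrapolation failure mode noted in the text.
z_pred = [z[:, -1]]
for _ in range(50):
    z_pred.append(net.predict(z_pred[-1].reshape(1, -1)).ravel())
u_pred = basis @ np.array(z_pred).T + mean  # lift back to the full-order space
```

The same pattern applies if an autoencoder replaces POD: only the projection and lifting steps change, while the reduced-variable time-stepper (and its vulnerability to extrapolation) is unchanged.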