For many applications, small-sample time series prediction based on grey forecasting models has become indispensable, and many algorithms have been developed recently to improve their effectiveness. Each of these methods is specialized to the properties of the time series to be inferred. To develop a generalized nonlinear multivariable grey model with higher compatibility and generalization performance, we nonlinearize the traditional GM(1,N), yielding what we call the NGM(1,N). An unknown nonlinear function that maps the data into a better representational space appears in both the NGM(1,N) and its response function. Parameter estimation for the NGM(1,N) is formulated as an optimization problem with linear equality constraints, which is solved in two different ways. The first is the Lagrange multiplier method, which converts the optimization problem into a linear system to be solved; the second is the standard dualization method using Lagrange multipliers, which yields a flexible estimation equation for the development coefficient. As the size of the training data increases, the estimates of the potential development coefficient become richer, and the final estimate obtained by averaging becomes more reliable. During the solution process, the kernel function expresses the dot product of two unknown nonlinear mappings, greatly reducing the computational complexity of the nonlinear functions. Three numerical examples show that the LDNGM(1,N) outperforms the other multivariate grey models compared, in terms of generalization performance. The duality theory and kernel-learning framework are instructive for further research on multivariate grey models.
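The two solution devices named above are generic and can be sketched independently of the grey-model specifics. The following is a minimal illustration, not the paper's implementation: the function `constrained_lsq` (an illustrative name) solves a least-squares problem with linear equality constraints by assembling the Lagrange-multiplier (KKT) linear system, which is the "converts the optimization problem into a linear system" step; `rbf_kernel` shows the kernel trick, where a kernel function stands in for the dot product of two unknown nonlinear mappings (the Gaussian kernel here is an assumed choice).

```python
import numpy as np

def constrained_lsq(X, y, A, b):
    """Minimise ||X w - y||^2 subject to A w = b.

    Stationarity of the Lagrangian L(w, lam) = ||X w - y||^2 + lam^T (A w - b)
    gives the symmetric KKT linear system
        [2 X^T X  A^T] [w  ]   [2 X^T y]
        [A        0  ] [lam] = [b      ]
    which is solved directly instead of optimising iteratively.
    """
    n = X.shape[1]
    m = A.shape[0]
    kkt = np.block([[2 * X.T @ X, A.T],
                    [A, np.zeros((m, m))]])
    rhs = np.concatenate([2 * X.T @ y, b])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]  # discard the multipliers, return the parameters

def rbf_kernel(U, V, sigma=1.0):
    """Gaussian kernel: k(u, v) = phi(u) . phi(v) for an implicit mapping phi.

    The Gram matrix replaces explicit evaluation of the unknown nonlinear
    mapping, so the dual problem never forms phi(u) itself.
    """
    sq_dists = ((U[:, None, :] - V[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

# Tiny worked example: fit w under the constraint w1 + w2 = 1.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
y = np.array([1.0, 2.0, 3.0])
A = np.array([[1.0, 1.0]])
b = np.array([1.0])
w = constrained_lsq(X, y, A, b)   # constraint A w = b holds exactly
K = rbf_kernel(X, X)              # symmetric Gram matrix, unit diagonal
```

In the dual route, the analogous KKT system is written in terms of the Gram matrix `K` rather than the design matrix, which is what keeps the unknown nonlinear mapping implicit.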
Supplementary Information
The online version contains supplementary material available at 10.1007/s11071-023-08296-y.