The output of the QNU from Figure 4 can be written in a vector multiplication form that can be decomposed into a long-vector representation as follows:

\[
\tilde{y}(k+n_s)=\sum_{i=0}^{n}\sum_{j=i}^{n} w_{i,j}\,x_i\,x_j=\mathbf{w}\cdot\mathbf{colx},
\tag{11}
\]

where $x_0=1$ (as shown in Figure 4), $\tilde{y}$ is the predicted value, $x_1, x_2,\ldots, x_n$ are the external neural inputs at sample time $k$, $w_{i,j}$ are the neural weights of the QNU, $\mathbf{w}=\begin{bmatrix} w_{0,0} & w_{0,1} & \cdots & w_{i,j} & \cdots & w_{n,n}\end{bmatrix}$, $j\ge i$, is the long-vector representation of the weight matrix of the QNU, and $\mathbf{colx}$ is the long column vector of the polynomial terms of the neural inputs defined as follows:

\[
\mathbf{colx}=\begin{bmatrix} x_0 x_0 & x_0 x_1 & \cdots & x_i x_j & \cdots & x_n x_n \end{bmatrix}^{T},\quad j\ge i.
\]

Notice that for the weight optimization of the polynomial static model (11), all $x_i$ and $y(k+n_s)$ are substituted with measured training data, so (11) becomes a linear combination of the neural weights that has, in principle, a unique solution for given training data. Thus, contrary to MLP networks, the linear-in-parameters nature of QNU optimization implies that the QNU avoids the local-minima issue for given training data, while the neural model maintains the high quality of nonlinear approximation that we have observed so far [23, 41].
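As a minimal sketch of this linear-in-weights property (not the authors' code), the snippet below builds the long vector $\mathbf{colx}$ of quadratic terms with $x_0=1$, stacks it over training samples, and estimates $\mathbf{w}$ by ordinary least squares, which has a unique minimizer for well-conditioned training data. The function names (`colx`, `train_qnu`, `qnu_output`) and the synthetic data are hypothetical, chosen only to illustrate the formulation in (11).

```python
import numpy as np

def colx(x):
    """Long column vector of quadratic terms x_i * x_j, j >= i, with x_0 = 1 prepended."""
    x = np.concatenate(([1.0], np.asarray(x, dtype=float)))  # augment with x_0 = 1
    n = x.size
    return np.array([x[i] * x[j] for i in range(n) for j in range(i, n)])

def train_qnu(X, y):
    """Least-squares estimate of the long weight vector w so that y ~ colx(x) @ w."""
    Phi = np.vstack([colx(x) for x in X])        # regression matrix of polynomial terms
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # linear problem -> unique minimizer (full rank)
    return w

def qnu_output(w, x):
    """Predicted value: y_tilde = w . colx(x)."""
    return w @ colx(x)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical training data: a quadratic target that the QNU can represent exactly.
    X = rng.uniform(-1.0, 1.0, size=(200, 2))
    y = 0.5 + 1.5 * X[:, 0] - 2.0 * X[:, 1] + 0.7 * X[:, 0] * X[:, 1] + X[:, 1] ** 2
    w = train_qnu(X, y)
    print("max training error:",
          max(abs(qnu_output(w, x) - t) for x, t in zip(X, y)))
```

Because the model is linear in $\mathbf{w}$, the fit reduces to a single least-squares solve; no iterative, local-minimum-prone training is needed for the static case, which is the contrast with MLP networks drawn above.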