Abstract-This paper addresses the genetic design of functional link networks (FLNs). FLNs are high-order perceptrons (HOPs) without hidden units. Despite their linear nature, FLNs can capture nonlinear input-output relationships, provided that they are fed an adequate set of polynomial inputs constructed from the original input attributes. Given this set, training the network is very simple compared with a multilayer perceptron (MLP). However, finding the optimal subset of units is a difficult problem because of its nongradient nature and the large number of available units, especially for high degrees. Constructive growing methods have been proposed to address this issue. Here, we rely instead on the global search capabilities of a genetic algorithm to scan the space of subsets of polynomial units, which is plagued by a host of local minima. By contrast, the quadratic error function of each individual FLN has a single minimum, which makes fitness evaluation practically noiseless. We find that surprisingly simple FLNs compare favorably with other, more complex architectures derived by constructive and evolutionary algorithms on several UCI benchmark data sets. Moreover, our models are especially amenable to interpretation, thanks to an incremental approach that penalizes complex architectures and starts from a pool of single-attribute FLNs.

Index Terms-Evolutionary neural networks, feature subset selection, functional link networks, polynomial regression.
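To make the setting concrete, the following minimal Python sketch (our own illustration, not the paper's code) builds the pool of polynomial units from the raw attributes, trains one FLN in closed form by least squares, and evaluates a genetic-algorithm individual encoded as a bitmask over the unit pool. All function names and the complexity-penalty weight are illustrative assumptions.

```python
# Minimal sketch of an FLN as a linear model over polynomial units,
# fit by ordinary least squares. A GA individual is a boolean mask
# selecting a subset of units; its fitness is noiseless because the
# quadratic error of each selected FLN has a single minimum.
import itertools
import numpy as np

def polynomial_units(X, degree):
    """Expand X into all monomial columns up to `degree`,
    including the constant term."""
    n, d = X.shape
    cols = [np.ones(n)]
    for k in range(1, degree + 1):
        for combo in itertools.combinations_with_replacement(range(d), k):
            cols.append(np.prod(X[:, list(combo)], axis=1))
    return np.column_stack(cols)

def fln_fitness(mask, Z, y, penalty=1e-3):
    """Train the FLN selected by `mask` in closed form and return an
    error that penalizes complex architectures (more active units).
    The penalty weight is an assumed hyperparameter."""
    Zs = Z[:, mask]
    w, *_ = np.linalg.lstsq(Zs, y, rcond=None)  # unique quadratic minimum
    mse = np.mean((Zs @ w - y) ** 2)
    return mse + penalty * mask.sum()

# Toy usage: y depends nonlinearly on x, yet the FLN itself is linear
# in its (polynomial) inputs.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))
y = 3 * X[:, 0] * X[:, 1] + X[:, 0] ** 2 + rng.normal(0, 0.05, 200)

Z = polynomial_units(X, degree=2)
mask = rng.random(Z.shape[1]) < 0.5  # one GA individual (random bitmask)
print(fln_fitness(mask, Z, y))
```

A GA would evolve a population of such bitmasks with standard crossover and mutation; since each fitness evaluation solves the inner least-squares problem exactly, the search noise comes only from the data, not from the training procedure.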