Abstract: Two-hidden-layer feedforward neural networks (TLFNs) have been shown to outperform single-hidden-layer feedforward networks (SLFNs) for function approximation in many cases. However, their added complexity makes a good topology more difficult to find. Given a constant number of hidden nodes n_h, this paper investigates how their allocation between the first and second hidden layers (n_h = n_1 + n_2) affects the likelihood of finding the best generaliser. The experiments were carried out over a total of ten public-domain datasets with n_h = 8 and 16. The findings were that the heuristic n_1 = 0.5 n_h + 1 has an average probability of at least 0.85 of finding a network with a generalisation error within 0.18% of the best generaliser. Furthermore, the worst case over all datasets was within 0.23% for n_h = 8, and within 0.15% for n_h = 16. These findings could be used to reduce the complexity of the search for TLFNs from quadratic to linear, or alternatively for 'topology mapping' between TLFNs and SLFNs with the same number of hidden nodes, to compare their performance.

Index Terms: ANN, optimal node ratio, topology mapping, two-hidden-layer feedforward, function approximation.
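As an illustration of the heuristic n_1 = 0.5 n_h + 1, the following minimal sketch (not code from the paper; the function name is my own) computes the layer split for the two node budgets used in the experiments:

```python
# Sketch only: split a fixed hidden-node budget n_h between two hidden
# layers using the paper's heuristic n_1 = 0.5*n_h + 1.
# The function name `allocate_hidden_nodes` is illustrative, not from the paper.

def allocate_hidden_nodes(n_h: int) -> tuple[int, int]:
    """Return (n_1, n_2) with n_1 = 0.5*n_h + 1 and n_2 = n_h - n_1."""
    n_1 = int(0.5 * n_h + 1)
    n_2 = n_h - n_1
    return n_1, n_2

for n_h in (8, 16):  # the two budgets studied in the paper
    n_1, n_2 = allocate_hidden_nodes(n_h)
    print(f"n_h={n_h}: first layer n_1={n_1}, second layer n_2={n_2}")
# → n_h=8: first layer n_1=5, second layer n_2=3
# → n_h=16: first layer n_1=9, second layer n_2=7
```

Trying every split of n_h nodes across two layers is a linear scan of n_h - 1 candidates per budget; the heuristic collapses this to a single candidate, which is how the search cost drops as the abstract describes.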