“…In many applications, it is convenient to take the activation function $\sigma$ to be sigmoidal, that is, a function satisfying $\lim_{t\to-\infty}\sigma(t)=0$ and $\lim_{t\to+\infty}\sigma(t)=1$. The literature on neural networks abounds with the use of such functions and their superpositions (see, e.g., [2,4,6,8,10,11,13,15,20,22,29]). The possibility of approximating a continuous function on a compact subset of the real line or of $d$-dimensional space by SLFNs with a sigmoidal activation function has been studied extensively in a number of papers.…”
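The quoted passage asserts that SLFNs with a sigmoidal activation can approximate continuous functions on compact sets. A minimal numerical sketch of that statement follows; it uses the logistic sigmoid $\sigma(t)=1/(1+e^{-t})$, which satisfies the two limit conditions above, together with randomly chosen inner weights and a least-squares fit of the outer coefficients. The network width, the weight distribution, and the fitting method are illustrative assumptions, not the construction of any of the cited papers.

```python
# A minimal sketch (assumed setup, not the paper's construction):
# approximating a continuous function on a compact interval [0, 1]
# by an SLFN  N(x) = sum_i c_i * sigma(w_i * x + b_i).
import numpy as np

def sigma(t):
    # Logistic sigmoid: tends to 0 as t -> -inf and to 1 as t -> +inf,
    # hence sigmoidal in the sense of the definition above.
    return 1.0 / (1.0 + np.exp(-t))

rng = np.random.default_rng(0)
r = 50                             # number of hidden units (assumed)
w = rng.uniform(-10.0, 10.0, r)    # inner weights (assumed random)
b = rng.uniform(-10.0, 10.0, r)    # thresholds (assumed random)

x = np.linspace(0.0, 1.0, 200)     # sample points in the compact set [0, 1]
f = np.sin(2 * np.pi * x)          # continuous target function (example)

# Hidden-layer design matrix: Phi[j, i] = sigma(w_i * x_j + b_i)
Phi = sigma(np.outer(x, w) + b)

# Outer coefficients c minimizing ||Phi c - f||_2 (least squares)
c, *_ = np.linalg.lstsq(Phi, f, rcond=None)

approx = Phi @ c                   # SLFN output at the sample points
print(f"max |f - N| on the grid: {np.max(np.abs(f - approx)):.4f}")
```

Increasing the number of hidden units $r$ drives the uniform error down on such examples, in line with the density results the passage refers to, although the sketch only exhibits this numerically rather than proving it.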