“…This can happen when the data used to train the network are not representative enough of the entire observation span, when the number of hidden layers or neurons is incorrect, when the global minimum is overshot, or when the network learns the training pattern well but underperforms during validation (poor generalisation) (Jain et al., 1996; Gardner and Dorling, 1998; Nguyen and Chan, 2004; Wang et al., 2005; Saxén and Pettersson, 2006; Stathakis, 2009). To prevent this, one can remove redundant input data (Gunaratnam et al., 2003; Saxén and Pettersson, 2006), reduce or increase the number of neurons in the network, and apply an appropriate generalisation technique such as early stopping (Hansen and Salamon, 1990; Amari et al., 1997; Svozil et al., 1997; Wang et al., 2005) or another stopping criterion (Günther and Fritsch, 2010; Fritsch and Günther, 2012).…”
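The early-stopping idea referred to above can be sketched as a simple patience rule: training halts once the validation loss has failed to improve for a fixed number of consecutive epochs, before the network starts to overfit the training pattern. The function name, the `patience` parameter, and the toy loss sequence below are illustrative assumptions, not taken from the cited works.

```python
def early_stopping_epoch(val_losses, patience=3):
    """Return the epoch at which training would stop under a patience
    rule: the first epoch after which the validation loss has not
    improved for `patience` consecutive epochs, or the final epoch if
    no such point occurs. (Illustrative sketch, not from the sources.)"""
    best = float("inf")
    epochs_since_best = 0
    for epoch, loss in enumerate(val_losses):
        if loss < best:              # validation loss improved: reset counter
            best = loss
            epochs_since_best = 0
        else:                        # no improvement this epoch
            epochs_since_best += 1
            if epochs_since_best >= patience:
                return epoch         # stop: generalisation no longer improving
    return len(val_losses) - 1

# Hypothetical validation losses: they fall, then rise as overfitting begins.
losses = [0.9, 0.6, 0.45, 0.4, 0.42, 0.45, 0.5, 0.6]
print(early_stopping_epoch(losses, patience=3))  # → 6
```

In this sketch the best weights would be those saved at the epoch with the lowest validation loss (epoch 3 here); the run is cut short at epoch 6, three epochs after improvement last occurred.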