“…where $\Delta W_i^{k,m}$ is the correction for the $i$-th weight of the $k$-th neuron of layer $m$, $\alpha_{k,m}$ is the corresponding learning rate, $n_{m-1}$ is the number of inputs, equal to the number of outputs of the previous layer, $z_{k,m}^{s}$ is the magnitude of the weighted sum, $\delta_{k,m}^{s}$ is the output error obtained through the backpropagation method, and $\bar{Y}_{i,m-1}^{s}$ is the conjugate transpose of the input. In this way, it is possible to organize a very efficient batch learning algorithm based on the LLS method [37]. When using this algorithm, the output error is calculated for each neuron and each sample and stored in a dedicated matrix at the end of every training epoch.…”
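To make the batch scheme concrete, the following Python sketch shows one possible LLS-style update for a single layer: the backpropagated errors of every neuron and every sample are collected over an epoch into a matrix, and the weight corrections are then obtained with a least-squares solve against the layer inputs (whose conjugate transpose enters the solution, matching the $\bar{Y}$ term above). The function and variable names (`lls_batch_update`, `Y_prev`, `delta`, `alpha`) are hypothetical, and the sketch deliberately omits details of the actual algorithm in [37], such as the normalization by the magnitude of the weighted sum.

```python
import numpy as np

def lls_batch_update(Y_prev, delta, alpha=1.0):
    """Hypothetical LLS-based batch weight update for one layer.

    Y_prev : (S, n) complex array of layer inputs, one row per sample s
             (the outputs of the previous layer, n = n_{m-1}).
    delta  : (S, K) complex array of backpropagated output errors,
             one column per neuron k of the current layer, accumulated
             over the whole epoch.
    alpha  : learning rate applied to the least-squares solution.

    Returns the (n, K) matrix of weight corrections Delta W.
    """
    # Solve min ||Y_prev @ dW - delta|| for all K neurons at once; the
    # solver implicitly applies the conjugate transpose of Y_prev.
    dW, *_ = np.linalg.lstsq(Y_prev, delta, rcond=None)
    return alpha * dW

# Toy usage with made-up sizes: 100 samples, 8 inputs, 4 neurons.
rng = np.random.default_rng(0)
Y = rng.standard_normal((100, 8)) + 1j * rng.standard_normal((100, 8))
d = rng.standard_normal((100, 4)) + 1j * rng.standard_normal((100, 4))
print(lls_batch_update(Y, d).shape)  # (8, 4)
```

Note that `np.linalg.lstsq` solves the system via an SVD, which is numerically more stable than forming and inverting the normal equations explicitly.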