“…Thus, the preceding sections prove that the sufficient condition for preserving learning convergence, within the ISS framework and in the sense of BIBS stability, is (20), which yields (21) for non-momentum gradient algorithms. For experimental results, e.g., with a class of polynomial neural architectures, see [10], where the effect of the spectral radius ρ(A) on gradient learning is studied and illustrated in detail. This letter's proofs therefore serve as the theoretical complement to the earlier, more experimentally oriented work [10], which did not make the full theoretical connection explicit.…”
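The exact conditions (20) and (21) are not reproduced in this excerpt, so the following is only a minimal sketch of the general idea being referenced: for a non-momentum gradient iteration whose error dynamics reduce to a linear recursion e_{k+1} = A e_k, stability hinges on the spectral radius ρ(A) being below one. The quadratic surrogate, Hessian H, and learning rate eta used here are illustrative assumptions, not the letter's actual setup.

```python
import numpy as np

# Sketch (assumed setting, not conditions (20)-(21) from the letter):
# non-momentum gradient descent on a quadratic surrogate with Hessian H gives
#   e_{k+1} = A e_k,  with iteration matrix A = I - eta * H,
# and the iteration converges (BIBS-stable error dynamics) when rho(A) < 1.

def spectral_radius(A: np.ndarray) -> float:
    """Largest absolute eigenvalue of the iteration matrix A."""
    return float(np.max(np.abs(np.linalg.eigvals(A))))

def gradient_step_converges(H: np.ndarray, eta: float) -> bool:
    """Check the classical sufficient condition rho(I - eta*H) < 1."""
    A = np.eye(H.shape[0]) - eta * H
    return spectral_radius(A) < 1.0

# Example: for a positive-definite H the condition holds iff 0 < eta < 2 / lambda_max(H).
H = np.diag([1.0, 4.0, 10.0])        # lambda_max = 10, so the threshold is eta < 0.2
for eta in (0.05, 0.19, 0.25):
    print(f"eta = {eta:>4}: converges = {gradient_step_converges(H, eta)}")
```

In this toy setting the printed results flip from True to False once eta exceeds 2/λ_max(H), which mirrors, in the simplest possible case, the role the spectral radius plays in the convergence conditions the letter establishes.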