“…Therefore, ReOS‐ELM and OS‐RELM are essentially ℓ2‐norm modifications of the original OS‐ELM algorithm. Another approach is the ADMM‐based online ELM algorithm (OAL1‐ELM),11 which applies ℓ1‐norm minimization within the ADMM framework. Different from these works, the GPU‐MRO‐ELM algorithm: (1) applies both regularizations simultaneously in an online setting, combining the benefits of sparsity and stability; (2) unlike OAL1‐ELM, MRO‐ELM and GPU‐MRO‐ELM produce joint sparsity through mixed‐norm regularization, that is, entire rows of the output weight matrix are eliminated rather than individual elements, so the resulting neural network is more compact because the corresponding neurons are removed completely; and (3) combines GPU acceleration with optional automatic parallel hyper‐parameter tuning, which accelerates both the training time and the tuning time of MRO‐ELM.…”
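The row‐wise (joint) sparsity described in point (2) can be illustrated with a minimal sketch of the ℓ2,1 mixed‐norm proximal operator (row‐wise block soft‐thresholding). This is a generic illustration of how mixed‐norm regularization zeroes out whole rows of an output weight matrix, not the paper's actual MRO‐ELM update; the matrix values and threshold `lam` are hypothetical.

```python
import numpy as np

def l21_prox(W, lam):
    """Proximal operator of lam * sum_i ||W[i, :]||_2 (the l2,1 mixed norm).

    Each row is shrunk toward zero by its l2 norm; rows whose norm falls
    below lam are set to zero entirely, so the corresponding hidden
    neuron is eliminated from the network (joint sparsity), unlike
    elementwise l1 shrinkage, which zeroes individual weights.
    """
    norms = np.linalg.norm(W, axis=1, keepdims=True)        # per-row l2 norms
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return scale * W

# Hypothetical 4-neuron output weight matrix with one weak row.
W = np.array([[ 0.90, -1.20],
              [ 0.05,  0.02],   # weak row: its l2 norm is below lam
              [ 1.50,  0.30],
              [-0.40,  0.80]])
W_sparse = l21_prox(W, lam=0.1)
# The weak row becomes exactly zero; the other rows are shrunk but kept,
# so the neuron associated with row 1 is removed completely.
```

The practical consequence mirrors the text: pruning happens at the granularity of neurons (rows), yielding a more compact network than elementwise ℓ1 sparsity.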