Activation functions facilitate deep neural networks by introducing non-linearity into the learning process. This non-linearity gives the neural network the ability to learn complex patterns. Currently, the most widely used activation function is the Rectified Linear Unit (ReLU). Although various alternatives to ReLU, including hand-designed activation functions, have been proposed, none has succeeded in replacing it because of their inconsistent gains. In this work, an activation function called the ReLU-Memristor-like Activation Function (RMAF) is proposed to leverage the benefits of negative values in neural networks. RMAF introduces a constant parameter (α) and a threshold parameter (p), making the function smooth and non-monotonic while introducing non-linearity into the network. Our experiments show that RMAF works better than ReLU and other activation functions on deeper models and across a number of challenging datasets. First, experiments are performed by training and classifying with a multi-layer perceptron (MLP) on benchmark datasets such as Wisconsin breast cancer, MNIST, Iris, and Car Evaluation, where RMAF achieves accuracies of 98.74%, 99.67%, 98.81%, and 99.42%, respectively, outperforming Sigmoid, Tanh, and ReLU. Second, experiments on a convolutional neural network (ResNet) over MNIST, CIFAR-10, and CIFAR-100 show that the proposed activation function achieves higher accuracies of 99.73%, 98.77%, and 79.82%, respectively, than Tanh, ReLU, and Swish. Additionally, we evaluated RMAF on deeper networks, i.e., SqueezeNet and densely connected networks (DenseNet-121), and on the ImageNet dataset, where RMAF again produced the best performance. We note that RMAF converges faster than the other functions and can replace ReLU in any neural network owing to its efficiency, scalability, and similarity to both ReLU and Swish.
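As a point of reference, the sketch below implements the two baselines the abstract compares against (ReLU and Swish) alongside a placeholder parametric activation with a scale parameter α and a shape/threshold parameter p. The placeholder's functional form is an assumption made purely for illustration, since the abstract does not state RMAF's formula; it is not the paper's RMAF definition.

```python
import numpy as np

def relu(x):
    # Standard rectified linear unit: max(0, x).
    return np.maximum(0.0, x)

def swish(x, beta=1.0):
    # Swish: x * sigmoid(beta * x); smooth and non-monotonic near zero.
    return x / (1.0 + np.exp(-beta * x))

def smooth_parametric_activation(x, alpha=1.0, p=2.0):
    # Hypothetical smooth, non-monotonic activation with a scale alpha and a
    # shape/threshold exponent p. NOT the paper's RMAF definition, only an
    # illustration of the same family of sigmoid-weighted units.
    sig = 1.0 / (1.0 + np.exp(-x))
    return alpha * (sig ** p) * x

if __name__ == "__main__":
    x = np.linspace(-5.0, 5.0, 11)
    print("relu :", relu(x).round(3))
    print("swish:", swish(x).round(3))
    print("param:", smooth_parametric_activation(x, alpha=1.0, p=2.0).round(3))
```

Because each of these functions is an element-wise map, any of them can be dropped into an MLP or ResNet in place of ReLU without changing the rest of the architecture, which is the sense in which the abstract claims RMAF is a drop-in replacement.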
The affine matrix rank minimization problem is a fundamental problem with many important applications across fields. It is well known that this problem is combinatorial and NP-hard in general. In this paper, a continuous, low-rank-promoting, non-convex fraction function is studied as a replacement for the rank function in this NP-hard problem. Inspired by our earlier work in compressed sensing, an iterative singular value thresholding algorithm is proposed to solve the regularization-transformed affine matrix rank minimization problem. For different a > 0, much better results can be obtained by tuning the value of a, which is one advantage of the iterative singular value thresholding algorithm over some state-of-the-art methods. Some convergence results are established, and numerical experiments show that this thresholding algorithm is effective for solving the regularization-transformed affine matrix rank minimization problem. Moreover, we prove that the regularization parameter λ > 0 cannot be chosen too large: there exists a threshold λ̄ > 0 such that the optimal solution of the regularization-transformed affine matrix rank minimization problem is equal to zero for any λ > λ̄. Numerical experiments on matrix completion problems show that our method is powerful in recovering low-rank matrices, and experiments on image inpainting problems show that our algorithm performs better than some state-of-the-art methods.
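To make the algorithmic idea concrete, here is a minimal sketch of an iterative singular value thresholding loop for matrix completion, a special case of the affine rank minimization problem. It uses standard soft-thresholding of singular values as a stand-in shrinkage operator, since the specific thresholding operator induced by the fraction-function penalty (and its dependence on a and λ) is not given in the abstract; the function name and parameter choices below are assumptions for illustration.

```python
import numpy as np

def svt_matrix_completion(M_obs, mask, lam=1.0, step=1.0, iters=200):
    """Iterative singular value thresholding for matrix completion (sketch).

    M_obs : observed matrix, with arbitrary values at unobserved entries
    mask  : boolean array, True where entries are observed
    lam   : regularization weight (soft-threshold level, stand-in operator)
    """
    X = np.zeros_like(M_obs, dtype=float)
    for _ in range(iters):
        # Gradient step on the data-fit term 0.5 * ||P_Omega(X - M_obs)||_F^2.
        grad = mask * (X - M_obs)
        Y = X - step * grad
        # Thresholding step: shrink the singular values of the intermediate iterate.
        U, s, Vt = np.linalg.svd(Y, full_matrices=False)
        s_shrunk = np.maximum(s - step * lam, 0.0)  # soft-threshold stand-in
        X = (U * s_shrunk) @ Vt
    return X

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((30, 5)) @ rng.standard_normal((5, 30))  # rank-5 target
    mask = rng.random(A.shape) < 0.5                                 # observe ~50% of entries
    X_hat = svt_matrix_completion(A * mask, mask, lam=0.5, step=1.0, iters=500)
    err = np.linalg.norm(X_hat - A) / np.linalg.norm(A)
    print(f"relative recovery error: {err:.3f}")
```

The structure of the loop (a gradient step on the affine data-fit term followed by a shrinkage of singular values) is the part that carries over to the paper's method; swapping the soft-threshold line for the fraction-function-induced operator would recover the algorithm the abstract describes.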