The capsule network (CapsNet) hierarchical framework begins with a standard convolution layer whose core operation relies on an activation function. Among the many existing activation functions, the rectified linear unit (ReLU) is the most widely used in CapsNet and brain tumor classification tasks. However, ReLU has a shortcoming: its derivative is zero for negative inputs, which can cause neurons to stop activating. Furthermore, the classification accuracy obtained by ReLU with CapsNet on brain tumor data is unsatisfactory. We propose a new activation function, the parametric scaled hyperbolic tangent (PSTanh), which enhances the conventional hyperbolic tangent by avoiding the vanishing-gradient problem, maintains a small non-zero gradient through the introduction of the λ and β parameters, and enables faster optimization. Eight standard activation functions (i.e., tanh, ReLU, Leaky-ReLU, PReLU, ELU, SELU, Swish, and the ReLU-Memristor-Like Activation Function (RMAF)) are analyzed and compared against the proposed activation on brain tumor classification tasks. Furthermore, extensive experiments are conducted on the MNIST, Fashion-MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets using CapsNet models and deep CNN models (i.e., AlexNet, SqueezeNet, ResNet50, and DenseNet121). The experimental results on brain tumor classification with both CapsNet and CNN models show that the proposed PSTanh activation achieves better performance than the other functions.
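The abstract does not give the closed form of PSTanh, so the sketch below is a hedged illustration rather than the authors' definition: it assumes a scaled-tanh form f(x) = λ · x · tanh(β · x) with trainable λ and β, consistent with the description of scale parameters that preserve a small non-zero gradient. The class name `PSTanh`, the assumed formula, and the default initial values are illustrative assumptions only.

```python
import torch
import torch.nn as nn

class PSTanh(nn.Module):
    """Parametric scaled hyperbolic tangent (illustrative sketch only).

    Assumed form: f(x) = lambda_ * x * tanh(beta * x), where lambda_
    scales the output and beta scales the input; both are learned during
    training. Unlike ReLU, whose derivative is exactly zero for negative
    inputs, this form keeps a small non-zero gradient almost everywhere.
    """

    def __init__(self, init_lambda: float = 1.0, init_beta: float = 1.0):
        super().__init__()
        # Trainable scale parameters (initial values are assumptions).
        self.lam = nn.Parameter(torch.tensor(init_lambda))
        self.beta = nn.Parameter(torch.tensor(init_beta))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.lam * x * torch.tanh(self.beta * x)

# Usage: a drop-in replacement for ReLU after the initial convolution
# layer of a CapsNet-style model (layer sizes here are illustrative).
layer = nn.Sequential(nn.Conv2d(1, 256, kernel_size=9), PSTanh())
x = torch.randn(4, 1, 28, 28)   # e.g., an MNIST-sized batch
print(layer(x).shape)           # torch.Size([4, 256, 20, 20])
```

Because λ and β are registered as `nn.Parameter` objects, they are updated by the same optimizer step as the convolution weights, which is one way such a parametric activation can adapt its shape per layer during training.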