The activation function has a critical influence on whether a convolutional neural network can converge; a proper activation function not only makes the network converge faster but can also reduce the complexity of the network architecture while achieving the same or better performance. Many activation functions have been proposed; however, each has its own advantages, defects, and applicable network architectures. This paper proposes a new activation function called the Polynomial Linear Unit (PolyLU) to improve on some of the shortcomings of existing activation functions. PolyLU satisfies the following basic properties: continuously differentiable, approximately the identity near the origin, unbounded for positive inputs, bounded for negative inputs, smooth, monotonic, and zero-centered. PolyLU uses a polynomial term for negative inputs and contains no exponential terms, which reduces the computational complexity of the network. Compared with common activation functions such as Sigmoid, Tanh, ReLU, LeakyReLU, ELU, Mish, and Swish, experiments show that PolyLU reduces network complexity and achieves better accuracy on the MNIST, Kaggle Cats and Dogs, CIFAR-10, and CIFAR-100 datasets. Tested on the CIFAR-100 dataset with batch normalization, PolyLU improves by 0.
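The abstract does not state the exact PolyLU formula; the sketch below is an illustrative assumption only. It implements one piecewise form consistent with the listed properties: the identity for non-negative inputs and a rational-polynomial term for negative inputs, with no exponentials. The function name `polylu_sketch` and the specific negative-branch expression `1/(1 - x) - 1` are hypothetical choices for illustration and may differ from the definition in the paper body.

```python
import numpy as np

def polylu_sketch(x):
    """Illustrative activation: identity for x >= 0, a bounded,
    smooth, monotonic polynomial-based term for x < 0 (assumed form)."""
    x = np.asarray(x, dtype=float)
    # Clamp the negative-branch input so the unused branch never divides by zero.
    xn = np.minimum(x, 0.0)
    return np.where(x >= 0, x, 1.0 / (1.0 - xn) - 1.0)

def polylu_sketch_grad(x):
    """Derivative of the assumed form: 1 for x >= 0, 1/(1 - x)^2 for x < 0,
    so the gradient approaches 1 at the origin from both sides."""
    x = np.asarray(x, dtype=float)
    xn = np.minimum(x, 0.0)
    return np.where(x >= 0, 1.0, 1.0 / (1.0 - xn) ** 2)

if __name__ == "__main__":
    xs = np.array([-1000.0, -1.0, -0.01, 0.0, 0.01, 1.0, 1000.0])
    print(polylu_sketch(xs))       # negative branch stays above -1; positive branch is linear
    print(polylu_sketch_grad(xs))  # continuous, positive gradient (monotonic, C1 at zero)
```

Running the check above shows the properties the abstract claims for PolyLU: outputs are unbounded for positive inputs, bounded below for negative inputs, and the gradient is continuous and equal to 1 at the origin.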