Pooling is an important component of modern convolutional neural networks: it enlarges the receptive field, reduces the spatial dimensions of feature maps and the resulting parameter count, and helps avoid overfitting. However, the widely used max pooling, average pooling, and their subsequent refinements cannot capture both the contour information and the background information of a feature map at the same time. In addition, the performance of different pooling algorithms varies greatly across models and datasets. In this paper, we propose a learnable pooling algorithm. By introducing learnable parameters, the pooling layer can adaptively select the key feature information that benefits model performance during training. Experiments on several classical models and public datasets for image classification and text classification verify that the proposed pooling algorithm outperforms existing max pooling and average pooling. The pooling algorithm with learnable parameters better prevents overfitting, steadily improves model accuracy, and generalizes well.
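To make the idea of a learnable pooling layer concrete, the following is a minimal PyTorch sketch of one way such a layer could be realized: a trainable blend between max and average pooling, so that a single parameter learns how much contour (max) versus background (average) information to retain. The class name `LearnablePool2d`, the sigmoid-gated blending scheme, and the choice of framework are illustrative assumptions; the paper's exact formulation may differ.

```python
import torch
import torch.nn as nn


class LearnablePool2d(nn.Module):
    """Hypothetical learnable pooling layer: a trainable blend of max and
    average pooling. The mixing weight is optimized jointly with the rest
    of the network, letting the layer adapt between contour-preserving
    (max) and background-preserving (average) behavior."""

    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.max_pool = nn.MaxPool2d(kernel_size, stride)
        self.avg_pool = nn.AvgPool2d(kernel_size, stride)
        # Unconstrained parameter; a sigmoid keeps the blend weight in (0, 1).
        self.alpha = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        a = torch.sigmoid(self.alpha)
        return a * self.max_pool(x) + (1 - a) * self.avg_pool(x)


if __name__ == "__main__":
    layer = LearnablePool2d(kernel_size=2, stride=2)
    x = torch.randn(8, 16, 32, 32)   # batch of 16-channel feature maps
    y = layer(x)
    print(y.shape)                   # torch.Size([8, 16, 16, 16])
```

Because the blend weight is part of the computation graph, it receives gradients like any other network parameter and is updated during training with no change to the optimization procedure.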