Most attention mechanism modules in contemporary research are convolution-based; although they improve the accuracy of deep learning networks on visual tasks, they also increase overall model complexity. To address this problem, this paper proposes Laplacian attention (LA), a plug-and-play algorithm that adds no model complexity. The LA algorithm first computes similarity distances between feature points along the spatial and channel dimensions, then uses these distances together with a Gaussian kernel to construct a residual Laplacian matrix over the feature points. This construction separates dissimilar feature points while aggregating similar ones. Finally, the LA algorithm adaptively fuses the channel and spatial outputs to produce the final LA output. Crucially, the LA algorithm is confined to the forward computation and involves no backpropagation or parameter learning. The LA algorithm is evaluated comprehensively on three datasets: Cifar-10, miniImageNet, and Pascal VOC 2012. The experimental results demonstrate that, compared with recent attention mechanism modules such as SENet, CBAM, ECANet, coordinate attention, and triplet attention, the LA algorithm achieves superior performance on image classification, object detection, and semantic segmentation tasks.
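The steps described above can be sketched in code. The following is a minimal, hypothetical illustration only, not the paper's actual implementation: it builds a Gaussian-kernel affinity between channel descriptors, forms the graph Laplacian, and gates channels without any learnable parameters. The function name, the descriptor choice (flattened channels), and the sigmoid gating are assumptions for illustration.

```python
import numpy as np

def laplacian_attention_sketch(x, sigma=1.0):
    """Hypothetical sketch of a parameter-free Laplacian attention step.

    x: feature map of shape (C, H, W). All operations are forward-only;
    nothing here is learned, mirroring the training-free property of LA.
    """
    C, H, W = x.shape
    feats = x.reshape(C, -1)  # one descriptor per channel

    # Pairwise squared Euclidean distances between channel descriptors.
    sq = np.sum(feats ** 2, axis=1)
    d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * feats @ feats.T, 0.0)

    # Gaussian-kernel affinity: similar channels get weights near 1.
    affinity = np.exp(-d2 / (2.0 * sigma ** 2))

    # Graph Laplacian L = D - W; applying it separates dissimilar points
    # while pulling similar ones together.
    degree = np.diag(affinity.sum(axis=1))
    laplacian = degree - affinity

    # Residual smoothing of the descriptors, then a per-channel gate.
    smoothed = feats - (laplacian @ feats) / C
    gate = 1.0 / (1.0 + np.exp(-smoothed.mean(axis=1)))  # sigmoid, shape (C,)

    return x * gate[:, None, None]
```

In a full module, an analogous computation along the spatial dimension would be combined adaptively with this channel branch, as the abstract describes.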