Recent advances in spatially resolved transcriptomics (SRT) technology have enabled the acquisition of gene expression data at near- or sub-single-cell resolution, together with simultaneous imaging of physical locations. Nevertheless, necessary experimental procedures such as tissue fixation, permeabilization, and tissue removal inevitably induce the diffusion of transcribed molecules. As a result, SRT data partially capture ex-situ transcripts, which introduces a considerable amount of noise into the dataset. To address this issue, in this study we evaluated the diffusion pattern of individual genes within tissue regions and quantitatively calculated their signal-to-noise ratio (SNR). Through this analysis, we identified "invalid genes" that exhibit widespread expression across tissue regions; by filtering out these genes, we effectively reduced the high noise level present in SRT data. To achieve this, we developed the gene filter denoising (GF) algorithm, which uses optimal transport to compute a per-gene diffusion coefficient and generate denoised SRT data. One notable advantage of GF is that it fully "respects" the raw sequencing data, thereby avoiding the false positives often introduced by traditional interpolation- and modification-based denoising methods. Furthermore, we conducted comprehensive validation of GF, and the GF-denoised SRT data demonstrated substantial improvements in clustering, identification of differentially expressed genes (DEGs), and cell type annotation. Taken together, we believe that GF denoising will serve as an essential step in exploring SRT data and investigating the underlying biological processes.
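
As a rough, hypothetical illustration of the filtering idea sketched above (not the published GF implementation), the Python snippet below approximates a per-gene diffusion coefficient with an exact optimal transport cost computed via the POT library, converts it to a crude SNR, and drops low-SNR genes while leaving retained counts untouched. The SNR definition, the use of high-expression spots as an in-situ "source" proxy, and all function names and parameters are our own assumptions for exposition.

```python
# Hypothetical sketch of optimal-transport-based gene filtering for SRT data.
# The diffusion proxy, SNR definition, and all names here are assumptions,
# not the paper's published GF algorithm.
import numpy as np
import ot  # POT: Python Optimal Transport (pip install pot)

def gene_diffusion_coefficient(coords, expr, n_top=50):
    """Approximate a gene's diffusion coefficient as the cost of optimally
    transporting its full spatial expression distribution onto its
    high-expression "source" spots (an assumed proxy for in-situ signal)."""
    p = expr / expr.sum()                    # full expression distribution
    src = np.argsort(expr)[-n_top:]          # assumed in-situ source spots
    q = np.zeros_like(p)
    q[src] = expr[src] / expr[src].sum()     # source-only distribution
    M = ot.dist(coords, coords)              # pairwise squared Euclidean costs
    return ot.emd2(p, q, M)                  # exact OT cost (diffusion proxy)

def filter_genes(coords, X, gene_names, snr_cutoff=1.0):
    """Keep genes whose SNR exceeds the cutoff; retained counts are never
    modified, in the spirit of "respecting" the raw sequencing data."""
    keep = []
    for j in range(X.shape[1]):
        expr = X[:, j].astype(float)
        if expr.sum() == 0:
            continue
        d = gene_diffusion_coefficient(coords, expr)
        snr = 1.0 / (d + 1e-12)              # assumed SNR: less diffusion, higher SNR
        if snr > snr_cutoff:
            keep.append(j)
    return X[:, keep], [gene_names[j] for j in keep]
```

Because filtering only removes whole genes rather than imputing or smoothing values, every count that survives is identical to the raw measurement, which is the design choice the abstract emphasizes.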