Single-point zeroth-order optimization (SZO) is well suited to online black-box optimization and simulation-based learning-to-control problems. However, the vanilla SZO method suffers from large variance and slow convergence, which severely limits its practical applicability. Extremum seeking (ES) control, on the other hand, can be regarded as the continuous-time counterpart of SZO, yet despite this close relation the two have mostly been studied separately in the control and optimization communities. In this work, we borrow the idea of high-pass and low-pass filters from ES control to improve the performance of SZO. Specifically, we develop a novel SZO method, called HLF-SZO, by integrating a high-pass filter and a low-pass filter into the vanilla SZO method. Interestingly, it turns out that the integration of the high-pass filter coincides with the residual-feedback SZO method, and the integration of the low-pass filter can be interpreted as the momentum method. We prove that HLF-SZO achieves a convergence rate of $O(d/T^{2/3})$ for Lipschitz and smooth objective functions (in both convex and nonconvex cases). Extensive numerical experiments show that the high-pass filter significantly reduces the variance and the low-pass filter accelerates convergence. As a result, the proposed HLF-SZO has a much smaller variance and much faster convergence than the vanilla SZO method, and empirically outperforms the state-of-the-art residual-feedback SZO method.
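To make the abstract's description concrete, the following is a minimal sketch of an HLF-SZO-style update loop. It assumes the high-pass filter acts as a residual-feedback (difference of successive function queries) gradient estimate and the low-pass filter acts as an exponential-averaging momentum term; the function name hlf_szo and the parameter names eta, delta, and beta are illustrative choices, not the paper's notation, and the exact filter coefficients in the paper may differ.

```python
import numpy as np

def hlf_szo(f, x0, T=1000, eta=0.01, delta=0.05, beta=0.9, seed=0):
    """Sketch of a single-point zeroth-order method with high-/low-pass filters.

    High-pass filtering of the single-point queries gives a residual-feedback
    gradient estimate; low-pass filtering of that estimate plays the role of
    momentum. Hyperparameter names and defaults are illustrative only.
    """
    rng = np.random.default_rng(seed)
    x = np.array(x0, dtype=float)
    d = x.size
    f_prev = 0.0            # previous perturbed query, used by the high-pass filter
    m = np.zeros(d)         # low-pass filtered (momentum) gradient estimate
    for _ in range(T):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)                     # random direction on the unit sphere
        f_curr = f(x + delta * u)                  # one function query per iteration
        g = (d / delta) * (f_curr - f_prev) * u    # high-pass filter: residual feedback
        m = beta * m + (1.0 - beta) * g            # low-pass filter: momentum averaging
        x = x - eta * m
        f_prev = f_curr
    return x

# Usage example on a simple quadratic objective (illustrative only):
if __name__ == "__main__":
    quad = lambda z: float(np.sum(z ** 2))
    x_star = hlf_szo(quad, x0=np.ones(5), T=5000)
    print(x_star)
```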