Bilateral filtering is an image processing technique commonly adopted as an intermediate step in several computer vision tasks. In contrast to conventional image filtering, which convolves the input pixels with a static kernel, bilateral filtering computes its weights on the fly according to the current pixel values and some tuning parameters. These additional computations involve nonlinear weighted averaging operations, which complicate the deployment of bilateral filtering within existing vision technologies based on real-time, low-energy hardware architectures. This paper presents a new approximation strategy that aims to improve the energy efficiency of circuits implementing the bilateral filtering function, while preserving their real-time performance and processing accuracy. In contrast to the state of the art, the proposed technique allows the filtering action to be adapted on the fly to both the current pixel values and the tuning parameters, thus avoiding any architectural modification or table update. When implemented in hardware on the Xilinx Zynq XC7Z020 FPGA device, a 5 × 5 filter based on the proposed method processes 237.6 Megapixels per second and consumes just 0.92 nJ per pixel, thus improving energy efficiency by up to 2.8 times over competing designs. The impact of the proposed approximation on three different imaging applications has also been evaluated. Experiments demonstrate reasonable accuracy penalties with respect to the exact counterparts.
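For reference, the data-dependent weighting described above can be illustrated with the classical bilateral filter formulation; the symbols below (the spatial and range tuning parameters σ_s and σ_r, the Gaussian kernel G_σ, and the filter window S) are introduced here only for illustration and are not part of the abstract:

\[
BF[I]_p = \frac{1}{W_p}\sum_{q\in S} G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\, G_{\sigma_r}\!\left(\lvert I_p-I_q\rvert\right)\, I_q,
\qquad
W_p = \sum_{q\in S} G_{\sigma_s}\!\left(\lVert p-q\rVert\right)\, G_{\sigma_r}\!\left(\lvert I_p-I_q\rvert\right).
\]

Because the range weight G_{\sigma_r}(|I_p - I_q|) depends on the pixel intensities themselves, the kernel changes at every pixel, which is the source of the nonlinear weighted averaging and the normalization by W_p mentioned above.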