Graphics Processing Units and Tensor Processing Units, coupled with tiny machine learning models deployed on edge devices, are revolutionizing computer vision and real-time tracking systems. However, edge devices often impose tight constraints on computational resources and power consumption. This paper proposes a vision-based virtual sensor paradigm that provides power-aware multi-object tracking at the edge while preserving tracking accuracy and enhancing privacy. The virtual sensor implements a new Dynamic Inference Power Manager (DIPM) based on an adaptive frame rate. We implement and deploy the virtual sensor and the DIPM on the NVIDIA Jetson Nano edge platform to demonstrate the effectiveness and efficiency of the proposed solution. Extensive experimental results show that the proposed virtual sensor achieves a 40% reduction in energy consumption with a marginal decrease of less than 6% in tracking accuracy.
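
To make the adaptive frame-rate idea behind the DIPM concrete, the sketch below shows one plausible way such a manager could be structured; it is an illustrative assumption, not the algorithm described in this paper, and the class name, the `motion_score` input, and the thresholds are all hypothetical.

```python
import time


class DynamicInferencePowerManager:
    """Hypothetical sketch of an adaptive frame-rate power manager.

    It only illustrates the general idea assumed here: run inference
    less often when the scene is static (saving power) and more often
    when tracked objects are moving (preserving accuracy).
    """

    def __init__(self, min_fps=1.0, max_fps=30.0, step=2.0):
        self.min_fps = min_fps   # lowest inference rate (maximum power saving)
        self.max_fps = max_fps   # highest inference rate (maximum accuracy)
        self.fps = max_fps       # current inference frame rate
        self.step = step         # how aggressively the rate is adapted

    def update(self, motion_score):
        """Adapt the frame rate from a per-frame motion/activity score in [0, 1]."""
        if motion_score > 0.5:
            # Scene is busy: raise the inference rate toward max_fps.
            self.fps = min(self.max_fps, self.fps + self.step)
        else:
            # Scene is quiet: lower the inference rate toward min_fps.
            self.fps = max(self.min_fps, self.fps - self.step)
        return 1.0 / self.fps    # seconds to wait before the next inference


# Hypothetical usage with an assumed camera, detector, and motion estimator:
# dipm = DynamicInferencePowerManager()
# while True:
#     frame = camera.read()
#     detections = detector(frame)
#     delay = dipm.update(estimate_motion(detections))
#     time.sleep(delay)
```
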