An oscillatory neural network (ONN) is an emerging neuromorphic architecture composed of oscillators that implement neurons and are coupled by synapses. ONNs exhibit rich dynamics and associative properties, which can be used to solve problems in the analog domain following the "let physics compute" paradigm. For example, compact oscillators made of VO2 material are good candidates for building low-power ONN architectures dedicated to AI applications at the edge, such as pattern recognition. However, little is known about ONN scalability and performance when implemented in hardware. Before deploying an ONN, it is necessary to assess its computation time, energy consumption, performance, and accuracy for a given application. Here, we consider a VO2 oscillator as the ONN building block and perform circuit-level simulations to evaluate ONN performance at the architecture level. Notably, we investigate how ONN computation time, energy, and memory capacity scale with the number of oscillators. We show that ONN energy consumption grows linearly when scaling up the network, making it suitable for large-scale integration at the edge. Furthermore, we investigate the design knobs for minimizing ONN energy. Assisted by TCAD simulations, we report on the scaling of VO2 devices in a crossbar geometry to decrease oscillator voltage and energy. We benchmark the ONN against state-of-the-art architectures and observe that the ONN paradigm is a competitive, energy-efficient solution for scaled VO2 devices. Finally, we present how ONNs can efficiently detect edges in images captured on low-power edge devices.
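The associative computing principle summarized above can be illustrated, in a highly simplified way, with a phase-domain (Kuramoto-type) model in which each oscillator is reduced to a phase variable and each synapse to a coupling weight. The Python sketch below is our own illustration under that assumption, not the paper's circuit-level VO2 model; the function names, parameters, and the Hebbian storage rule are illustrative choices. It stores two binary patterns in a symmetric coupling matrix and retrieves one of them from a corrupted input through the network's phase dynamics.

```python
import numpy as np

# Illustrative phase-domain ONN sketch (assumed Kuramoto-type model, not the
# paper's circuit-level VO2 oscillator model). Oscillators -> phase variables,
# synapses -> Hebbian coupling weights storing +/-1 patterns.

rng = np.random.default_rng(0)

def hebbian_weights(patterns):
    """Store +/-1 patterns in a symmetric coupling matrix (Hopfield-like rule)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)  # no self-coupling
    return w

def onn_retrieve(w, phi0, k=1.0, dt=0.05, steps=2000):
    """Integrate d(phi_i)/dt = -k * sum_j w_ij * sin(phi_i - phi_j) (forward Euler)."""
    phi = phi0.copy()
    for _ in range(steps):
        dphi = -k * np.sum(w * np.sin(phi[:, None] - phi[None, :]), axis=1)
        phi += dt * dphi
    # Binarize relative to oscillator 0: in-phase -> +1, anti-phase -> -1
    return np.sign(np.cos(phi - phi[0]))

# Two orthogonal 8-element binary patterns (illustrative data)
patterns = np.array([[1, -1, 1, -1, 1, -1, 1, -1],
                     [1,  1, 1,  1, -1, -1, -1, -1]], dtype=float)
w = hebbian_weights(patterns)

# Corrupted version of pattern 0, encoded as phases 0 (+1) or pi (-1)
probe = patterns[0].copy()
probe[2] *= -1  # flip one element
phi0 = np.where(probe > 0, 0.0, np.pi) + 0.1 * rng.standard_normal(probe.size)

print("retrieved:", onn_retrieve(w, phi0))
print("stored   :", patterns[0])
```

In this toy setting the stored patterns correspond to stable phase configurations (oscillators locked in-phase or in anti-phase), so relaxing the coupled dynamics from a noisy initial condition performs the associative recall that the abstract attributes to ONN hardware; the circuit-level behavior of real VO2 oscillators is, of course, richer than this phase abstraction.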