Context. The rise of AI use cases targeting the Edge, where devices have limited computation power and storage capabilities, motivates the need for a better understanding of how AI performs and consumes energy. Goal. The aim of this paper is to empirically assess the impact of three different AI containerization strategies on the energy consumption, execution time, CPU usage, and memory usage of computer-vision tasks on the Edge. Method. We conduct an experiment with the containerization strategy as the main factor, with three treatments: ONNX Runtime, WebAssembly, and Docker. The subjects of the experiment are four widely used computer-vision algorithms. We then orchestrate a series of runs in which we deploy the four subjects on different generations of Raspberry Pi devices with different hardware capabilities. A total of 120 runs per device is recorded to gather data on energy consumption, execution time, CPU usage, and memory usage. Results. We found a statistically significant difference among the three containerization strategies for all dependent variables. Specifically, WebAssembly proves to be a valuable alternative for devices with limited disk space and computation power. Conclusions. For computer-vision tasks deployed under tight disk space and RAM constraints, developers should prefer WebAssembly. The (non-dockerized) ONNX Runtime proved to be the best choice in terms of energy consumption and execution time.