This work studied the energy behavior of six matrix multiplication algorithms with differing patterns of hardware resource usage. Two were variants of the standard inner product of rows and columns; the remaining four were variants of Strassen's divide-and-conquer method. The cases differed in ways expected to influence energy behavior. Data were collected for square matrices of dimension up to 4000, with power measured through the reliable on-chip integrated voltage regulators of a recent HPC-class AMD CPU. The inner-product methods used much less energy than the others for small to moderately large matrices, though this advantage diminished at sufficiently large dimensions. Their power draw was also lower at small dimensions, but beyond a certain point the power advantage shifted markedly in favor of the divide-and-conquer group (24% better on average), with the more block-optimized versions showing greater power efficiency (at least 8.3% better than the base method). The study explored the interplay among algorithm design, power efficiency, and computational resources, and its findings are intended to help improve power efficiency in HPC and other settings that rely on this vital computation.
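
As a point of reference for the two algorithm families compared here, the following is a minimal C sketch of the inner-product formulation, in which each entry of C is the dot product of a row of A with a column of B. The function name, row-major layout, and square dimension n are illustrative assumptions, not the study's benchmarked code.

/* Classic inner-product matrix multiplication: C = A * B, with all
 * matrices stored row-major as n-by-n arrays of doubles. */
void matmul_inner_product(int n, const double *A, const double *B, double *C)
{
    for (int i = 0; i < n; i++) {
        for (int j = 0; j < n; j++) {
            double sum = 0.0;              /* accumulate row i of A dotted with column j of B */
            for (int k = 0; k < n; k++)
                sum += A[i * n + k] * B[k * n + j];
            C[i * n + j] = sum;
        }
    }
}

By contrast, Strassen's divide-and-conquer approach partitions each matrix into 2x2 blocks and replaces the eight block multiplications of the straightforward scheme with seven, at the cost of extra additions and temporary storage; this different use of memory and arithmetic is what makes the two families interesting to compare for energy and power behavior.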