The edge processing of deep neural networks (DNNs) is becoming increasingly important because it extracts valuable information directly at the data source, minimizing latency and energy consumption. Although pruning techniques are commonly used to reduce model size for edge computing, they have certain limitations. Frequency-domain model compression, such as with the Walsh-Hadamard transform (WHT), has been identified as an efficient alternative. However, the benefits of frequency-domain processing are often offset by the increased number of multiply-accumulate (MAC) operations required. This article proposes a novel approach to energy-efficient acceleration of frequency-domain neural networks that utilizes analog-domain frequency-based tensor transformations. Our approach offers unique opportunities to enhance computational efficiency, resulting in several high-level advantages, including an array microarchitecture with parallelism, analog-to-digital converter (ADC)/digital-to-analog converter (DAC)-free analog computations, and increased output sparsity. Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix. Moreover, our novel array microarchitecture enables adaptive stitching of cells column-wise and row-wise, thereby facilitating perfect parallelism in computations. Additionally, our scheme enables ADC/DAC-free computations by training against highly quantized matrix-vector products, leveraging the parameter-free nature of the matrix multiplications. Another crucial aspect of our design is its ability to handle signed-bit processing for frequency-based transformations, which leads to increased output sparsity and a reduced digitization workload. On a 16 × 16 crossbar with 8-bit input processing, the proposed approach achieves an energy efficiency of 801 tera operations per second per watt (TOPS/W) without an early termination strategy and 2655 TOPS/W with early termination at VDD = 0.85 V for 16-nm predictive technology models (PTM).
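To make the parameter-free nature of the transformation concrete, the following is a minimal NumPy sketch (not the paper's implementation) of a WHT applied to a 16-element activation tile, sized to match a 16 × 16 crossbar. The matrix entries are fixed at ±1, so no trainable parameters are stored, and the coarse quantization step in `wht_block` only illustrates the idea of training against highly quantized matrix-vector products; the names `hadamard`, `wht_block`, and the `levels` parameter are illustrative assumptions, not from the article.

```python
import numpy as np

def hadamard(n: int) -> np.ndarray:
    """Build the n x n Walsh-Hadamard matrix (n must be a power of two).

    Entries are +1/-1 only, so the transform has no trainable parameters
    and needs only signed additions rather than true multiplications.
    """
    assert n > 0 and (n & (n - 1)) == 0, "n must be a power of two"
    H = np.array([[1]])
    while H.shape[0] < n:
        # Sylvester construction: [[H, H], [H, -H]]
        H = np.block([[H, H], [H, -H]])
    return H

def wht_block(x: np.ndarray, levels: int = 16) -> np.ndarray:
    """Transform one activation tile and coarsely quantize the result,
    mimicking training against highly quantized matrix-vector products
    so that fine-grained ADC/DAC conversion could be skipped."""
    H = hadamard(x.shape[0])
    y = H @ x                                   # signed-bit matrix-vector product
    step = (np.abs(y).max() + 1e-12) / (levels // 2)
    return np.round(y / step) * step            # coarse, uniform quantization

# Example: a 16-element tile, matching one column group of a 16 x 16 crossbar.
rng = np.random.default_rng(0)
x = rng.standard_normal(16)
y = wht_block(x)
print("nonzero quantized outputs:", np.count_nonzero(y), "of", y.size)
```

Because the quantized outputs cluster around a small number of levels, many of them round to zero, which is one way to picture the increased output sparsity and reduced digitization workload described above.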