Memristive crossbars have become a popular means of realizing unsupervised and supervised learning techniques. In previous neuromorphic architectures with leaky integrate-and-fire neurons, the crossbar itself has been separated from the neuron capacitors to preserve mathematical rigor. In this work, we sought to design a simplified sparse coding circuit without this restriction, resulting in a fast circuit that approximated a sparse coding operation with minimal loss in accuracy. We showed that connecting the neurons directly to the crossbar resulted in a more energy-efficient sparse coding architecture and obviated the need to pre-normalize receptive fields. This work provides derivations for the design of such a network, named the Simple Spiking Locally Competitive Algorithm, or SSLCA, as well as CMOS designs and results on the CIFAR-10 and MNIST datasets. Compared to a non-spiking, non-approximate model that scored 33 % on CIFAR-10 with a single-layer classifier, this hardware scored 32 % accuracy. When paired with a state-of-the-art deep learning classifier, the non-spiking model achieved 82 % accuracy and our simplified, spiking model achieved 80 %, while compressing the input data by 92 %. Compared to a previously proposed spiking model, our proposed hardware consumed 99 % less energy to do the same work at 21× the throughput. Accuracy held up under online learning to a write variance of 3 %, suitable for the often-reported 4-bit resolution required by neuromorphic algorithms; under offline learning to a write variance of 27 %; and under read variance up to 40 %. The proposed architecture's excellent accuracy, high throughput, and significantly lower energy usage demonstrate the utility of our innovations.