A long-standing research goal is to develop computing technologies that mimic the brain’s capabilities by implementing computation in electronic systems directly inspired by its structure, function, and operational mechanisms, using low-power, spike-based neural networks. The Loihi neuromorphic processor provides a low-power, large-scale network of programmable silicon neurons for brain-inspired artificial intelligence applications. This paper exploits the Loihi processor together with a theory-guided methodology to enable unsupervised learning of spike patterns. Our method ensures rapid and efficient selection of the network’s hyperparameters, enabling the neuromorphic processor to generate attractor states through real-time unsupervised learning. Specifically, we follow a fast design process in which we fine-tune the network parameters using mean-field theory. Moreover, we quantify the network’s learning ability in terms of its error-correction and pattern-completion performance. Finally, we measure the dynamic energy consumption of the neuron cores to be 23 μJ per millisecond time step during the learning and recall phases, for four attractors composed of 512 excitatory neurons and 256 shared inhibitory neurons. This study showcases how large-scale, low-power digital neuromorphic processors can be quickly programmed to enable the autonomous generation of attractor states. These attractors are fundamental computational primitives that theoretical analysis and experimental evidence suggest are versatile, reusable components suitable for a wide range of cognitive tasks.
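For concreteness, the following is a minimal sketch of the standard mean-field description commonly used to tune excitatory–inhibitory networks of leaky integrate-and-fire (LIF) neurons toward attractor dynamics; the notation (population rates $\nu_E,\nu_I$, synaptic efficacies $J$, in-degrees $C$, membrane time constant $\tau_m$, refractory period $\tau_{rp}$, reset $V_r$, threshold $\theta$) is generic textbook notation and is assumed here rather than taken from this work. It illustrates the kind of self-consistency condition solved during hyperparameter selection:
\begin{align}
\mu_E &= \tau_m\!\left(C_{EE} J_{EE}\,\nu_E - C_{EI} J_{EI}\,\nu_I + C_{\mathrm{ext}} J_{\mathrm{ext}}\,\nu_{\mathrm{ext}}\right),\\
\sigma_E^2 &= \tau_m\!\left(C_{EE} J_{EE}^2\,\nu_E + C_{EI} J_{EI}^2\,\nu_I + C_{\mathrm{ext}} J_{\mathrm{ext}}^2\,\nu_{\mathrm{ext}}\right),\\
\nu_E &= \phi(\mu_E,\sigma_E), \qquad
\phi(\mu,\sigma)^{-1} = \tau_{rp} + \tau_m\sqrt{\pi}\int_{\frac{V_r-\mu}{\sigma}}^{\frac{\theta-\mu}{\sigma}} e^{u^2}\bigl(1+\operatorname{erf}(u)\bigr)\,du,
\end{align}
with analogous expressions for the inhibitory population. In this picture, the synaptic weights are chosen so that the coupled rate equations admit both a low-rate spontaneous fixed point and elevated-rate selective fixed points, the latter corresponding to the attractor states.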