Spiking neural networks (SNNs) have started to deliver energy-efficient, massively parallel, and low-latency solutions to AI problems, facilitated by the emerging neuromorphic hardware. To harness these computational benefits, SNNs need to be trained by learning algorithms that adhere to brain-inspired neuromorphic principles, namely event-based, local, and online computations. However, state-of-the-art SNN training algorithms are based on backpropagation, which does not follow these neuromorphic computational principles. Applied to SNNs, backprop is biologically implausible: it requires non-local feedback pathways for transmitting continuous-valued errors, and it relies on gradients from future timesteps. Recently introduced biologically plausible modifications to backprop overcome several of these limitations, but only at the cost of approximating backprop more loosely, which hinders performance. Here, we propose a biologically plausible gradient-based learning algorithm for SNNs that is functionally equivalent to backprop while adhering to all three neuromorphic computational principles. We introduce multi-compartment spiking neurons with local eligibility traces to compute the gradients required for learning, and a periodic "sleep" phase, during which a local Hebbian rule aligns the feedback and feedforward weights, to further improve the approximation to backprop. Our method achieved the same level of performance as backprop with multi-layer fully connected SNNs on the MNIST (98.13%) and the event-based N-MNIST (97.59%) datasets. We then deployed our learning algorithm on Intel's Loihi neuromorphic processor to train a 1-hidden-layer network for MNIST, and obtained 93.32% test accuracy while consuming 400 times less energy per training sample than BioGrad on GPU. Our work demonstrates that optimal learning is feasible in neuromorphic computing, and that further pursuing its biological plausibility can better capture the computational benefits of this emerging computing paradigm.
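
To make the sleep-phase idea concrete, the following is a minimal NumPy sketch of a noise-driven Hebbian rule that aligns feedback weights with feedforward weights, in the spirit of weight-mirror rules (Akrout et al., 2019). The layer sizes, learning rate, and Gaussian noise drive are illustrative assumptions, not the paper's exact algorithm or hyperparameters.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes and hyperparameters (illustrative only).
n_in, n_out, batch, lr, steps = 100, 10, 64, 0.01, 2000

W = rng.normal(0.0, 0.1, (n_out, n_in))  # feedforward weights
B = rng.normal(0.0, 0.1, (n_in, n_out))  # feedback weights to be aligned

for _ in range(steps):
    # "Sleep" phase: the layer is driven by random activity instead of data.
    X = rng.standard_normal((batch, n_in))  # presynaptic noise
    Y = X @ W.T                             # postsynaptic response
    # Local Hebbian rule: each feedback synapse sees only its own pre- and
    # postsynaptic activity. For unit-variance noise, E[x y^T] = W^T, so the
    # update drives B toward the transpose of the feedforward weights.
    B += lr * (X.T @ Y / batch - B)

# Check alignment: cosine similarity between B and W^T approaches 1.
cos = np.sum(B * W.T) / (np.linalg.norm(B) * np.linalg.norm(W))
print(f"cosine(B, W^T) = {cos:.3f}")
```

In the paper's setting, the corresponding update would be computed from local spiking activity on neuromorphic hardware; this dense-matrix version only illustrates why a purely local Hebbian rule can align the feedback pathway with the feedforward weights.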