The deployment of neural network (NN)-based optical channel equalizers on edge-computing devices is critically important for the next generation of optical communication systems. However, this remains a highly challenging problem, mainly because of the computational complexity of the NNs required for efficient equalization of nonlinear optical channels with large dispersion-induced memory. To implement an NN-based optical channel equalizer in hardware, a substantial complexity reduction is needed, while the simplified NN model must retain an acceptable performance level. In this work, we address the complexity reduction problem by applying pruning and quantization techniques to an NN-based optical channel equalizer. We use an exemplary NN architecture, the multi-layer perceptron (MLP), to mitigate the impairments of 30 GBd transmission over 1000 km of standard single-mode fiber, and demonstrate that the equalizer's memory can be reduced by up to 87.12% and its complexity by up to 78.34% without noticeable performance degradation. In addition, we accurately define the computational complexity of the compressed NN-based equalizer in the digital signal processing (DSP) sense. Further, we examine how hardware with different CPU and GPU features affects the power consumption and latency of the compressed equalizer. Finally, we verify the developed technique experimentally by implementing the reduced NN equalizer on two standard edge-computing hardware units, the Raspberry Pi 4 and the Nvidia Jetson Nano, which are used to process data generated by simulating the signal's propagation through the optical-fiber system.
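For illustration only, the sketch below shows the two compression steps named above, magnitude-based pruning followed by weight quantization, applied to a small MLP equalizer in PyTorch. The layer widths, tap-window length, pruning ratio, and dynamic int8 quantization scheme are assumptions chosen for the example and do not reproduce the exact model, training procedure, or compression pipeline used in this work.

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

class MLPEqualizer(nn.Module):
    """Toy MLP equalizer: a window of received symbols in, the centre symbol out."""
    def __init__(self, n_taps=21, hidden=(128, 64)):
        super().__init__()
        # Input features: real and imaginary parts of n_taps received symbols.
        self.net = nn.Sequential(
            nn.Linear(2 * n_taps, hidden[0]),
            nn.ReLU(),
            nn.Linear(hidden[0], hidden[1]),
            nn.ReLU(),
            nn.Linear(hidden[1], 2),  # real/imag of the equalized centre symbol
        )

    def forward(self, x):
        return self.net(x)

model = MLPEqualizer()

# Step 1: magnitude-based (L1) unstructured pruning of every Linear layer's weights.
# The 80% sparsity level here is purely illustrative.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.8)
        prune.remove(module, "weight")  # drop the mask reparameterization; zeros stay in the dense weight

# Step 2: post-training dynamic quantization, storing Linear weights as int8 (CPU inference).
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

# Quick sanity check on a small batch of tap windows.
x = torch.randn(4, 2 * 21)
print(quantized(x).shape)  # -> torch.Size([4, 2])
```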