The substantial increase in communication throughput, driven by ever-growing machine-to-machine communication within and between data centers, is straining short-reach communication links. To satisfy this demand while still complying with strict requirements on energy consumption and latency, several directions are being investigated, with a strong focus on equalization techniques for intensity-modulation/direct-detection (IM/DD) transmission. In particular, the key challenge equalizers need to address is the inter-symbol interference introduced by fiber dispersion when making use of the low-loss transmission window at 1550 nm. Standard digital equalizers such as feed-forward equalizers (FFEs) and decision-feedback equalizers (DFEs) can provide only limited compensation. Therefore, more complex approaches, either relying on maximum likelihood sequence estimation (MLSE) or using machine-learning tools such as neural-network (NN) based equalizers, are being investigated. Among the different NN architectures, the most promising approaches are based on NNs with memory, such as the time-delay feedforward NN (TD-FNN), the recurrent NN (RNN), and reservoir computing (RC). In this work, we review our recent numerical results comparing TD-FNN and RC equalizers, and benchmark their performance for 32-GBd on-off keying (OOK) transmission. Special focus is dedicated to analyzing the memory properties of the reservoir and their impact on the full-system performance. Experimental validation of the numerical findings is also provided, together with a review of our recent proposal for a new receiver architecture relying on hybrid optoelectronic processing. By spectrally slicing the received signal, detecting the slices independently, and jointly processing them with an NN-based equalizer (either TD-FNN or RC), a significant reach extension is demonstrated both numerically and experimentally.
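To make the RC equalization principle concrete, the following is a minimal echo-state-network sketch in NumPy. It is an illustrative toy, not the implementation used in this work: the reservoir size, leak rate, spectral radius, the synthetic ISI channel, and all variable names are assumptions chosen only to show the training principle (a fixed random recurrent reservoir providing memory, followed by a linear readout fitted by ridge regression).

```python
# Toy reservoir-computing (echo state network) equalizer for an ISI-impaired
# OOK signal. All parameters below are illustrative assumptions, not the
# values used in the paper.
import numpy as np

rng = np.random.default_rng(0)

# --- Synthetic OOK link with inter-symbol interference (a stand-in for
# --- chromatic dispersion; the paper uses a fiber channel model instead).
n_sym = 20000
bits = rng.integers(0, 2, n_sym).astype(float)
h = np.array([0.2, 0.5, 1.0, 0.5, 0.2])          # toy ISI impulse response
rx = np.convolve(bits, h, mode="same")            # received samples with ISI
rx += 0.05 * rng.standard_normal(n_sym)           # additive noise

# --- Fixed random reservoir (leaky-integrator ESN); only the readout is
# --- trained, which is what makes RC training cheap.
N = 200                                           # reservoir neurons (assumed)
leak = 0.3                                        # leak rate (assumed)
W_in = rng.uniform(-0.5, 0.5, (N, 1))             # input weights, fixed
W = rng.uniform(-0.5, 0.5, (N, N))                # recurrent weights, fixed
W *= 0.9 / max(abs(np.linalg.eigvals(W)))         # spectral radius < 1

states = np.zeros((n_sym, N))
x = np.zeros(N)
for t in range(n_sym):
    pre = W_in[:, 0] * rx[t] + W @ x              # drive reservoir with rx
    x = (1 - leak) * x + leak * np.tanh(pre)      # leaky state update
    states[t] = x

# --- Linear readout trained by ridge regression on the first half of the
# --- sequence, evaluated on the second half.
split = n_sym // 2
X_tr, X_te = states[:split], states[split:]
y_tr, y_te = bits[:split], bits[split:]
reg = 1e-6                                        # ridge regularization
w_out = np.linalg.solve(X_tr.T @ X_tr + reg * np.eye(N), X_tr.T @ y_tr)
bits_hat = (X_te @ w_out > 0.5).astype(float)     # hard decision on readout
print("BER:", np.mean(bits_hat != y_te))
```

The recurrent state x retains a fading trace of past inputs, which is the memory property the abstract refers to; how far that trace reaches back, relative to the dispersion-induced channel memory, governs how much ISI the readout can undo.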