In convolutional neural network (CNN) accelerators, accesses to external data memory dominate power consumption. In addition, the power and area of I/O interfaces that maintain a low bit error rate (BER), e.g., 1e-15, grow as the data rate increases. Given the inherent error resilience of the inference process in machine learning applications, the requirement of error-free communication in the data path is open to question. In this paper, a custom CNN accelerator integrating a channel emulator is implemented on an FPGA to analyze the effect of the I/O transceiver BER on image classification accuracy. The channel emulator employs a digital-domain look-up-table (LUT)-based 12-tap finite impulse response (FIR) filter to create inter-symbol interference (ISI) and a PRBS31 generator as a noise source. The implementation was evaluated by running VGG-16 inference on the ImageNet dataset using the custom accelerator on a Virtex UltraScale+ FPGA. The results show that a BER of up to 1e-4 in memory accesses has a negligible impact on inference accuracy.
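To make the channel-emulation scheme concrete, the following is a minimal behavioral sketch in Python of the signal chain described above (the actual design is synthesized in FPGA logic): a PRBS31 linear-feedback shift register supplies pseudo-random noise, a 12-tap FIR filter introduces ISI, and a hard-decision slicer recovers the bits so an effective BER can be measured. The function names, tap coefficients, seeds, and noise amplitude here are illustrative assumptions, not the values used in the paper.

```python
import numpy as np

def prbs31_bits(n, seed=0x7FFFFFFF):
    """PRBS31 LFSR (polynomial x^31 + x^28 + 1), returned as a 0/1 array."""
    state = seed & 0x7FFFFFFF
    bits = np.empty(n, dtype=np.int8)
    for i in range(n):
        fb = ((state >> 30) ^ (state >> 27)) & 1   # taps at stages 31 and 28
        state = ((state << 1) | fb) & 0x7FFFFFFF
        bits[i] = fb
    return bits

def emulate_channel(tx_bits, taps, noise_amp):
    """Apply 12-tap FIR ISI and PRBS-derived noise, then slice back to bits."""
    symbols = 2.0 * tx_bits - 1.0                  # map {0,1} -> {-1,+1}
    isi = np.convolve(symbols, taps, mode="same")  # linear ISI channel
    # A second PRBS31 phase (different seed) acts as the bipolar noise source.
    noise = noise_amp * (2.0 * prbs31_bits(len(tx_bits), seed=0x1ACFFC1D) - 1.0)
    return ((isi + noise) > 0).astype(np.int8)     # hard-decision slicer

# Illustrative 12-tap impulse response: a dominant main cursor (taps[5]) plus
# decaying pre-/post-cursor ISI. These are NOT the paper's coefficients.
taps = np.array([0.01, -0.02, 0.05, -0.10, 0.25, 1.00,
                 0.30, -0.12, 0.06, -0.03, 0.02, -0.01])

tx = prbs31_bits(200_000)                   # PRBS31 also serves as test data
rx = emulate_channel(tx, taps, noise_amp=0.4)
print(f"emulated BER ~ {np.mean(tx != rx):.2e}")
```

Sweeping the (hypothetical) noise_amp parameter scales the injected noise relative to the main cursor and thus moves the emulated BER across the range of interest, e.g., down toward the 1e-4 operating point examined in the paper.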