“…Notably, early approaches to BNN models (Courbariaux et al, 2015, 2016; Rastegari et al, 2016) quantize weights or activations into {+1, −1}, which replaces floating-point multiplications with binary bitwise operations, approximating the floating-point multiply-accumulate operation using bitwise XNOR and bit-counting operations. In addition, the quantized binary weights reduce weight storage requirements, making BNNs a highly appealing approach for implementing CNNs on embedded systems and programmable devices (Guo, 2018; Zhou et al, 2017b; Yi et al, 2018; Liang et al, 2018). Despite these benefits, the low precision of the binarized operations in BNNs degrades classification accuracy on modern CNNs, thus limiting their applications.…”
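The XNOR-and-popcount substitution described above can be sketched as follows. This is an illustrative example, not the reference implementation from any of the cited papers: for two vectors binarized to {+1, −1} and packed into bitmasks (bit 1 for +1, bit 0 for −1), their dot product of length n equals 2·popcount(XNOR(a, b)) − n, since XNOR sets a bit exactly where the signs agree.

```python
def binarize(xs):
    """Map real values to {+1, -1} by sign (0 maps to +1 by convention here)."""
    return [1 if x >= 0 else -1 for x in xs]

def pack_bits(bs):
    """Pack a {+1, -1} vector into an integer bitmask: +1 -> 1, -1 -> 0."""
    word = 0
    for i, b in enumerate(bs):
        if b == 1:
            word |= 1 << i
    return word

def xnor_dot(wa, wb, n):
    """Dot product of two packed {+1, -1} vectors of length n via XNOR + popcount."""
    xnor = ~(wa ^ wb) & ((1 << n) - 1)   # bit set where the two signs match
    matches = bin(xnor).count("1")        # popcount
    return 2 * matches - n                # matches - mismatches

# Sanity check against the exact floating-point-style dot product.
a = binarize([0.3, -1.2, 0.7, 0.1, -0.5, 2.0, -0.1, 0.9])
b = binarize([1.0, -0.4, -0.6, 0.2, 0.8, 1.5, -2.0, -0.3])
assert xnor_dot(pack_bits(a), pack_bits(b), 8) == sum(x * y for x, y in zip(a, b))
```

On real hardware the popcount maps to a single instruction (e.g. `POPCNT`), which is where the speed and energy savings over floating-point multiply-accumulate come from.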