“…In Fan et al. [1], the authors developed efficient in-memory circuits to perform both floating-point vector-matrix multiplication and vector Hadamard product (element-wise product) operations, which are necessary for neural network training. The proposed IMC design, validated with a 28 nm process design kit, enables both neural network inference and training at the edge, and improves data density by 3.5× over previous designs with negligible precision loss relative to ideal BFloat16 computation.…”
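For readers unfamiliar with the two operations named above, the following is a minimal software sketch of what the in-memory circuits compute, emulated here with BFloat16 tensors in PyTorch. The shapes, variable names, and the float32 comparison are illustrative assumptions, not the authors' circuit or evaluation methodology.

```python
import torch

# Illustrative emulation only: the two BFloat16 operations the IMC
# design accelerates, computed here in software. Shapes are arbitrary.
torch.manual_seed(0)

W = torch.randn(64, 32, dtype=torch.bfloat16)  # weight matrix (assumed shape)
x = torch.randn(32, dtype=torch.bfloat16)      # input activation vector
g = torch.randn(64, dtype=torch.bfloat16)      # e.g., a gradient/error vector

# Vector-matrix multiplication: used in forward and backward passes.
y = W @ x        # shape: (64,)

# Vector Hadamard (element-wise) product: used during training,
# e.g., when combining gradients with activations.
h = g * y        # shape: (64,)

# One rough way to gauge "precision loss relative to ideal BFloat16":
# compare the BFloat16 result against a float32 reference.
y_ref = W.float() @ x.float()
print((y.float() - y_ref).abs().max())
```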