“…In DNNs, approximations were introduced at the levels of the data type quantization, the microarchitecture (e.g., neurons contributing insignificantly to the quality of outputs can be removed), the training algorithm (an iterative process that can be stopped when good enough results are obtained), the multiply-accumulate circuits (where the design of approximate multipliers and adders for DNN applications represents an independent topic [15], [16]), and the memory cells and architecture (where, e.g., less significant bits can be stored in energy-efficient but less reliable memory cells [17]). An ultra-low-power deep learning ASIC for IoT was implemented on a single chip, capable of performing 374 GOPS/W and consuming less than 300 µW.…”
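The first approximation level mentioned in the quote, data-type quantization, can be illustrated with a minimal sketch. The 8-bit symmetric scheme and the helper names below are illustrative assumptions for exposition, not the specific method of the cited works:

```python
def quantize_int8(weights):
    # Symmetric uniform quantization: map float weights onto the
    # integer range [-127, 127] using a single per-tensor scale.
    # (Illustrative sketch; real DNN quantizers add per-channel
    # scales, zero-points, calibration, etc.)
    scale = max(abs(w) for w in weights) / 127.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float values; the error per weight is
    # bounded by scale / 2.
    return [v * scale for v in q]

weights = [0.5, -1.27, 0.03, 1.0]
q, scale = quantize_int8(weights)
restored = dequantize(q, scale)
```

Storing and multiplying 8-bit integers instead of 32-bit floats is what makes the quantized network cheaper in both memory traffic and multiply-accumulate energy, at the cost of the bounded reconstruction error shown above.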