“…In this paper, we consider {4, 6, 8, 16} bit-width precisions in our search space for the ImageNet dataset, as the accelerator supports these bit widths. Hence, there exist 16 distinct (weight, activation) precision pairs for each layer, which are as follows: {(4,4), (6,4), (8,4), (16,4), (4,6), (6,6), (8,6), (16,6), (4,8), (6,8), (8,8), (16,8), (4,16), (6,16), (8,16), (16,16)}…”
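A minimal sketch of how this per-layer search space can be enumerated as the Cartesian product of the supported bit widths. The names `SUPPORTED_BITS` and `precision_pairs` are illustrative, not from the paper, and the ordering of the two elements in each pair (weight vs. activation) simply mirrors the quoted list; the paper's own convention may differ.

```python
# Bit widths supported by the accelerator, per the quoted passage.
SUPPORTED_BITS = (4, 6, 8, 16)

# Each per-layer choice is a pair drawn from the Cartesian product of the
# supported bit widths, giving 4 x 4 = 16 options per layer. The first
# element varies fastest, matching the order of the quoted list.
precision_pairs = [(first, second)
                   for second in SUPPORTED_BITS
                   for first in SUPPORTED_BITS]

assert len(precision_pairs) == len(SUPPORTED_BITS) ** 2  # 16 choices per layer
print(precision_pairs)
# [(4, 4), (6, 4), (8, 4), (16, 4), (4, 6), ..., (16, 16)]
```

With L layers, the full network-level search space under this scheme would contain 16^L configurations, which is why the paper restricts the per-layer choices to the bit widths the accelerator natively supports.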