Large-scale distributed convolutional neural network (CNN) training poses two performance challenges: model performance and system performance. A large batch size usually leads to a loss in model test accuracy, which counteracts the benefits of parallel SGD, and existing solutions require extensive hand-tuning of hyperparameters. To overcome this difficulty, we analyze the training process and find that earlier training stages are more sensitive to batch size. Accordingly, we assert that different stages should use different batch sizes, and we propose a variable batch size strategy. To maintain high test accuracy at larger batch sizes, we design an auto-tuning engine that automatically tunes the parameters of the proposed variable batch size strategy. Furthermore, we develop a dataflow implementation approach to achieve high-throughput CNN training on supercomputer systems. Our approach achieves high generalization performance on state-of-the-art CNN networks. For ShuffleNet trained on the ImageNet-1K dataset, we scale the batch size to 120 K without accuracy loss and to 128 K with only a slight loss, and the dataflow implementation approach achieves 93.5% scaling efficiency on 1024 GPUs compared with the state-of-the-art.
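To make the variable batch size strategy concrete, the sketch below shows one way a stage-dependent batch size schedule could be wired into a PyTorch-style training loop: smaller batches in the earlier, more batch-size-sensitive stages and larger batches later. The stage boundaries, batch sizes, and helper names (`STAGE_SCHEDULE`, `batch_size_for_epoch`, `make_loader`) are hypothetical illustrations, not the paper's auto-tuned values or implementation.

```python
import torch
from torch.utils.data import DataLoader

# Hypothetical stage schedule as (start_epoch, batch_size) pairs.
# Earlier stages use smaller batches because they are more sensitive to
# batch size; later stages scale up. Values are illustrative only.
STAGE_SCHEDULE = [(0, 4096), (30, 32768), (60, 131072)]

def batch_size_for_epoch(epoch):
    """Return the batch size of the stage that the given epoch falls into."""
    size = STAGE_SCHEDULE[0][1]
    for start_epoch, batch in STAGE_SCHEDULE:
        if epoch >= start_epoch:
            size = batch
    return size

def make_loader(dataset, epoch, num_workers=8):
    """Rebuild the DataLoader whenever the stage (and thus batch size) changes."""
    return DataLoader(
        dataset,
        batch_size=batch_size_for_epoch(epoch),
        shuffle=True,
        num_workers=num_workers,
        drop_last=True,
    )
```

In practice such a schedule would be produced by the auto-tuning engine rather than fixed by hand, and the per-worker batch size would be this global value divided by the number of GPUs.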