Neural architecture search has attracted great attention in the research community and has recently been applied successfully in industry. Differentiable architecture search (DARTS) is an efficient architecture search method. However, the networks searched by DARTS are often unstable due to the large gap in architecture depth between the search phase and the verification phase. In addition, due to unfair exclusive competition between different candidate operations, DARTS is prone to skip-connection aggregation, which may cause performance collapse. In this article, we propose progressive partial channel connections based on channel attention for differentiable architecture search (PA-DARTS) to solve the above problems. In the early stage of searching, we select only a few key channels for convolution using channel attention and reserve all candidate operations. As the search progresses, we gradually increase the number of channels and eliminate unpromising candidate operations, ensuring that both the search phase and the verification phase are carried out on 20 cells. Owing to the partial channel connections based on channel attention, we can eliminate the unfair competition between operations and increase the stability of PA-DARTS. Experimental results showed that PA-DARTS could achieve 97.59% and 83.61% classification accuracy on CIFAR-10 and CIFAR-100, respectively. On ImageNet, our algorithm achieved 75.3% classification accuracy.

Index Terms: Differentiable architecture search, exclusive competition, channel attention, progressive partial channel connections.

I. INTRODUCTION

In recent years, deep neural networks have been widely used in industry due to their outstanding performance [1], [2]. The topological structure of a deep neural network plays a decisive role in its performance. Early neural network architectures were mostly designed manually by professionals, such as VGG [3], ResNet [4] and GoogLeNet [5]. Although these networks have performed impressively, designing an excellent deep neural network architecture requires plenty of trial and error by experts.
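As a concrete illustration of the partial channel connection described in the abstract, the following is a minimal sketch, assuming PyTorch and an SE-style attention block; the names ChannelAttention, PartialChannelMixedOp, and keep_ratio are illustrative assumptions rather than identifiers from the paper. Channel attention scores the input channels, only the top-scoring fraction is passed through the softmax-weighted mixture of candidate operations, and the remaining channels bypass the mixture unchanged.

```python
# A minimal sketch (not the authors' released code), assuming PyTorch and an
# SE-style attention block; names are illustrative, not taken from the paper.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """SE-style module that produces one score per input channel."""

    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) -> scores: (N, C)
        return self.fc(x.mean(dim=(2, 3)))  # global average pooling, then MLP


class PartialChannelMixedOp(nn.Module):
    """Send only the top-k attended channels through the candidate operations."""

    def __init__(self, channels: int, candidate_ops, keep_ratio: float = 0.25):
        super().__init__()
        self.k = max(1, int(channels * keep_ratio))   # channels fed to the ops
        self.attention = ChannelAttention(channels)
        self.ops = nn.ModuleList(candidate_ops)       # each op must map k -> k channels
        self.alpha = nn.Parameter(torch.zeros(len(candidate_ops)))  # architecture weights

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        scores = self.attention(x).mean(dim=0)        # average channel scores over the batch
        mask = torch.zeros_like(scores, dtype=torch.bool)
        mask[torch.topk(scores, self.k).indices] = True

        selected, bypass = x[:, mask], x[:, ~mask]    # split attended / remaining channels
        weights = torch.softmax(self.alpha, dim=0)    # relax the discrete choice of operation
        mixed = sum(w * op(selected) for w, op in zip(weights, self.ops))
        return torch.cat([mixed, bypass], dim=1)      # recombine to the original width


# Usage: three hypothetical candidate ops on a 16-channel input, 1/4 of the channels attended.
ops = [nn.Conv2d(4, 4, 3, padding=1), nn.Conv2d(4, 4, 5, padding=2), nn.Identity()]
layer = PartialChannelMixedOp(channels=16, candidate_ops=ops, keep_ratio=0.25)
out = layer(torch.randn(2, 16, 32, 32))               # -> shape (2, 16, 32, 32)
```

In the progressive scheme outlined in the abstract, keep_ratio would be raised over successive search stages while the candidate operations with the smallest architecture weights (alpha) are pruned; practical implementations also handle channel ordering (e.g., via channel shuffle), which this sketch omits.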