Neural architecture search (NAS) has made significant progress in improving the accuracy of image classification. Recently, some works have attempted to extend NAS to image segmentation, showing preliminary feasibility. However, all of them focus on searching architectures for semantic segmentation of natural scenes. In this paper, we design three types of primitive operation sets in the search space to automatically find two cell architectures, DownSC and UpSC, for semantic image segmentation, especially medical image segmentation. Inspired by the U-net architecture and its variants, which have been successfully applied to various medical image segmentation tasks, we propose NAS-Unet, which stacks the same number of DownSC and UpSC cells on a U-like backbone network. The architectures of DownSC and UpSC are updated simultaneously by a differentiable architecture search strategy during the search stage. We demonstrate the good segmentation results of the proposed method on the Promise12, Chaos, and ultrasound nerve datasets, which were collected by magnetic resonance imaging, computed tomography, and ultrasound, respectively. Without any pretraining, our architecture searched on PASCAL VOC2012 attains better performance and far fewer parameters (about 0.8M) than U-net and one of its variants when evaluated on the above three types of medical image datasets.

INDEX TERMS: Medical image segmentation, convolutional neural architecture search, deep learning.
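As a rough illustration of the U-like backbone described above, the sketch below stacks an equal number of down-sampling and up-sampling cells joined by skip connections. Note that DownSC and UpSC are shown here as fixed convolutional placeholders, whereas in NAS-Unet their internal structure is what the search actually discovers; all class names, channel widths, and the depth are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class DownSC(nn.Module):
    """Placeholder for a searched down-sampling cell (stride-2)."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.op = nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, stride=2, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, x):
        return self.op(x)

class UpSC(nn.Module):
    """Placeholder for a searched up-sampling cell; fuses the skip feature."""
    def __init__(self, c_in, c_out):
        super().__init__()
        self.up = nn.ConvTranspose2d(c_in, c_out, 2, stride=2)  # doubles H, W
        self.op = nn.Sequential(
            nn.Conv2d(2 * c_out, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out), nn.ReLU(inplace=True))

    def forward(self, x, skip):
        x = self.up(x)
        return self.op(torch.cat([x, skip], dim=1))

class NASUnetBackbone(nn.Module):
    """U-like backbone stacking the same number of DownSC and UpSC cells.
    Input spatial size is assumed divisible by 2**depth."""
    def __init__(self, in_ch=1, base=16, depth=3, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, base, 3, padding=1)
        chs = [base * 2 ** i for i in range(depth + 1)]
        self.downs = nn.ModuleList(DownSC(chs[i], chs[i + 1]) for i in range(depth))
        self.ups = nn.ModuleList(UpSC(chs[i + 1], chs[i]) for i in reversed(range(depth)))
        self.head = nn.Conv2d(base, n_classes, 1)

    def forward(self, x):
        x = self.stem(x)
        skips = []
        for down in self.downs:          # encoder path: save skips, downsample
            skips.append(x)
            x = down(x)
        for up, skip in zip(self.ups, reversed(skips)):  # decoder path
            x = up(x, skip)
        return self.head(x)
```

For example, `NASUnetBackbone(in_ch=1, n_classes=2)(torch.randn(1, 1, 64, 64))` yields a `(1, 2, 64, 64)` segmentation map, matching the input resolution.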
Differentiable Architecture Search (DARTS) is now a widely disseminated weight-sharing neural architecture search method. However, two fundamental weaknesses remain untackled. First, we observe that the well-known aggregation of skip connections during optimization is caused by an unfair advantage in an exclusive competition. Second, there is a non-negligible incongruence when discretizing continuous architectural weights to a one-hot representation. Because of these two reasons, DARTS delivers a biased solution that might not even be suboptimal. In this paper, we present a novel approach to curing both frailties. Specifically, as unfair advantages in a purely exclusive competition easily induce a monopoly, we relax the choice of operations to be collaborative, where we let each operation have an equal opportunity to develop its strength. We thus call our method Fair DARTS. Moreover, we propose a zero-one loss to directly reduce the discretization gap. Experiments are performed on two mainstream search spaces, in which we achieve new state-of-the-art networks on ImageNet. Our code is available here.
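To make the collaborative relaxation and the zero-one loss concrete, here is a minimal PyTorch sketch assuming the formulation reported in the Fair DARTS paper: each candidate operation is gated by an independent sigmoid rather than competing through a shared softmax, and an auxiliary loss of the form -w * mean((sigmoid(alpha) - 0.5)^2) pushes the gates toward 0 or 1. Class and function names and the loss weight are illustrative, not the authors' released code.

```python
import torch
import torch.nn as nn

class CollaborativeMixedOp(nn.Module):
    """Mixed operation where each candidate is gated by an independent
    sigmoid, so operations no longer compete exclusively (the Fair DARTS
    relaxation), in contrast to the softmax mixture of the original DARTS."""

    def __init__(self, ops):
        super().__init__()
        self.ops = nn.ModuleList(ops)
        # One architecture weight per candidate; sigmoid(0) = 0.5 at init.
        self.alpha = nn.Parameter(torch.zeros(len(ops)))

    def forward(self, x):
        gates = torch.sigmoid(self.alpha)  # independent gates in (0, 1)
        return sum(g * op(x) for g, op in zip(gates, self.ops))

def zero_one_loss(alphas, weight=10.0):
    """Auxiliary loss reducing the discretization gap: minimizing
    -(sigmoid(alpha) - 0.5)^2 drives each gate away from 0.5, toward
    0 or 1, so the final one-hot rounding changes the network less."""
    g = torch.sigmoid(alphas)
    return -weight * ((g - 0.5) ** 2).mean()
```

In practice this auxiliary term would be added to the ordinary validation loss during the architecture-update step, so the gates saturate as the search converges.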
Recent advances in convolutional neural networks (CNNs) have achieved remarkable results in image classification. Different fields of image datasets require different CNN architectures to achieve exceptional performance. However, designing a good CNN architecture is a computationally expensive task that requires expert knowledge. In this paper, we propose an effective framework for solving different image classification tasks using a convolutional neural architecture search (CNAS). The framework is inspired by current research on NAS, which automatically learns the best architecture for a specific training dataset, such as MNIST or CIFAR-10. Many search algorithms have been proposed for implementing NAS; however, insufficient attention has been paid to the selection of primitive operations (POs) in the search space. We propose a more efficient search space for learning the CNN architecture. Our search algorithm is based on DARTS (a differentiable architecture search method), but it considers different numbers of intermediate nodes and replaces some unused POs with a channel shuffle operation and a squeeze-and-excitation operation. We achieve better performance than DARTS on both the CIFAR-10/CIFAR-100 and Tiny-ImageNet datasets. When we retain the none operation in deriving the architecture, the model's performance decreases slightly, but the number of architecture parameters is reduced by approximately 40%. To balance performance against the number of architecture parameters, the framework can learn a dense architecture for high-performance machines, such as servers, or a sparse architecture for resource-constrained devices, such as embedded systems or mobile devices.

INDEX TERMS: Image classification, convolutional neural architecture search, deep learning.
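The sketch below illustrates, under our reading of the abstract, a DARTS-style mixed operation whose candidate set includes the two replacement operations mentioned: channel shuffle (as in ShuffleNet) and squeeze-and-excitation. The particular candidate list, group count, and reduction ratio are assumptions for illustration, not the CNAS search space itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelShuffle(nn.Module):
    """Channel shuffle: interleaves channels across groups.
    Assumes the channel count is divisible by `groups`."""
    def __init__(self, groups=4):
        super().__init__()
        self.groups = groups

    def forward(self, x):
        n, c, h, w = x.shape
        x = x.view(n, self.groups, c // self.groups, h, w)
        return x.transpose(1, 2).contiguous().view(n, c, h, w)

class SqueezeExcite(nn.Module):
    """Squeeze-and-excitation: rescales channels by a learned gate."""
    def __init__(self, channels, reduction=4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid())

    def forward(self, x):
        s = x.mean(dim=(2, 3))                    # squeeze: global average pool
        return x * self.fc(s)[:, :, None, None]   # excite: per-channel scaling

class MixedOp(nn.Module):
    """DARTS-style softmax mixture over a candidate set that includes the
    channel shuffle and squeeze-and-excitation replacement operations."""
    def __init__(self, channels):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Identity(),                                # skip connection
            nn.Conv2d(channels, channels, 3, padding=1),  # 3x3 convolution
            ChannelShuffle(groups=4),
            SqueezeExcite(channels),
        ])
        self.alpha = nn.Parameter(1e-3 * torch.randn(len(self.ops)))

    def forward(self, x):
        w = F.softmax(self.alpha, dim=0)  # exclusive competition, as in DARTS
        return sum(wi * op(x) for wi, op in zip(w, self.ops))
```

Both replacement operations are parameter-cheap relative to large convolutions, which is consistent with the abstract's emphasis on trading a small accuracy drop for a much smaller architecture.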