Automatic image colorization without manual intervention is an ill-conditioned and inherently ambiguous problem. Most existing methods formulate colorization as a regression problem and learn parametric mappings from grayscale to color through deep neural networks. Because the grayscale-to-color mapping is multimodal, many applications do not require recovering the exact ground-truth color, so pairwise pixel-to-pixel regression is poorly justified. Color space conversion techniques have been proposed to avoid such direct pixel-wise learning, but the resulting colorizations are blunt and unnatural. In this paper, we take the view that a reasonable solution is to generate a colorized result that looks natural: whatever color a region is assigned, the colorized region should be semantically and spatially consistent. We propose an effective semantic-aware automatic colorization model based on an unpaired cycle-consistent self-supervised network. A low-level monochrome loss, a perceptual identity loss, and a high-level semantic-consistency loss, together with an adversarial loss, are introduced to guide network self-training. We train and test our model on randomly selected subsets of PASCAL VOC 2012. The