Image-to-image translation aims to map images from one domain to another. Existing approaches mainly stylize images globally, while the local consistency between regions remains under-explored. Some instance-aware methods capture regional consistency but depend heavily on well-annotated labels from large-scale datasets. In addition, we observe that regions with similar content should share a similar style between the target and translated images; however, little attention has been paid to exploiting this intrinsic property as explicit prior knowledge to guide the image translation process. In this paper, we explore label-free regional consistency for image-to-image translation. We propose a regional relation consistency that not only maintains the global structure but also attends to consistency between local regions, thus preserving image content more rigorously. Moreover, we employ the phase of images as a semantic prior to select regions with similar content, and present a phase-guided amplitude consistency for more effective local stylization. Extensive experiments verify that our approach outperforms existing methods by a clear margin.
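As background for the phase-guided design, an image's amplitude and phase can be separated with a 2-D Fourier transform, where phase is commonly treated as carrying content/semantic structure and amplitude as carrying style statistics. Below is a minimal NumPy sketch of this decomposition; it is an illustrative assumption, not the paper's implementation, and the function names `decompose`/`recompose` are hypothetical:

```python
import numpy as np

def decompose(img):
    # 2-D FFT of a single-channel image
    f = np.fft.fft2(img)
    amplitude = np.abs(f)    # style-related component
    phase = np.angle(f)      # content/semantic-related component
    return amplitude, phase

def recompose(amplitude, phase):
    # Rebuild the complex spectrum and invert the FFT
    f = amplitude * np.exp(1j * phase)
    return np.real(np.fft.ifft2(f))

# Round-trip check on a random "image"
img = np.random.rand(8, 8)
amp, pha = decompose(img)
rec = recompose(amp, pha)
assert np.allclose(img, rec)
```

Because the decomposition is lossless, constraining only the amplitude between two spectra leaves their phase, and hence the content structure it encodes, untouched.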