Haze is an important factor in photography, carrying special aesthetic, emotional, or compositional meaning. In this paper, an image hazing method based on a generative adversarial network is proposed. The proposed network consists of two parts: the generator, a symmetric encoder-decoder structure with skip connections, which generates hazy images; and the discriminator, a global fully convolutional network, which judges whether the generated hazy images are realistic. A haze-free image is fed into the generative network to obtain a corresponding hazy image. The discriminative network then judges the similarity between the original haze-free image and the corresponding hazy image to optimize the network parameters, so that all parameters of the entire network can be learned automatically through training. The loss function of the network combines the GAN (generative adversarial network) loss and the REG (regression) loss with a coefficient λ. The designed REG loss consists of a feature loss (loss_FL) and an image loss (loss_IL). By minimizing this loss, a model that maps each pixel of the original image to the corresponding pixel of the synthetic hazy image is trained. In the last part of the paper, we present impressive synthetic hazy images obtained by our model and analyze its effects on images of different scenes. Moreover, we compare the experimental results of our algorithm with those of other classical methods. Experiments demonstrate that the proposed algorithm outperforms other state-of-the-art methods on both virtual-scene and real-world images, qualitatively and quantitatively.

INDEX TERMS Generative adversarial networks, image hazing, regression loss, mapping from pixels to pixels.
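
As an illustrative sketch only (the abstract does not give the exact formulations; the L1 distances, the reference hazy image, the feature-extraction point, and the λ value used below are all assumptions rather than the authors' specification), the combined generator objective could take a PyTorch-style form such as:

    # Hypothetical sketch of the combined objective described in the abstract:
    # an adversarial (GAN) term plus a lambda-weighted regression (REG) term,
    # where the REG term sums a feature loss (loss_FL) and an image loss (loss_IL).
    import torch
    import torch.nn.functional as F

    def reg_loss(fake_feats, real_feats, fake_img, target_img):
        # loss_FL: distance between intermediate feature maps (L1 assumed)
        loss_fl = F.l1_loss(fake_feats, real_feats)
        # loss_IL: pixel-wise distance between the synthetic hazy image and a
        # reference hazy image (L1 and paired reference both assumed)
        loss_il = F.l1_loss(fake_img, target_img)
        return loss_fl + loss_il

    def generator_loss(d_fake_logits, fake_feats, real_feats,
                       fake_img, target_img, lam=10.0):
        # Adversarial term: encourage the discriminator to label fakes as real
        gan_term = F.binary_cross_entropy_with_logits(
            d_fake_logits, torch.ones_like(d_fake_logits))
        # Combined objective: L = L_GAN + lambda * L_REG
        return gan_term + lam * reg_loss(fake_feats, real_feats,
                                         fake_img, target_img)
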