The data-to-text task aims to convert structured data into natural language text. However, the scarcity of large parallel corpora is a major practical obstacle in many domains of data-to-text generation. To address this issue, this paper proposes a method based on dual learning. Dual learning jointly trains two mutually dual models, a generator and an extractor, where the generator is responsible for text generation and the extractor for information extraction. Through dual learning, we can effectively exploit the interrelationship between unaligned data, thereby improving the performance of the generation model. We conduct experiments on an advertising dataset and compare against traditional generation models. Experimental results demonstrate that the dual learning-based method achieves nearly the same performance as fully supervised approaches on the data-to-text generation task, validating its effectiveness and feasibility.
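To make the dual-learning idea concrete, the following is a minimal numerical sketch, not the paper's actual model: it assumes toy linear maps for the generator and extractor and random vectors standing in for records and texts. The generator G maps a record vector to a "text" vector and the extractor E maps a text vector back to a record vector; on unaligned data, both are trained jointly with cycle-reconstruction losses, E(G(d)) ≈ d for unpaired records and G(E(t)) ≈ t for unpaired texts.

```python
import numpy as np

# Hypothetical toy setup: linear generator/extractor, synthetic unpaired data.
rng = np.random.default_rng(0)
dim = 4
G = np.eye(dim) + rng.normal(scale=0.1, size=(dim, dim))  # generator weights
E = np.eye(dim) + rng.normal(scale=0.1, size=(dim, dim))  # extractor weights

records = rng.normal(size=(32, dim))  # unpaired structured records
texts = rng.normal(size=(32, dim))    # unpaired text representations

def cycle_loss(G, E):
    """Sum of the two reconstruction losses over the unaligned data."""
    d_hat = records @ G.T @ E.T   # record -> text -> record
    t_hat = texts @ E.T @ G.T     # text -> record -> text
    return float(np.mean((d_hat - records) ** 2)
                 + np.mean((t_hat - texts) ** 2))

init_loss = cycle_loss(G, E)
lr, n = 0.05, len(records)
for step in range(500):
    err_d = records @ G.T @ E.T - records
    err_t = texts @ E.T @ G.T - texts
    # Gradients of the two squared-error cycle losses w.r.t. G and E.
    grad_G = E.T @ err_d.T @ records + err_t.T @ texts @ E.T
    grad_E = err_d.T @ records @ G.T + G.T @ err_t.T @ texts
    G -= lr * grad_G / n
    E -= lr * grad_E / n

final_loss = cycle_loss(G, E)
print(init_loss, final_loss)
```

The point of the sketch is that neither loss ever requires an aligned (record, text) pair: each model improves by checking how well its dual partner can invert its output, which is the mechanism that lets dual learning exploit unaligned data.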