In this paper, we propose a novel end-to-end model for document deblurring based on cycle-consistent adversarial networks. The main objective of this work is to deblur images without any knowledge of the blur kernel. Our method, named 'Blur2Sharp CycleGAN', generates a sharp image from a blurry one and shows how CycleGAN can be applied to document deblurring. Using only the blurred image as input, the network generates its sharp counterpart, so no information about the blur kernel is required. For evaluation, we use Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) to compare the deblurred images. On a dataset of text images, the experiments demonstrate a clear improvement in visual quality with respect to the state of the art.
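As an illustration of the evaluation metrics mentioned above, the following sketch computes PSNR and a simplified SSIM between a reference image and a restored estimate. Note that this is not the paper's evaluation code: the function names and the tiny synthetic arrays are assumptions for demonstration, and `ssim_global` uses a single global window, whereas the standard SSIM averages the same statistic over local Gaussian windows.

```python
import numpy as np

def psnr(ref, est, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(MAX^2 / MSE)."""
    mse = np.mean((ref - est) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

def ssim_global(ref, est, data_range=1.0):
    """Simplified single-window SSIM. The standard metric computes
    this statistic over local windows and averages; this global
    version is only a rough approximation for illustration."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu_x, mu_y = ref.mean(), est.mean()
    var_x, var_y = ref.var(), est.var()
    cov = np.mean((ref - mu_x) * (est - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

# Synthetic stand-ins for a sharp ground truth and a deblurred output.
sharp = np.zeros((8, 8))
deblurred = np.full((8, 8), 0.1)  # constant error of 0.1 -> MSE = 0.01
print(f"PSNR: {psnr(sharp, deblurred):.2f} dB")  # 10*log10(1/0.01) = 20.00 dB
```

Higher PSNR (in dB) and SSIM closer to 1 both indicate a restored image closer to the sharp reference.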