Weather-induced shifts in the distribution of image data can degrade the performance of existing visual algorithms at test time. Two promising remedies are adding target-domain samples to the training data and applying pre-trained image restoration methods, such as dehazing, deraining, and desnowing, to improve the quality of input images. In this work, we propose the Multiple Weather Translation GAN (MWTG), a CycleGAN-based, dual-purpose framework that simultaneously learns to generate and to remove weather effects in image data. MWTG consists of four GANs constrained by cycle consistency, which perform asymmetric domain translation among hazy, rainy, snowy, and clear conditions. To increase network capacity, we employ a spatial feature transform (SFT) layer that fuses features extracted from the weather layer, which carries high-level domain information from the preceding generators. Further, we collect an unpaired, real-world driving dataset recorded under various weather conditions, called Realistic Driving Scenes under Bad Weather (RDSBW). We evaluate MWTG qualitatively and quantitatively on RDSBW and on weather-synthesized variants of Cityscapes, e.g., Foggy Cityscapes. Our experimental results suggest that MWTG can render realistic weather effects in clear images and accurately remove weather-induced degradation from corrupted ones. Furthermore, the state-of-the-art pedestrian detector ASCP achieves a notable gain in detection precision after images are restored with the proposed MWTG.
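To make the SFT-based fusion concrete, the sketch below shows a generic spatial feature transform layer in PyTorch: condition features (here, those extracted from the weather layer) predict a per-pixel scale and shift that modulate the generator's content features. This is a minimal illustration of the standard SFT mechanism, not the authors' released implementation; all channel sizes, names, and the hidden width are assumptions.

```python
import torch
import torch.nn as nn


class SFTLayer(nn.Module):
    """Spatial feature transform: modulates content features with a
    per-pixel affine transform (scale, shift) predicted from condition
    features, e.g., weather-layer features from a previous generator."""

    def __init__(self, cond_channels: int, feat_channels: int, hidden: int = 32):
        super().__init__()
        self.scale = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(hidden, feat_channels, 1),
        )
        self.shift = nn.Sequential(
            nn.Conv2d(cond_channels, hidden, 1),
            nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(hidden, feat_channels, 1),
        )

    def forward(self, feat: torch.Tensor, cond: torch.Tensor) -> torch.Tensor:
        # Affine modulation: out = feat * (1 + gamma) + beta, so the layer
        # defaults to an identity mapping when gamma and beta are zero.
        gamma = self.scale(cond)
        beta = self.shift(cond)
        return feat * (gamma + 1.0) + beta


# Hypothetical usage: fuse weather-layer features into generator features.
sft = SFTLayer(cond_channels=64, feat_channels=64)
out = sft(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
```

In this reading, `feat` is an intermediate feature map inside a translation generator and `cond` encodes the high-level domain information of the target weather; predicting an affine transform rather than concatenating features lets the conditioning act spatially without changing the generator's channel layout.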