Intravascular ultrasound (IVUS) is an important imaging modality in routine clinical practice and is often combined with coronary angiography (CAG) to diagnose coronary artery disease. As the gold standard for in vivo imaging of the coronary artery wall, IVUS provides high-resolution images of the vessel wall. Typically, the IVUS acquisition device uses an ultrasonic transducer to capture fine-grained anatomical information of the cardiovascular tissue by pulse-echo imaging. However, the widely used mechanical rotating imaging systems suffer from guidewire artifacts. The inadequate visualization caused by these artifacts poses serious difficulties for clinical diagnosis and for subsequent evaluation of tissue structure, and no suitable solution to this long-standing problem has been available so far. In this paper, we conduct an exploratory study and propose AIVUS, the first deep-learning-based network for repairing corrupted IVUS images. The network adopts a novel generative adversarial architecture in which gated convolution is combined with a spatiotemporal aggregation structure to enhance its restoration capability. The proposed network can handle large-scale, moving guidewire artifacts, and it fully exploits the spatial and temporal information hidden in the image sequence to recover high-fidelity original content while maintaining consistency between frames. Furthermore, we compare it with several recent restoration models, covering both image restoration and video restoration. Qualitative and quantitative results on the collected IVUS datasets demonstrate that our method achieves outstanding performance and has potential clinical value.
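
For reference, the sketch below illustrates the gated-convolution building block mentioned above in its standard form (a sigmoid gate modulating the convolutional features so the network can downweight artifact regions). This is a minimal illustrative example assuming a PyTorch setting; the class name, channel sizes, and activation choices are our assumptions and not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class GatedConv2d(nn.Module):
    """Gated convolution: a learned soft gate in [0, 1] modulates the feature
    response, letting the layer emphasize valid tissue and suppress the
    guidewire-artifact region (illustrative sketch, not the paper's code)."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.feature = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.gate = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding)
        self.act = nn.LeakyReLU(0.2, inplace=True)

    def forward(self, x):
        # Element-wise product of the activated features and the sigmoid gate.
        return self.act(self.feature(x)) * torch.sigmoid(self.gate(x))


if __name__ == "__main__":
    # Toy input: a batch of two single-channel IVUS frames of size 256x256.
    frames = torch.randn(2, 1, 256, 256)
    layer = GatedConv2d(in_ch=1, out_ch=32)
    print(layer(frames).shape)  # torch.Size([2, 32, 256, 256])
```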