The aim of this study was to develop a deep neural network for respiratory motion compensation in free‐breathing cine MRI and to evaluate its performance. An adversarial autoencoder network was trained using unpaired training data from healthy volunteers and patients who underwent clinically indicated cardiac MRI examinations. A U‐net structure was used for the encoder and decoder parts of the network, and the code space was regularized by an adversarial objective. The autoencoder learns the identity map for the free‐breathing motion‐corrupted images and preserves the structural content of the images, while the discriminator, which operates on the output of the encoder, forces the encoder to remove motion artifacts. The network was first evaluated on data that were artificially corrupted with simulated rigid motion, with regard to motion‐correction accuracy and the presence of any artificially created structures. Subsequently, to demonstrate the feasibility of the proposed approach in vivo, our network was trained on respiratory motion‐corrupted images in an unpaired manner and was tested on volunteer and patient data. In the simulation study, the mean structural similarity index scores for the synthesized motion‐corrupted images and the motion‐corrected images were 0.76 and 0.93 (out of 1), respectively. The proposed method increased the Tenengrad focus measure of the motion‐corrupted images by 12% in the simulation study and by 7% in the in vivo study. The average overall subjective image quality scores for the motion‐corrupted, motion‐corrected, and breath‐held images were 2.5, 3.5, and 4.1 (out of 5.0), respectively. Nonparametric paired comparisons showed a significant difference between the image quality scores of the motion‐corrupted and breath‐held images (P < .05); however, after correction there was no significant difference between the image quality scores of the motion‐corrected and breath‐held images. This feasibility study demonstrates the potential of an adversarial autoencoder network for correcting respiratory motion‐related image artifacts without requiring paired data.
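To make the training setup described above concrete, the following is a minimal sketch of a generic adversarial autoencoder step of the kind the abstract outlines: a reconstruction loss preserves the structural content of the motion‐corrupted input, while a discriminator acting on the encoder output (the code space) supplies the adversarial regularization. The tiny placeholder modules, hyperparameters, and the choice of breath‐held image codes as the discriminator's "real" examples are illustrative assumptions, not the authors' implementation.

```python
# Sketch of one adversarial-autoencoder training step (PyTorch), under the
# assumptions stated above. U-Net encoder/decoder are replaced by tiny stand-ins.
import torch
import torch.nn as nn

class TinyEncoder(nn.Module):            # placeholder for the U-net encoder
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 16, 3, padding=1))
    def forward(self, x):
        return self.net(x)

class TinyDecoder(nn.Module):            # placeholder for the U-net decoder
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, z):
        return self.net(z)

class CodeDiscriminator(nn.Module):      # acts on the encoder output (code space)
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(16, 8, 3, stride=2), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(8, 1))
    def forward(self, z):
        return self.net(z)

enc, dec, disc = TinyEncoder(), TinyDecoder(), CodeDiscriminator()
opt_ae = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)
bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

corrupted = torch.randn(2, 1, 64, 64)    # free-breathing (motion-corrupted) batch
reference = torch.randn(2, 1, 64, 64)    # unpaired breath-held batch (assumed "real" codes)

# Discriminator step: real = codes of breath-held images, fake = codes of corrupted images.
z_ref, z_cor = enc(reference).detach(), enc(corrupted).detach()
d_loss = bce(disc(z_ref), torch.ones(2, 1)) + bce(disc(z_cor), torch.zeros(2, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Autoencoder step: reconstruct the corrupted input (identity map, preserves structure)
# while fooling the code-space discriminator (pushes the code toward motion-free codes).
z = enc(corrupted)
ae_loss = l1(dec(z), corrupted) + 0.01 * bce(disc(z), torch.ones(2, 1))
opt_ae.zero_grad(); ae_loss.backward(); opt_ae.step()
```

The adversarial weight (0.01 here) controls the trade-off between faithful reconstruction and artifact suppression; it is a placeholder value, not one reported by the study.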
Purpose
To automate the segmentation of the peripheral arteries and veins in the lower extremities based on ferumoxytol‐enhanced MR angiography (FE‐MRA).
Methods
Our automated pipeline has 2 sequential stages. In the first stage, we used a 3D U‐Net with local attention gates, trained with a combination of the Focal Tversky loss and the region mutual loss under a deep supervision mechanism, to segment the vasculature from the high‐resolution FE‐MRA datasets. In the second stage, we used time‐resolved images to separate the arteries from the veins. Because the ultimate segmentation quality of the arteries and veins relies on the performance of the first stage, we thoroughly evaluated the different aspects of the segmentation network and compared its blood vessel segmentation performance with currently accepted state‐of‐the‐art networks, including Volumetric‐Net, DeepVesselNet‐FCN, and Uception.
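For reference, the sketch below shows one common formulation of a focal Tversky loss for binary vessel masks. The hyperparameter values (alpha, beta, gamma), the exponent convention, and the function name are illustrative assumptions; the region mutual loss term and the deep‐supervision weighting used in the actual pipeline are not reproduced here.

```python
# Minimal focal Tversky loss sketch (PyTorch), under the assumptions stated above.
import torch

def focal_tversky_loss(pred, target, alpha=0.7, beta=0.3, gamma=0.75, eps=1e-6):
    """pred: sigmoid probabilities, target: {0,1} mask, both shaped (N, 1, D, H, W)."""
    p = pred.flatten(1)
    t = target.flatten(1)
    tp = (p * t).sum(dim=1)          # soft true positives
    fn = ((1 - p) * t).sum(dim=1)    # soft false negatives, weighted by alpha
    fp = (p * (1 - t)).sum(dim=1)    # soft false positives, weighted by beta
    tversky = (tp + eps) / (tp + alpha * fn + beta * fp + eps)
    return ((1 - tversky) ** gamma).mean()  # focal exponent emphasizes hard cases

# usage sketch: loss = focal_tversky_loss(torch.sigmoid(logits), mask)
```

Weighting false negatives more heavily than false positives (alpha > beta) is the usual motivation for this loss in vessel segmentation, where thin structures are easily missed.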
Results
We achieved a competitive F1 = 0.8087 and recall = 0.8410 for blood vessel segmentation, compared with F1 = 0.7604, 0.7573, and 0.7651 and recall = 0.7791, 0.7570, and 0.7774 obtained with Volumetric‐Net, DeepVesselNet‐FCN, and Uception, respectively. For the artery and vein separation stage, we achieved F1 = 0.8274 (arteries) and 0.7863 (veins) in the calf region, which is the most challenging region in peripheral artery and vein segmentation.
Conclusion
Our pipeline performs fully automatic vessel segmentation based on FE‐MRA, without the need for human interaction, in <4 min. This improves upon manual segmentation by radiologists, which routinely takes several hours.
Sampling k‐space asymmetrically (ie, partial Fourier sampling) in the readout direction is a common way to reduce the echo time (TE) during magnetic resonance image acquisitions. This technique requires overlap around the center of k‐space to provide a calibration region for reconstruction, which limits the minimum fractional echo to ~60% before artifacts are observed. The present study describes a method for reconstructing images from exact half echoes using two separate acquisitions with reversed readout polarity, effectively providing a full line of k‐space without additional data around central k‐space. This approach can benefit sequences or applications that prioritize short TE, short inter‐echo spacing or short repetition time. An example of the latter is demonstrated to reduce banding artifacts in balanced steady‐state free precession.
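The core bookkeeping, in which two half‐echo readouts acquired with opposite gradient polarity jointly cover a full k‐space line without a shared calibration region, can be illustrated with a toy example. The synthetic data, variable names, and the direct concatenation (with no phase correction between the two acquisitions) are simplifying assumptions; the study's actual reconstruction is not reproduced here.

```python
# Toy illustration (NumPy) of stitching two reversed-polarity half echoes into one
# full k-space readout line, under the assumptions stated above.
import numpy as np

n = 256                                                       # full readout length
full_line = np.fft.fftshift(np.fft.fft(np.random.rand(n)))    # synthetic k-space line

# acquisition 1: positive readout polarity samples k = 0 ... +k_max (indices n//2 ... n-1)
half_pos = full_line[n // 2:]

# acquisition 2: reversed polarity samples k = 0 ... -k_max, i.e. the other half of the
# line traversed in the opposite direction (k = 0 is acquired again, nothing beyond it)
half_neg = full_line[: n // 2 + 1][::-1]

# stitch: flip the reversed-polarity half back and drop its duplicated k = 0 sample
combined = np.concatenate([half_neg[::-1][:-1], half_pos])
assert np.allclose(combined, full_line)   # exact full line, no extra central data required
```

In practice the two acquisitions come from separate excitations, so eddy-current and off-resonance phase differences would need correction before stitching; the toy example only shows the sampling-coverage argument.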