Convolutional neural networks (CNNs) have become the de facto algorithm of choice for semantic segmentation tasks in biomedical image processing. Yet, CNN-based models remain susceptible to the domain shift problem, where a mismatch between the source and target distributions can lead to a drop in performance. CNNs have been shown to exhibit a textural bias when processing natural images, and recent studies suggest that this bias extends to biomedical imaging. In this paper, we focus on Magnetic Resonance Imaging (MRI) and investigate textural bias in the context of k-space artifacts (Gibbs, spike, and wraparound artifacts), which naturally manifest in clinical MRI scans. We show that carefully introducing such artifacts at training time can reduce textural bias and, consequently, yield CNN models that are more robust to acquisition noise and out-of-distribution inference, including scans from hospitals not seen during training. We also present Gibbs ResUnet, a novel, end-to-end framework that automatically finds an optimal combination of Gibbs k-space stylizations and segmentation model weights. We illustrate our findings on multimodal, multi-institutional clinical MRI datasets obtained retrospectively from the Medical Segmentation Decathlon (n = 750) and The Cancer Imaging Archive (n = 243).

INDEX TERMS Texture, bias, artifacts, robustness, MRI, CNNs.
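To make the training-time k-space stylization described above concrete, the following minimal NumPy sketch simulates a Gibbs (truncation) artifact by discarding high-frequency k-space coefficients of a 2D slice. The function name, the keep_fraction parameter, and its sampling range are illustrative assumptions for exposition, not the paper's implementation.

    # Minimal sketch (assumed, not the authors' code): inject a Gibbs
    # (truncation) artifact by low-pass truncating the 2D k-space of a slice.
    import numpy as np

    def gibbs_artifact(image: np.ndarray, keep_fraction: float = 0.6) -> np.ndarray:
        """Zero out high-frequency k-space coefficients, inducing Gibbs ringing.

        keep_fraction: fraction of k-space retained along each axis (0 < f <= 1).
        """
        k = np.fft.fftshift(np.fft.fft2(image))   # centered 2D k-space
        h, w = k.shape
        ch, cw = h // 2, w // 2                   # k-space center
        kh = int(h * keep_fraction / 2)
        kw = int(w * keep_fraction / 2)
        mask = np.zeros((h, w), dtype=bool)
        mask[ch - kh:ch + kh, cw - kw:cw + kw] = True  # keep low-frequency block
        k[~mask] = 0                              # truncate high frequencies
        return np.abs(np.fft.ifft2(np.fft.ifftshift(k)))  # back to image space

    # Illustrative usage as a stochastic training-time augmentation, so the
    # network sees varying ringing severities (range is an assumption):
    # augmented = gibbs_artifact(slice_2d, keep_fraction=np.random.uniform(0.5, 0.9))

Truncating k-space in this way is the standard mechanism behind Gibbs ringing in MRI, which is why it serves as a natural stand-in here; analogous sketches could inject spike or wraparound artifacts by perturbing or aliasing k-space instead.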