Deep learning methods have become increasingly popular for optical sensor image analysis: they can be adapted to specific tasks while retaining a high degree of generalization. However, when labeled training data are scarce, a deep neural network may fail to generalize to scenarios that occur only in the test data, especially when dominant imaging artifacts are present. We propose a data-centric augmentation approach based on generative adversarial networks that overlays the existing labeled data with synthetic artifacts generated from data not present in the training set. This augmentation leads to more robust generalization in semantic segmentation. Our method requires no additional labeling and incurs no additional memory or time cost during inference. Furthermore, we find it to be more effective than comparable approaches based on procedurally generated disturbances or on the direct use of real disturbances. Building on the improved segmentation results, we observe a 22% improvement in F1-score on an evaluated detection problem, which suggests substantial robustness towards future disturbances.

In the context of sensor-based data analysis, compensating for image artifacts is a central challenge. When the structures of interest are not clearly visible in an image, algorithms that can cope with artifacts are crucial for extracting the desired information. In particular, the high variation of artifacts, the combination of different artifact types, and their similarity to the signals of interest must be considered in the analysis. Despite the high generalization capability of deep learning-based approaches, their recent success has been driven by the availability of large amounts of labeled data.
The provision of comprehensive labeled image data covering the different characteristics of image artifacts is therefore important; at the same time, applying deep neural networks to problems with little labeled data remains a challenge. This work presents a data-centric augmentation approach based on generative adversarial networks that augments the existing labeled data with synthetic artifacts generated from data not present in the training set. In our experiments, this augmentation leads to more robust generalization in segmentation. Our method requires no additional labeling and incurs no additional memory or time cost during inference. Furthermore, we find it more effective than comparable augmentations based on procedurally generated artifacts or on the direct use of real artifacts. Building on the improved segmentation results, we observe a 22% improvement in F1-score on an evaluated detection problem. Having achieved these results with one example sensor, we expect increased robustness against artifacts in future applications.
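The core of the described augmentation can be illustrated with a minimal sketch: a synthetic artifact is blended onto a labeled training image while the segmentation label is left unchanged, since the artifact is a disturbance rather than a structure of interest. The `sample_artifact` stand-in below is hypothetical; in the actual method the artifact would come from a GAN generator trained on data outside the labeled training set, whose architecture is not specified here.

```python
import numpy as np

def overlay_artifact(image, artifact, alpha=0.4):
    """Blend a synthetic artifact onto a labeled training image.

    The ground-truth segmentation map is reused as-is: overlaying a
    disturbance does not change which pixels belong to the structures
    of interest.
    """
    augmented = (1.0 - alpha) * image + alpha * artifact
    return np.clip(augmented, 0.0, 1.0)

# Hypothetical stand-in for a GAN generator trained on unlabeled
# artifact data; here it just produces uniform noise of the right shape.
rng = np.random.default_rng(0)
def sample_artifact(shape):
    return rng.uniform(0.0, 1.0, size=shape)

image = rng.uniform(0.0, 1.0, size=(64, 64))  # labeled training image
artifact = sample_artifact(image.shape)       # synthetic disturbance
augmented = overlay_artifact(image, artifact)
```

Because only the input image is modified, the augmented sample can be fed to any existing segmentation pipeline with its original label, which is why the method adds no labeling effort and no cost at inference time.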