Segmentation of medical optical coherence tomography (OCT) images using deep neural networks (DNNs) has been studied intensively in recent years, but generalization across datasets from different OCT devices remains a considerable challenge. In this work, we focus on the novel self-examination low-cost full-field (SELFF)-OCT, a handheld imaging device for home monitoring of retinopathies, and the clinically used Spectralis-OCT. Images from the two devices exhibit different characteristics, leading to different representations within DNNs and consequently to reduced segmentation quality when switching between devices. To robustly segment OCT images from an OCT scanner unseen during training, we alter the appearance of the images with manipulation methods ranging from traditional data augmentation and noise-based methods to learning-based style transfer. We evaluate the effect of these manipulation methods with respect to segmentation quality and to changes in the feature space of the DNN. Reducing the domain shift with style transfer yields significantly better segmentation of pigment epithelial detachment (PED). Investigating the feature space shows that PED segmentation quality is negatively correlated with the distance between the training and test feature distributions. Our methods and results help researchers choose and evaluate image manipulation methods for developing OCT segmentation models that are robust to domain shift.