Visual discomfort significantly limits the broader adoption of stereoscopic display technology, making its accurate assessment a crucial topic in this field. Electroencephalography (EEG) data, which reflect changes in brain activity, have received increasing attention in objective assessment research. However, inaccurately labeled data, caused by individual differences, restrict the effectiveness of the widely used supervised learning methods in visual discomfort assessment tasks. Moreover, visual discomfort assessment methods should pay greater attention to the information provided by the visual cortical areas of the brain. Addressing these challenges requires two key considerations: maximizing the use of inaccurately labeled data for learning and integrating information from the brain's visual cortex into the feature representation. We therefore propose the weakly supervised graph convolutional neural network for visual discomfort (WSGCN-VD). In the classification part, a center correction loss serves as a weakly supervised loss that uses a progressive selection strategy to identify accurately labeled data while limiting the influence of inaccurately labeled data, affected by individual differences, on model learning. In the feature extraction part, a feature graph module constructs spatial connections among the channels in the visual regions of the brain and combines them with high-dimensional temporal features to obtain visually dependent spatio-temporal representations. Extensive experiments across various scenarios demonstrate the effectiveness of the proposed model, and further analysis shows that it mitigates the impact of inaccurately labeled data on assessment accuracy.
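The abstract does not reproduce the center correction loss itself, but the progressive selection idea it describes can be illustrated with a minimal sketch. The code below is an assumption-laden toy, not the paper's implementation: it assumes each class is summarized by the mean of its feature vectors (a "center"), treats samples far from their own class center as likely mislabeled due to individual differences, and keeps a progressively shrinking fraction of the closest samples as training proceeds. All names (`progressive_select`, `min_ratio`, the linear schedule) are hypothetical.

```python
import numpy as np

def class_centers(feats, labels, n_classes):
    # One center per class: the mean feature vector of the samples
    # currently assigned that label (as in center-style losses).
    return np.stack([feats[labels == c].mean(axis=0) for c in range(n_classes)])

def progressive_select(feats, labels, n_classes, epoch, max_epoch, min_ratio=0.5):
    """Return indices of samples judged 'accurately labeled' at this epoch.

    Hypothetical schedule: the keep ratio decays linearly from 1.0 to
    `min_ratio`, so early training uses all data and later training
    restricts learning to samples nearest their class centers.
    """
    centers = class_centers(feats, labels, n_classes)
    # Distance of each sample to the center of its *own* labeled class.
    dist = np.linalg.norm(feats - centers[labels], axis=1)
    ratio = 1.0 - (1.0 - min_ratio) * min(1.0, epoch / max_epoch)
    k = max(1, int(round(ratio * len(feats))))
    keep = np.argsort(dist)[:k]  # k samples closest to their centers
    return keep, dist
```

In a training loop, the returned indices would gate which samples contribute to the supervised loss, so far-from-center (presumably mislabeled) samples are progressively excluded rather than discarded outright at the start.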