This study investigates the potential of graph neural networks (GNNs) for estimating the system-level integrated information and the major complex in integrated information theory (IIT) 3.0. Owing to the hierarchical complexity of IIT 3.0, tasks such as calculating integrated information and identifying the major complex are computationally prohibitive for large systems, restricting the applicability of IIT 3.0 to small systems. To overcome this difficulty, we propose a GNN model with transformer convolutional layers, characterized by multi-head attention mechanisms, for estimating the major complex and its integrated information. In our approach, exact solutions for the integrated information and the major complex are obtained for systems with 5, 6, and 7 nodes, and two evaluations are conducted: (1) a non-extrapolative setting, in which the model is trained and tested on a mixture of systems with 5, 6, and 7 nodes, and (2) an extrapolative setting, in which systems with 5 and 6 nodes are used for training and systems with 7 nodes are used for testing. The results indicate that the estimation performance in the extrapolative setting remains comparable to that in the non-extrapolative setting, with no significant degradation. In an additional experiment, the model is trained on systems with 5, 6, and 7 nodes and tested on a larger system of 100 nodes, composed of two subsystems of 50 nodes each with limited inter-subsystem connectivity, resembling a split-brain configuration. When the connectivity between the subsystems is low, "local integration" emerges, in which a single subsystem forms the major complex. As the connectivity increases, local integration rapidly disappears, and the integrated information gradually rises toward "global integration," in which a large portion of the entire system forms the major complex. Overall, our findings suggest that GNNs can be used to estimate the integrated information, the major complex, and other IIT-related quantities.
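The abstract above mentions transformer convolutional layers with multi-head attention as the core of the proposed GNN. The paper's actual architecture is not specified here; as a rough illustration of how such a layer operates on a graph, the following is a minimal NumPy sketch of a multi-head graph-transformer convolution (all function names, shapes, and weights are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def transformer_conv(x, edge_index, w_q, w_k, w_v):
    """Minimal multi-head graph-transformer convolution (illustrative sketch).

    x          : (N, F) node feature matrix
    edge_index : (2, E) integer array of [source, target] edges
    w_q/w_k/w_v: (H, F, D) per-head query/key/value projection weights
    Returns a (N, H*D) array: per-head attention-weighted neighbor
    aggregations, concatenated across the H heads.
    """
    src, dst = edge_index
    H, F, D = w_q.shape
    N = x.shape[0]
    out = np.zeros((N, H * D))
    for h in range(H):
        q = x @ w_q[h]  # (N, D) queries, one per target node
        k = x @ w_k[h]  # (N, D) keys, one per source node
        v = x @ w_v[h]  # (N, D) values carried along edges
        # Scaled dot-product logit for each edge j -> i
        logits = (q[dst] * k[src]).sum(axis=1) / np.sqrt(D)  # (E,)
        # Softmax over the incoming edges of each target node;
        # the global max-shift cancels within each node's group.
        alpha = np.exp(logits - logits.max())
        denom = np.zeros(N)
        np.add.at(denom, dst, alpha)
        alpha = alpha / denom[dst]
        # Aggregate attention-weighted messages at each target node
        msg = np.zeros((N, D))
        np.add.at(msg, dst, alpha[:, None] * v[src])
        out[:, h * D:(h + 1) * D] = msg
    return out
```

In practice a library implementation (e.g., a graph deep-learning framework) would add learned biases, residual connections, and batching; the sketch only shows the attention-weighted neighborhood aggregation that distinguishes this layer from a plain graph convolution.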