Thorough and holistic scene understanding is crucial for autonomous vehicles, and LiDAR semantic segmentation plays an indispensable role in it. However, most existing methods focus on network design while neglecting an inherent difficulty of realistic datasets: their imbalanced (long-tailed) data distribution, which limits the capability of state-of-the-art methods. In this paper, we propose an input-output balanced framework to handle the long-tailed distribution. Specifically, in the input space, we synthesize instances of the tailed classes from mesh models and faithfully simulate the position and density distribution of LiDAR scans, which balances the input data and improves its diversity. In the output space, a multi-head block is proposed to group categories according to their shapes and instance counts, which alleviates the biased representation of the dominating categories during feature learning. We evaluate the proposed model on two large-scale datasets, SemanticKITTI and nuScenes, where state-of-the-art results demonstrate its effectiveness. The proposed modules are also plug-and-play: we apply them to various backbones and datasets, showing good generalization ability.
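The abstract does not give implementation details, so the following is only a minimal PyTorch sketch of what the output-space idea, a multi-head classification block that groups categories, could look like. The per-group 1x1 convolution heads and all names (`MultiHeadSegHead`, `class_groups`) are illustrative assumptions, not the authors' code; in the paper the grouping is derived from class shapes and instance counts.

```python
import torch
import torch.nn as nn


class MultiHeadSegHead(nn.Module):
    """Hypothetical multi-head output block: the final classifier is split
    into one small head per class group, so rare categories are scored by
    their own head instead of competing directly with dominant classes."""

    def __init__(self, in_channels: int, class_groups: list[list[int]]):
        super().__init__()
        # class_groups: disjoint lists of global class indices, e.g.
        # [[0, 3], [1, 2, 4]] -- a placeholder grouping for illustration.
        self.class_groups = class_groups
        self.num_classes = sum(len(g) for g in class_groups)
        # One 1x1 conv head per group, each predicting only its own classes.
        self.heads = nn.ModuleList(
            [nn.Conv1d(in_channels, len(g), kernel_size=1) for g in class_groups]
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, N) per-point features from the backbone.
        logits = feats.new_zeros(feats.size(0), self.num_classes, feats.size(2))
        for head, group in zip(self.heads, self.class_groups):
            # Scatter each group's logits back into the global class layout.
            logits[:, group, :] = head(feats)
        return logits


# Usage sketch: 5 classes split into two groups, 64-dim features, 1024 points.
head = MultiHeadSegHead(in_channels=64, class_groups=[[0, 3], [1, 2, 4]])
out = head(torch.randn(2, 64, 1024))  # -> (2, 5, 1024) per-point logits
```

Because the merged logits keep the global class layout, such a block can replace the single classification layer of an existing backbone without touching the rest of the network, which is consistent with the plug-and-play claim above.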