LiDAR segmentation is crucial for autonomous driving perception. Recent trends favor point- or voxel-based methods, as they often yield better performance than the traditional range view representation. In this work, we unveil several key factors in building powerful range view models. We observe that the "many-to-one" mapping, semantic incoherence, and shape deformation are possible impediments to effective learning from range view projections. We present RangeFormer, a full-cycle framework comprising novel designs across network architecture, data augmentation, and post-processing, that better handles the learning and processing of LiDAR point clouds from the range view. We further introduce a Scalable Training from Range view (STR) strategy that trains on arbitrary low-resolution 2D range images, while still maintaining satisfactory 3D segmentation accuracy. We show that, for the first time, a range view method is able to surpass the point, voxel, and multi-view fusion counterparts in the competing LiDAR semantic and panoptic segmentation benchmarks, i.e., SemanticKITTI, nuScenes, and ScribbleKITTI.

[Figure: Range view projection from Cartesian coordinates (x, y, z) to cylindrical coordinates (θ, φ), illustrating three issues: many-to-one mapping (adjacent points are mapped to the same grid), empty grids (void cells in the range image due to 3D sparsity), and deformation (shapes are distorted due to limited FOVs); example classes shown include road and car.]
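To make the "many-to-one" mapping and empty-grid issues concrete, the following is a minimal sketch of the spherical range view projection commonly used by prior range view methods (e.g., RangeNet++); the image resolution, field-of-view values, and the helper name `spherical_projection` are illustrative assumptions for a typical 64-beam sensor, not RangeFormer's exact configuration.

```python
import numpy as np

def spherical_projection(points, H=64, W=2048, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project an (N, 3) LiDAR point cloud onto an H x W range image.

    Each point (x, y, z) is mapped to a pixel via its azimuth and
    inclination angles. When several points fall into the same pixel,
    only one survives -- the "many-to-one" conflict; pixels that receive
    no point remain empty due to 3D sparsity.
    """
    fov_up = np.radians(fov_up_deg)
    fov_down = np.radians(fov_down_deg)
    fov = fov_up - fov_down

    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    depth = np.linalg.norm(points[:, :3], axis=1)

    yaw = np.arctan2(y, x)                           # azimuth in [-pi, pi]
    pitch = np.arcsin(z / np.maximum(depth, 1e-8))   # inclination angle

    # Normalize the angles to image coordinates.
    u = 0.5 * (1.0 - yaw / np.pi) * W                # column index
    v = (1.0 - (pitch - fov_down) / fov) * H         # row index

    u = np.clip(np.floor(u), 0, W - 1).astype(np.int32)
    v = np.clip(np.floor(v), 0, H - 1).astype(np.int32)

    # Write farther points first so closer points overwrite them:
    # only the nearest point per pixel is kept.
    order = np.argsort(depth)[::-1]
    range_image = np.full((H, W), -1.0, dtype=np.float32)
    range_image[v[order], u[order]] = depth[order]

    # Pixels left at -1 are the "empty grid" cells caused by sparsity.
    return range_image
```

In this sketch, lowering H or W (as in a coarser STR-style training resolution) increases the number of points competing for each pixel, which is one way to see why the many-to-one mapping becomes more severe at lower range image resolutions.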