Given the increasing prominence of 3D real-scene city construction technology, 3D urban point cloud scene data merit further investigation. However, achieving fine-grained semantic segmentation of urban scenes remains highly challenging due to the inherent orderlessness and unstructured nature of acquired point clouds, along with their massive number of points and non-uniform distributions. In this study, we present LFEA-Net, a novel neural network specifically designed for semantic segmentation of large-scale urban point cloud scenes. The network comprises two main components: (1) the local feature extraction (LFE) module, which fully exploits local spatial, color, and semantic information to enhance and preserve crucial information, consisting of a bilateral feature encoding unit and a multi-dimensional feature encoding unit; and (2) the local feature aggregation (LFA) module, designed to bridge the semantic gap between local features and to emphasize both locally significant features and the entire local neighborhood, consisting of a soft cross operation and a united pooling operation. We evaluated LFEA-Net against state-of-the-art networks on the photogrammetric point cloud dataset SensatUrban, achieving a mean intersection-over-union (mIoU) score of 61.6%. The results demonstrate the superior efficacy of LFEA-Net in accurately segmenting and classifying large-scale urban point cloud scenes, highlighting its potential to advance environmental information perception.
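
To make the notion of local feature encoding concrete, the sketch below shows one common way such modules gather per-point neighborhood features: for each point, its k nearest neighbors are found and the neighbor coordinates, relative offsets, and Euclidean distances are concatenated into a local descriptor. This is a generic scheme used by large-scale point-cloud networks such as RandLA-Net; the exact encoding inside the LFE module (e.g. how color and semantic channels are fused) is not specified here, so the function below is an illustrative assumption, not the paper's implementation.

```python
import numpy as np

def knn_indices(points, k):
    """Brute-force k-nearest-neighbor search.
    points: (N, 3) array of xyz coordinates -> (N, k) neighbor indices."""
    sq_dists = ((points[:, None, :] - points[None, :, :]) ** 2).sum(axis=-1)
    return np.argsort(sq_dists, axis=1)[:, :k]

def local_position_encoding(points, k=4):
    """Illustrative local spatial encoding (not the LFE module itself):
    concatenate, per neighbor, the center point, the neighbor point,
    their relative offset, and the Euclidean distance -> (N, k, 10)."""
    idx = knn_indices(points, k)                 # (N, k)
    neighbors = points[idx]                      # (N, k, 3)
    offsets = neighbors - points[:, None, :]     # (N, k, 3) relative positions
    dists = np.linalg.norm(offsets, axis=-1, keepdims=True)  # (N, k, 1)
    centers = np.broadcast_to(points[:, None, :], neighbors.shape)  # (N, k, 3)
    return np.concatenate([centers, neighbors, offsets, dists], axis=-1)
```

In a full network, this per-neighborhood descriptor would be passed through shared MLPs and then reduced over the k neighbors by a pooling operation, which is the role the LFA module's pooling plays here.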