Weeds are non-cultivated plants that grow among crops and inhibit their healthy development; they substantially compromise both the quality and yield of agricultural produce. Precise identification and effective removal of weeds are therefore essential for optimizing crop yield and realizing precision agriculture. This study introduces PF-UperNet, a semantic segmentation method built on an encoder-decoder architecture that automatically distinguishes bean crops from weeds using computer vision. Our method refines the baseline UperNet in four ways. First, we replace UperNet's backbone with PoolFormer-S12 to reduce the model's parameter count and improve its performance. Second, we integrate the Efficient Channel Attention (ECA) mechanism into both PoolFormer-S12 and the decoder, sharpening the network's focus on salient channel features. Third, within the decoder, the Feature Alignment Pyramid Network (FaPN) replaces the conventional Feature Pyramid Network (FPN) module, remedying the feature-map misalignment observed in UperNet's FPN. Finally, we replace the Cross-Entropy loss with a combination of Cross-Entropy loss and Dice coefficient loss so that the model attends more closely to the regions to be detected. Experiments confirm the effectiveness of the approach: the model achieves a Mean Intersection over Union (MIoU) of 87.45% and a Mean Pixel Accuracy (MPA) of 96.82% with 46.16M parameters. Relative to the baseline UperNet, it improves MIoU by 1.08% and MPA by 0.25% while reducing parameters by 27.92%. The model can thus provide robust technical support for precise, targeted weeding.
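To make the final methodological change concrete, the sketch below illustrates one common way to combine Cross-Entropy loss with a soft Dice coefficient loss in PyTorch. It is a minimal illustration rather than the authors' implementation: the number of classes, the equal weighting of the two terms (`dice_weight`), and the smoothing constant (`smooth`) are assumptions, as the abstract does not specify them.

```python
# A minimal sketch of a combined Cross-Entropy + Dice objective.
# Assumptions (not from the paper): 3 classes, equal term weighting, smooth = 1.0.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CEDiceLoss(nn.Module):
    """Cross-Entropy loss plus a soft Dice loss for multi-class segmentation
    (e.g. background / bean crop / weed)."""

    def __init__(self, num_classes: int = 3, dice_weight: float = 1.0, smooth: float = 1.0):
        super().__init__()
        self.num_classes = num_classes
        self.dice_weight = dice_weight  # assumed weighting between the two terms
        self.smooth = smooth
        self.ce = nn.CrossEntropyLoss()

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits: (N, C, H, W) raw scores; target: (N, H, W) integer class labels
        ce_loss = self.ce(logits, target)

        probs = F.softmax(logits, dim=1)                       # (N, C, H, W)
        one_hot = F.one_hot(target, self.num_classes)          # (N, H, W, C)
        one_hot = one_hot.permute(0, 3, 1, 2).float()          # (N, C, H, W)

        # Soft Dice per class, averaged over classes and the batch.
        dims = (0, 2, 3)
        intersection = (probs * one_hot).sum(dims)
        cardinality = probs.sum(dims) + one_hot.sum(dims)
        dice = (2.0 * intersection + self.smooth) / (cardinality + self.smooth)
        dice_loss = 1.0 - dice.mean()

        return ce_loss + self.dice_weight * dice_loss


# Example usage on random data:
if __name__ == "__main__":
    criterion = CEDiceLoss(num_classes=3)
    logits = torch.randn(2, 3, 64, 64)
    labels = torch.randint(0, 3, (2, 64, 64))
    print(criterion(logits, labels).item())
```

The Dice term rewards overlap between the predicted and ground-truth masks regardless of class frequency, which is why adding it to Cross-Entropy pushes the model to focus on the (often small) crop and weed regions rather than the dominant background.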