Few-shot semantic segmentation (FSS) aims to segment novel classes with only a few annotated samples. Existing FSS methods generally combine the annotated mask with the corresponding support image to generate a class-specific representation, and segment the query image by matching its features to this representation. However, segmentation performance can be fragile because these methods lack an effective way to exploit query features and neglect the correlation between support and query features. In this work, we propose a novel feature disentanglement and recombination network (DRNet) to alleviate this problem. Concretely, we first apply self-attention to both the support foreground features and the query foreground features. Then, the foreground features of the support and query branches are recombined via cross-attention, which encourages foreground feature alignment between the two branches. Finally, prototypes are generated from the recombined foreground features and the support background features, and are used to guide segmentation of the given images. Since prototypes are sensitive to subtle inter-class and intra-class differences among objects, we further introduce a joint learning strategy to derive accurate segmentation of seen and unseen objects in the support image and the query image, respectively. Extensive experiments on the PASCAL-5^i and COCO-20^i datasets demonstrate the superiority of our DRNet over recent popular methods. The code is released at https://github.com/GS-Chang-Hn/DRNet-fss.
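To make the self-attention/cross-attention recombination step concrete, a minimal PyTorch sketch is given below. This is not the authors' implementation (see the repository above for that): the module name `DisentangleRecombine`, the use of `nn.MultiheadAttention`, the choice of separate per-branch attention weights, and the mean-pooled prototype are all illustrative assumptions about how the stages described in the abstract could be wired together.

```python
import torch
import torch.nn as nn

class DisentangleRecombine(nn.Module):
    """Illustrative sketch: per-branch self-attention over foreground
    tokens, then cross-attention to recombine the support and query
    foregrounds, followed by pooling into a prototype."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        # Separate self-attention per branch; whether the paper shares
        # weights across branches is an assumption left open here.
        self.self_attn_s = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.self_attn_q = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, supp_fg: torch.Tensor, query_fg: torch.Tensor):
        # supp_fg, query_fg: (B, N, C) foreground token sequences, e.g.
        # masked feature-map locations flattened along the spatial axis.
        s, _ = self.self_attn_s(supp_fg, supp_fg, supp_fg)  # intra-branch
        q, _ = self.self_attn_q(query_fg, query_fg, query_fg)
        # Recombination: each branch attends to the other's foreground.
        s_rec, _ = self.cross_attn(s, q, q)
        q_rec, _ = self.cross_attn(q, s, s)
        # One plausible prototype: mean-pool the recombined tokens.
        proto = torch.cat([s_rec, q_rec], dim=1).mean(dim=1)  # (B, C)
        return s_rec, q_rec, proto

# Toy usage: batch of 2, 16 foreground tokens per branch, 64-dim features.
supp = torch.randn(2, 16, 64)
query = torch.randn(2, 16, 64)
module = DisentangleRecombine(dim=64)
s_rec, q_rec, proto = module(supp, query)
print(proto.shape)  # torch.Size([2, 64])
```

In a full pipeline, `proto` would be compared against query feature-map locations (e.g. by cosine similarity) to produce the segmentation, and a background prototype would be built analogously from the support background features.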