Generally, a distribution shift in the few-shot setting leads to poor generalization. Furthermore, while the number of instances per class in the real world may differ significantly, existing few-shot classification methods assume that each class contains an equal number of samples, which renders the trained classifier invalid. Moreover, although ResNet and WRN (Wide Residual Network) have achieved great success in image processing, the depth and width of CNNs constrain the performance of conventional convolution layers. To overcome these problems, this paper proposes a novel few-shot classification model that uses learnable balance variables to decide how much to learn from an imbalanced dataset, and that dynamically generates convolution kernels conditioned on each input. To extend decision boundaries and enhance class representations, our model uses embedding propagation as a regularizer for manifold smoothing, which effectively addresses the above problems in transductive learning. Embedding propagation interpolates between neural network features based on a similarity graph. Experiments show that embedding propagation produces a smoother embedding manifold and that our model achieves state-of-the-art results on standard few-shot datasets such as miniImagenet, tieredImagenet, and CUB. It significantly outperforms existing few-shot approaches, consistently improving accuracy by about 11%.
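
The embedding propagation step described above (interpolating neural network features over a similarity graph) can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the RBF `scale`, the smoothing weight `alpha`, and the closed-form propagator are assumptions chosen for clarity.

```python
import numpy as np

def embedding_propagation(features, alpha=0.5, scale=1.0):
    """Illustrative sketch: smooth a batch of feature vectors by
    interpolating over a similarity graph. `alpha` and `scale` are
    hypothetical hyperparameters, not values from the paper."""
    n = features.shape[0]
    # Pairwise squared distances -> RBF similarity (adjacency) matrix.
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    A = np.exp(-d2 / scale)
    np.fill_diagonal(A, 0.0)  # no self-loops
    # Symmetric normalization of the adjacency matrix.
    deg = np.maximum(A.sum(axis=1), 1e-12)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg))
    L = D_inv_sqrt @ A @ D_inv_sqrt
    # Closed-form propagation: each output embedding is a graph-weighted
    # blend of all inputs, pulling features toward the data manifold.
    P = np.linalg.inv(np.eye(n) - alpha * L)
    return P @ features

# Usage: smooth five random 4-dimensional embeddings.
rng = np.random.default_rng(0)
z = embedding_propagation(rng.normal(size=(5, 4)))
```

Because the propagator mixes every embedding with its graph neighbors, points from the same cluster are pulled together, which is the manifold-smoothing effect the abstract refers to.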