With the rise of deep learning, natural language processing applications have made significant progress, especially in the construction of multi-round dialogue systems for large-scale models. This article proposes a multi-round dialogue intention recognition model based on the Transformer framework, applied to large-model multi-round dialogue and combined with the BERT-BiLSTM-CRF model to achieve effective extraction of multi-round dialogue information. The BERT model obtains the semantic vector features of multi-round dialogues, and the BiLSTM model performs sequence annotation on the dialogues, taking the original dialogue sequence as the forward input and the reversed sequence as the backward input, so as to enhance the temporal features of the dialogue information. The output of the BiLSTM model then serves as the input to a conditional random field, which fully considers the transition features between dialogue labels and outputs the label sequence with the largest joint probability, thereby achieving effective extraction of dialogue information. To verify the feasibility of the model for extracting multi-round dialogue information, simulations are carried out in this paper. The F1 value of the BERT-BiLSTM-CRF model for semantic extraction on the ATIS dataset is 96.09%, which is 3.65 percentage points higher than that of the BiLSTM-CRF model. As the number of iterations increases, the model's loss value stably converges to 0.54 after the 10th iteration. Combining the BiLSTM and CRF models on the basis of the BERT model can thus achieve effective extraction of semantic information for large-model multi-round dialogues, providing a new research direction for natural language processing.
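The decoding step described above, where the CRF selects the label sequence with the largest joint probability from the BiLSTM's outputs, can be sketched with a plain Viterbi decoder. This is a minimal illustrative implementation, not the paper's code: the emission scores stand in for the BiLSTM's per-token label scores, and the transition matrix stands in for the CRF's learned label-transition features; all numeric values below are invented toy inputs.

```python
# Viterbi decoding for a linear-chain CRF: given per-token emission scores
# (e.g. from a BiLSTM) and a label-to-label transition score matrix, find
# the single label sequence with the highest total (joint) score.
# All scores here are illustrative toy values, not from the paper.

def viterbi_decode(emissions, transitions):
    """emissions[t][j]: score of assigning label j at time step t.
    transitions[i][j]: score of moving from label i to label j.
    Returns (best label sequence, its total score)."""
    n_labels = len(emissions[0])
    # score[j] = best score of any path that ends in label j so far
    score = list(emissions[0])
    backpointers = []
    for t in range(1, len(emissions)):
        new_score, bp = [], []
        for j in range(n_labels):
            # pick the best previous label for current label j
            best_i = max(range(n_labels),
                         key=lambda i: score[i] + transitions[i][j])
            new_score.append(score[best_i] + transitions[best_i][j]
                             + emissions[t][j])
            bp.append(best_i)
        score = new_score
        backpointers.append(bp)
    # backtrack from the best final label to recover the full path
    best_last = max(range(n_labels), key=lambda j: score[j])
    path = [best_last]
    for bp in reversed(backpointers):
        path.append(bp[path[-1]])
    path.reverse()
    return path, score[best_last]

# Toy example with 2 hypothetical labels (0 = "O", 1 = "B-slot")
# over a 3-token utterance:
emissions = [[2.0, 0.5], [0.3, 1.8], [1.2, 1.0]]
transitions = [[0.5, 0.1], [0.2, 0.8]]
path, best = viterbi_decode(emissions, transitions)
# path → [0, 1, 1]: transition scores overrule the per-token maxima alone
```

In a trained BERT-BiLSTM-CRF model the transition matrix is learned jointly with the network, which is what lets the CRF rule out invalid label orderings that a per-token softmax over BiLSTM outputs would permit.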