2021
DOI: 10.3390/agronomy11081530
ALBERT over Match-LSTM Network for Intelligent Questions Classification in Chinese

Abstract: This paper introduces a series of experiments with an ALBERT over match-LSTM network on top of pre-trained word vectors, for accurate classification of intelligent question answering and thus the guarantee of precise information service. To improve the performance of data classification, a short text classification method based on an ALBERT and match-LSTM model was proposed to overcome the limitations of the classification process, such as few vocabularies, sparse features, large amount of data, lots of no…
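The abstract pairs ALBERT token representations with a match-LSTM, whose core step is a soft alignment between two token sequences. The following is a minimal NumPy sketch of that match-attention idea under toy assumptions: random vectors stand in for ALBERT embeddings, dimensions are illustrative, and a mean-pool plus linear softmax head replaces the actual match-LSTM recurrence described in the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
d = 8           # hidden size (toy; ALBERT-base uses 768)
Tq, Tc = 5, 7   # token counts of the question and a candidate text

# In the paper's pipeline these would be ALBERT token embeddings;
# random stand-ins keep the sketch self-contained.
H_q = rng.normal(size=(Tq, d))
H_c = rng.normal(size=(Tc, d))

# Match-LSTM-style soft alignment: each question token attends over
# the candidate tokens, yielding an aligned context vector per token.
scores = H_q @ H_c.T             # (Tq, Tc) similarity matrix
alpha = softmax(scores, axis=1)  # attention weights over candidate tokens
aligned = alpha @ H_c            # (Tq, d) aligned representations

# A real match-LSTM would consume [H_q; aligned] step by step; here we
# mean-pool and apply a linear softmax head as a stand-in classifier.
features = np.concatenate([H_q, aligned], axis=1).mean(axis=0)  # (2d,)
n_classes = 3
W = rng.normal(size=(2 * d, n_classes))  # hypothetical classifier weights
probs = softmax(features @ W)
print(probs.shape, float(probs.sum()))
```

Each attention row and the final class distribution sum to one, which is the property the alignment and classification steps rely on.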

Cited by 10 publications (4 citation statements)
References 22 publications
“…Deep learning techniques (R-BLS and G-BLS) outperform LSTM in text categorization. Entity discovery in complex queries is addressed using semantic features and classification [61][62][63][64][65].…”
Section: Literature Review
confidence: 99%
“…Zhou and Zhang [44] developed a medical QA model based on bidirectional encoder representations from transformers (BERT), generative pre-trained transformer 2 (GPT-2), and text-to-text transfer transformer (T5) models, thereby showing improved performance compared to existing systems. Wang et al [45] suggested a classification method based on a lite BERT (ALBERT) and match-long short-term memory (match-LSTM) models to improve the performance of data classification.…”
Section: General Question Answering
confidence: 99%
“…In order to compare the performance of the Bleem model and other models in extracting document elements, this study selected the text similarity model (Islamaj R, 2019) [22], BI-CNN (Yin W, 2015) [23], ABCNN (Type 3) (Yin W, 2016) [24], match-LSTM (Wang X, 2021) [25], and BERT (Uthirapathy S E, 2023) [5] as the baselines.…”
Section: Performance Comparison and Analysis of Different Models
confidence: 99%