2022
DOI: 10.1016/j.heliyon.2022.e09683
Automatic symptoms identification from a massive volume of unstructured medical consultations using deep neural and BERT models

Cited by 6 publications (4 citation statements)
References 44 publications
“…Looking at the multi-label classification research summarized in Table 3, we can see that the majority covered classifying news, which by its nature tends to be structured text without spelling mistakes, while others covered social media text with a balanced data set and reported acceptable scores. At the same time, the most relevant research, which covered an imbalanced medical consultation data set, showed performance comparable to our models in terms of F1 score (achieving 35.46%) [50]. We assume that the characteristics of the data set used play a critical role in the quality of the developed classifiers.…”
Section: Discussion (mentioning)
confidence: 59%
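The comparison above turns on how the F1 score is averaged over an imbalanced, multi-label data set. A minimal sketch, assuming hypothetical label matrices rather than the cited data, of how micro- and macro-averaged F1 diverge when some labels are rare:

```python
import numpy as np
from sklearn.metrics import f1_score

# Hypothetical binary indicator matrices: rows = consultations, columns = symptom labels.
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [1, 0, 0, 1]])

# Micro-averaging pools true/false positives over all labels, while
# macro-averaging weights every label equally, so rarely predicted
# symptoms drag the macro score down on imbalanced data.
print("micro F1:", f1_score(y_true, y_pred, average="micro"))
print("macro F1:", f1_score(y_true, y_pred, average="macro"))
```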
“…CNN, either stand-alone or augmented with other techniques, has shown notably good performance for text classification [6,11,12,24,26,34]. It can be observed that plain RNNs are not widely used for problems that require sequential dependencies; instead, much text classification research has used their variants, LSTM/BiLSTM and GRU/BiGRU, either stand-alone or in combination with other algorithms [6,11,23,28,32,34-36,39,50]. When an RNN is used, it is often combined with other algorithms [6,12].…”
Section: Discussion (mentioning)
confidence: 99%
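As a rough illustration of the BiLSTM-style text classifiers mentioned above, here is a minimal Keras sketch; the vocabulary size, sequence length, and label count are placeholder assumptions rather than values from the cited works, and the sigmoid output layer reflects the multi-label setting:

```python
import tensorflow as tf

# Placeholder sizes; the cited works use their own vocabularies and label sets.
VOCAB_SIZE, MAX_LEN, NUM_LABELS, EMBED_DIM = 20000, 128, 20, 100

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(MAX_LEN,)),
    tf.keras.layers.Embedding(VOCAB_SIZE, EMBED_DIM),          # token embeddings
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),   # BiLSTM encoder
    tf.keras.layers.Dense(NUM_LABELS, activation="sigmoid"),   # one sigmoid per label
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.summary()
```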
“…Meanwhile, Chen et al. [36] used coronary heart disease, also known as angina, as an example to construct a pre-trained diagnostic model for traditional Chinese medicine texts based on the BERT model, completing text classification tasks for different types of coronary heart disease medical cases. Faris et al. [37] designed a BERT-based method for symptom identification and diagnosis to assist doctors in handling consultations from users in multiple languages. BERT-based and CNN-based medical application methods are summarized in Table 1.…”
Section: Related Work (mentioning)
confidence: 99%
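For context, a hedged sketch of the kind of BERT-based multi-label classification this passage describes, using the Hugging Face transformers API; the checkpoint name, label count, and example text are illustrative assumptions, not the setup of the cited papers:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"   # assumed checkpoint for illustration
NUM_LABELS = 20                               # assumed number of symptom labels

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(
    MODEL_NAME,
    num_labels=NUM_LABELS,
    problem_type="multi_label_classification",  # sigmoid outputs + BCE loss
)

text = "Patient reports persistent headache and mild fever for three days."
inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.sigmoid(logits)                         # one probability per label
predicted = (probs > 0.5).nonzero(as_tuple=True)[1]   # indices of predicted labels
print(probs.shape, predicted.tolist())
```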