2021
DOI: 10.1016/j.knosys.2021.106869
Sentence representation with manifold learning for biomedical texts

Cited by 22 publications (12 citation statements)
References 26 publications
“…Table 3 shows the accuracy of BRCNN in the English tense collocation proposed in this study. To objectively evaluate the results of temporal collocation, HSASRL [23], SRML [24], and Usr-mtl [25] are introduced as comparison models. It can be seen from the results in Table 3 that the accuracy of BRCNN is higher than HSASRL and Usr-mtl.…”
Section: Results and Analysis
confidence: 99%
“…Good results are obtained in multiple datasets. Zhao et al [19] use the attention mechanism to obtain semantic representation at different levels in sentences, which can more accurately express the emotion reflected in the text and make the sentence representation more comprehensive. Most of the neural network training models remain in the training of vocabulary targets, and there are few studies on the whole sentence as the training target.…”
Section: Related Work
confidence: 99%
“…That is, sentence representations have gradually moved from simple word-embedding models toward neural network architectures. For example, Zhao et al [17] leveraged the strong feature-extraction capability of a convolutional neural network (CNN), which captures feature information at different levels of the text and performs simple concatenation to generate sentence embedding representations. Although a CNN can capture the local information flow of a sentence well, it easily loses the global semantics of the input sentence, resulting in poor sentence representations.…”
Section: Related Work
confidence: 99%