2022
DOI: 10.1016/j.eswa.2022.117606

Knowledge guided distance supervision for biomedical relation extraction in Chinese electronic medical records


Cited by 19 publications (5 citation statements)
References 23 publications
“…A biRNN effectively extracts past and future (forward and backward) information simultaneously for a specific time step, which is not the case with regular RNNs [137]. NER performance is enhanced with limited data by adding knowledge and data augmentation via a multi-task bidirectional RNN model combined with deep transfer learning [138,139]. The RNN encoder-decoder uses one RNN to encode a sequence of symbols into a fixed-length vector representation and a second RNN to decode that representation back into a sequence of symbols [140].…”
Section: Tag Encoder-Decoder Architecture
confidence: 99%
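As a concrete illustration of the bidirectional encoding idea in the excerpt above, here is a minimal sketch, assuming PyTorch; the class name, dimensions, and GRU choice are illustrative assumptions and not taken from the cited papers. It maps a token sequence to a single fixed-length vector, the role the encoder plays in the RNN encoder-decoder framework.

```python
# Minimal sketch (assumption, not the cited papers' models): a bidirectional GRU
# encoder that compresses a token sequence into one fixed-length vector.
import torch
import torch.nn as nn

class BiRNNEncoder(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int = 128, hidden_dim: int = 256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # bidirectional=True lets every time step see both past (forward pass)
        # and future (backward pass) context, unlike a plain unidirectional RNN.
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True, bidirectional=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # token_ids: (batch, seq_len) integer tensor
        emb = self.embed(token_ids)                 # (batch, seq_len, emb_dim)
        _, h_n = self.rnn(emb)                      # h_n: (2, batch, hidden_dim)
        # Concatenate the final forward and backward hidden states into the
        # fixed-length sentence vector a decoder RNN would unfold from.
        return torch.cat([h_n[0], h_n[1]], dim=-1)  # (batch, 2 * hidden_dim)

# Usage sketch
encoder = BiRNNEncoder(vocab_size=5000)
sentence_vec = encoder(torch.randint(0, 5000, (4, 20)))  # shape (4, 512)
```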
“…Knowledge graphs cover a more comprehensive range of knowledge than traditional expert systems. To support the construction of knowledge graphs, existing studies have identified entities [42-44] and relationships [45-47] between drugs and diseases from various data sources. In downstream tasks, knowledge graphs can be combined with algorithms such as machine learning to achieve prescription recommendations [25-27].…”
Section: Related Work
confidence: 99%
“…The α₁ is the learning rate used to update the model parameters for each learning task, and f_θ′ denotes the model with the parameters optimized after meta-training; see equation (9).…”
Section: Integration Of Graphs and Meta-learning
confidence: 99%
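The update described in this excerpt matches the inner-loop step of a MAML-style meta-learner, but equation (9) itself is not reproduced here. A hedged reconstruction of what such a task-level update typically looks like is given below; the task 𝒯ᵢ and loss ℒ notation are assumptions, not taken from the citing paper.

```latex
% Sketch of a MAML-style inner-loop update (assumed form, not the citing paper's equation (9)):
% adapt the shared parameters \theta to task \mathcal{T}_i with learning rate \alpha_1.
\theta' = \theta - \alpha_1 \, \nabla_{\theta} \, \mathcal{L}_{\mathcal{T}_i}\bigl(f_{\theta}\bigr)
```

Under this reading, f_θ′ is the model evaluated with the task-adapted parameters θ′, and α₁ controls how far each task moves the shared initialization.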
“…According to related research, graph neural networks follow three general frameworks. The first, the message passing neural network (MPNN) [4], unifies graph neural network and graph convolutional network (GCN) methods [5]; the second, the non-local neural network (NLNN) [6], combines various attention mechanisms; and the last, the graph network (GN), unifies MPNN and NLNN and covers models such as the GGNN, Relation Networks, and Interaction Networks. However, although these network models perform well in large-sample learning, they still perform poorly in small-sample representation learning of professional vocabulary, such as electronic medical record representation, Chinese biomedical relation extraction, electronic health record information extraction, and medical patient diagnosis classification [8-11]. The text corpus of medical professional vocabulary is generally small, lies beyond the expertise of most people, and requires a certain level of medical background to understand.…”
Section: Introduction
confidence: 99%
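To make the MPNN framing in this excerpt concrete, the sketch below shows one round of message passing with a GGNN-style GRU update; it is an illustrative assumption, not the exact models cited, and the class and parameter names are invented for the example.

```python
# Minimal sketch (assumption): one message-passing round of the MPNN scheme
# that GCN-, GGNN-, NLNN- and GN-style models specialize in different ways.
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.message_fn = nn.Linear(dim, dim)  # builds a message from each neighbor state
        self.update_fn = nn.GRUCell(dim, dim)  # GGNN-style node-state update

    def forward(self, h: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # h:   (num_nodes, dim)           current node states
        # adj: (num_nodes, num_nodes)     adjacency matrix, 1.0 where an edge exists
        messages = adj @ self.message_fn(h)    # sum transformed neighbor states
        return self.update_fn(messages, h)     # update each node from its messages

# Usage sketch: 5 nodes on a small random graph
h = torch.randn(5, 64)
adj = (torch.rand(5, 5) > 0.5).float()
h_next = MessagePassingLayer()(h, adj)         # shape (5, 64)
```

GCN-style layers replace the GRU update with a normalized linear transform, while NLNN- and GN-style variants mainly change how the messages are weighted and aggregated.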