2020
DOI: 10.1007/978-3-030-64452-9_1

Improving Scholarly Knowledge Representation: Evaluating BERT-Based Models for Scientific Relation Classification


Cited by 8 publications (5 citation statements)
References 22 publications
“…The analysis of the entire dataset shows that all entries contain the title field; 87.11% of the entries contain both the title and the subject-heading fields; 52.26% contain both the title and the abstract fields; and 51.34% contain the title, subject-heading, and abstract fields. For classified books, therefore, the title and the subject heading are relatively common fields, while the abstract field appears with relatively low probability (Jiang et al, 2020). This section uses a basic one-way LSTM model: a single LSTM hidden layer with 128 nodes, a batch size of 128, and early stopping during training, i.e., training halts when the loss on the validation set increases, with the full training set trained for up to 1,000 epochs.…”
Section: Exploring the Impact of Field Selection on Classification (mentioning)
confidence: 99%
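The training configuration quoted above is concrete enough to sketch in code. The following is a minimal sketch, assuming TensorFlow/Keras (the citing paper does not name its framework) and hypothetical placeholder names (build_model, vocab_size, num_classes, x_train, etc.), of a one-way LSTM classifier with one 128-node hidden layer, batch size 128, early stopping on rising validation loss, and up to 1,000 epochs:

import tensorflow as tf

def build_model(vocab_size: int, num_classes: int) -> tf.keras.Model:
    # Single unidirectional (one-way) LSTM hidden layer with 128 nodes,
    # as described in the quoted configuration.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(vocab_size, 128),
        tf.keras.layers.LSTM(128),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Early stopping: halt as soon as the validation loss stops improving
# (i.e., increases), matching the quoted training principle.
early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss", patience=0)

# Hypothetical usage with placeholder data (not provided in the source):
# model = build_model(vocab_size=20000, num_classes=10)
# model.fit(x_train, y_train, validation_data=(x_val, y_val),
#           batch_size=128, epochs=1000, callbacks=[early_stop])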
“…The structure is simply logical, with the goal of maximizing the reasoning in our scenario. [41] is explored to identify relation types in knowledge graphs in the scholarly domain. Farber in [42] developed a framework for extracting entities such as scientific methods and datasets, along with classification and aggregation.…”
Section: Common Scholarly Communication Infrastructures (mentioning)
confidence: 99%
“…As a pre-trained language representation built on the deep neural technology of transformers, it provides NLP practitioners with high-quality language features from text data simply out-of-the-box and thus improves performance on many NLP tasks. These models return contextualized word embeddings that can be directly employed as features for downstream tasks [28].…”
Section: BERT Models (mentioning)
confidence: 99%
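To make the quoted "out-of-the-box" claim concrete: a minimal sketch, assuming the Hugging Face transformers library and the public bert-base-uncased checkpoint (neither is specified in the citation statement), of extracting contextualized word embeddings from a pretrained BERT model for use as downstream features:

import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentence = "BERT returns contextualized word embeddings."
inputs = tokenizer(sentence, return_tensors="pt")

with torch.no_grad():
    outputs = model(**inputs)

# last_hidden_state holds one contextualized vector per (sub)word token,
# shape (batch_size, sequence_length, hidden_size=768 for bert-base).
token_embeddings = outputs.last_hidden_state

# A common sentence-level feature for downstream classifiers:
# the embedding of the [CLS] token (position 0).
cls_embedding = token_embeddings[:, 0, :]

Because the embeddings depend on the whole sentence, the same word receives different vectors in different contexts, which is what distinguishes these features from static word embeddings.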
“…Prospectively, machine learning can assist scientists in recording their results in the Leaderboards of next-generation digital libraries such as the Open Research Knowledge Graph (ORKG) [27]. In our age of the "deep learning tsunami" [38], many studies have used neural network models to improve the construction of automated scholarly knowledge mining systems [36,12,7,28]. With the recent introduction of language modeling techniques such as transformers [44], the opportunity to obtain boosted machine learning systems is further accentuated.…”
Section: Introduction (mentioning)
confidence: 99%