2019
DOI: 10.48550/arxiv.1909.04758
Preprint

Scientific Discourse Tagging for Evidence Extraction

Abstract: The biomedical scientific literature comprises a crucial, sometimes life-saving, natural language resource whose size is accelerating over time. The information in this resource tends to follow a style of discourse that is intended to provide scientific explanations for various pieces of evidence derived from experimental findings. Studying the rhetorical structure of the narrative discourse could enable more powerful information extraction methods to automatically construct models of scientific argument from …

Cited by 1 publication (2 citation statements)
References 13 publications
“…For variations on the classification task, we consider using a Conditional Random Field layer instead of a simple FFNN layer, which has been shown to improve results (Li et al., 2019a). However, we do not see an improvement in this case, possibly because the Bi-LSTM layer prior to classification was already inducing sequential information to be shared.…”
Section: E.2.1 Classification Task Variations (mentioning, confidence: 99%)
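
The comparison this statement draws (a CRF layer versus a plain FFNN layer on top of a Bi-LSTM sentence encoder) can be sketched concretely. The code below is a minimal illustration, not the cited papers' implementation; it assumes PyTorch plus the third-party pytorch-crf package, and the class name, dimensions, and tag count are hypothetical.

```python
# Illustrative sketch (not the cited implementation): a Bi-LSTM over
# pre-computed sentence embeddings, topped by either a simple FFNN
# classifier or a CRF layer, mirroring the comparison quoted above.
import torch
import torch.nn as nn
from torchcrf import CRF  # third-party: pip install pytorch-crf

class DiscourseTagger(nn.Module):  # hypothetical name
    def __init__(self, emb_dim=768, hidden=256, num_tags=7, use_crf=False):
        super().__init__()
        self.bilstm = nn.LSTM(emb_dim, hidden, batch_first=True,
                              bidirectional=True)
        self.ffnn = nn.Linear(2 * hidden, num_tags)  # per-sentence scores
        self.crf = CRF(num_tags, batch_first=True) if use_crf else None

    def forward(self, sent_embs, tags=None, mask=None):
        # sent_embs: (batch, num_sentences, emb_dim)
        h, _ = self.bilstm(sent_embs)
        emissions = self.ffnn(h)  # (batch, num_sentences, num_tags)
        if self.crf is not None:
            if tags is None:
                return self.crf.decode(emissions, mask=mask)  # Viterbi paths
            return -self.crf(emissions, tags, mask=mask)  # NLL loss
        if tags is None:
            return emissions.argmax(-1)  # independent per-sentence decisions
        return nn.functional.cross_entropy(emissions.transpose(1, 2), tags)
```

With use_crf=True the head scores whole label sequences jointly and decodes with Viterbi; with use_crf=False each sentence is classified independently, which is the FFNN baseline the quoted passage found sufficient once the Bi-LSTM already shares sequential information.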
“…We describe each layer in turn. Sentence-Embedding Modeling: The architecture we use to model each supervised task in our multitask setup is inspired by previous work in sentence-level tagging and discourse learning (Choubey et al., 2020; Li et al., 2019a). As shown in Figure 4, we use a transformer model, RoBERTa-base (Liu et al., 2019), to generate sentence embeddings: each sentence in a document is fed sequentially into the same model, and we use the <s> token from each sentence as the sentence-level embedding.…”
Section: Neural Architecture (mentioning, confidence: 99%)
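
As a concrete illustration of the embedding step described in this statement, the sketch below encodes each sentence separately with RoBERTa-base and keeps the hidden state of the leading <s> token as the sentence embedding. It uses the Hugging Face transformers library; the example sentences and the flat batching are illustrative simplifications.

```python
# Sketch of the sentence-embedding step quoted above: run each sentence
# through RoBERTa-base and take the <s> token's final hidden state.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModel.from_pretrained("roberta-base")
model.eval()

sentences = [  # invented examples: one document's sentences
    "We treated the cells with the kinase inhibitor.",
    "Expression of the target gene dropped sharply.",
]

with torch.no_grad():
    enc = tokenizer(sentences, padding=True, truncation=True,
                    return_tensors="pt")
    out = model(**enc)
    # For RoBERTa, <s> is the first token of every sequence, so its
    # hidden state sits at position 0 along the sequence axis.
    sent_embs = out.last_hidden_state[:, 0, :]  # (num_sentences, 768)

print(sent_embs.shape)  # torch.Size([2, 768])
```

These per-sentence vectors are what a sequence tagger like the sketch above would consume.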