2017
DOI: 10.1007/978-3-319-67468-1_3

Extracting Core Claims from Scientific Articles

Abstract: The number of scientific articles has grown rapidly over the years and there are no signs that this growth will slow down in the near future. Because of this, it becomes increasingly difficult to keep up with the latest developments in a scientific field. To address this problem, we present here an approach to help researchers learn about the latest developments and findings by extracting core claims from scientific articles in a normalized form. This normalized representation is a controlled natural language …

Cited by 4 publications (6 citation statements, published 2018–2024); references 16 publications.

Citation statements:
“…Given the lack of robust definitions of science-relatedness, we followed an iterative process of data exploration, literature review and preliminary labeling rounds. We started by selecting and observing samples of science-related texts coming from science-related datasets [7, 14, 21, 23–25, 29] and reviewing related definitions together with researchers from various disciplines. We then manually classified them into categories, and held intermediate annotation rounds with new samples to test the agreement across categories.…”
Section: Category Definitions and Annotation Framework
confidence: 99%
“…Methodological research at the intersection of NLP, information retrieval and machine learning is aimed at detecting, classifying or verifying (scientific) claims and discourse [11, 18, 19, 24, 25], and is a key facilitator for large-scale interdisciplinary analysis of science discourse. Prior works often focus on actual scholarly publications [14, 21], where the formality of language differs substantially from science claims in online news and social media, e.g., Twitter.…”
Section: Introduction
confidence: 99%
“…In the literature, rule-based approaches have also been used for claim extraction or classification [6], [7], [8], [9]. De Ribaupierre [7] annotated each sentence of a document and used syntactic rules to identify the discourse type of every sentence.…”
Section: Introduction
confidence: 99%
“…The risk of annotation noise can increase when there is a large number of rules. Jansen and Kuhn [6] proposed a rule-based approach to help researchers learn about recent developments by extracting a core claim from the abstract. They used term frequency (tf) for keyword extraction.…”
Section: Introduction
confidence: 99%
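The statement above describes the keyword step only in passing. As a rough, hypothetical sketch of how a term-frequency ranking can feed a rule-based claim pick (the stopword list, tokenizer, and sentence-scoring heuristic below are assumptions for illustration, not the rules from [6]):

```python
import re
from collections import Counter

# Minimal stopword list; an assumption for illustration only.
STOPWORDS = {"the", "a", "an", "of", "in", "to", "and", "is", "are",
             "for", "on", "with", "that", "this", "we", "our", "by"}

def tf_keywords(text: str, k: int = 5) -> list[str]:
    """Rank words by raw term frequency (tf), ignoring stopwords."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter(t for t in tokens if t not in STOPWORDS)
    return [word for word, _ in counts.most_common(k)]

def pick_core_sentence(abstract: str) -> str:
    """Pick the sentence that covers the most top-k keywords."""
    keywords = tf_keywords(abstract)
    sentences = re.split(r"(?<=[.!?])\s+", abstract)
    return max(sentences, key=lambda s: sum(w in s.lower() for w in keywords))
```

A real pipeline would add rewriting and normalization rules on top of such a selection step; this sketch only shows where tf-based keywords fit in.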
“…In both cases, about 70% of the created AIDA sentences received a perfect quality score. In a follow-up study, we worked on the extraction of AIDA sentences from paper abstracts with a simple rule-based approach [11].…”
Section: Introduction
confidence: 99%
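The follow-up study [11] is summarized here only at a high level. Purely as an illustrative sketch of filtering candidate sentences by the four AIDA properties (atomic, independent, declarative, absolute), where the cue lists are assumptions rather than the rules from [11]:

```python
import re

# Assumed cue lists for illustration; not the published rule set.
HEDGES = {"may", "might", "could", "possibly", "probably", "seems"}
CONTEXT_REFS = {"this", "these", "it", "they", "former", "latter"}

def looks_like_aida(sentence: str) -> bool:
    """Heuristically check the four AIDA properties of a sentence."""
    words = set(re.findall(r"[a-z]+", sentence.lower()))
    declarative = sentence.strip().endswith(".")             # Declarative: complete statement
    absolute = not (words & HEDGES)                          # Absolute: no hedging terms
    independent = not (words & CONTEXT_REFS)                 # Independent: no external referents
    atomic = ";" not in sentence and " but " not in sentence # Atomic: one claim per sentence
    return declarative and absolute and independent and atomic
```

Quality scores like the roughly 70% reported above would come from human judgments of the resulting sentences; the sketch only illustrates the filtering idea.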