2017 ACM/IEEE Joint Conference on Digital Libraries (JCDL)
DOI: 10.1109/jcdl.2017.7991558

Identifying Important Citations Using Contextual Information from Full Text

Cited by 44 publications (39 citation statements: 2 supporting, 37 mentioning, 0 contrasting). References 6 publications. Citing publications range from 2018 to 2024.
“…Figure 12 shows the results for the COMST and TON data, where the number of references in COMST and TON increases with the number of authors. Similar results are reported by Saeed et al., Valenzuela et al., and Zhu et al. in their studies (Hassan et al. 2017a; Valenzuela et al. 2015; Zhu et al. 2015). Figure 12 also shows that, as the number of authors increases, the number of references from the last ten years in a paper also increases in COMST and TON.…”
Section: Analysis Based on Structural Elements of Article (supporting)
confidence: 85%
“…In addition, we found that qualitative assessment helps to better understand the feature set being examined. A potential limitation of this study is the adoption of the definitions that came with the dataset as-is [4,14]. In future studies, other definitions and features could be explored, such as stylometric features from full text [15].…”
Section: Discussion (mentioning)
confidence: 99%
“…This method uses the entire dataset and draws random splits (decision trees) for each of the randomly selected features, and then the best split is chosen. For each tree, the importance is calculated from the impurity of the splits, with higher values assigned where the features are more important [14]. The splits are random to ensure that the model does not overfit the data.…”
Section: Feature Importance Methods (mentioning)
confidence: 99%
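
The randomized-split, impurity-based importance described in the statement above matches the Extra-Trees family of ensembles. Below is a minimal sketch, assuming scikit-learn's ExtraTreesClassifier and a synthetic placeholder dataset (the actual citation-feature matrix is not part of this report):

# Minimal sketch: impurity-based feature importance from a randomized
# tree ensemble (Extra-Trees). Assumes scikit-learn is installed; the
# dataset below is synthetic and stands in for the real feature matrix.
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Placeholder data: 500 samples, 8 features, 3 of them informative.
X, y = make_classification(n_samples=500, n_features=8,
                           n_informative=3, random_state=0)

# Extra-Trees draws random split thresholds for each randomly selected
# feature and keeps the best split, as the quoted passage describes.
model = ExtraTreesClassifier(n_estimators=200, random_state=0)
model.fit(X, y)

# feature_importances_ averages, over all trees, the total impurity
# decrease contributed by splits on each feature; larger = more important.
for i, importance in enumerate(model.feature_importances_):
    print(f"feature {i}: {importance:.3f}")

Because split thresholds are drawn at random before the best one is kept, individual trees vary widely, which is what curbs overfitting when importances are averaged across the ensemble.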