2021
DOI: 10.1007/s11192-021-03943-w

Characterizing research leadership on geographically weighted collaboration network

Abstract: Research collaborations, especially long-distance and international collaborations, have become increasingly prevalent worldwide. Recent studies have highlighted the significant role of research leadership in collaborations. However, existing measures of research leadership do not take into account the intensity of leadership in the co-authorship network. More importantly, the spatial features, which influence collaboration patterns and research outcomes, have not been incorporated in measuring research…

Cited by 17 publications (6 citation statements) · References 99 publications (121 reference statements)
“…In our empirical study, we use three threshold-dependent evaluation metrics (Precision, Recall, and F-measure (F1)) and one threshold-independent evaluation metric (Matthews correlation coefficient, MCC) to evaluate the performance of CSD models. These metrics are widely used in both software engineering studies [64-71] and artificial intelligence research [72-75]. In the binary classification problem, these four evaluation metrics can be calculated from a confusion matrix, as shown in Table 4.…”
Section: Methods (mentioning)
confidence: 99%
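The quoted setup is standard; as a minimal sketch (in Python, with hypothetical confusion-matrix counts tp, fp, fn, tn, not taken from the cited studies), the four metrics can be computed as follows:

```python
import math

def classification_metrics(tp, fp, fn, tn):
    """Precision, Recall, F1, and MCC from confusion-matrix counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    # MCC uses all four cells of the matrix, which keeps it informative
    # on class-imbalanced data.
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    mcc = (tp * tn - fp * fn) / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1, "mcc": mcc}

# Hypothetical counts for illustration only.
print(classification_metrics(tp=40, fp=10, fn=20, tn=130))
```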
“…The metrics are widely used in both software engineering studies [64-71] and artificial intelligence research [72-75]. In the binary classification problem, these four evaluation metrics can be calculated from a confusion matrix, as shown in Table 4.…”
Section: Performance Measures (mentioning)
confidence: 99%
“…Therefore, it is necessary to take effort into consideration for defect prediction. In this work, we deploy six different effort-aware evaluation metrics to measure the prediction results of EADP models, some of which are also widely used in the machine learning field [3, 36-43]. Similar to previous EADP studies, we restrict the limited effort to 20% of the total LOC of one dataset in our work.…”
Section: Evaluation Metric (mentioning)
confidence: 99%
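A 20% LOC inspection budget of this kind might be enforced as sketched below; the ranking by predicted defect density and the (name, loc, prob) tuple layout are illustrative assumptions, not the exact procedure of the cited EADP studies.

```python
def modules_within_effort(modules, budget_ratio=0.2):
    """Greedily pick modules to inspect within budget_ratio of the total LOC.

    modules -- hypothetical (name, loc, predicted_prob) triples; ranking by
    predicted defect density (prob / LOC) is one common heuristic, assumed here.
    """
    total_loc = sum(loc for _, loc, _ in modules)
    budget = budget_ratio * total_loc
    ranked = sorted(modules, key=lambda m: m[2] / max(m[1], 1), reverse=True)
    selected, spent = [], 0
    for name, loc, _prob in ranked:
        if spent + loc > budget:
            break  # the next module would exceed the 20% LOC budget
        selected.append(name)
        spent += loc
    return selected

mods = [("a", 100, 0.9), ("b", 400, 0.8), ("c", 50, 0.3), ("d", 450, 0.1)]
print(modules_within_effort(mods))  # total LOC 1000, budget 200 -> ['a', 'c']
```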
“…When checking the top 20% LOC according to the predicted result of the EADP model, the software testing team inspects n software modules and finds p actual defective modules with q defects. In our experiments, we utilise several evaluation measures that are commonly adopted in both the software engineering [92-94] and machine learning [95-100] fields. Precision@20% is the ratio between the number of actual defective modules and the number of predicted defective modules in the top 20% LOC.…”
Section: Effort-aware Evaluation Metrics (mentioning)
confidence: 99%
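Following the quote's notation (inspect n modules, find p actual defective ones in the top 20% LOC), Precision@20% works out to p/n. A short sketch is below; the Recall@20% companion is my own assumption of a commonly paired metric, not defined in the quote.

```python
def effort_aware_metrics(inspected, all_modules):
    """Precision@20% per the quoted definition, plus an assumed Recall@20%.

    inspected   -- modules falling in the top 20% LOC (the n predicted modules)
    all_modules -- every module in the dataset
    Modules are hypothetical (name, is_defective) pairs.
    """
    n = len(inspected)                                     # modules inspected
    p = sum(1 for _, defective in inspected if defective)  # actual defective found
    total_defective = sum(1 for _, defective in all_modules if defective)
    precision_at_20 = p / n if n else 0.0                  # p / n from the quote
    recall_at_20 = p / total_defective if total_defective else 0.0  # assumed metric
    return precision_at_20, recall_at_20

mods = [("a", True), ("b", False), ("c", True), ("d", True)]
print(effort_aware_metrics(inspected=mods[:2], all_modules=mods))  # (0.5, 0.333...)
```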