Proceedings of the 28th ACM International Conference on Information and Knowledge Management 2019
DOI: 10.1145/3357384.3358153

Hybrid Deep Pairwise Classification for Author Name Disambiguation

Cited by 24 publications (12 citation statements) · References 12 publications · Citing years: 2020–2024

“…For the similarity formula, this paper calculates the cosine of the angle between the vectors $x = (x_1, x_2, \dots, x_n)$ and $y = (y_1, y_2, \dots, y_n)$ [13]. It can be written as formula (5). … repeatedly assigned to different clusters according to the closest centroid.…”
Section: Framework and Methodology
confidence: 99%
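
The excerpt describes a standard cosine-similarity computation followed by a k-means-style reassignment of points to the closest centroid. Below is a minimal Python sketch of both steps; the function names and the use of cosine similarity as the closeness measure in the assignment step are illustrative assumptions, not the citing paper's actual code (its formula (5) is not reproduced here).

```python
import numpy as np

def cosine_similarity(x: np.ndarray, y: np.ndarray) -> float:
    """Cosine of the angle between vectors x and y."""
    return float(np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def assign_clusters(points: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """k-means-style step: assign each point to its closest centroid,
    here taking 'closest' to mean highest cosine similarity."""
    sims = np.array([[cosine_similarity(p, c) for c in centroids]
                     for p in points])
    return sims.argmax(axis=1)
```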
“…A feature-learning method and affinity propagation clustering were taken into account. Kim et al [5] combined global features with structure features for author name disambiguation. Global features, extracted from the attributes of the dataset, formed the textual vector representation.…”
Section: Related Work
confidence: 99%
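
A textual vector representation built from record attributes, as described above, is commonly realized with a sparse bag-of-words encoding. The sketch below uses TF-IDF as one plausible choice; both the encoding and the toy record strings are assumptions, since the excerpt does not specify how the global features are encoded.

```python
from sklearn.feature_extraction.text import TfidfVectorizer

# Hypothetical publication records: title, venue, and coauthor names
# concatenated into one textual attribute string per record.
records = [
    "hybrid deep pairwise classification author name disambiguation cikm",
    "author name disambiguation via graph clustering kdd",
]

# Global textual vector representation: one TF-IDF vector per record.
textual_vectors = TfidfVectorizer().fit_transform(records)
print(textual_vectors.shape)  # (2, vocabulary size)
```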
“…Several classification models have been used for learning the pairwise similarity function, including Naive Bayes [11], Logistic Regression [12], Support Vector Machines [8,11,13], Decision Trees (C4.5) [2], Random Forests (RF) [6,8,12,14,15], Deep Neural Networks (DNN) [16], and Gradient Boosted Trees (GBT) [6,15,17,18]. Tran et al [16] used DNNs with manually crafted features, whereas Atarashi et al [19] leveraged a DNN to learn feature representations from bag-of-words vectors.…”
Section: Previous Work
confidence: 99%
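
As a concrete illustration of learning a pairwise similarity function with one of the listed models, the sketch below fits a Random Forest on hand-crafted pair features. The feature names and toy data are hypothetical, and any of the other classifiers (SVM, GBT, DNN) could be swapped in the same way.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical features for publication pairs, e.g. title similarity,
# coauthor overlap, same-venue flag; label 1 = same author, 0 = different.
X_pairs = np.array([[0.91, 0.50, 1.0],
                    [0.12, 0.00, 0.0],
                    [0.85, 0.33, 1.0],
                    [0.05, 0.00, 0.0]])
y_pairs = np.array([1, 0, 1, 0])

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_pairs, y_pairs)

# Estimated probability that a new pair shares the same author.
print(clf.predict_proba([[0.70, 0.25, 1.0]])[0, 1])
```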
“…Others train a model to learn a vector representation from triplet samples without considering any structure information. Kim et al [8] proposed a hybrid model that makes use of both [global and structure features] and trains an SVM, an RF, a GBT, and a DNN to determine whether a pair of publications belongs to the same author.…”
Section: Related Work
confidence: 99%
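
Learning a vector representation from triplet samples, as mentioned above, is typically driven by a triplet margin loss: an anchor publication is pulled toward a positive (same-author) publication and pushed away from a negative one. Below is a minimal NumPy sketch of that generic loss, not the specific objective of any cited work; the margin value and toy embeddings are illustrative.

```python
import numpy as np

def triplet_loss(anchor: np.ndarray, positive: np.ndarray,
                 negative: np.ndarray, margin: float = 1.0) -> float:
    """Generic triplet margin loss over embedding vectors."""
    d_pos = np.linalg.norm(anchor - positive)  # distance to same-author sample
    d_neg = np.linalg.norm(anchor - negative)  # distance to other-author sample
    return max(0.0, d_pos - d_neg + margin)

# The loss is zero once the negative is at least `margin`
# farther from the anchor than the positive.
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])
n = np.array([2.0, 0.0])
print(triplet_loss(a, p, n))  # 0.0
```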