2008
DOI: 10.1007/978-3-540-89704-0_28

Refining Instance Coreferencing Results Using Belief Propagation

Abstract: The problem of coreference resolution (finding individuals that describe the same entity but have different URIs) is crucial when dealing with semantic data coming from different sources. Specific features of Semantic Web data (ontological constraints, data sparseness, varying quality of sources) are all significant for coreference resolution and must be exploited. In this paper we present a framework that uses Dempster-Shafer belief propagation to capture these features and refine coreference re…

Cited by 8 publications (5 citation statements)
References 17 publications
“…Several other datasets used by different researchers to evaluate their tools separately were collected within the RIDDLE repository. 5 These datasets have also been adapted to the semantic web standards and used to evaluate the instance matching algorithms in the semantic web domain: e.g., Cora was used in [10,13,14], while the Restaurants dataset from the RIDDLE repository was used by [15]. The advantage of reusing these is the possibility to compare with the techniques developed in the database community, despite the differences in the format of processed data.…”
Section: Evaluation Initiatives in the Database Community
confidence: 99%
“…12 The NYT repository includes three subsets describing different types of entities mentioned in the New York Times articles: people, organizations, and places. These three subsets were linked to three commonly used semantic web data repositories: DBpedia, 13 Freebase, 14 and Geonames. 15 These links were provided by the data publishers, which improved the gold standard quality (see Table 4).…”
Section: Benchmark
confidence: 99%
“…These datasets have also been adapted to the semantic web standards and used to evaluate the instance matching algorithms in the semantic web domain: e.g., Cora was used in [10], [28], and [18], while the Restaurants dataset from the RIDDLE repository was used by [33]. The advantage of reusing these is the possibility to compare with the techniques developed in the database community, despite the differences in the format of processed data.…”
Section: Evaluation Initiatives in the Database Community
confidence: 99%
“…In the method described by Nikolov et al (2008b) the results of value matching, ontological axioms, and relations between individuals are combined together using valuation networks (Shenoy, 1992). Valuation networks are graphs containing two kinds of nodes.…”
Section: Internal Approaches
confidence: 99%
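The citation above describes combining evidence from value matching, ontological axioms, and relations between individuals via Dempster-Shafer belief propagation over valuation networks. As a minimal illustrative sketch (not the authors' valuation-network implementation), the core step is Dempster's rule of combination, which merges two mass functions over subsets of a frame and renormalizes away conflicting mass; the frame, mass values, and function names below are hypothetical:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination for two mass functions.

    m1, m2: dicts mapping frozenset hypotheses to mass; masses sum to 1.
    Returns the normalized combined mass function.
    """
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical frame: are two instances the same entity or different?
SAME, DIFF = frozenset({"same"}), frozenset({"diff"})
THETA = SAME | DIFF  # the full frame represents "unknown"

# Evidence from value matching: 0.7 toward "same", 0.3 uncommitted.
m_value = {SAME: 0.7, THETA: 0.3}
# Evidence from an ontological constraint: 0.6 toward "diff", 0.4 uncommitted.
m_axiom = {DIFF: 0.6, THETA: 0.4}

m = combine(m_value, m_axiom)
```

In a valuation network these combinations are propagated along the graph rather than applied pairwise, but the example shows why the formalism suits the paper's setting: each source can leave part of its mass on the full frame, expressing ignorance instead of forcing a probability split.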