2014
DOI: 10.1007/978-3-319-11915-1_3

Conference v2.0: An Uncertain Version of the OAEI Conference Benchmark

Cited by 30 publications (33 citation statements)
References 10 publications
“…The intent of this track is to provide expressive and real-world matching problems over expressive ontologies covering the same domain [2]. This data set is composed of 16 ontologies covering the domain of conference organization and a subset of 21 reference alignments involving 7 ontologies.…”
Section: OAEI Conference Data Set
confidence: 99%
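OAEI reference alignments such as these are distributed in the Alignment API's RDF/XML format. A minimal sketch of reading one such file follows; the file name is hypothetical, and the namespace handling reflects the published format rather than anything specific to this paper:

```python
import xml.etree.ElementTree as ET

# Namespaces of the Alignment API format used by OAEI reference alignments.
ALIGN = "{http://knowledgeweb.semanticweb.org/heterogeneity/alignment}"
RDF = "{http://www.w3.org/1999/02/22-rdf-syntax-ns#}"

def load_reference_alignment(path):
    """Yield (entity1, entity2, relation, measure) for each correspondence."""
    root = ET.parse(path).getroot()
    for cell in root.iter(ALIGN + "Cell"):
        e1 = cell.find(ALIGN + "entity1").get(RDF + "resource")
        e2 = cell.find(ALIGN + "entity2").get(RDF + "resource")
        rel = cell.findtext(ALIGN + "relation", default="=")
        conf = float(cell.findtext(ALIGN + "measure", default="1.0"))
        yield e1, e2, rel, conf

# Hypothetical file name for one of the 21 reference alignments:
# for correspondence in load_reference_alignment("cmt-conference.rdf"):
#     print(correspondence)
```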
“…The final version of the game solves ontology alignment through two atomic tasks: (1) choosing a related concept: given a DBpedia concept, players choose and agree upon a related PROTON concept; (2) specifying the type of relation between two concepts in terms of equivalence or subsumption. More recently, Cheatham and Hitzler [5] made use of MTurk to generate a mapping for the Conference track of the Ontology Alignment Evaluation Initiative (OAEI), thus pioneering the use of crowdsourcing for generating benchmark data in the Semantic Web research area. They conclude that crowdsourcing offers a scalable, cost-effective method for generating benchmarks that agree closely with expert opinion.…”
Section: Use Of Crowdsourcing For Knowledge Acquisition
confidence: 99%
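To make these two atomic task types concrete, the following sketch models them as plain data structures; every class and field name here is hypothetical and taken from neither the game nor the MTurk study:

```python
from dataclasses import dataclass
from enum import Enum

class RelationType(Enum):
    EQUIVALENCE = "="
    SUBSUMPTION = "<"  # the first concept is subsumed by the second

@dataclass
class ChooseRelatedConceptTask:
    """Task (1): given a DBpedia concept, pick and agree on a related PROTON concept."""
    dbpedia_concept: str
    proton_candidates: list

@dataclass
class SpecifyRelationTask:
    """Task (2): state which relation holds between two already-paired concepts."""
    concept_a: str
    concept_b: str
    options: tuple = (RelationType.EQUIVALENCE, RelationType.SUBSUMPTION)
```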
“…Presented with a pair of terms (typically representing ontology concepts) and a relation between these terms, crowd-workers judge whether the suggested relation holds. Frequently verified relations include generic ontology relations such as equivalence [5,25] and subsumption [19,25], which are relevant both in ontology evaluation [19] and ontology alignment scenarios [25].…”
Section: Typical Crowdsourcing Tasks In Ontology Engineering
confidence: 99%
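A common way to turn such per-worker judgments into a single verdict is majority voting, with the majority fraction serving as a rough confidence proxy. The following is a minimal sketch of that idea, not the aggregation scheme of any cited paper:

```python
from collections import Counter

def aggregate_judgments(judgments):
    """Majority vote over boolean worker judgments ('does the relation hold?').

    Returns (verdict, support), where support is the fraction of workers
    agreeing with the majority -- a simple confidence proxy.
    """
    counts = Counter(judgments)
    verdict, votes = counts.most_common(1)[0]
    return verdict, votes / len(judgments)

# e.g. five workers judging one suggested equivalence:
verdict, support = aggregate_judgments([True, True, True, False, True])
print(verdict, support)  # True 0.8
```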
“…In assessing the OAEI benchmark, [9] found that disagreement between annotators (both crowd and expert) is an indicator of inherent uncertainty in the domain knowledge, and that current benchmarks in ontology alignment and evaluation are not designed to model this uncertainty. [22] found similar results for the task of crowdsourced POS tagging: most inter-annotator disagreement was indicative of debatable cases in linguistic theory, rather than faulty annotation.…”
Section: State Of The Art
confidence: 99%
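One simple way to quantify such disagreement per item is the entropy of the label distribution, so that high entropy flags a debatable case rather than a faulty annotation. A minimal sketch under that assumption:

```python
import math

def disagreement_entropy(labels):
    """Shannon entropy (bits) of the label distribution for one correspondence.

    0.0 means unanimous annotators; for binary labels, 1.0 means maximal
    disagreement, which on the finding above marks an inherently debatable case.
    """
    n = len(labels)
    probs = [labels.count(v) / n for v in set(labels)]
    return -sum(p * math.log2(p) for p in probs) if len(probs) > 1 else 0.0

print(disagreement_entropy([True, True, False, False]))  # 1.0: maximal disagreement
print(disagreement_entropy([True, True, True, True]))    # 0.0: unanimous
```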