2012
DOI: 10.1007/978-3-642-35176-1_33

CrowdMap: Crowdsourcing Ontology Alignment with Microtasks

Abstract: The last decade of research in ontology alignment has brought a variety of computational techniques to discover correspondences between ontologies. While the accuracy of automatic approaches has continuously improved, human contributions remain a key ingredient of the process: this input serves as a valuable source of domain knowledge that is used to train the algorithms and to validate and augment automatically computed alignments. In this paper, we introduce CrowdMap, a model to acquire such human contributi…
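
The abstract describes crowdsourced validation of candidate ontology correspondences. As a rough illustration of that general idea only (not the authors' actual CrowdMap task design or aggregation scheme), the sketch below turns a hypothetical candidate correspondence into a yes/no microtask question and accepts it by majority vote over simulated worker answers; all names and the voting rule are assumptions made for the example.

```python
# Illustrative sketch only: CrowdMap's actual microtask design and answer
# aggregation are described in the paper; the names and the simple
# majority-vote rule below are hypothetical simplifications.
from collections import Counter
from dataclasses import dataclass


@dataclass(frozen=True)
class Correspondence:
    source_concept: str           # concept from the first ontology
    target_concept: str           # concept from the second ontology
    relation: str = "equivalent"  # candidate relation to be verified by workers


def to_microtask(c: Correspondence) -> dict:
    """Render a candidate correspondence as a yes/no question for crowd workers."""
    return {
        "question": f"Is '{c.source_concept}' {c.relation} to '{c.target_concept}'?",
        "answers": ["yes", "no"],
    }


def aggregate(votes: list[str], threshold: float = 0.5) -> bool:
    """Accept the correspondence if the share of 'yes' votes exceeds the threshold."""
    if not votes:
        return False
    counts = Counter(votes)
    return counts["yes"] / len(votes) > threshold


if __name__ == "__main__":
    candidate = Correspondence("conference:PaperAuthor", "edas:Author")
    print(to_microtask(candidate)["question"])
    # Simulated worker answers; in practice these would come from a crowdsourcing platform.
    print(aggregate(["yes", "yes", "no", "yes", "no"]))  # True (3 of 5 vote "yes")
```

In an actual deployment such questions would be published as paid microtasks on a crowdsourcing platform (the HITs mentioned in the citation statements below refer to such tasks), and the accepted correspondences would be fed back to augment the automatically computed alignment.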

Cited by 124 publications (107 citation statements)
References 19 publications
“…Multi-user scenarios include CrowdMap [19] for ontology matching, ZenCrowd [8] for entity linking, and Zhang et al [23] for database schema matching, which use crowdsourcing on a web platform.…”
Section: Multi-user Feedback (mentioning)
confidence: 99%
“…Waitelonis et al (2011) explored similar principles to curate DBpedia content. In microtask crowdsourcing, Demartini et al (2012) worked on the identification of links between text entities and DBpedia URIs, while Sarasua et al (2012) focused on the post-processing of ontology mappings generated by alignment algorithms. Kontokostas et al (2013) proposed a contest to attract volunteers to assess Linked Data triples, which Acosta et al (2013) combined with the use of microtasks.…”
Section: Validation and Enhancement of Knowledge (mentioning)
confidence: 99%
“…The tools used in our experiments and the results are available online, including the outcome of the contest,[12] the gold standard and microtask data (HITs and results).[13] …”
Section: Creation of Gold Standard (mentioning)
confidence: 99%