Reflections on five years of evaluating semantic search systems (2010)
DOI: 10.1504/ijmso.2010.033280

Cited by 14 publications (10 citation statements)
References 21 publications
“…In contrast with the Information Retrieval (IR) community, where evaluation using standardized techniques, such as those used for the annual TREC competitions, has been common for decades, the SW community is still a long way from defining standard evaluation benchmarks to evaluate the quality of semantic technologies [20]. Important efforts have been made in the last few years towards the establishment of common datasets, methodologies and metrics to evaluate semantic technologies, e.g., the SEALS project [26].…”
Section: Evaluation and Results
confidence: 99%
“…These tables are known as Entity Mapping Tables (EMTs). In our example, the EMT for the query term "actors" contains, among several others, exact candidate matches in DBpedia, the movie database…”
Section: The Element Mapping Component: PowerMap
confidence: 99%
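The excerpt above describes PowerMap's Entity Mapping Tables: for each query term, a table of candidate matching entities drawn from semantic data sources. The following is a minimal sketch of that idea, assuming an exact-label lookup against the public DBpedia SPARQL endpoint; it is not PowerAqua's actual code, and the function name and table layout are illustrative assumptions.

# Minimal sketch of an Entity Mapping Table (EMT) lookup: for each query
# term, collect candidate DBpedia entities whose English label matches the
# term exactly. This only shows the table-building idea; the excerpt notes
# that a real EMT contains exact matches "among several others".
from SPARQLWrapper import SPARQLWrapper, JSON

DBPEDIA_ENDPOINT = "https://dbpedia.org/sparql"  # public endpoint

def candidate_matches(term, limit=10):
    """Return DBpedia resources whose English rdfs:label equals `term`."""
    sparql = SPARQLWrapper(DBPEDIA_ENDPOINT)
    sparql.setQuery(f"""
        PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
        SELECT DISTINCT ?entity WHERE {{
            ?entity rdfs:label "{term.capitalize()}"@en .
        }} LIMIT {limit}
    """)
    sparql.setReturnFormat(JSON)
    bindings = sparql.query().convert()["results"]["bindings"]
    return [b["entity"]["value"] for b in bindings]

# The EMT itself: one row of candidate entities per query term.
emt = {term: candidate_matches(term) for term in ("actors", "movies")}
print(emt)

Exact-label matching is only the simplest match type; the point of the table is to keep all candidate senses of a term around until later mapping stages can disambiguate them.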
“…In this section, we present some example queries to justify our claim that we can obtain answers directly from DBpedia's semantically rich information, even in its current form. The solutions in Section 5 allowed us to improve PowerAqua's mapping and fusion algorithms to achieve better performance, measured in terms of speed (seconds to answer a query), by shifting the focus to precision while minimizing the loss in recall.…”
Section: Initial Experiments and Discussion
confidence: 99%
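To make concrete what "obtaining answers directly from DBpedia" looks like, here is a hedged example in the same vein: a natural-language question ("Which actors starred in films directed by Tim Burton?") translated by hand into a SPARQL query over the public DBpedia endpoint. The question and its mapping are illustrative assumptions, not queries taken from the paper; PowerAqua's contribution is performing this term-to-ontology mapping automatically.

# Hand-translated SPARQL for an illustrative question, run against DBpedia.
# The mapping from words to dbo:/dbr: terms is done by hand here, only to
# show the shape of the query a system like PowerAqua must arrive at.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
    PREFIX dbo: <http://dbpedia.org/ontology/>
    PREFIX dbr: <http://dbpedia.org/resource/>
    SELECT DISTINCT ?actor WHERE {
        ?film dbo:director dbr:Tim_Burton ;
              dbo:starring ?actor .
    } LIMIT 20
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["actor"]["value"])

The precision/recall trade-off the excerpt mentions shows up here as query design: restrictive graph patterns return fewer but more precise answers, while looser patterns recover more candidates at the cost of noise.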
“…A well-known limitation of the SW is its sparseness: as stated in [13], without a well-populated SW, developing semantic search systems is only an intellectual exercise. Only a reduced number of topics were covered entirely or partially by an existing ontology (domain sparseness); in addition, sparseness at the level of instances and relations (model complexity) was also found [4,10].…”
Section: Scaling to Highly Populated and Dense Ontologies
confidence: 99%
“…However, for natural-language question answering tools over linked data, there are no systematic and standard evaluation benchmarks in place yet. Therefore, evaluations of such systems are typically small-scale and idiosyncratic, in the sense that they are specific to certain settings or applications [38].…”
Section: Introduction
confidence: 99%