Proceedings of the Knowledge Capture Conference 2017
DOI: 10.1145/3148011.3148031
Capturing Knowledge in Semantically-typed Relational Patterns to Enhance Relation Linking

Cited by 17 publications (15 citation statements)
References 12 publications
“…Table 6. Evaluating EARL's Relation Linking performance:
  ReMatch [14]                    0.12  0.31
  RelMatch [20]                   0.15  0.29
  EARL without adaptive learning  0.32  0.45
  EARL with adaptive learning     0.36  0.47…”
Section: System
confidence: 99%
“…Word embedding models are also frequently used to overcome the linguistic gap for relation linking. RelMatch [20] improves the accuracy of the PATTY dataset for relation linking. There are tools such as ReMatch [14], which uses WordNet similarity for relation linking.…”
Section: Related Work
confidence: 99%
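The embedding-based linking the citation above mentions can be illustrated with a minimal sketch: rank candidate KB relations by cosine similarity between the question phrase's vector and each relation label's vector. The tiny hand-written vectors and the `link_relation` helper below are illustrative assumptions, not the actual models of RelMatch or ReMatch.

```python
import math

# Toy stand-ins for trained word embeddings (assumed, for illustration only).
EMB = {
    "wife":   [0.9, 0.1, 0.0],
    "spouse": [0.8, 0.2, 0.1],
    "author": [0.0, 0.9, 0.3],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def link_relation(phrase, candidates):
    """Return the candidate relation whose embedding is closest to the phrase."""
    return max(candidates, key=lambda r: cosine(EMB[phrase], EMB[r]))
```

Under this sketch, the question phrase "wife" links to the relation "spouse" rather than "author", which is the kind of linguistic gap the quoted work targets.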
“…All experiments were executed on 10 virtual servers, each with 8 cores, 32 GB RAM and the Ubuntu 16.04.3 operating system. It took us 22 days to generate training data by executing the questions of the considered datasets against all 28 components, as some tools such as ReMatch [20] and RelationMatcher [27] took approximately 120 and 30 seconds, respectively, to process each question.…”
Section: Corpus Creation
confidence: 99%
“…We prepared two separate datasets from LC-QuAD and QALD. We adopted the methodology presented in [6] and [27] for the benchmark creation of the subsequent steps of the QA pipelines. Furthermore, the accuracy metric is the micro F-score, the harmonic mean of micro precision and micro recall.…”
Section: Preparing Training Datasets
confidence: 99%
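The micro F-score mentioned above pools true positives, false positives, and false negatives across all questions before computing precision and recall. A minimal sketch, assuming gold and predicted answers are given as one set per question:

```python
def micro_f_score(gold, pred):
    """Micro F-score over per-question answer sets.

    gold, pred: parallel lists of sets (one per question).
    Counts are pooled across questions (micro-averaging), then
    precision and recall are combined via the harmonic mean.
    """
    tp = sum(len(g & p) for g, p in zip(gold, pred))  # correct answers
    fp = sum(len(p - g) for g, p in zip(gold, pred))  # spurious answers
    fn = sum(len(g - p) for g, p in zip(gold, pred))  # missed answers
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Micro-averaging weights every answer equally, so questions with many answers contribute more than in a per-question (macro) average.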
“…For each of the QB components, given a correct set of URIs as input, the generated SPARQL query is compared to the benchmark SPARQL query for the given question by comparing the answers the two queries retrieve from DBpedia. A similar component benchmarking procedure has been followed in [23,32].…”
Section: Component Benchmarking
confidence: 99%
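The benchmarking step above judges queries by their results rather than their text, since syntactically different SPARQL queries can be equivalent. A minimal sketch of that comparison, where the answer lists are assumed to have already been retrieved from the endpoint:

```python
def answers_match(generated_answers, benchmark_answers):
    """Judge two SPARQL queries equivalent iff their retrieved
    answer sets coincide (order and duplicates are ignored)."""
    return set(generated_answers) == set(benchmark_answers)
```

For example, a generated query returning `["dbr:Berlin"]` matches a benchmark query returning the same single answer, regardless of how the two queries are written.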