Proceedings of the 37th International ACM SIGIR Conference on Research & Development in Information Retrieval, 2014
DOI: 10.1145/2600428.2600734
ERD'14: Entity Recognition and Disambiguation Challenge

Cited by 50 publications (2 citation statements); references 22 publications.
“…We use precision, recall and F1 scores to evaluate and compare the approaches. We follow Carmel et al (2014) and Yang and Chang (2015) and define the scores on a per-entity basis. Since there are no mention boundaries for the gold entities, an extracted entity is considered correct if it is present in the set of the gold entities for the given question.…”
Section: Evaluation Methodology
confidence: 99%
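The citing work above scores entity extraction on a per-entity basis: a predicted entity is counted as correct if it appears in the gold entity set for the question, with no mention boundaries involved. A minimal sketch of that scoring, assuming simple set-valued inputs and an illustrative helper name (not code from the cited papers):

# Hedged sketch of per-entity precision/recall/F1 as described in the
# citation statement above; function and variable names are illustrative.

def per_entity_prf(predicted, gold):
    """Score predicted entities against the gold entity set for one question."""
    predicted, gold = set(predicted), set(gold)
    correct = len(predicted & gold)  # a prediction is correct if it is in the gold set
    precision = correct / len(predicted) if predicted else 0.0
    recall = correct / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall > 0 else 0.0)
    return precision, recall, f1

# Example: two of three predicted entities are in the gold set.
print(per_entity_prf({"Q42", "Q1", "Q7"}, {"Q42", "Q7"}))  # (0.667, 1.0, 0.8)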
“…As schematized in Figure 1, these entities and their relations are represented in a graph structure as the nodes and edges, respectively, leading to the name of MAG. Note that the entity recognition and disambiguation (ERD), as reported in (Carmel et al, 2014), is far from a solved problem. However, the key here is the AI technologies employed in MAS are designed to learn and improve by itself by repeatedly reading more materials than any human can possibly do in a lifetime.…”
Section: Introduction
confidence: 99%