Proceedings of the 10th International Conference on Web Intelligence, Mining and Semantics 2020
DOI: 10.1145/3405962.3405964

Using schema.org Annotations for Training and Maintaining Product Matchers

Cited by 12 publications (17 citation statements)
References 23 publications

“…The end clusters are considered to be product offers referring to the same product entity, and are used to create training data for entity linking tasks. The work was further extended in later studies by Bizer et al. (2019) and Peeters et al. (2020b), which showed that this automatically created training dataset is of high quality and can be used to train product entity matchers with high accuracy.…”
Section: Semantic Markup Data as Language Resources
Confidence: 97%
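The pair-generation step described in this statement can be illustrated with a minimal sketch: offers that share a cluster ID become positive (matching) examples, while offers drawn from different clusters yield negatives. The DataFrame, its column names and the random negative sampling below are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch (not the cited authors' exact pipeline): derive labelled
# training pairs from offers that were already clustered by product identity.
# Column names ("offer_id", "cluster_id", "title") are assumptions.
import itertools
import random

import pandas as pd

offers = pd.DataFrame({
    "offer_id": [1, 2, 3, 4],
    "cluster_id": ["c1", "c1", "c2", "c2"],
    "title": ["iphone 7 32gb", "apple iphone 7", "galaxy s8", "samsung s8 64gb"],
})

positives, negatives = [], []

# Offers inside the same cluster are treated as matches (positive pairs).
for _, group in offers.groupby("cluster_id"):
    for a, b in itertools.combinations(group.itertuples(index=False), 2):
        positives.append((a.offer_id, b.offer_id, 1))

# Offers from different clusters are sampled as non-matches (negative pairs).
rng = random.Random(42)
rows = list(offers.itertuples(index=False))
while len(negatives) < len(positives):
    a, b = rng.sample(rows, 2)
    if a.cluster_id != b.cluster_id:
        negatives.append((a.offer_id, b.offer_id, 0))

training_pairs = pd.DataFrame(positives + negatives,
                              columns=["left_id", "right_id", "label"])
print(training_pairs)
```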
“…The same authors also used a product corpus to train a domain-specific word embedding model in Peeters et al. (2020b). Specifically, they extracted the brand, name and description properties annotated with schema.org from the same corpus to create a text corpus that is used to train fastText embeddings.…”
Section: Semantic Markup Data as Language Resources
Confidence: 99%
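The embedding-training step can be sketched as follows, assuming the extracted brand, name and description values are available as a small DataFrame. The toy data, preprocessing and hyperparameters are illustrative choices, not those of the cited work, which trains on the much larger WDC product corpus.

```python
# Hedged sketch: train domain-specific fastText embeddings on product text
# extracted from schema.org annotations (brand, name, description).
from gensim.models import FastText
from gensim.utils import simple_preprocess

import pandas as pd

corpus = pd.DataFrame({
    "brand": ["apple", "samsung"],
    "name": ["iphone 7 32gb black", "galaxy s8 64gb"],
    "description": ["smartphone with 4.7 inch display", "android phone"],
})

# Concatenate the annotated properties into one tokenized sentence per offer.
sentences = [
    simple_preprocess(" ".join(str(v) for v in row if pd.notna(v)))
    for row in corpus[["brand", "name", "description"]].itertuples(index=False)
]

# Hyperparameters here are illustrative, not taken from the cited paper.
model = FastText(sentences=sentences, vector_size=100, window=5,
                 min_count=1, epochs=10)

print(model.wv.most_similar("iphone", topn=3))
```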
“…Deepmatcher: For our experiments with Deepmatcher [23], we choose the RNN summarization method, which has proven to perform best for the WDC LSPC datasets [26]. We fix the batch size to 4 as input for Deepmatcher.…”
Section: Models and Baselines
Confidence: 99%
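A minimal sketch of this setup with the deepmatcher library is given below. Only the RNN attribute summarizer and the batch size of 4 are taken from the quoted statement; the data directory, CSV file names and epoch count are assumptions.

```python
# Sketch of the cited setup: Deepmatcher with the RNN attribute summarizer
# and batch size 4. The CSVs are expected in Deepmatcher's standard format
# (id, label, left_*/right_* columns); paths are assumptions.
import deepmatcher as dm

train, validation, test = dm.data.process(
    path="data",                 # assumed directory holding the split files
    train="train.csv",
    validation="valid.csv",
    test="test.csv",
)

# 'rnn' selects the RNN attribute summarization method named in the quote.
model = dm.MatchingModel(attr_summarizer="rnn")

model.run_train(
    train,
    validation,
    epochs=10,                   # illustrative; not taken from the cited paper
    batch_size=4,                # the fixed batch size reported above
    best_save_path="rnn_model.pth",
)

print(model.run_eval(test))
```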