2020
DOI: 10.1007/978-3-030-65384-2_12

Exploring Sequence-to-Sequence Models for SPARQL Pattern Composition

Cited by 6 publications (3 citation statements)
References 13 publications
“…In scenarios where training data is limited, overfitting compromises network performance. To tackle this problem, instead of using a bidirectional long short-term memory (LSTM) network to create the language representation model, Lukovnikov et al. [53], Luo et al. [54], and Panchbhai et al. [55] independently evaluated the use of Bidirectional Encoder Representations from Transformers (BERT) [57], currently the most performant solution for NL understanding tasks.…”
Section: KBQA Based on Information Extraction
confidence: 99%
“…[54], and Panchbhai et al. [55] independently evaluated the use of Bidirectional Encoder Representations from Transformers (BERT) [57], currently the most performant solution for NL understanding tasks.…”
Section: QA Over Knowledge Bases Solutions
confidence: 99%
“…Due to limited data availability, these LSTM-based approaches tend to overfit. Hence, the use of Bidirectional Encoder Representations from Transformers (BERT) is explored by Luo et al. [124], Lukovnikov et al. [147], and Panchbhai et al. [148]. One can also use five BERT models for five different tasks, namely question expected answer type (Q-EAT) and answer type (AT) classification models and a Question Answering model, as suggested by Day and Kuo [149].…”
confidence: 99%