2021
DOI: 10.3390/app112110267

Enhance Text-to-Text Transfer Transformer with Generated Questions for Thai Question Answering

Abstract: Question Answering (QA) is a natural language processing task that enables a machine to understand a given context and answer a given question. Several QA research efforts exist for high-resource languages such as English. Thai, however, is a language with low availability of labeled corpora for QA studies. According to previous studies, English QA models can achieve more than 90% F1, whereas our Thai QA baseline obtains only 70%. In this study, we aim to im…

Cited by 7 publications (2 citation statements)
References 22 publications
“…Expanding the number of question-answer pairs of Thai Question Answering corpora using Multilingual Text-to-Text Transfer Transformer (mT5) is the approach proposed by [15]. In addition, the authors propose a new syllable-level evaluation metric, which they consider more suitable for the Thai language because there is no ambiguity in syllable tokenization.…”
Section: Question and Answering
confidence: 99%
“…There are a number of datasets in the literature for natural language QA [6][7][8][9][10][11][12][13][14][15], along with several solutions to answer these questions [16][17][18][19][20][21][22][23][24][25][26]. In this paper, we propose an approach that differs from the previous body of work in that we do not receive the context but assume the answer lies in a set of readily available documents (open-book), and we are not allowed to train our models on the given questions or set of documents (zero-shot).…”
Section: Related Work
confidence: 99%