2019
DOI: 10.1007/978-3-030-15719-7_6
Inductive Transfer Learning for Detection of Well-Formed Natural Language Search Queries

Cited by 2 publications (2 citation statements)
References 14 publications
“…The well-formedness study (Faruqui and Das 2018) focuses on binary accuracy, using 0.8 as the threshold for deciding whether a question is well-formed. Using the same threshold to group the multi-class predictions, the binary classification accuracy is 79.56%, which is better than the 70.7% reported by the best model in the original paper, higher than the 75.05% of another transfer learning approach (Syed et al 2019), and close to the 81.6% accuracy of a BERT-based model (Chhina 2020). Accuracies for the individual classes are presented in Table 1; scores of 0.0 and 1.0 can be predicted accurately, but mid-range scores cannot.…”
Section: Well-formedness
confidence: 68%
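The grouping step described above — collapsing multi-class well-formedness scores into a binary label at the 0.8 threshold before computing accuracy — can be sketched as follows. This is a minimal illustration, not the cited authors' code; the score values and the helper names (`binarize`, `binary_accuracy`) are assumptions for the example.

```python
def binarize(score: float, threshold: float = 0.8) -> int:
    """Map a well-formedness score to 1 (well-formed) or 0, per the
    0.8 threshold of Faruqui and Das (2018)."""
    return 1 if score >= threshold else 0

def binary_accuracy(predicted, gold, threshold: float = 0.8) -> float:
    """Binary accuracy after thresholding both predictions and labels."""
    pairs = list(zip(predicted, gold))
    correct = sum(binarize(p, threshold) == binarize(g, threshold)
                  for p, g in pairs)
    return correct / len(pairs)

# Toy scores on the 6-point scale {0.0, 0.2, 0.4, 0.6, 0.8, 1.0}
pred = [1.0, 0.6, 0.8, 0.0]
gold = [0.8, 0.4, 1.0, 0.2]
print(binary_accuracy(pred, gold))  # 1.0: every pair agrees after thresholding
```

Note that under this scheme a prediction of 0.6 against a gold label of 0.8 counts as a binary error even though the scores are adjacent, which is why fine-grained per-class accuracy (as in Table 1) gives a different picture than the binary figure.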
“…Their model uses both the question and the context as input, whereas in our approach, only the original noisy query is used as input to the reward-generating black-box QA model at the RL stage. Identifying well-formed questions (Faruqui and Das 2018) by training binary classification models has been studied using BERT (Chhina 2020) and transfer learning with pretrained models (Syed et al 2019). Instead, we investigate a more fine-grained 6-way classification using a fine-tuned T5 well-formedness model, leveraged as a proxy for evaluating the sequence-level fluency of reformulators.…”
Section: Related Work
confidence: 99%