Proceedings of the 18th BioNLP Workshop and Shared Task 2019
DOI: 10.18653/v1/w19-5050

IIT-KGP at MEDIQA 2019: Recognizing Question Entailment using Sci-BERT stacked with a Gradient Boosting Classifier

Abstract: The number of people turning to the Internet to search for a diverse range of health-related subjects continues to grow, and with this multitude of information available, duplicate questions become more frequent and finding the most appropriate answers becomes problematic. This issue is important for question-answering platforms as it complicates the retrieval of all information relevant to the same topic, particularly when questions similar in essence are expressed differently, and answering a given medical que…
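The title points to a stacked pipeline: a SciBERT encoder produces a representation of each question pair, and a gradient boosting classifier makes the entailment decision on top of those features. Below is a minimal sketch of that kind of pipeline, not the authors' implementation; the checkpoint name, [CLS] pooling, toy examples, and classifier settings are all illustrative assumptions.

```python
# Minimal sketch (not the authors' code): encode each question pair with
# SciBERT and train a gradient boosting classifier on the pooled embeddings.
# Checkpoint, pooling choice, toy data, and hyperparameters are assumptions.
import torch
from sklearn.ensemble import GradientBoostingClassifier
from transformers import AutoModel, AutoTokenizer

MODEL = "allenai/scibert_scivocab_uncased"  # public SciBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL)
encoder = AutoModel.from_pretrained(MODEL).eval()

def embed_pair(q1: str, q2: str) -> torch.Tensor:
    """Jointly encode a question pair and return the [CLS] embedding."""
    inputs = tokenizer(q1, q2, truncation=True, max_length=256,
                       return_tensors="pt")
    with torch.no_grad():
        out = encoder(**inputs)
    return out.last_hidden_state[0, 0]  # [CLS] vector, shape (hidden_size,)

# Hypothetical toy pairs: (consumer question, FAQ question, entails?)
pairs = [
    ("Can ibuprofen cause stomach pain?",
     "What are the side effects of ibuprofen?", 1),
    ("Can ibuprofen cause stomach pain?",
     "How is type 2 diabetes diagnosed?", 0),
]
X = torch.stack([embed_pair(a, b) for a, b, _ in pairs]).numpy()
y = [label for *_, label in pairs]

clf = GradientBoostingClassifier(n_estimators=100)  # illustrative setting
clf.fit(X, y)
print(clf.predict(X))
```

The sketch keeps the encoder frozen and trains only the boosted trees; whether the original system fine-tuned SciBERT is not stated in the visible portion of the abstract.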

Cited by 9 publications (8 citation statements) | References 12 publications
“…More generally, approaches combining ensemble methods and transfer learning of multi-task language models were the clear winners of the competition for RQE with the first and second scores (Zhu et al., 2019; Bhaskar et al., 2019). Approaches that used ensemble methods without multi-task language models (Sharma and Roychowdhury, 2019) or multi-task learning without ensemble methods (Pugaliya et al., 2019) performed worse than the first category but made it to the top 4. Domain knowledge was also used in several participating approaches with a clear positive impact.…”
Section: RQE Approaches and Results (mentioning)
Confidence: 99%
“…Data augmentation also played a key role for several systems that used external data to extend batches of in-domain data (Xu et al., 2019), created synthetic data (Bannihatti Kumar et al., 2019), or used models trained on external datasets (e.g. MultiNLI) in ensemble methods (Bhaskar et al., 2019; Sharma and Roychowdhury, 2019).…”
Section: RQE Approaches and Results (mentioning)
Confidence: 99%
“…Related Work: The development of annotated TE and NLI medical datasets (Abacha et al., 2015; Ben Abacha et al., 2019; Abacha and Demner-Fushman, 2016; Romanov and Shivade, 2018) and a variety of pre-trained language models has led to a rise of extensive ongoing research in this field. The majority of the systems developed for the TE task adopt the multi-task learning (MTL) framework (Zhu et al., 2019; Bhaskar et al., 2019; Kumar et al., 2019; Xu et al., 2019), ensemble methods (Sharma and Roychowdhury, 2019), and transfer learning (Bhaskar et al., 2019) for achieving better accuracy. Xu et al. (2019) employed the MTL approach (Yadav et al., 2018; Yadav et al., 2019; Yadav et al., 2020) in the TE task to learn from the auxiliary tasks of question answering (QA) and NLI.…”
Section: ENTAILS (mentioning)
Confidence: 99%
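Since the statements above repeatedly credit ensemble methods for the top RQE scores, here is a minimal sketch of one common ensembling scheme, soft voting over class probabilities; the member models and probability values are hypothetical stand-ins, not the cited systems.

```python
# Minimal sketch of soft-voting ensembling: average the entailment
# probabilities of several independently trained classifiers and take the
# argmax. Both member models below are hypothetical stand-ins.
import numpy as np

def soft_vote(prob_lists: list) -> np.ndarray:
    """Average class-probability matrices from several models and
    return the argmax label per example."""
    avg = np.mean(prob_lists, axis=0)  # (n_examples, n_classes)
    return avg.argmax(axis=1)          # predicted labels

# e.g. probabilities from a SciBERT-based model and a MultiNLI-trained model
p_model_a = np.array([[0.2, 0.8], [0.7, 0.3]])
p_model_b = np.array([[0.4, 0.6], [0.9, 0.1]])
print(soft_vote([p_model_a, p_model_b]))  # -> [1 0]
```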