Proceedings of the 18th BioNLP Workshop and Shared Task 2019
DOI: 10.18653/v1/w19-5056

IITP at MEDIQA 2019: Systems Report for Natural Language Inference, Question Entailment and Question Answering

Abstract: This paper presents the experiments carried out as part of our participation in the MEDIQA 2019 shared task (Abacha et al., 2019). We participated in all three tasks defined in the shared task, viz. (i) Natural Language Inference (NLI), (ii) Recognizing Question Entailment (RQE), and (iii) their application in medical Question Answering (QA). For each of these three tasks we submitted runs from multiple deep learning based systems. We submitted five system results in each…
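Both NLI and RQE reduce to sentence-pair classification, which the submitted systems tackle with deep learning models. As a rough illustration of that shared setup (a hypothetical sketch, not the authors' exact pipeline), the snippet below scores a premise-hypothesis pair with a BioBERT checkpoint and a sequence-classification head; the checkpoint name and label set are assumptions, and the head is randomly initialized here, so it would need fine-tuning on task data before producing meaningful labels.

```python
# Minimal sketch of sentence-pair classification as used for NLI/RQE.
# Assumptions: the checkpoint name and the 3-way NLI label set are
# illustrative, not the authors' exact configuration.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "dmis-lab/biobert-base-cased-v1.1"  # assumed checkpoint, for illustration
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=3)
model.eval()  # NOTE: the classification head is untrained at this point

premise = "The patient was given aspirin for chest pain."
hypothesis = "The patient received medication."

# Encode the pair as a single "[CLS] premise [SEP] hypothesis [SEP]" input.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

labels = ["entailment", "neutral", "contradiction"]  # MedNLI-style label set
print(labels[logits.argmax(dim=-1).item()])
```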

Cited by 5 publications (4 citation statements)
References 9 publications

Citation statements:
“…As Table VII indicates, even transformers pretrained on biological data, i.e., BioBERT, are beaten by ReQuEST. However, the 1% supremacy of [61] suggests that the quality of text representation directly influences classifier performance.…”
Section: G. Analysis of ReQuEST Performance on Other Datasets (citation type: mentioning; confidence: 99%)
“…Liu et al. [25] trained heterogeneous base models with the bagging method and then integrated them. Bandyopadhyay et al. [26] integrated BioBERT models pre-trained on different corpora to achieve better results. Yang et al. [27] presented an adaptive decision fusion method, which adaptively combines classifiers operating on different levels of features to cultivate a robust and effective answer selection model.…”
Section: Question Answering Based on Model Integration (citation type: mentioning; confidence: 99%)
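The integration strategies quoted above share one mechanical core: combining the per-class scores of several base models. Below is a minimal soft-voting sketch, assuming each base model exposes a probability distribution over classes; the arrays and uniform weights are toy placeholders, not the cited papers' exact fusion schemes.

```python
# Soft voting: fuse several models' class probabilities by a weighted mean.
# The inputs and weights here are placeholders for illustration only.
import numpy as np

def soft_vote(prob_matrices, weights=None):
    """prob_matrices: list of (n_examples, n_classes) probability arrays,
    one per base model. Returns fused class predictions per example."""
    stacked = np.stack(prob_matrices)  # shape: (n_models, n_examples, n_classes)
    if weights is None:
        weights = np.ones(len(prob_matrices))
    weights = np.asarray(weights, dtype=float) / np.sum(weights)
    fused = np.tensordot(weights, stacked, axes=1)  # weighted mean over models
    return fused.argmax(axis=-1)

# Toy example: three base models, two examples, three classes.
m1 = np.array([[0.7, 0.2, 0.1], [0.1, 0.6, 0.3]])
m2 = np.array([[0.5, 0.3, 0.2], [0.2, 0.5, 0.3]])
m3 = np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]])
print(soft_vote([m1, m2, m3]))  # -> [0 1]
```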
“…Team and Task(s):

ANU-CSIRO (Nguyen et al., 2019): NLI, RQE, QA
ARS NITK (Agrawal et al., 2019): NLI, RQE, QA
DoubleTransfer (Xu et al., 2019): NLI, RQE, QA
Dr.Quad (Bannihatti Kumar et al., 2019): NLI, RQE, QA
DUT-BIM (Zhou et al., 2019a): QA
DUT-NLP (Zhou et al., 2019b): RQE, QA
IITP (Bandyopadhyay et al., 2019): NLI, RQE, QA
IIT-KGP (Sharma and Roychowdhury, 2019): RQE
KU_ai (Cengiz et al., 2019): NLI
lasigeBioTM (Lamurias and Couto, 2019): NLI, RQE, QA
MSIT SRIB (Chopra et al., 2019): NLI
NCUEE (Lee et al., 2019b): NLI
PANLP (Zhu et al., 2019): NLI, RQE, QA
Pentagon (Pugaliya et al., 2019): NLI, RQE, QA
Saama Research (Kanakarajan, 2019): NLI
Sieg (Bhaskar et al., 2019): NLI, RQE
Surf (Nam et al., 2019): NLI
UU_TAILS (Tawfik and Spruit, 2019): NLI, RQE
UW-BHI (Kearns et al., 2019): NLI
WTMED (Wu et al., 2019): NLI

…which builds on BERT to perform multi-task learning and is evaluated on the GLUE benchmark (Wang et al., 2018). A common theme across all the papers was training multiple models and then using an ensemble as the final system, which performed better than the individual models.…”
Section: Team (citation type: mentioning; confidence: 99%)
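The simplest version of that ensemble theme is hard (majority) voting over each model's discrete predictions, sketched below under assumed NLI-style labels; tie-breaking here just favors the first-seen label, which is an illustrative choice rather than any team's documented scheme.

```python
# Hard (majority) voting over the label predictions of several models.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list of equal-length label lists, one per model.
    Returns the most common label at each position (ties: first seen wins)."""
    fused = []
    for labels in zip(*predictions_per_model):
        fused.append(Counter(labels).most_common(1)[0][0])
    return fused

# Toy example: three runs predicting NLI labels for three examples.
runs = [
    ["entailment", "neutral", "contradiction"],
    ["entailment", "entailment", "contradiction"],
    ["neutral", "neutral", "contradiction"],
]
print(majority_vote(runs))  # -> ['entailment', 'neutral', 'contradiction']
```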
“…Below is a non-exhaustive list of resources used by various teams. • Word Embeddings: While many teams used BERT (Lamurias and Couto, 2019; Zhou et al., 2019a; Bandyopadhyay et al., 2019; Nguyen et al., 2019; Sharma and Roychowdhury, 2019), some teams also used word embeddings as the input to their models. Bhaskar et al. (2019) used biomedical word embeddings from Chen et al. (2018), while Kearns et al. (2019) used cui2vec (Beam et al., 2018).…”
Section: Multi-tasking and External Resources (citation type: mentioning; confidence: 99%)
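As a hedged sketch of that word-embedding alternative: represent each question by mean-pooling pre-trained biomedical vectors and compare questions by cosine similarity. The embedding file path and format (a word2vec-style binary) are assumptions for illustration, not a claim about how the cited resources are distributed.

```python
# Mean-pooled word-embedding representation of a question, as an
# alternative input to BERT-style encoders. The file path below is
# hypothetical; any word2vec-format embedding file would work.
import numpy as np
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("bio_embeddings.bin", binary=True)

def embed(sentence):
    """Mean-pool vectors of in-vocabulary tokens; zeros if none match."""
    tokens = [t for t in sentence.lower().split() if t in vectors]
    if not tokens:
        return np.zeros(vectors.vector_size)
    return np.mean([vectors[t] for t in tokens], axis=0)

q1 = embed("what are the side effects of aspirin")
q2 = embed("does aspirin cause adverse reactions")
# Cosine similarity as a crude question-similarity signal.
sim = np.dot(q1, q2) / (np.linalg.norm(q1) * np.linalg.norm(q2) + 1e-9)
print(f"similarity: {sim:.3f}")
```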