Team                                        Task(s)
ANU-CSIRO (Nguyen et al., 2019)             NLI, RQE, QA
ARS NITK (Agrawal et al., 2019)             NLI, RQE, QA
DoubleTransfer (Xu et al., 2019)            NLI, RQE, QA
Dr.Quad (Bannihatti Kumar et al., 2019)     NLI, RQE, QA
DUT-BIM (Zhou et al., 2019a)                QA
DUT-NLP (Zhou et al., 2019b)                RQE, QA
IITP (Bandyopadhyay et al., 2019)           NLI, RQE, QA
IIT-KGP (Sharma and Roychowdhury, 2019)     RQE
KU ai (Cengiz et al., 2019)                 NLI
lasigeBioTM (Lamurias and Couto, 2019)      NLI, RQE, QA
MSIT SRIB (Chopra et al., 2019)             NLI
NCUEE (Lee et al., 2019b)                   NLI
PANLP (Zhu et al., 2019)                    NLI, RQE, QA
Pentagon (Pugaliya et al., 2019)            NLI, RQE, QA
Saama Research (Kanakarajan, 2019)          NLI
Sieg (Bhaskar et al., 2019)                 NLI, RQE
Surf (Nam et al., 2019)                     NLI
UU TAILS (Tawfik and Spruit, 2019)          NLI, RQE
UW-BHI (Kearns et al., 2019)                NLI
WTMED (Wu et al., 2019)                     NLI

…which builds on BERT to perform multi-task learning and is evaluated on the GLUE benchmark (Wang et al., 2018). A common theme across the papers was to train multiple models and use an ensemble as the final system, which performed better than the individual models.
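The ensembling strategy can be made concrete with a short sketch: average the class probabilities produced by several independently trained models and take the highest-scoring class per example. This is a minimal illustration, not the pipeline of any particular participating system; the model outputs below are hypothetical.

    import numpy as np

    def ensemble_predict(prob_matrices):
        """Average (n_examples, n_classes) probability matrices from several
        models and return the highest-scoring class per example."""
        return np.mean(np.stack(prob_matrices), axis=0).argmax(axis=1)

    # Hypothetical softmax outputs of three fine-tuned models for two NLI
    # examples (classes: 0=entailment, 1=neutral, 2=contradiction).
    model_a = np.array([[0.7, 0.2, 0.1], [0.3, 0.4, 0.3]])
    model_b = np.array([[0.6, 0.3, 0.1], [0.2, 0.5, 0.3]])
    model_c = np.array([[0.8, 0.1, 0.1], [0.4, 0.3, 0.3]])

    print(ensemble_predict([model_a, model_b, model_c]))  # [0 1]

Probability averaging is only one ensembling choice; majority voting over hard predictions is an equally common alternative.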
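Similarly, the multi-task setup mentioned above (a shared BERT encoder with task-specific classification heads) can be sketched as follows. The tiny feed-forward network stands in for a pretrained BERT encoder, and the task names and dimensions are illustrative assumptions rather than details from any cited system.

    import torch
    import torch.nn as nn

    class MultiTaskModel(nn.Module):
        """Shared encoder with one classification head per task."""
        def __init__(self, in_dim=128, hidden=64, task_labels=None):
            super().__init__()
            task_labels = task_labels or {"nli": 3, "rqe": 2}
            # Stand-in for a pretrained BERT encoder producing pooled features.
            self.encoder = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
            # One lightweight head per task; the encoder parameters are shared.
            self.heads = nn.ModuleDict(
                {task: nn.Linear(hidden, n) for task, n in task_labels.items()}
            )

        def forward(self, x, task):
            return self.heads[task](self.encoder(x))

    model = MultiTaskModel()
    batch = torch.randn(4, 128)          # 4 pooled sentence-pair encodings
    print(model(batch, "nli").shape)     # torch.Size([4, 3])
    print(model(batch, "rqe").shape)     # torch.Size([4, 2])

Because the encoder is shared, gradients from every task update the same representation, which is the core idea behind BERT-based multi-task learning.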