2020
DOI: 10.1007/978-3-030-43887-6_51

Results of the Seventh Edition of the BioASQ Challenge

Cited by 35 publications (22 citation statements)
References 42 publications
“…Aside from the vocabulary issue mentioned earlier, neural network training uses non-convex optimization, which means that continual pretraining may not be able to completely undo suboptimal initialization from the general-domain language model. [Table: dataset coverage of BioBERT [34], SciBERT [8], BLUE [45], and BLURB across BC5-chem [35], BC5-disease [35], NCBI-disease [18], BC2GM [53], JNLPBA [27], EBM PICO [44], ChemProt [31], DDI [21], GAD [11], BIOSSES [54], HoC [20], PubMedQA [25], and BioASQ [42]]…”
Section: Domain-specific
confidence: 99%
“…Additionally we evaluate on CoNLL-03 (Tjong Kim Sang and De Meulder, 2003) named entity recognition (NER), and SQuAD 1.1 (Rajpurkar et al, 2016) question answering (QA). To demonstrate domain shift we evaluate using BC5CDR, Chemprot (Krallinger et al, 2017) and BioASQ (Nentidis et al, 2019), which are biomedical NER, relation extraction (RE), and QA tasks respectively. The first dataset is from the 2015 CDR challenge for identifying chemicals and diseases expertly annotated from PubMed abstracts.…”
Section: Evaluation Data
confidence: 99%
“…ChemProt (Kringelum et al, 2016); and a QA task, i.e. BioASQ (Nentidis et al, 2019). We follow the same evaluation settings used in Lee et al (2020) and Beltagy et al (2019).…”
Section: Results
confidence: 99%
“…Large-scale pre-trained language models (PLMs) (Beltagy et al, 2019; Lee et al, 2020) have shown state-of-the-art (SOTA) performance on various biomedical text mining tasks. These models provide contextualized representations, learned from large volumes of biomedical text, which then can be easily applied to achieve SOTA on downstream tasks such as named entity recognition (NER), relation extraction (REL) and question answering (QA) (Kim et al, 2019; Lin et al, 2019; Nentidis et al, 2019).…”
Section: Introduction
confidence: 99%
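Several of the quoted passages evaluate biomedical NER on benchmarks such as BC5CDR and JNLPBA; results on these datasets are conventionally reported as exact-match, entity-level precision/recall/F1 over predicted spans rather than per-token accuracy. A minimal, self-contained sketch of that metric for BIO-tagged sequences (function names are illustrative, not from any cited library):

```python
def bio_spans(tags):
    """Extract (start, end, type) entity spans from a BIO tag sequence."""
    spans, start, etype = [], None, None
    for i, tag in enumerate(tags):
        if tag.startswith("B-"):
            if start is not None:            # close any open entity
                spans.append((start, i, etype))
            start, etype = i, tag[2:]
        elif tag.startswith("I-") and start is not None and tag[2:] == etype:
            continue                         # entity continues
        else:
            if start is not None:            # "O" or mismatched I- ends the entity
                spans.append((start, i, etype))
            start, etype = None, None
    if start is not None:                    # entity running to sequence end
        spans.append((start, len(tags), etype))
    return set(spans)

def entity_f1(gold_tags, pred_tags):
    """Exact-match F1 between gold and predicted entity spans."""
    gold, pred = bio_spans(gold_tags), bio_spans(pred_tags)
    if not gold or not pred:
        return 0.0
    tp = len(gold & pred)                    # spans matching boundaries AND type
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall) if tp else 0.0
```

A partially overlapping prediction scores zero for that span under this strict scheme, which is why entity-level F1 on biomedical corpora is typically lower than token-level accuracy.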