BioNLP 2017
DOI: 10.18653/v1/w17-2306

Results of the fifth edition of the BioASQ Challenge

Abstract: The goal of the BioASQ challenge is to engage researchers in creating cutting-edge biomedical information systems. Specifically, it aims to promote systems and methodologies that can handle a wide range of tasks in the biomedical domain. This is achieved through the organization of challenges. The fifth challenge consisted of three tasks: semantic indexing, question answering, and a new task on information extraction. In total, 29 teams with more than 95 systems participated in the …


Cited by 28 publications (21 citation statements). References 39 publications.
“…Our empirical performance evaluation is based on documents and questions from the BioASQ 2017 challenge's document and snippet retrieval tasks [7]. The goal in this task is to return the 10 most relevant passages from a collection of 12.8M PubMed abstracts for a specific biomedical question.…”
Section: Methods
confidence: 99%
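The snippet above describes an evaluation where a system must return the 10 most relevant passages per question. As a minimal sketch of how such a run might be scored, the Python below computes precision at 10 against a gold passage set; the IDs and gold set are hypothetical, and BioASQ itself reports additional measures such as MAP.

```python
# Minimal sketch: score a system's top-10 passage list against the gold
# passages for one question. Names and IDs are illustrative, not from
# the paper or the challenge data.

def precision_at_k(retrieved, relevant, k=10):
    """Fraction of the top-k retrieved passage IDs that are relevant."""
    top_k = retrieved[:k]
    hits = sum(1 for pid in top_k if pid in relevant)
    return hits / k

# Hypothetical example: passage IDs returned by a system vs. a gold set.
retrieved = ["p12", "p07", "p33", "p41", "p02", "p19", "p88", "p05", "p60", "p11"]
relevant = {"p07", "p02", "p19", "p99"}

print(precision_at_k(retrieved, relevant))  # 0.3
```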
“…All (machine learning) methods are trained using five-fold cross validation on the training set. Word embeddings for all methods are computed as length-50 word2vec vectors on the PubMed document corpus [7]. For each question (training and test), a reference set of highly ranked documents is given by the challenge organizers.…”
Section: Methods
confidence: 99%
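As a hedged illustration of the setup quoted above (length-50 word2vec vectors plus five-fold cross validation), the sketch below uses gensim and scikit-learn on placeholder data; the real setting would train embeddings on millions of tokenized PubMed abstracts, and the downstream ranking model is not specified here.

```python
# Sketch, assuming gensim 4.x and scikit-learn: 50-dimensional word2vec
# embeddings plus five-fold cross validation over the training questions.
# Corpus and questions are placeholders, not the actual PubMed data.
from gensim.models import Word2Vec
from sklearn.model_selection import KFold

# Placeholder corpus: each document is a list of tokens.
corpus = [
    ["protein", "binding", "site", "prediction"],
    ["gene", "expression", "in", "tumor", "cells"],
    # ... millions of tokenized PubMed abstracts in the real setting
]

# 50-dimensional embeddings, matching the length-50 vectors in the quote.
model = Word2Vec(sentences=corpus, vector_size=50, window=5, min_count=1)

questions = ["q1", "q2", "q3", "q4", "q5"]  # placeholder training questions
kf = KFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, val_idx) in enumerate(kf.split(questions)):
    # Train the downstream model on train_idx, validate on val_idx.
    print(fold, train_idx, val_idx)
```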
“…We retrieved a maximum of 100 articles per question, ordered by the relevance score provided by the PubMed API, and the Dirichlet term smoothing score and BM25 score calculated by Galago. We retrieved 100 articles because this was the same number of articles retrieved by [19] (see Section IV-B). Then we compared with the articles given by the users that answered each question, which we considered as correct if they had more than a given number of votes.…”
Section: A. IR-based Validation
confidence: 99%
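The quoted setup retrieves up to 100 articles per question, ordered by the PubMed API's relevance score. A minimal sketch of that retrieval step via the NCBI E-utilities esearch endpoint follows; the question string is a placeholder, and the Galago-based BM25 and Dirichlet smoothing re-scoring is not reproduced.

```python
# Hedged sketch: fetch up to 100 PubMed IDs for a question, sorted by
# PubMed's own relevance ranking, via the NCBI E-utilities esearch API.
import requests

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def fetch_pmids(question, retmax=100):
    """Return up to `retmax` PubMed IDs for a free-text question."""
    params = {
        "db": "pubmed",
        "term": question,
        "retmax": retmax,
        "sort": "relevance",  # PubMed relevance ranking
        "retmode": "json",
    }
    resp = requests.get(ESEARCH, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["esearchresult"]["idlist"]

# Placeholder question, not from the cited study.
pmids = fetch_pmids("What is the role of BRCA1 in breast cancer?")
print(len(pmids), pmids[:5])
```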
“…The multi-label PubMed article classification (Nentidis et al., 2017), helping the research community to solve this hard XMTC problem.…”
Section: Dataset Description
confidence: 99%
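The XMTC task referenced above assigns multiple MeSH-style labels to each article. As a toy, hedged illustration of multi-label text classification with scikit-learn, the sketch below uses invented documents and labels; real BioASQ semantic indexing involves tens of thousands of MeSH labels and far larger models.

```python
# Toy multi-label classification sketch with scikit-learn. Documents and
# labels are invented; this only illustrates the task shape, not a
# competitive XMTC method.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multiclass import OneVsRestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import MultiLabelBinarizer

docs = [
    "tumor suppressor gene expression in breast cancer",
    "protein binding and enzyme kinetics",
    "breast cancer risk factors and gene mutations",
    "enzyme inhibition in metabolic pathways",
]
labels = [
    {"Neoplasms", "Gene Expression"},
    {"Proteins", "Enzymes"},
    {"Neoplasms", "Mutation"},
    {"Enzymes"},
]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)          # binary indicator matrix
X = TfidfVectorizer().fit_transform(docs)

# One binary classifier per label, the simplest multi-label baseline.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
print(mlb.inverse_transform(clf.predict(X[:1])))
```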