2019
DOI: 10.1093/database/baz085

Large expert-curated database for benchmarking document similarity detection in biomedical literature search
Abstract: Document recommendation systems for locating relevant literature have mostly relied on methods developed a decade ago. This is largely due to the lack of a large offline gold-standard benchmark of relevant documents that cover a variety of research fields such that newly developed literature search techniques can be compared, improved and translated into practice. To overcome this bottleneck, we have established the RElevant LIterature SearcH consortium consisting of more than 1500 scientists from 84 countries…

Cited by 17 publications (11 citation statements)
References 51 publications
“…1. RELISH: An annotated dataset of biomedical abstract queries and candidate pools of size 60, labelled for similarity by experts who are often authors of the query abstracts (Brown et al., 2019).…”
Section: Experiments and Results (mentioning)
confidence: 99%
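
The excerpt above characterizes RELISH as query abstracts paired with fixed candidate pools of 60 documents, each judged for relevance by experts. As a hedged illustration of how such a benchmark is typically consumed (the identifiers, pool contents, and metric choice below are assumptions for illustration, not the actual RELISH distribution format), here is a minimal Python sketch scoring one system ranking against expert labels:

# Minimal sketch, assuming a simplified layout: each query abstract has a pool of
# 60 candidate documents and a set of candidates the experts judged relevant.
# A retrieval system is scored by how well its ranking agrees with those judgments.

def precision_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of the top-k ranked candidates that the experts marked relevant."""
    return sum(1 for doc_id in ranked_ids[:k] if doc_id in relevant_ids) / k

# Hypothetical example: a pool of 60 candidates with four expert-relevant documents.
candidate_pool = [f"doc{i}" for i in range(60)]
expert_relevant = {"doc3", "doc7", "doc12", "doc41"}

# A system's ranking of the pool (here simply the original order, as a stand-in).
system_ranking = list(candidate_pool)

print(precision_at_k(system_ranking, expert_relevant, k=10))  # 0.2 for this toy ranking

Benchmarks of this kind often also report rank-sensitive measures such as NDCG, but the principle is the same: the expert relevance labels serve as the offline gold standard against which system rankings are compared.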
“…Def 2. Aspect-level retrieval by sentences: Given query and candidate documents Q and C, and a subset of sentences S_Q ⊆ Q based on which to retrieve documents, a system must output the ranking over C. Our definitions align with existing datasets and tasks of scientific document similarity: the well-explored abstract-level similarity task (Brown et al., 2019; Cohan et al., 2020), and the more recently introduced facet-based document similarity task (Mysore et al., 2021). §4 will describe evaluation on these datasets.…”
Section: Problem Setup (mentioning)
confidence: 99%
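
The quoted definition reduces both settings to producing a ranking over the candidates C given a query Q, optionally restricted to a sentence subset S_Q. Below is a minimal sketch of that setup; the bag-of-words cosine scorer is a placeholder assumption for illustration, not the model used in the cited work:

import math
from collections import Counter

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def rank_candidates(query_sentences, candidates, selected=None):
    """Rank candidate documents against the query.

    selected: optional indices of query sentences (the subset S_Q in the quoted
    definition); if None, the whole query is used (abstract-level similarity).
    """
    use = query_sentences if selected is None else [query_sentences[i] for i in selected]
    q_vec = Counter(" ".join(use).lower().split())
    scored = [(cosine(q_vec, Counter(c.lower().split())), c) for c in candidates]
    return [c for _, c in sorted(scored, key=lambda s: s[0], reverse=True)]

# Toy usage: aspect-level retrieval restricted to the query's second sentence.
query = ["We study document similarity.", "We evaluate ranking methods on biomedical abstracts."]
pool = ["A survey of ranking methods for biomedical abstracts.",
        "Deep learning for protein structure prediction."]
print(rank_candidates(query, pool, selected=[1]))

Calling rank_candidates with selected=None corresponds to the abstract-level task; passing a nonempty index subset corresponds to aspect-level retrieval by sentences.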
“…The Russian scientific school made a weighty contribution at the very beginning of postural control studies to the objectification of postural stability measurements. According to the recommendations of the International Consensus on postural control measurements [27], established on the basis of “Research Methods to Evaluate Standing Stability” developed in 1952 by Russian scientists [28], the efficiency of the balance maintenance system is assessed by measuring oscillations of the foot plantar center of pressure (COP) relative to the center of gravity (CG) [29]. These variations reflect the movements of body segments or joints, muscle activity, the movements associated with respiration [30], and the work of the cardiovascular system [31].…”
Section: Stabilometric Measurements of Postural Control (mentioning)
confidence: 99%
“…For reviewing the studies devoted to the psychophysiological mechanisms of postural control, literature data were searched by the following keywords: “postural control,” “sensorimotor integration,” “sedentary lifestyle,” “gravitation,” “support,” “vestibular,” “proprioceptive,” and “visual” “afferentation,” in combination with keywords such as “cognitive functions,” “memory,” “attention,” “decision making,” “imagination,” “emotions,” “fine motor skills,” “dual tasks,” “anxiety,” “depression,” “stabilometry,” “electroencephalography,” and “electromyography” (Table 1). Literature was searched in the Web of Science, PubMed, Scopus, and RSCI databases according to the recommendations of “Preferred Reporting Items for Systematic Reviews and Meta-Analyses” (PRISMA) and using the methods described in RELISH (RElevant LIterature SearcH) [27]. The present review includes the results published in articles with a Digital Object Identifier (DOI) that completely correspond to the keywords (Table 1), except for those published only as abstracts.…”
mentioning
confidence: 99%
“…ML can learn from almost any data type, even unstructured medical text, such as patient records, medical notes, prescriptions, audio interview transcripts, or pathology and radiology reports. Future day-to-day applications will embrace ML methods to organize a growing volume of scientific literature, facilitating access and extraction of meaningful knowledge content from it (24). In the clinic, ML can harness the potential of electronic health records to accurately predict medical events (25).…”
Section: Digital Healthcare and Clinical Health Records (mentioning)
confidence: 99%