2020
DOI: 10.1007/978-3-030-58219-7_19
Overview of the CLEF eHealth Evaluation Lab 2020

Abstract: In this paper we provide an overview of the fourth edition of the CLEF eHealth evaluation lab. CLEF eHealth 2016 continues our evaluation resource building efforts around easing and supporting patients, their next of kin, and clinical staff in understanding, accessing, and authoring eHealth information in a multilingual setting. This year's lab offered three tasks: Task 1 on handover information extraction related to Australian nursing shift changes, Task 2 on information extraction in French corpora, and T…

Cited by 21 publications (7 citation statements)

References 26 publications
“…While checking for relevance, users’ knowledge level is not considered, which could play a major role in the relevance of a resource to an individual’s information need. For example, if a person with no medical background is provided with academic literature to fulfill their information needs, it may be difficult for them to understand the literature [ 20 , 75 , 76 ]. Similarly, other aspects of relevance, including the trustworthiness and clinical validity of the document, were not considered in this study [ 23 , 77 ].…”
Section: Discussion
“…Despite the benefits of shared resources, some important questions arise, given that OHC users are health consumers and might not be health experts: which resources shared by the OHC peers are relevant to the information needs of survivors of OvCa and their caregivers, and what aspects of resource sharing can help us determine resource relevance? Previous research examined health literacy in OHCs and revealed that most of the content is generated by users with underdeveloped skills in validating information sources and navigating the internet [ 20 ]. Therefore, users need help in finding the relevant resources generated or shared in OHCs [ 21 ].…”
Section: Introduction
“…The three most common information extraction tasks, named entity recognition (38,39,40), concept normalization (41,42), and relation extraction (Section 7), are still active areas of research. However, in many cases, software systems exist that will perform these tasks automatically.…”
Section: Software for Clinical Information Extraction
“…This aim differs from our main objective: generating lay language summaries of biomedical scientific reviews for health consumers. Another related area of work concerns information retrieval from the internet, where the goal is to help consumers find (rather than interpret) health information (Goeuriot et al. 2020).…”
Section: Biomedical Domain Summarization