2013
DOI: 10.1136/amiajnl-2012-001317

Towards comprehensive syntactic and semantic annotations of the clinical narrative

Abstract: Objective: To create annotated clinical narratives with layers of syntactic and semantic labels to facilitate advances in clinical natural language processing (NLP). To develop NLP algorithms and open source components. Methods: Manual annotation of a clinical narrative corpus of 127 606 tokens following the Treebank schema for syntactic information, the PropBank schema for predicate-argument structures, and the Unified Medical Language System (UMLS) schema for semantic information. NLP components were developed. Results: …
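To make the layered annotation described in the abstract concrete, here is a minimal sketch of what one sentence might carry under the three schemas. The sentence, labels, and UMLS concept below are illustrative assumptions, not drawn from the corpus:

```python
# Illustrative sketch only: an invented clinical sentence annotated under the
# three schemas named in the abstract. Labels and the CUI are assumptions.

sentence = "The patient denies chest pain ."

# Treebank layer: constituency structure in Penn Treebank bracketing.
treebank = ("(S (NP (DT The) (NN patient))"
            " (VP (VBZ denies) (NP (NN chest) (NN pain)))"
            " (. .))")

# PropBank layer: predicate-argument structure for the verb "denies"
# (roleset deny.01: ARG0 = denier, ARG1 = thing denied).
propbank = {
    "predicate": "denies",
    "ARG0": "The patient",
    "ARG1": "chest pain",
}

# UMLS layer: entity spans mapped to concepts (CUI shown for illustration).
umls = [
    {"span": (19, 29), "text": "chest pain",
     "cui": "C0008031", "semantic_type": "Sign or Symptom"},
]
```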

Cited by 109 publications (108 citation statements)
References 18 publications
“…We computed our IAA values requiring an exact match between annotations, which is generally lower than a partial match. For example, Albright et al (2013) achieved an F1 measure of 0.697 in exact match, but of 0.750 in partial match. Overall, our results are in line with those of Ogren et al (2008) for English (from 75.7 to 81.4% in entity annotation, exact match) and Oronoz et al (2015) for Spanish (from 88.63 to 90.53% in term annotation).…”
Section: Inter-Annotator Agreement (IAA) (mentioning)
confidence: 99%
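The exact- versus partial-match distinction in the quotation above can be made concrete with a short sketch. The span data and the simple pairwise F1 below are hypothetical illustrations, not the metric implementations used by the cited works:

```python
# Hypothetical spans as (start, end, label) character offsets from two
# annotators; F1 here is a simple pairwise illustration, treating one
# annotator as the reference, as is common when reporting span-level IAA.

def f1(a_spans, b_spans, match):
    """Pairwise F1 between two annotators under a given match criterion."""
    matched_b = sum(1 for b in b_spans if any(match(a, b) for a in a_spans))
    matched_a = sum(1 for a in a_spans if any(match(a, b) for b in b_spans))
    precision = matched_b / len(b_spans) if b_spans else 0.0
    recall = matched_a / len(a_spans) if a_spans else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def exact(a, b):
    # Identical offsets and label.
    return a == b

def overlap(a, b):
    # Same label and any character overlap.
    return a[2] == b[2] and a[0] < b[1] and b[0] < a[1]

ann_a = [(0, 7, "DISO"), (15, 27, "PROC"), (30, 38, "DISO")]
ann_b = [(0, 7, "DISO"), (16, 27, "PROC")]

print(f"exact-match F1:   {f1(ann_a, ann_b, exact):.3f}")    # 0.400
print(f"partial-match F1: {f1(ann_a, ann_b, overlap):.3f}")  # 0.800
```

As in the quoted comparison, the overlap criterion credits near-miss boundaries and so yields the higher score.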
“…Notable research initiatives, in collaboration with health institutions, have annotated clinical texts: the Mayo Clinic corpus (Ogren et al 2008), the Clinical E-Science Framework (CLEF) (Roberts et al 2009), the THYME (Temporal Histories of Your Medical Events) project (Styler et al 2014), the SHARP Template Annotations (Savova et al 2012), the MiPACQ (Multi-source Integrated Platform for Answering Clinical Questions) (Albright et al 2013), the IxA-Med-GS (Oronoz et al 2015) or the Harvey corpus (Savkov et al 2016). Research challenges have also fuelled the annotation of resources or enrichment of available texts.…”
Section: Introduction (mentioning)
confidence: 99%
“…This problem is exacerbated in the biomedical domain, where suitably qualified annotators can be both hard to find and prohibitively expensive [48,49].…”
Section: Discussion (mentioning)
confidence: 99%
“…However, this ongoing work on temporal evaluation is based on language data collected from the news. In the clinical domain, (Styler IV et al, Undated; Palmer and Pustejovsky, 2012; Albright et al, 2013) describe the THYME annotation project. The scope and language of temporality related to the cell cycle is different from that of both TempEval and the clinical domain, and supports (and demands) different types of reasoning, specifically related to cyclical time.…”
Section: Motivation (mentioning)
confidence: 99%