Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1165
SemEval-2016 Task 12: Clinical TempEval

Abstract: This study proposes a system to automatically analyze clinical temporal events at a fine-grained level in SemEval-2017. A support vector machine (SVM) and a conditional random field (CRF) were implemented in our system for different subtasks, including detecting clinically relevant events and time expressions, determining their attributes, and identifying their relations with each other within the document. Domain adaptation was the main challenge this year. The Unified Medical Language System was consulted to generaliz…

Cited by 166 publications (129 citation statements) · References 15 publications
“…For example, Bada et al. (2012) achieved 90% annotator-reviser agreement for biomedical concept annotation in the CRAFT corpus. In the THYME corpus, Bethard et al. (2016) reported an inter-annotator agreement of 0.731 (F1) for temporal expressions, and an annotator-adjudicator agreement of 0.830. Tables 6 and 7 report the IAA values between pairs of annotators, computed as the average F-measure of both sets that were double-annotated.…”
Section: Inter-annotator Agreement (IAA)
Confidence: 98%
“…Research challenges have also fuelled the annotation of resources or the enrichment of available texts. Well-known corpora come from the i2b2 challenges (Uzuner et al. 2010, 2011; Sun et al. 2013), SemEval (Bethard et al. 2016) and the Shared Annotated Resources (ShARe)/CLEF eHealth labs. 8 Overall, two levels of annotation have been applied in clinical texts.…”
Section: Introduction
Confidence: 99%
“…We are also interested in extending the method to PPIs beyond the sentence boundary. Finally, we would like to test and generalize this approach to other biomedical relations such as chemical-disease relations (Wei et al., 2016).…”
Section: Results
Confidence: 99%
“…It is a successor to Clinical TempEval 2016 (Bethard et al., 2016), Clinical TempEval 2015 (Bethard et al., 2015), and the i2b2 temporal challenge (Sun et al., 2013).…”
Section: Introduction
Confidence: 99%