2021
DOI: 10.1145/3462475

Temporal Relation Extraction in Clinical Texts

Abstract: Unstructured data in electronic health records, represented by clinical texts, are a vast source of healthcare information because they describe a patient's journey, including clinical findings, procedures, and information about the continuity of care. The publication of several studies on temporal relation extraction from clinical texts during the last decade and the realization of multiple shared tasks highlight the importance of this research theme. Therefore, we propose a review of temporal relation extrac…

Cited by 13 publications (8 citation statements) · References 124 publications (360 reference statements)
“…The improved performance of the Clinical-Longformer model compared to GPT-4, as measured by the F1 score, is likely due to the sacrifice of sensitivity for improved precision. GPT-4 demonstrated near-100% sensitivity on the recent-history and prior-history-of-incarceration labels, as well as 100% sensitivity for any history of incarceration, but significantly lower specificity than Clinical-Longformer (60.1% vs. 87.5%). Further, our study applies principles similar to those used by Boch et al. to identify parental criminal justice system involvement in a pediatric population, but toward a more specific and different goal [25]. We focused on identifying the incarceration status and history of the subject of the encounter notes, while Boch et al. looked at any pediatric exposure to parental justice involvement, including jail, prison, parole, and probation.…”
Section: Discussion
confidence: 64%
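The sensitivity/precision trade-off described in this excerpt can be made concrete with the standard confusion-matrix definitions. The sketch below is illustrative only — the counts are hypothetical, not the cited study's data — and shows how a near-perfect-sensitivity, low-specificity classifier can still lose on F1 to a more balanced one:

```python
def metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Standard binary-classification metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # recall: fraction of true cases caught
    specificity = tn / (tn + fp)   # fraction of negatives correctly rejected
    precision = tp / (tp + fp)     # fraction of flagged cases truly positive
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return {"sensitivity": sensitivity, "specificity": specificity,
            "precision": precision, "f1": f1}

# Hypothetical high-sensitivity, low-specificity profile (many false positives)
loose = metrics(tp=50, fp=40, tn=60, fn=1)
# Hypothetical more balanced profile: lower sensitivity, higher precision
strict = metrics(tp=45, fp=12, tn=88, fn=6)

print(loose["f1"], strict["f1"])  # the balanced profile wins on F1
```

F1 is the harmonic mean of precision and sensitivity, so a flood of false positives drags it down even when sensitivity is near 100% — the pattern the authors observe for zero-shot GPT-4.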
“…The NLP algorithm can capture nuanced information beyond these specific codes by using unstructured data, whereas structured data, such as ICD-10 codes and problem lists, often under-report. In addition, the NLP algorithm surpasses simple keyword searches by considering the context and meaning of the text, leading to more accurate identification of incarceration history [25]. The Clinical-Longformer model demonstrated superior sensitivity, specificity, precision, and F1 score compared to the RoBERTa model, and a superior F1 score compared to zero-shot GPT-4.…”
Section: Discussion
confidence: 99%
“…To assess the usefulness of our corpus, it should be applied to many other clinical NLP tasks, in particular sequence-labeling tasks, to measure the correlation between algorithm accuracy and IAA for each semantic type. To cover temporal reasoning tasks [60], our annotation schema will be expanded to create a subset of this corpus with temporal annotations. The availability of the corpus to the scientific community will allow not only our research group but also other researchers to complement and adapt the SemClinBr annotations to their needs without having to start an annotation process from scratch, as Osborne et al. (2018) [61] did when normalizing the ShARe corpus, or as Wagholikar et al. did when using pooling techniques to reuse corpora across institutions [62].…”
Section: Discussion
confidence: 99%
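The inter-annotator agreement (IAA) this excerpt refers to is commonly measured with Cohen's kappa, which corrects raw agreement for chance. Below is a minimal, self-contained sketch with hypothetical token labels (the label names and sequences are invented for illustration, not taken from SemClinBr):

```python
from collections import Counter

def cohens_kappa(ann_a: list, ann_b: list) -> float:
    """Cohen's kappa between two annotators over the same token sequence."""
    assert len(ann_a) == len(ann_b) and ann_a, "annotations must align"
    n = len(ann_a)
    # Observed agreement: fraction of tokens labeled identically
    observed = sum(a == b for a, b in zip(ann_a, ann_b)) / n
    # Expected agreement by chance, from each annotator's label frequencies
    freq_a, freq_b = Counter(ann_a), Counter(ann_b)
    expected = sum((freq_a[lab] / n) * (freq_b[lab] / n)
                   for lab in set(freq_a) | set(freq_b))
    return (observed - expected) / (1 - expected)

# Hypothetical annotations of the same six tokens by two annotators
a = ["PROBLEM", "O", "O", "TREATMENT", "O", "PROBLEM"]
b = ["PROBLEM", "O", "TREATMENT", "TREATMENT", "O", "O"]
print(round(cohens_kappa(a, b), 3))
```

Computing kappa per semantic type, as the authors propose, would mean restricting the comparison to tokens where either annotator used that type, then correlating the resulting scores with per-type algorithm accuracy.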