2020
DOI: 10.2196/18953

The Impact of Pretrained Language Models on Negation and Speculation Detection in Cross-Lingual Medical Text: Comparative Study

Abstract: Background: Negation and speculation are critical elements in natural language processing (NLP) tasks such as information extraction, because these phenomena change the truth value of a proposition. In the clinical narrative, which is informal, these linguistic constructions are used extensively to indicate hypotheses, impressions, or negative findings. Previous state-of-the-art approaches addressed negation and speculation detection with rule-based methods, but in the last few…

Cited by 15 publications (6 citation statements)
References 44 publications

“…In addition, deep learning approaches have been implemented to further improve negation recognition [ 17 , 38 , 39 ]. For instance, context-independent and context-dependent pretrained transformer models achieved F-scores above 85% for negation recognition in medical text, outperforming rule-based methods [ 40 ]. The authors analyzed the most frequent false negatives and false positives for negation and speculation recognition and concluded that the ambiguity of some grammatical structures led their model to misclassify some tokens, resulting in decreased performance [ 40 ].…”
Section: Discussion (mentioning)
confidence: 99%
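The approach summarized in this statement treats negation-cue recognition as token classification with a pretrained transformer. A minimal sketch, assuming a hypothetical fine-tuned checkpoint (the model name below is illustrative, not one published with the paper):

```python
# Minimal sketch of negation-cue detection as token classification,
# in the spirit of the transformer-based approach cited above.
# NOTE: "example/negation-cue-model" is a hypothetical checkpoint name,
# not a model released by the authors.
from transformers import pipeline

nlp = pipeline(
    "token-classification",
    model="example/negation-cue-model",  # hypothetical fine-tuned checkpoint
    aggregation_strategy="simple",       # merge word pieces into word-level spans
)

for pred in nlp("No evidence of pneumonia; mild effusion cannot be excluded."):
    # Each prediction carries the predicted cue label, the span text,
    # and the model's confidence score.
    print(pred["entity_group"], pred["word"], round(pred["score"], 3))
```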
“…Reliability of automated techniques is lower for more complex linguistic elements that require interpretation, such as coding utterance valence when negations are used (e.g., [ 51 , 52 ]). Manual coding in addition to automated text processing is therefore necessary to guarantee consistent coding [ 53 ].…”
Section: Discussion (mentioning)
confidence: 99%
“…BMEWO-V is similar to other previous encodings [ 35 ]; however, we introduce the V tag to allow the representation of overlapping or nested entities, which are common phenomena in these types of texts. Additionally, we tested the BMEWO-V encoding format in previous works [ 16 , 36 ]. Finally, the BRAT format is transformed into sentences annotated in the CoNLL-2003 format [ 37 ].…”
Section: Methods (mentioning)
confidence: 99%
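A minimal sketch of how BMEWO-V tagging might work, assuming the V tag marks tokens covered by more than one (nested or overlapping) entity; the exact rules in the cited work may differ:

```python
# Hypothetical BMEWO-V tagger: B(egin), M(iddle), E(nd), W(hole single-token
# entity), O(utside), plus V for tokens inside overlapping/nested entities.
def bmewo_v_tags(tokens, entities):
    """entities: list of (start, end) token spans, end exclusive."""
    coverage = [0] * len(tokens)  # how many entities cover each token
    tags = ["O"] * len(tokens)
    for start, end in entities:
        for i in range(start, end):
            coverage[i] += 1
        if end - start == 1:
            tags[start] = "W"          # Whole: single-token entity
        else:
            tags[start] = "B"          # Begin
            tags[end - 1] = "E"        # End
            for i in range(start + 1, end - 1):
                tags[i] = "M"          # Middle
    # V overrides wherever two or more entities overlap
    return ["V" if c > 1 else t for c, t in zip(coverage, tags)]

tokens = ["acute", "myocardial", "infarction"]
# "myocardial infarction" nested inside "acute myocardial infarction"
print(bmewo_v_tags(tokens, [(0, 3), (1, 3)]))  # ['B', 'V', 'V']
```

Under this reading, V keeps the label set small while still signaling overlap, at the cost of not recording the inner entity's exact boundaries; the tagged sentences can then be serialized in the CoNLL-2003 column format mentioned above.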