2022
DOI: 10.3389/fdgth.2021.810260

Predicting Clinical Events Based on Raw Text: From Bag-of-Words to Attention-Based Transformers

Abstract: Identifying which patients are at higher risk of dying or being re-admitted can be resource- and life-saving, and is thus an important and challenging task for healthcare text analytics. While many successful approaches exist to predict such clinical events from categorical and numerical variables, a large share of health records exists only as raw text, such as clinical notes or discharge summaries. However, the text-analytics models applied to free-form natural language found in t…

Cited by 4 publications (5 citation statements)
References: 18 publications

“…Table 3 compares the extraction performance of all algorithms in terms of macro F1-score, precision, and recall. Consistent with the experimental results from other extraction and classification tasks in general NLP applications [62,63], all transformer-based extraction models significantly outperform the traditional NLP methods proposed in previous work on alcohol-related information extraction. As pointed out in Gururangan et al [19], fine-tuning allows transformer-based language models to adapt quickly to our dataset, building on the linguistic patterns they acquired during massive pre-training.…”
Section: B. Extraction Performance (supporting)
confidence: 85%
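To make the fine-tuning step this statement describes concrete, here is a minimal sketch using the Hugging Face transformers API. The checkpoint name, binary label set, and example sentences are illustrative assumptions, not the cited paper's actual setup.

```python
# Minimal fine-tuning sketch; checkpoint, labels, and data are illustrative
# assumptions, not the cited paper's configuration.
import torch
from torch.utils.data import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

CHECKPOINT = "emilyalsentzer/Bio_ClinicalBERT"  # assumed clinical checkpoint

class NoteDataset(Dataset):
    """Wraps (text, label) pairs as tokenized tensors for the Trainer."""
    def __init__(self, texts, labels, tokenizer):
        self.enc = tokenizer(texts, truncation=True, padding=True,
                             max_length=512, return_tensors="pt")
        self.labels = torch.tensor(labels)

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: v[i] for k, v in self.enc.items()}
        item["labels"] = self.labels[i]
        return item

tokenizer = AutoTokenizer.from_pretrained(CHECKPOINT)
model = AutoModelForSequenceClassification.from_pretrained(CHECKPOINT,
                                                           num_labels=2)

# Toy examples standing in for annotated clinical snippets.
train_ds = NoteDataset(
    ["Patient reports drinking six beers daily.",
     "No history of alcohol use."],
    [1, 0],
    tokenizer,
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=train_ds,
)
trainer.train()  # adapts all pre-trained weights to the small labeled set
```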
“…This model does not capture the context of words and ignores word order. It also requires a large amount of data to be accurate. Despite our limited dataset's low metrics, feature importance and SHAP summaries were informative in identifying donor conditions that may have affected organ selection: words implying chronic disease, e.g., chronic obstructive pulmonary disease, hypertension, and insulin, might work negatively, whereas words implying trauma, such as injury, might work positively.…”
Section: Discussion (mentioning)
confidence: 99%
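The statement contrasts the bag-of-words model's limits with its interpretability. The sketch below shows a bag-of-words classifier whose signed per-term weights surface which words push a prediction positively or negatively; logistic-regression coefficients are used here as a simpler stand-in for the SHAP summaries the statement mentions, and the notes and labels are toy examples.

```python
# Bag-of-words classifier with per-term importance (coefficients as a simple
# stand-in for SHAP summaries; notes and labels are toy examples).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

notes = [
    "chronic obstructive pulmonary disease and hypertension, on insulin",
    "head injury after motor vehicle accident, otherwise healthy",
]
accepted = [0, 1]  # hypothetical outcome labels

vec = CountVectorizer()      # word order and context are discarded here
X = vec.fit_transform(notes)
clf = LogisticRegression().fit(X, accepted)

# Signed weights: positive terms push toward the positive class, negative away.
for term, w in sorted(zip(vec.get_feature_names_out(), clf.coef_[0]),
                      key=lambda t: t[1]):
    print(f"{term:12s} {w:+.3f}")
```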
“…Including such insignificant data might have diluted the notation of important donor conditions, resulting in lower prediction scores. Although transformer-based models can handle longer input sequences and larger vocabularies than traditional models, their performance is suboptimal when a long medical summary is used. Due to inconsistent free-text input in the different fields, we combined them into a single text input.…”
Section: Discussion (mentioning)
confidence: 99%
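A minimal sketch of the field-combination step the statement describes, assuming a standard tokenizer: several free-text fields are joined into one string and truncated to the model's context window. The field names and record contents are hypothetical.

```python
# Combining several free-text fields into one input and truncating it to the
# model's context window (field names and contents are hypothetical).
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

record = {  # hypothetical donor-record fields
    "history": "COPD, hypertension, insulin-dependent diabetes.",
    "admission_course": "Admitted after fall; intubated on arrival.",
    "comments": "Family reports heavy smoking history.",
}

# Join fields with the separator token so field boundaries stay visible.
combined = f" {tokenizer.sep_token} ".join(record.values())

# truncation=True silently drops tokens past max_length, which is one reason
# long summaries can underperform: later content never reaches the model.
enc = tokenizer(combined, truncation=True, max_length=512)
print(len(enc["input_ids"]))  # <= 512
```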
“…Among them, the BERT model is often used. BERT can achieve excellent performance on biomedical and clinical sentence-similarity and short-document classification tasks [12].…”
Section: Research Contents (mentioning)
confidence: 99%
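A minimal sketch of the sentence-similarity use this statement mentions: mean-pooled BERT embeddings compared with cosine similarity. The general-purpose checkpoint and example sentences are assumptions; the cited work's exact model is not reproduced here.

```python
# Sentence similarity from mean-pooled BERT embeddings (checkpoint and
# sentences are illustrative assumptions).
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed(text):
    enc = tok(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state     # (1, seq_len, dim)
    mask = enc["attention_mask"].unsqueeze(-1)      # zero out padding
    return (hidden * mask).sum(1) / mask.sum(1)     # mean pooling

a = embed("Patient denies chest pain.")
b = embed("No chest pain reported by the patient.")
print(F.cosine_similarity(a, b).item())  # closer to 1.0 = more similar
```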