Proceedings of the 3rd Clinical Natural Language Processing Workshop 2020
DOI: 10.18653/v1/2020.clinicalnlp-1.11
Clinical XLNet: Modeling Sequential Clinical Notes and Predicting Prolonged Mechanical Ventilation

Abstract: Clinical notes contain rich information that remains relatively unexploited in predictive modeling compared to structured data. In this work, we developed Clinical XLNet, a new clinical text representation that leverages the temporal information in a patient's sequence of notes. We evaluated our models on the prolonged mechanical ventilation prediction task, and our experiments demonstrated that Clinical XLNet consistently outperforms the best baselines. The models and scripts are made publicly available.
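The abstract's core idea, scoring a temporally ordered sequence of notes and aggregating the per-note predictions, can be sketched as below. This is a minimal illustration, not the paper's implementation: the `Note` class, `predict_prolonged_ventilation`, and the `encode` stub are all hypothetical names, and the real system uses a fine-tuned XLNet encoder rather than the callable stand-in shown here.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Note:
    """One clinical note with its time offset (hypothetical unit: hours since admission)."""
    timestamp: float
    text: str


def predict_prolonged_ventilation(
    notes: List[Note],
    encode: Callable[[str], float],
) -> float:
    """Score a patient by encoding each note and averaging the scores
    in temporal order -- a simplified stand-in for sequence-level
    aggregation over note representations."""
    ordered = sorted(notes, key=lambda n: n.timestamp)  # respect temporal order
    scores = [encode(n.text) for n in ordered]
    return sum(scores) / len(scores)
```

In the paper's setting, `encode` would be a transformer that maps each note to a risk score and the aggregation would be learned rather than a plain mean; the sketch only shows where temporal ordering enters the pipeline.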

Cited by 47 publications (32 citation statements)
References 23 publications
“…Although they did not report any evidence of the superiority of the language-model-based approach, our works still complement each other. Another complementary work worth mentioning is Huang et al (7), which tried to overcome the transformer input-size limit for the 30-day readmission task on the same dataset but, unlike us, pursued simple heuristics rather than deep-learning solutions and reported smaller improvements. Unfortunately, we were not able to use their pre-trained language model and are not aware of any follow-up study that did.…”
mentioning
confidence: 89%
“…[8][9][10] Recently, studies have reported that a new deep learning-based architecture, named "transformers," achieved state-of-the-art performance on a number of benchmark tasks [11][12][13][14][15][16] in the general English domain. Although several studies have examined transformer-based models for clinical tasks individually, [17][18][19][20][21] no study has systematically explored and compared their performance in the biomedical domain. In addition, there is a lack of packages with pretrained clinical transformers that could help researchers and other users adopt these state-of-the-art NLP models in downstream clinical NLP tasks.…”
Section: Introductionmentioning
confidence: 99%
“…If it indeed reduces the time and memory consumed by the experiment and improves recommendation performance, we will improve it within the existing method. In addition, BERT variants that incorporate medical-domain knowledge, such as ClinicalBERT [34] and BioBERT [35], will also serve as our follow-up research work.…”
Section: Discussionmentioning
confidence: 99%