The World Wide Web Conference 2019
DOI: 10.1145/3308558.3313485
Improving Medical Code Prediction from Clinical Text via Incorporating Online Knowledge Sources

Cited by 36 publications (31 citation statements)
References 20 publications
“…Document similarity measures are used to find the similarity between healthcare documents. For example, to detect the medical codes of documents, the authors used an attention mechanism that targets the most informative parts of the documents [33]. In another study, the Jaccard distance measure was used to compute the similarity between medical documents with a non-negative matrix factorization algorithm [34].…”
Section: B. Document Similarity in Healthcare (mentioning)
confidence: 99%
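A minimal sketch of the Jaccard-based document similarity mentioned above, assuming simple whitespace tokenization; the function name and the toy notes are illustrative, and the non-negative matrix factorization step used in [34] is not shown:

```python
# Sketch: Jaccard similarity between two clinical notes, treating each
# document as a set of lowercase word tokens (whitespace tokenization is
# an assumption made for illustration).

def jaccard_similarity(doc_a: str, doc_b: str) -> float:
    tokens_a = set(doc_a.lower().split())
    tokens_b = set(doc_b.lower().split())
    if not tokens_a and not tokens_b:
        return 1.0  # two empty documents are trivially identical
    intersection = tokens_a & tokens_b
    union = tokens_a | tokens_b
    return len(intersection) / len(union)

if __name__ == "__main__":
    note_1 = "patient admitted with acute renal failure and hypertension"
    note_2 = "patient presents with chronic renal failure and diabetes"
    print(f"Jaccard similarity: {jaccard_similarity(note_1, note_2):.3f}")
```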
“…CAML (Mullenbach et al., 2018) used a CNN with multiple filters and label attention. Another approach adopted the doc2vec embedding and a CNN architecture, and Bai and Vucetic (2019) incorporated online knowledge sources. The recent MultiResCNN model (Li and Yu, 2020) concatenated and stacked CNNs with multi-filter convolution and residual learning.…”
Section: Related Work (mentioning)
confidence: 99%
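A rough sketch of CAML-style per-label attention over CNN features, as summarized in the statement above; the module name, dimensions, and hyperparameters are illustrative assumptions, not values from the cited papers:

```python
# Sketch: per-label attention over CNN features for multi-label code
# prediction (CAML-style). Every shape and hyperparameter below is a
# toy choice for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAttentionCNN(nn.Module):
    def __init__(self, vocab_size, embed_dim, num_filters, kernel_size, num_labels):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, num_filters, kernel_size,
                              padding=kernel_size // 2)
        # One learned attention query per label (ICD code).
        self.label_queries = nn.Parameter(torch.randn(num_labels, num_filters))
        self.output = nn.Linear(num_filters, num_labels)

    def forward(self, tokens):                      # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)      # (batch, embed_dim, seq_len)
        h = torch.tanh(self.conv(x))                # (batch, num_filters, seq_len)
        # Each label attends to the positions most informative for that code.
        scores = self.label_queries @ h             # (batch, num_labels, seq_len)
        alpha = F.softmax(scores, dim=-1)
        context = alpha @ h.transpose(1, 2)         # (batch, num_labels, num_filters)
        # Per-label logit from that label's attended representation.
        return (context * self.output.weight).sum(-1) + self.output.bias

model = LabelAttentionCNN(vocab_size=5000, embed_dim=100,
                          num_filters=64, kernel_size=5, num_labels=50)
logits = model(torch.randint(0, 5000, (2, 400)))    # toy batch of 2 notes
print(logits.shape)                                 # torch.Size([2, 50])
```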
“…To ensure that other experimental results can be reproduced, we perform experiments using the same dataset and the same splitting method. In the previous study [32], the dataset was divided into training, validation, and test sets in proportions of 0.7, 0.1, and 0.2, respectively. In our experiments, we keep the same split of the dataset and compare with the previous studies.…”
Section: Experimental Settings (mentioning)
confidence: 99%
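A minimal sketch of the 0.7 / 0.1 / 0.2 train / validation / test split described above; the random seed and placeholder document IDs are assumptions for illustration, not details of study [32]:

```python
# Sketch: reproducible 70/10/20 split of a list of documents.
import random

def split_dataset(records, train_frac=0.7, val_frac=0.1, seed=42):
    records = list(records)
    random.Random(seed).shuffle(records)    # fixed seed for reproducibility
    n = len(records)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    train = records[:n_train]
    val = records[n_train:n_train + n_val]
    test = records[n_train + n_val:]        # remaining ~0.2 of the data
    return train, val, test

docs = [f"note_{i}" for i in range(1000)]   # placeholder document IDs
train, val, test = split_dataset(docs)
print(len(train), len(val), len(test))      # 700 100 200
```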
“…The results are shown in Table 4. In [32], Bai et al. presented the KSI framework, which incorporates external knowledge from Wikipedia into the model to predict ICD codes and achieved the best performance on the MIMIC dataset. In [16], Mullenbach et al. presented the CAML model. Compared to other deep neural network-based approaches, our method achieves the highest Micro F1 and Macro F1 scores of 67.5% and 26.5% on the MIMIC-III dataset.…”
Section: Comparison With Related Work (mentioning)
confidence: 99%
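A short illustration of how the Micro F1 and Macro F1 scores quoted above are typically computed for multi-label code prediction, using scikit-learn; the toy label matrices are invented for illustration and are not results from the cited papers:

```python
# Sketch: Micro vs. Macro F1 for multi-label ICD code prediction.
import numpy as np
from sklearn.metrics import f1_score

# Rows = clinical notes, columns = ICD codes (binary indicators).
y_true = np.array([[1, 0, 1, 0],
                   [0, 1, 0, 0],
                   [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 1],
                   [1, 1, 0, 1]])

# Micro F1 pools true/false positives over all codes (dominated by frequent codes);
# Macro F1 averages per-code F1 equally (more sensitive to rare codes).
print("Micro F1:", f1_score(y_true, y_pred, average="micro"))
print("Macro F1:", f1_score(y_true, y_pred, average="macro"))
```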