2021
DOI: 10.1186/s12911-021-01662-z

Interpretable time-aware and co-occurrence-aware network for medical prediction

Abstract: Background Disease prediction based on electronic health records (EHRs) is essential for personalized healthcare, but it is difficult because of the special structure of EHR data and the requirement that methods be interpretable. The structure of an EHR is hierarchical: each patient has a sequence of admissions, and each admission contains a set of co-occurring diagnoses. Existing methods, however, model these characteristics only partially and offer little interpretation for non-specialists. M…
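
The hierarchical EHR structure described in the abstract (patient, then a time-ordered sequence of admissions, then co-occurring diagnosis codes within each admission) can be made concrete with a small sketch. The Python below is illustrative only: the class names, example ICD-9-style codes, and the derived time-gap feature are assumptions for exposition, not the paper's actual data model or code.

```python
# Minimal sketch of the hierarchical EHR structure described in the abstract:
# patient -> sequence of admissions -> co-occurring diagnosis codes.
# Class names and example codes are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import date
from typing import List


@dataclass
class Admission:
    admit_date: date
    diagnoses: List[str] = field(default_factory=list)  # codes recorded at the same visit


@dataclass
class Patient:
    patient_id: str
    admissions: List[Admission] = field(default_factory=list)  # time-ordered


# One patient with two admissions, each carrying several co-occurring diagnoses.
patient = Patient(
    patient_id="P0001",
    admissions=[
        Admission(date(2019, 3, 2), ["401.9", "250.00"]),   # hypertension, diabetes
        Admission(date(2020, 7, 15), ["250.00", "585.9"]),  # diabetes, chronic kidney disease
    ],
)

# A time-aware model can use the gap between consecutive admissions as a feature.
gap_days = (patient.admissions[1].admit_date - patient.admissions[0].admit_date).days
print(f"{len(patient.admissions)} admissions, {gap_days} days apart")
```

A time-aware and co-occurrence-aware model, as the title suggests, would exploit both the ordering and spacing of admissions and the joint presence of codes within a single admission.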

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
1

Citation Types

0
2
0

Year Published

2023
2023
2024
2024

Publication Types

Select...
4
1

Relationship

1
4

Authors

Journals

Cited by 6 publications (2 citation statements)
References 26 publications
“…Similarly, interventions often mitigate the severity of conditions at the cost of raising other health risks. Thus, every medical event, whether it involves diseases or medications, can serve as a cause, complication, or early symptom of the recorded codes [42]. This study will demonstrate that order objectives, besides the context, enhance the model performance by learning more structural information.…”
Section: Introduction (mentioning)
confidence: 80%
“…Unfortunately, the common opinion holds that deep learning models are black boxes [20]. Through our previous investigation [21–23], we found that an expert's understanding of a method is usually based on their knowledge, i.e., which input features are important to the result, and what is the contribution of each feature to the result. Based on this, we achieved the interpretation method by finding the key flight parameters and their contributions to load prediction, which is a preliminary exploration to explain the deep learning method in aviation engineering.…”
mentioning
confidence: 99%
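
The quoted passage frames interpretation as identifying which input features matter and how much each contributes to a prediction. As a generic illustration of that idea only, and not the method of the cited works, the sketch below computes permutation feature importance for an arbitrary prediction function; the function names and the toy data are assumptions.

```python
# Illustrative permutation importance: the drop in score when a feature is
# shuffled indicates how much that feature contributes to the prediction.
import numpy as np


def accuracy(pred, y):
    return float(np.mean((pred > 0.5) == y))  # simple accuracy for 0/1 labels


def permutation_importance(model_fn, X, y, n_repeats=10, seed=0):
    rng = np.random.default_rng(seed)
    baseline = accuracy(model_fn(X), y)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])                 # break the feature's link to y
            drops.append(baseline - accuracy(model_fn(X_perm), y))
        importances[j] = np.mean(drops)               # larger drop = more important
    return importances


# Toy usage: a hand-written "model" that only looks at feature 0,
# so feature 0 should receive the largest importance.
X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(int)
print(permutation_importance(lambda A: (A[:, 0] > 0).astype(float), X, y))
```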