2015 International Conference on Healthcare Informatics
DOI: 10.1109/ichi.2015.23

Predicting Sequences of Clinical Events by Using a Personalized Temporal Latent Embedding Model

Cited by 35 publications (31 citation statements)
References 11 publications

“…After the full-text analysis step, eleven of the 48 articles finally remained (2.9%) [19, 20, 26–34] for analysis in MAXQDA using a coding scheme.…”
Section: Overall Results (mentioning)
confidence: 99%
“…Table 3 shows all types of graphs used in the articles to represent a patient's electronic health records. Most of the remaining articles represent electronic health records as a graph that models an individual patient in a temporal manner (temporal event data mining) [20, 27, 28, 30, 32, 33]. In contrast, causal networks represent the causal context of patient data and were used in two papers [29, 30].…”
Section: Graph Properties (mentioning)
confidence: 99%
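As an illustration of the temporal event representation this excerpt describes, here is a minimal sketch in Python of one patient's record as a time-ordered event graph. All names and event codes (TemporalEventGraph, add_event, the dates and codes) are hypothetical and not taken from the cited papers.

# A minimal sketch (hypothetical; not from the cited papers) of one patient's
# electronic health record as a temporal event graph: nodes are clinical
# events, and directed edges link each event to its immediate successor in time.
from dataclasses import dataclass, field

@dataclass
class TemporalEventGraph:
    events: list = field(default_factory=list)  # (timestamp, event_code) nodes

    def add_event(self, timestamp: str, code: str) -> None:
        self.events.append((timestamp, code))
        self.events.sort(key=lambda e: e[0])    # keep nodes in temporal order

    def edges(self) -> list:
        # directed edges: each event points to the next event in time
        return [(i, i + 1) for i in range(len(self.events) - 1)]

# Usage: three events for a single fictional patient
g = TemporalEventGraph()
g.add_event("2015-02-03", "metformin prescribed")
g.add_event("2015-01-10", "diabetes diagnosis")
g.add_event("2015-03-15", "HbA1c measured")
print(g.events)   # time-ordered nodes
print(g.edges())  # [(0, 1), (1, 2)]

Each patient gets their own graph, so the edges encode only the temporal order of that individual's clinical events, which is the per-patient temporal structure the excerpt refers to.
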
“…3) Embedding Layers: Embedding layers can also be integrated as part of a larger model to transform high-dimensional features into a lower-dimensional space. The embedding can consist of a simple linear transformation [81], [82] or a fully-connected (deep) network [11], [81], [78]. One study projected the input into a higher-dimensional space using a convolutional layer [46].…”
Section: Representation Learning (mentioning)
confidence: 99%
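To make the excerpt's distinction concrete, here is a minimal sketch (PyTorch assumed; the dimensions are illustrative and not taken from the cited studies) of the two embedding variants it mentions: a single linear transformation versus a fully-connected (deep) network, each mapping high-dimensional features to a lower-dimensional space.

# A minimal sketch (PyTorch assumed; sizes are illustrative, not from the
# cited studies) of the two embedding variants described above.
import torch
import torch.nn as nn

input_dim, embed_dim = 10_000, 128   # e.g. sparse clinical codes -> dense vector

# Variant 1: embedding as a simple linear transformation
linear_embed = nn.Linear(input_dim, embed_dim)

# Variant 2: embedding as a fully-connected (deep) network
deep_embed = nn.Sequential(
    nn.Linear(input_dim, 512),
    nn.ReLU(),
    nn.Linear(512, embed_dim),
)

x = torch.rand(32, input_dim)   # a batch of 32 high-dimensional feature vectors
print(linear_embed(x).shape)    # torch.Size([32, 128])
print(deep_embed(x).shape)      # torch.Size([32, 128])

Either variant can be trained end-to-end as the first layer of a larger model, which is the integration the excerpt describes.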