2019 IEEE International Conference on Healthcare Informatics (ICHI)
DOI: 10.1109/ichi.2019.8904638

Recurrent Imputation for Multivariate Time Series with Missing Values

Cited by 32 publications (15 citation statements) · References 3 publications

“…Team Buffalo/Virginia [26] used an RNN to learn global representations of all laboratory tests and a series of RNNs to learn laboratory test-specific patterns, merging them through a fusion gate. They enabled both forward and backward directions of recurrence for imputation in order to improve its long-term memory.…”
Section: Challenge Participating Systems
mentioning confidence: 99%
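The fusion-gate architecture summarized above can be sketched compactly. The following is a minimal, hypothetical PyTorch rendering, not the team's released code: one GRU reads all laboratory tests jointly to form a global representation, a lightweight GRU per test forms a test-specific representation, and a learned sigmoid gate mixes the two before the imputed value is regressed. All names (GlobalLocalFusion, n_tests, hidden) are illustrative assumptions, and the bidirectional recurrence the team used is omitted for brevity.

```python
# Hedged sketch of a global/local fusion-gate imputer (illustrative, not the
# authors' implementation). Only the forward time direction is shown.
import torch
import torch.nn as nn

class GlobalLocalFusion(nn.Module):
    def __init__(self, n_tests: int, hidden: int = 64):
        super().__init__()
        self.global_rnn = nn.GRU(n_tests, hidden, batch_first=True)
        # one small GRU per laboratory test, reading that test's channel only
        self.local_rnns = nn.ModuleList(
            [nn.GRU(1, hidden, batch_first=True) for _ in range(n_tests)]
        )
        self.gate = nn.Linear(2 * hidden, hidden)  # fusion gate
        self.out = nn.Linear(hidden, 1)            # per-step value estimate

    def forward(self, x):                 # x: (batch, time, n_tests), prefilled
        g, _ = self.global_rnn(x)         # global representation of all tests
        estimates = []
        for k, rnn in enumerate(self.local_rnns):
            l, _ = rnn(x[..., k:k + 1])   # test-specific representation
            z = torch.sigmoid(self.gate(torch.cat([g, l], dim=-1)))
            h = z * g + (1 - z) * l       # gated mix of global and local states
            estimates.append(self.out(h))
        return torch.cat(estimates, dim=-1)  # (batch, time, n_tests) imputations
```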
“…Comparing the challenge participating systems, we note that several teams adopted gradient boosting algorithms as their primary tools for imputation [14, 17, 22]. A few teams used RNNs (including LSTM and GRU) for imputation [20, 26–28]. Several teams used KNN-based approaches [21, 23, 27].…”
Section: Challenge Participating Systems
mentioning confidence: 99%
“…In recent years, deep learning models have become research hotspots and have been applied to time series data imputation. Suo et al [17] adopt a bidirectional RNN to predict the missing values based on the prefilled data. Yan et al propose DETROIT [21], which builds features for each collection from the two preceding and the two following collections.…”
Section: Related Work
mentioning confidence: 99%
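For the bidirectional recurrent imputation mentioned above, a generic sketch (not the specific model of Suo et al [17]) is to run a one-directional imputer over the prefilled series in both time orders and combine the two estimates at the missing positions; the averaging rule and the impute_forward helper below are assumptions made for illustration.

```python
# Generic bidirectional imputation wrapper (illustrative assumption, not the
# cited method): average forward- and backward-in-time estimates, keeping
# observed entries unchanged.
import numpy as np

def bidirectional_impute(x, mask, impute_forward):
    """x, mask: (time, features) arrays; mask is 1 where observed, 0 where missing."""
    fwd = impute_forward(x)              # estimates from past context
    bwd = impute_forward(x[::-1])[::-1]  # estimates from future context
    est = 0.5 * (fwd + bwd)              # simple symmetric combination
    return mask * x + (1 - mask) * est   # never overwrite observed values

# Toy usage with a crude carry-forward stand-in for a trained recurrent imputer.
if __name__ == "__main__":
    x = np.array([[1.0], [0.0], [3.0]])     # 0.0 marks a prefilled missing slot
    mask = np.array([[1.0], [0.0], [1.0]])
    carry = lambda s: np.maximum.accumulate(s, axis=0)
    print(bidirectional_impute(x, mask, carry))
```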
“…Existing methods for analyzing irregular time series can be categorized into three main directions [9]: (i) the repair approach, in which missing observations are recovered via smoothing or imputation [10–14], also implemented, especially in recent years, with machine learning methods [15–18]; (ii) the generalization of spectral analysis tools [19, 20], such as wavelets [21–24]; (iii) kernel methods [25, 26]. In this paper, we deal with a repair approach that uses an input preparation step based on machine learning.…”
Section: Introduction
mentioning confidence: 99%