2019
DOI: 10.3390/app9153041
VS-GRU: A Variable Sensitive Gated Recurrent Neural Network for Multivariate Time Series with Massive Missing Values

Abstract: Multivariate time series are often accompanied by missing values, especially clinical time series, which usually contain more than 80% missing data, with missing rates that vary widely between variables. However, few studies address these missing-rate differences and extract univariate missing patterns before mixing them in the model training procedure. In this paper, we propose a novel recurrent neural network called variable-sensitive GRU (VS-GRU), which utilizes the different …

Cited by 30 publications (15 citation statements)
References 31 publications
“…An extension to missing indicators, capable of incorporating both IP and IO, is to derive a summary of the measurement process and include this as a predictor. 15 , 24 , 31–44 Examples include a count of the number of measurements (eg, throughout a critical care admission), 37 weighted counts, 42 combined missing indicators, 31 missingness rates over time, 32 time intervals between measures, 33–35 embedding vectors that represent missing values, 36 or information relating to hospital processes. 38 , 39 …”
Section: Results (mentioning)
confidence: 99%
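The measurement-process summaries listed in the excerpt above (measurement counts, missingness rates, time intervals between measures) can all be derived directly from the observation mask. A minimal NumPy sketch, using a hypothetical two-variable series in which NaN marks a missing value:

```python
import numpy as np

# Hypothetical 6-step series of two clinical variables; NaN = missing.
x = np.array([
    [1.0,    np.nan],
    [np.nan, 0.5],
    [np.nan, np.nan],
    [2.0,    np.nan],
    [np.nan, np.nan],
    [3.0,    0.7],
])
observed = ~np.isnan(x)                  # missing-indicator mask

count = observed.sum(axis=0)             # number of measurements per variable
missing_rate = 1.0 - count / x.shape[0]  # fraction of steps missing

# Time interval since each variable was last observed (unit time steps,
# in the style of the "time intervals between measures" predictors):
delta = np.zeros_like(x)
for t in range(1, x.shape[0]):
    delta[t] = np.where(observed[t - 1], 1.0, delta[t - 1] + 1.0)

print(count)         # counts per variable: [3 2]
print(missing_rate)  # [0.5, 0.666...]
```

Any of `count`, `missing_rate`, or `delta` can then be concatenated to the model input as an extra predictor.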
“…Although in the latter works the missing rates of the variables were very high (more than 70%) and differed from each other, the model behaved the same for all variables. In [21], the missing rate of each variable was included in the inputs of the GRU to reduce the harmful impact of variables with a high missing rate on variables with a low missing rate. Given that the aforementioned MNAR literature was analyzed only on medical datasets, the work in [38] extended the analysis to the computer vision domain as well.…”
Section: Related Work (mentioning)
confidence: 99%
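The idea described above, feeding each variable's missing rate into the GRU input, can be sketched minimally. This is an illustrative NumPy construction with simulated data, not the actual VS-GRU architecture (whose gating is more involved): the observed values, the missing-indicator mask, and the static per-variable missing rate are concatenated into one input matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 8, 3                            # time steps, variables
x = rng.normal(size=(T, D))
x[rng.random((T, D)) < 0.6] = np.nan   # simulate ~60% missingness

mask = (~np.isnan(x)).astype(float)    # 1 = observed, 0 = missing
rate = 1.0 - mask.mean(axis=0)         # per-variable missing rate

x_filled = np.nan_to_num(x)            # zero imputation, for the sketch only
# Broadcast the static missing rate across time and concatenate
# [values | mask | missing rate] into the recurrent network's input:
inputs = np.concatenate([x_filled, mask, np.tile(rate, (T, 1))], axis=1)
print(inputs.shape)   # (8, 9)
```

A recurrent cell consuming `inputs` can then weight its update per variable by how reliable that variable's observations are.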
“…Among recurrent methods, variants of the Recurrent Neural Network (RNN) such as the Gated Recurrent Unit (GRU) [19][20][21] and Long Short-Term Memory (LSTM) [22][23][24] have demonstrated promising results in extracting temporal features, and some can handle massive missingness as well. Notably, the approaches in [19][20][21][22] are among the few works addressing massive MNAR missingness in MTS (more than 70%). However, the main focus of these works has been on classification tasks in medical domains.…”
Section: Introduction (mentioning)
confidence: 99%
“…The missing data completion has been applied to many fields [13][14][15][16]. In the field of dams, Lv et al [17] pointed out that the interpolation methods of observation data mainly include internal physical association interpolation and mathematical interpolation and introduced the principle and process of linear interpolation.…”
Section: Introduction, 1. Literature Review (mentioning)
confidence: 99%
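The mathematical-interpolation baseline mentioned in the excerpt above can be illustrated with a one-function sketch. This is a generic linear-interpolation example (not the specific method of [17]): each NaN gap is filled on the straight line between its nearest observed neighbours, with edge gaps held at the nearest observed value.

```python
import numpy as np

def linear_interpolate(series):
    """Fill NaN gaps in a 1-D series by linear interpolation between the
    nearest observed neighbours; values outside the observed range are
    clamped to the nearest endpoint (np.interp's default behaviour)."""
    series = np.asarray(series, dtype=float)
    idx = np.arange(series.size)
    observed = ~np.isnan(series)
    return np.interp(idx, idx[observed], series[observed])

print(linear_interpolate([1.0, np.nan, np.nan, 4.0]))  # [1. 2. 3. 4.]
```

Physical-association interpolation, by contrast, would predict the gap from correlated variables (e.g. other dam sensors) rather than from the series' own neighbours.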