“…We show that, as long as we can identify the ground truth components of the mixture and the ground truth assignment of each microscopic observation, modeling the time series of each component independently benefits from lower variance, which in turn improves the estimation of the macroscopic time series of interest. Second, inspired by the recent success of Seq2seq models [36,10,12] based on deep neural networks, e.g., variants of recurrent neural networks (RNNs) [16,41,22], convolutional neural networks (CNNs) [4,15], and Transformers [19,38], we propose Mixture of Seq2seq (MixSeq), a mixture model for time series whose components come from a family of Seq2seq models, each with its own parameters. Third, we conduct synthetic experiments to demonstrate the superiority of our approach, and extensive experiments on real-world data to show its power compared with canonical approaches.…”
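To make the idea concrete, the sketch below illustrates the workflow the excerpt describes: assign each microscopic series to a mixture component, fit one Seq2seq model per component, and recover the macroscopic forecast by aggregating the per-component forecasts. This is a minimal illustration under simplifying assumptions, not the authors' MixSeq implementation: it uses hard (given) cluster assignments rather than learned ones, a small GRU encoder-decoder as the Seq2seq component, and hypothetical names such as `Seq2seq`, `fit_component`, and `mixseq_forecast`.

```python
# Illustrative sketch (not the authors' code): hard-assign each microscopic
# series to a component, fit one small GRU Seq2seq per component, and obtain
# the macroscopic forecast by summing the per-component forecasts.
import torch
import torch.nn as nn


class Seq2seq(nn.Module):
    """Minimal GRU encoder-decoder for univariate series (one mixture component)."""

    def __init__(self, hidden=32):
        super().__init__()
        self.encoder = nn.GRU(1, hidden, batch_first=True)
        self.decoder = nn.GRU(1, hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, x, horizon):
        # x: (batch, T, 1); decode `horizon` future steps autoregressively.
        _, h = self.encoder(x)
        step, preds = x[:, -1:, :], []
        for _ in range(horizon):
            o, h = self.decoder(step, h)
            step = self.out(o)
            preds.append(step)
        return torch.cat(preds, dim=1)


def fit_component(model, series, horizon, epochs=100, lr=1e-3):
    """Train one component's Seq2seq only on the series assigned to it."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    x, y = series[:, :-horizon, :], series[:, -horizon:, :]
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(x, horizon), y)
        loss.backward()
        opt.step()
    return model


def mixseq_forecast(series, assignments, n_components, horizon):
    """Per-component forecasts; the macroscopic forecast is their sum."""
    macro = torch.zeros(horizon)
    for k in range(n_components):
        members = series[assignments == k]
        if len(members) == 0:
            continue
        model = fit_component(Seq2seq(), members, horizon)
        with torch.no_grad():
            macro += model(members[:, :-horizon, :], horizon).sum(dim=0).squeeze(-1)
    return macro


if __name__ == "__main__":
    # Toy data: 40 microscopic series of length 60 drawn from two latent components.
    torch.manual_seed(0)
    t = torch.linspace(0, 6.28, 60)
    comp0 = torch.sin(t) + 0.1 * torch.randn(20, 60)
    comp1 = torch.cos(2 * t) + 0.1 * torch.randn(20, 60)
    series = torch.cat([comp0, comp1]).unsqueeze(-1)                 # (40, 60, 1)
    assignments = torch.cat([torch.zeros(20), torch.ones(20)]).long()
    print(mixseq_forecast(series, assignments, n_components=2, horizon=10))
```

The per-component models here are trained on less data than a single global model, but each sees a more homogeneous subset of series, which is the lower-variance effect the excerpt appeals to; the macroscopic quantity is then estimated by aggregating the component-level forecasts.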