A growing body of research on left ventricle quantification bypasses segmentation, which requires large amounts of pixel-level labels. In this study, a framework is developed to quantify multiple left ventricle indices directly, without a segmentation step. First, DenseNet is utilized to extract spatial features from each cardiac frame. Then, to take advantage of the temporal information in the sequence, the features of consecutive frames are encoded using a gated recurrent unit (GRU). After that, an attention mechanism is integrated into the decoder to effectively establish the mapping between the input sequence and the corresponding output sequence. A regression layer applied to the decoder output is then used to predict the multiple indices of the left ventricle. Different weights are assigned to the different types of indices based on experience, and l2-norm regularization is used to avoid model overfitting. Compared with the state-of-the-art (SOTA), our method not only produces more competitive results but is also more flexible: in our study, predictions can be obtained online for each frame, whereas the SOTA can output results only after all frames have been analyzed.
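The per-frame pipeline described above (spatial features → GRU encoding → attention → regression) can be sketched as follows. This is a minimal NumPy illustration under stated assumptions, not the authors' implementation: the DenseNet extractor is stood in for by precomputed per-frame feature vectors, dot-product attention is assumed (the text does not specify the scoring function), the loss is shown as a weighted squared error with l2 weight decay, and all dimensions (`feat_dim`, `hid_dim`, `num_indices`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


class GRUCell:
    """Minimal GRU cell: update gate z, reset gate r, candidate state h~."""

    def __init__(self, in_dim, hid_dim):
        s = 1.0 / np.sqrt(hid_dim)
        init = lambda *shape: rng.uniform(-s, s, shape)
        self.Wz, self.Uz = init(hid_dim, in_dim), init(hid_dim, hid_dim)
        self.Wr, self.Ur = init(hid_dim, in_dim), init(hid_dim, hid_dim)
        self.Wh, self.Uh = init(hid_dim, in_dim), init(hid_dim, hid_dim)

    def step(self, x, h):
        z = sigmoid(self.Wz @ x + self.Uz @ h)
        r = sigmoid(self.Wr @ x + self.Ur @ h)
        h_tilde = np.tanh(self.Wh @ x + self.Uh @ (r * h))
        return (1.0 - z) * h + z * h_tilde


class OnlineQuantifier:
    """Emits index predictions frame by frame, as soon as each frame arrives."""

    def __init__(self, feat_dim, hid_dim, num_indices):
        self.cell = GRUCell(feat_dim, hid_dim)
        self.W_out = rng.uniform(-0.1, 0.1, (num_indices, hid_dim))
        self.h = np.zeros(hid_dim)
        self.states = []  # encoder states seen so far, attended over at each step

    def step(self, frame_feat):
        # Encode the new frame (frame_feat stands in for DenseNet features).
        self.h = self.cell.step(frame_feat, self.h)
        self.states.append(self.h)
        H = np.stack(self.states)          # (t, hid_dim): states up to now
        alpha = softmax(H @ self.h)        # dot-product attention weights
        context = alpha @ H                # attention-weighted context vector
        return self.W_out @ context        # regression: indices for this frame


def weighted_l2_loss(pred, target, index_weights, params, lam=1e-4):
    """Weighted squared error per index type plus l2 weight decay."""
    data_term = np.sum(index_weights * (pred - target) ** 2)
    reg_term = lam * sum(np.sum(p ** 2) for p in params)
    return data_term + reg_term
```

Feeding frames one at a time yields one prediction vector per frame, which illustrates the online property claimed above; a batch method would instead have to wait for the whole sequence before producing output.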