In multi-turn dialogue systems, the model must generate responses that are natural and consistent with the conversational context. At present, HRAN, one of the most advanced models for multi-turn dialogue, uses a hierarchical recurrent encoder-decoder combined with a hierarchical attention mechanism. However, in complex conversations the traditional attention-based RNN does not fully capture the context, so attention is drawn to the wrong parts of the dialogue history and irrelevant responses are generated. To address this problem, we propose an improved hierarchical attention model, a hierarchical self-attention network (HSAN), which uses self-attention instead of RNNs to learn word representations and utterance representations. Empirical studies on both Chinese and English datasets show that the proposed model achieves significant improvements.
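As a rough illustration of the word-level and utterance-level encoding described above, the following PyTorch sketch shows one possible hierarchical self-attention encoder: a word-level self-attention encoder over each utterance, followed by an utterance-level self-attention encoder over pooled utterance vectors. This is a simplified assumption, not the authors' implementation; the module name HierarchicalSelfAttentionEncoder and all hyperparameters are hypothetical.

```python
# Minimal sketch of a hierarchical self-attention encoder (assumed structure,
# not the HSAN paper's implementation).
import torch
import torch.nn as nn


class HierarchicalSelfAttentionEncoder(nn.Module):  # hypothetical name
    def __init__(self, vocab_size, d_model=128, nhead=4, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        word_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        utt_layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        # Word-level self-attention: contextualizes tokens within one utterance.
        self.word_encoder = nn.TransformerEncoder(word_layer, num_layers)
        # Utterance-level self-attention: contextualizes utterances within the dialogue.
        self.utt_encoder = nn.TransformerEncoder(utt_layer, num_layers)

    def forward(self, dialogue):
        # dialogue: (batch, num_utterances, num_tokens) of token ids, 0 = padding
        b, u, t = dialogue.shape
        tokens = self.embed(dialogue.view(b * u, t))           # (b*u, t, d)
        word_states = self.word_encoder(tokens)                 # (b*u, t, d)
        utt_vectors = word_states.mean(dim=1).view(b, u, -1)    # pooled utterance vectors
        context = self.utt_encoder(utt_vectors)                 # (b, u, d)
        return context                                          # context-aware utterance representations


if __name__ == "__main__":
    model = HierarchicalSelfAttentionEncoder(vocab_size=1000)
    fake_dialogue = torch.randint(1, 1000, (2, 3, 10))  # 2 dialogues, 3 utterances, 10 tokens each
    print(model(fake_dialogue).shape)  # torch.Size([2, 3, 128])
```

The context representations produced by the utterance-level encoder would then feed a decoder (with hierarchical attention) to generate the response; that part is omitted here.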