Context modeling has long been the foundation of dialogue response generation, yet it remains challenging because context relations among open-domain dialogue sentences are loose. Introducing simulated dialogue futures has been proposed to mitigate low history–response relevance. However, these approaches simply assume that the history and the future of a dialogue contribute equally to response generation. In reality, coherence between dialogue sentences varies, so history and future are not uniformly helpful in predicting the response. Determining and exploiting the relevance of both the history–response and response–future pairs is therefore a pivotal concern. This paper addresses it by first defining three contextual relations between a response and its context (history and future), reflecting the response's relevance to the preceding and following sentences. We then annotate these contextual relation labels on a large-scale dataset, DailyDialog (DD). Leveraging the labels, we propose a response generation model that adaptively integrates the contributions of preceding and succeeding sentences under the guidance of explicit relation labels: it attenuates the influence of less relevant sentences and amplifies that of more relevant ones, thereby strengthening context modeling. Experimental results on the public DD dataset show that our model significantly improves long-sequence (4-gram) coherence by 3.02% and bi-gram diversity by 17.67%, surpassing previous models.