Online recommendation requires handling rapidly changing user preferences. Deep reinforcement learning (DRL) is an effective means of capturing users' dynamic interests during their interactions with recommender systems. However, training a DRL agent is generally challenging due to the large state space (e.g., the user-item rating matrix and user profiles), the large action space (e.g., candidate items), and sparse rewards. Existing studies leverage experience replay (ER) to let an agent learn from past experience, but they adapt poorly to the complex environment of online recommender systems and are inefficient at deriving an optimal strategy from past experience. To address these issues, we design a novel state-aware experience replay model that selects the most relevant and salient experiences and provides the agent with the optimal policy for online recommendation. In particular, the model uses locality-sensitive hashing to map high-dimensional data to low-dimensional representations and a prioritized, reward-driven strategy to replay more valuable experiences with higher probability. Experiments on three online simulation platforms demonstrate the feasibility of our model and its superiority over several existing experience replay methods.
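
To make the two ingredients named above concrete, the following is a minimal sketch of a replay buffer that combines SimHash-style random-projection locality-sensitive hashing (to compress high-dimensional states into short binary codes) with reward-proportional sampling (to replay more valuable experiences with higher probability). It is illustrative only: the class, parameter names, and the specific priority formula are assumptions, not the paper's actual implementation.

```python
import numpy as np

class LSHPrioritizedReplay:
    """Sketch: LSH-compressed states + reward-driven prioritized sampling."""

    def __init__(self, state_dim, code_bits=16, capacity=10000, seed=0):
        rng = np.random.default_rng(seed)
        # Random hyperplanes for sign-based (SimHash-style) locality-sensitive hashing.
        self.projection = rng.standard_normal((code_bits, state_dim))
        self.capacity = capacity
        self.buffer = []       # stored transitions with hashed states
        self.priorities = []   # reward-driven replay priorities
        self._rng = rng

    def hash_state(self, state):
        # Map a high-dimensional state vector to a compact binary code.
        bits = (self.projection @ np.asarray(state, dtype=float)) > 0
        return bits.astype(np.int8)

    def add(self, state, action, reward, next_state, done):
        if len(self.buffer) >= self.capacity:
            self.buffer.pop(0)
            self.priorities.pop(0)
        self.buffer.append(
            (self.hash_state(state), action, reward, self.hash_state(next_state), done)
        )
        # Reward-driven priority; a small constant keeps zero-reward transitions replayable.
        self.priorities.append(abs(reward) + 1e-3)

    def sample(self, batch_size=32):
        # Replay higher-priority (more rewarding) experiences with higher probability.
        probs = np.asarray(self.priorities)
        probs = probs / probs.sum()
        idx = self._rng.choice(len(self.buffer), size=batch_size, p=probs)
        return [self.buffer[i] for i in idx]
```

In this sketch, hashing shrinks the state representation before storage, while sampling probabilities proportional to reward-based priorities bias learning toward the more informative transitions; the actual model may combine these components differently.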