Existing recommender systems usually generate personalized recommendation lists by estimating preference scores over individual user-item pairs, ignoring the impact of the displayed list as a whole, which plays a central role in a user's decision-making process. This leaves an opportunity to generate better recommendations by accounting for the influence of all offered choices. However, such an extension cannot be handled efficiently by traditional top-k list recommendation methods because of the entire-list dependency issue: a complete list of items is needed before the preference for any item in the list can be measured precisely. In this paper, we propose a Co-displayed Items Aware (CDIA) list generation approach, built on a reinforcement learning architecture, that efficiently generates high-utility lists. Specifically, we propose CDIA-Sim to predict users' preferences while accounting for the impact of co-displayed items. Then, to overcome the entire-list dependency issue in the list recommendation task, we design CDIA-RL, which uses reinforcement learning to generate high-utility lists. Experimental results show that CDIA-Sim achieves significant improvements in modeling user-item preferences, and that CDIA-RL generates lists efficiently and effectively, outperforming competing methods.

INDEX TERMS Co-displayed items, list recommendation, reinforcement learning.