Conventional conversational recommender systems rely on interaction strategies that are hard-coded into the system in advance. In this context, Reinforcement Learning techniques have been proposed to learn an optimal, user-adapted interaction strategy by encoding the relevant information as features that describe the state of the interaction. A crucial problem is then to select, for any given recommendation task, the subset of relevant features from a larger set of candidates. In this paper, we tackle this issue of state feature selection by proposing and exploiting two criteria for determining feature relevance. Our results show that adding a feature is not always beneficial, and that relevance is influenced both by the user behavior and by the numerical reinforcement signal that the adaptive system exploits to learn the optimal strategy. These results, obtained in off-line simulations and in a simplified scenario, were then exploited to design an adaptive recommender system for an online travel planning application.
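To make the kind of off-line analysis described above concrete, the sketch below shows one plausible (hypothetical) way to test whether a candidate state feature is relevant: train a tabular Q-learning agent on a toy simulated dialogue with and without the feature, and compare the average return of the learned strategies. The environment, the "user mood" feature, and the return-gain criterion are all illustrative assumptions, not the paper's actual criteria or application.

```python
import random
from collections import defaultdict

# Hypothetical toy environment: at each turn the simulated user is either
# "exploring" or "ready" to accept; the agent can ask a question or recommend.
# The candidate state feature is the user's mood; when it is dropped, both
# moods collapse into a single state, so the strategy cannot adapt to the user.
ACTIONS = ["ask", "recommend"]

def step(mood, action, rng):
    """Return (reward, done); the reward is the numerical reinforcement signal."""
    if action == "recommend":
        # Recommending pays off only when the simulated user is ready to accept.
        return (1.0 if mood == "ready" else -1.0), True
    # Asking a question costs a little but may move the user toward acceptance.
    return -0.1, False

def run_episode(q, use_feature, rng, epsilon=0.1, alpha=0.1, gamma=0.95):
    mood = rng.choice(["exploring", "ready"])
    total = 0.0
    for _ in range(10):  # cap the dialogue length
        state = mood if use_feature else "any"
        if rng.random() < epsilon:
            action = rng.choice(ACTIONS)          # explore
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])  # exploit
        reward, done = step(mood, action, rng)
        total += reward
        if done:  # terminal update: no future value
            q[(state, action)] += alpha * (reward - q[(state, action)])
            break
        if mood == "exploring" and rng.random() < 0.5:
            mood = "ready"  # the user may become ready after being questioned
        next_state = mood if use_feature else "any"
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += alpha * (reward + gamma * best_next - q[(state, action)])
    return total

def average_return(use_feature, episodes=5000, seed=0):
    rng = random.Random(seed)
    q = defaultdict(float)
    returns = [run_episode(q, use_feature, rng) for _ in range(episodes)]
    return sum(returns[-1000:]) / 1000  # average over the last 1000 episodes

# A crude relevance criterion: keep the feature if it improves the learned
# strategy's average return in off-line simulation.
gain = average_return(True) - average_return(False)
print(f"return gain from the candidate feature: {gain:+.3f}")
```

In this toy setting the gain is positive because the optimal action genuinely depends on the candidate feature; for an irrelevant feature the gain would hover around zero, illustrating why adding a feature is not always beneficial. The return-gain test is only one way to instantiate a relevance criterion and does not reproduce the two criteria proposed in the paper.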