Autonomous interference management solutions for inter-cell interference coordination (ICIC) are of utmost importance. In this paper, the coexistence between macro and small cells is studied, whereby different ICIC techniques pertaining to different deployment and information assumptions are evaluated. Inspired by Evolutionary Game Theory (EGT), decentralized strategies are devised in which small cell Base Stations (BSs) exchange information through a central controller and adapt their strategies based on the instantaneous and average payoffs of the small cell population. In contrast, when distributed operation is targeted, small cells use tools from Reinforcement Learning (RL) to learn by interacting with their environment through trial and error, autonomously optimizing their strategies based on mere feedback. In particular, we compare the performance of decentralized Q-learning, Fuzzy Q-learning, improved Q-learning, and expertness-based Q-learning procedures. Finally, the overall performance of the network, in terms of average per-user data throughput and convergence, is evaluated in an LTE-A system-level simulator.

Index Terms-LTE heterogeneous networks, small cells, interference management, reinforcement learning, game theory.
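To make the decentralized Q-learning idea concrete, the following is a minimal, self-contained sketch, not the paper's implementation: a single small-cell BS repeatedly picks a discrete transmit-power level (the action) and updates its Q-values from a scalar reward standing in for the throughput feedback. The action set, reward model, and learning parameters are illustrative assumptions.

```python
import random

# Hypothetical setup: three discrete transmit-power levels for one small cell.
ACTIONS = [0, 1, 2]            # e.g. low / medium / high power (illustrative)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

def reward(action):
    # Toy reward model (an assumption, not the paper's): medium power best
    # balances own-cell throughput against interference caused to the macro.
    return {0: 0.2, 1: 1.0, 2: 0.5}[action]

def q_learning(episodes=2000, seed=0):
    rng = random.Random(seed)
    # Single-state problem for brevity, so Q maps action -> value.
    Q = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # Epsilon-greedy action selection: explore with probability EPS.
        if rng.random() < EPS:
            a = rng.choice(ACTIONS)
        else:
            a = max(Q, key=Q.get)
        r = reward(a)
        # Standard Q-learning update; in the stateless case the bootstrap
        # target is r + GAMMA * max_a' Q(a').
        Q[a] += ALPHA * (r + GAMMA * max(Q.values()) - Q[a])
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print("learned Q-values:", Q)
    print("greedy power level:", max(Q, key=Q.get))
```

In the paper's distributed setting each small cell would run such a learner independently, with the reward derived from its own measured feedback rather than a fixed table.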