Network densification is a promising solution for improving the capacity of future mobile networks. However, densely deploying low-power base stations that share the radio spectrum increases interference, degrading the performance of ultra-dense networks (UDNs). Resource Allocation (RA) schemes have been developed for decades to meet mobile subscribers' data traffic and QoS demands and to prevent harmful interference. However, as networks evolve and mobile applications demand more bandwidth, higher data rates, and ultra-reliable low latency, the RA problem has become more complex. Machine Learning (ML) techniques have recently been explored to significantly reduce the computational complexity of RA problems and to improve overall UDN performance compared to traditional methods. This paper systematically reviews the most relevant research contributions that use ML techniques to produce accurate channel and power allocation results in UDNs. A total of 56 articles, published between 2010 and 2022 and retrieved from different academic databases, were analyzed after a thorough selection process. We describe the main aim of these works and classify them, according to the ML technique applied, into Artificial Neural Network (ANN)-based, Reinforcement Learning (RL)-based, and Deep Reinforcement Learning (DRL)-based models. We also identify the design features of the reinforcement learning strategies used to enhance Key Performance Indicators (KPIs) such as energy and spectral efficiency, throughput, interference, and fairness. Finally, research directions are discussed based on these findings.