The goal of this paper is to evaluate the performance of an adaptive beamforming approach in fifth-generation millimeter-wave multicellular networks, where massive multiple-input multiple-output configurations are employed in all active base stations of the considered topologies. In this context, beamforming is performed with the help of a predefined set of configurations that can deal with various traffic scenarios by generating highly directional beams on demand. In parallel, a machine learning (ML) beamforming approach based on the k-nearest neighbors (k-NN) approximation has been considered as well, which is trained to generate the appropriate beamforming configurations according to the spatial distribution of throughput demand. Performance is evaluated statistically, via a purpose-built system-level simulator that executes Monte Carlo simulations in parallel. Results indicate that the achievable spectral efficiency (SE) and energy efficiency (EE) values are comparable to those of other state-of-the-art approaches, at reduced hardware and algorithmic complexity, since per-user beamforming calculations are omitted. In particular, considering a two-tier cellular topology, EE and SE in the non-ML approach can reach up to 5 Mbits/J and 36 bps/Hz, respectively. Both metrics attain the same values when the ML-assisted beamforming framework is considered; however, beamforming complexity is further reduced, since the ML approach provides a direct mapping between the considered throughput demand and the appropriate beamforming configuration.
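The k-NN mapping described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the toy demand vectors, sector granularity, and configuration labels are all assumptions chosen for clarity, and the classifier simply returns the majority configuration index among the k training samples closest to a queried spatial-demand vector.

```python
import numpy as np

def knn_predict(train_X, train_y, query, k=3):
    """Return the majority label among the k nearest training points
    (Euclidean distance), i.e. the beamforming configuration index
    associated with the most similar observed demand distributions."""
    dists = np.linalg.norm(train_X - query, axis=1)
    nearest = np.argsort(dists)[:k]
    labels, counts = np.unique(train_y[nearest], return_counts=True)
    return int(labels[np.argmax(counts)])

# Toy training set: each row is a coarse spatial throughput-demand vector
# over three sectors; each label indexes a predefined beamforming
# configuration (all values are illustrative, not from the paper).
X = np.array([
    [0.9, 0.1, 0.1],   # demand concentrated in sector 0 -> config 0
    [0.8, 0.2, 0.0],
    [0.1, 0.9, 0.2],   # demand concentrated in sector 1 -> config 1
    [0.0, 0.8, 0.1],
    [0.2, 0.1, 0.9],   # demand concentrated in sector 2 -> config 2
    [0.1, 0.0, 0.8],
])
y = np.array([0, 0, 1, 1, 2, 2])

# New demand snapshot, dominated by sector 0: the two closest training
# samples both carry label 0, so config 0 wins the majority vote.
print(knn_predict(X, y, np.array([0.85, 0.15, 0.05]), k=3))  # -> 0
```

Since the configuration set is predefined, inference reduces to a nearest-neighbor lookup over stored demand snapshots, which is the source of the complexity reduction claimed in the abstract: no per-user beamforming weights are computed online.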