Head pose estimation (HPE) is a crucial task for many computer vision applications in robotics, biometrics and video surveillance. While, in general, HPE can be performed on both still images and frames extracted from live video or recorded footage, the chosen approach and its processing pipeline strongly affect its suitability for different application contexts. In particular, for any real-time application requiring HPE, the estimated angular values of the yaw, pitch and roll axes must be provided in real-time as well. Since the primary aim of HPE research has so far been improving estimation accuracy, only a few works report the computing time of the proposed HPE method, and even fewer explicitly address it. The present work stems from a previous Partitioned Iterated Function Systems-based approach that provides state-of-the-art accuracy at a high computational cost, and improves it by means of two regression models, namely the Gradient Boosting Regressor and the Extreme Gradient Boosting Regressor, achieving a much faster response and an even lower mean absolute error on the yaw and roll axes, as shown by experiments conducted on the BIWI and AFLW2000 datasets.
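To make the regression step concrete, the following minimal sketch (not the authors' implementation) shows how gradient boosting regressors could map precomputed feature vectors, such as those derived from a PIFS-based representation, to the three pose angles; the feature dimensionality, hyperparameters and placeholder data are illustrative assumptions only.

```python
# Illustrative sketch: per-axis gradient boosting regression of head pose angles.
# All data, shapes and hyperparameters below are hypothetical placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBRegressor

rng = np.random.default_rng(0)
X = rng.random((500, 64))            # placeholder PIFS-derived feature vectors
y = rng.uniform(-90, 90, (500, 3))   # placeholder yaw, pitch, roll labels (degrees)

# One regressor per axis; either model family could be used for each angle.
models = {
    "yaw":   GradientBoostingRegressor(n_estimators=200),
    "pitch": XGBRegressor(n_estimators=200, learning_rate=0.1),
    "roll":  GradientBoostingRegressor(n_estimators=200),
}
for i, (axis, model) in enumerate(models.items()):
    model.fit(X, y[:, i])            # fit each axis independently

# Predict the pose of a single sample: one scalar angle per axis.
pose = {axis: float(model.predict(X[:1])[0]) for axis, model in models.items()}
print(pose)
```

In this per-axis formulation, each angle is treated as an independent scalar target, so mean absolute error can be reported separately for yaw, pitch and roll, as in the experiments summarized above.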