In recent years, 3D point cloud content has gained attention due to its broad range of applications, such as multimedia systems; virtual, augmented, and mixed reality through the mapping and visualization of environments and 3D objects; real-time immersive communications; and autonomous driving systems. However, raw point clouds demand a large amount of data for their representation, and compression is mandatory to enable efficient transmission and storage. The MPEG group proposed the Video-based Point Cloud Compression (V-PCC) standard, a dynamic point cloud encoder that projects the point cloud into 2D space and compresses the resulting maps with conventional video encoders. However, V-PCC has a high computational cost, requiring fast implementations for real-time processing and, especially, for mobile device applications. In this paper, a machine-learning-based fast implementation of V-PCC is proposed, whose main approach is the use of trained decision trees to speed up the block partitioning process during point cloud compression. The results show that the proposed fast V-PCC solution achieves an encoding time reduction of 42.73% for the geometry video sub-stream and 55.3% for the attribute video sub-stream, with a minimal impact on bitrate and objective quality.
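To illustrate the core idea of decision-tree-based early termination for block partitioning, the sketch below hand-codes a tiny tree that decides whether a block should be split further or whether the costly rate-distortion search for sub-blocks can be skipped. The feature set (variance, mean gradient, depth), the thresholds, and the tree structure are illustrative assumptions for this sketch, not the trained trees or features used in the paper.

```python
import statistics

def block_features(pixels):
    """Compute simple, illustrative texture features for a flat list of
    pixel values from one block of a projected V-PCC map."""
    variance = statistics.pvariance(pixels)
    mean_gradient = sum(abs(b - a) for a, b in zip(pixels, pixels[1:])) / (len(pixels) - 1)
    return variance, mean_gradient

def should_split(variance, depth, mean_gradient):
    """Hand-rolled decision tree (hypothetical thresholds): return True to
    keep testing finer partitions, False to terminate early and skip the
    rate-distortion evaluation of the sub-blocks."""
    if depth >= 3:              # already deep: stop partitioning
        return False
    if variance < 10.0:         # smooth blocks rarely benefit from splitting
        return False
    if mean_gradient > 25.0:    # strong edges: keep exploring partitions
        return True
    return variance > 50.0      # moderate texture: split only if busy

# Example: a smooth block is rejected early, a textured one is explored.
smooth = block_features([10, 10, 11, 10])
busy = block_features([0, 50, 10, 60])
print(should_split(smooth[0], 1, smooth[1]))  # smooth -> no further split
print(should_split(busy[0], 1, busy[1]))      # textured -> keep splitting
```

In a real fast encoder, such a tree would be trained offline (e.g., on encoder statistics labeled with the exhaustive-search partitioning decisions) and queried at each partition depth, trading a small bitrate/quality loss for the reported encoding time reductions.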