The representation of three-dimensional objects with point clouds is attracting increasing interest from researchers and practitioners. Since this representation involves a huge data volume, effective point cloud compression techniques are required. One of the most powerful solutions is the emerging Moving Picture Experts Group (MPEG) geometry-based point cloud compression (G-PCC) standard. In the G-PCC lifting transform coding technique, an adaptive quantization method is used to improve coding efficiency. Instead of assigning the same quantization step size to all points, the quantization step size is increased according to the level-of-detail (LoD) traversal order. In this way, the attributes of more important points receive finer quantization and incur a smaller quantization error than the attributes of less important ones. In this paper, we adapt this approach to the G-PCC predicting transform and propose a hardware-friendly weighting method for the adaptive quantization. Experimental results show that, compared to the current G-PCC test model, the proposed method achieves average Bjøntegaard delta rates of -6.7%, -14.7%, -15.4%, and -10.0% for the luma, chroma Cb, chroma Cr, and reflectance components, respectively, on the MPEG Cat1-A, Cat1-B, Cat3-fused, and Cat3-frame datasets.
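To make the idea concrete, the following minimal C++ sketch illustrates LoD-adaptive quantization in the spirit described above: points are processed in LoD traversal order, and the quantization step size grows with the LoD index, so earlier (more important) points incur a smaller quantization error. The point counts, base step size, and the doubling rule are illustrative assumptions; they do not reproduce the G-PCC test model or the weighting method proposed in the paper.

```cpp
#include <cstdint>
#include <cstdio>
#include <vector>

// Scalar uniform quantizer with rounding to the nearest reconstruction level.
static int32_t quantize(int32_t residual, int32_t stepSize) {
  int32_t sign = residual < 0 ? -1 : 1;
  return sign * ((sign * residual + (stepSize >> 1)) / stepSize);
}

static int32_t dequantize(int32_t level, int32_t stepSize) {
  return level * stepSize;
}

int main() {
  // Hypothetical attribute residuals, one per point, already ordered by
  // LoD traversal (coarsest LoD first, finest LoD last).
  std::vector<int32_t> residuals = {37, -58, 12, 95, -7, 44, -63, 21};
  // Number of points in each LoD, in traversal order (illustrative split).
  std::vector<size_t> pointsPerLod = {2, 2, 4};

  const int32_t baseStep = 8;  // step size for the most important (first) LoD

  size_t idx = 0;
  for (size_t lod = 0; lod < pointsPerLod.size(); ++lod) {
    // Step size grows with the LoD index, so later (less important) points
    // are quantized more coarsely. The doubling rule is an assumption made
    // for illustration, not the weighting used by G-PCC or by this paper.
    int32_t stepSize = baseStep << lod;
    for (size_t k = 0; k < pointsPerLod[lod]; ++k, ++idx) {
      int32_t level = quantize(residuals[idx], stepSize);
      int32_t recon = dequantize(level, stepSize);
      std::printf("LoD %zu  step %3d  residual %4d -> level %3d -> recon %4d  (err %d)\n",
                  lod, stepSize, residuals[idx], level, recon,
                  residuals[idx] - recon);
    }
  }
  return 0;
}
```

In this toy setup, points in the first LoD are reconstructed within a few units of their original residuals, while points in the last LoD can deviate by up to half the largest step size, mirroring the intuition that coding error should be concentrated on the less important points.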