Existing learning-based methods for point cloud (PC) attribute compression typically employ variational autoencoders (VAEs) to learn compact signal representations. However, these schemes suffer from limited reconstruction quality at high bitrates due to their intrinsically lossy nature. More recently, normalizing flows (NFs) have been proposed as an alternative. NFs are invertible networks that can achieve lossless reconstruction, at the cost of very large architectures with a high memory and computational footprint. This paper proposes an improved NF architecture with reduced complexity, called RNF-PCAC. It combines two operating modes, specialized for low and high bitrates, in a rate-distortion-optimized fashion. Our approach reduces the number of parameters of existing NF architectures by over 6×. At the same time, it achieves state-of-the-art coding gains over previous learning-based methods and, for some PCs, matches the performance of G-PCC (v.21).