In the new era of immersive multimedia environments, understanding and manipulating visual attention are crucial for enhancing user experience. This study introduces a framework that extends traditional 2D saliency maps to the analysis of 3D point clouds, a step forward in adapting saliency prediction to more complex and immersive content. Our framework centers on the orthographic projection of 3D point clouds onto 2D planes, enabling established 2D saliency models to be applied in this novel context. We further evaluate these models on a 3D point cloud eye-tracking dataset, exploring various projection settings and thresholding techniques to preserve the integrity of saliency information in the transition from 2D to 3D. This research not only bridges a gap in applying visual attention models to 3D data but also offers insights into optimizing quality of experience in immersive multimedia systems.
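The orthographic projection step at the core of the framework can be sketched as follows; this is a minimal illustration assuming a NumPy point cloud of shape (N, 3), and the function name, rasterization scheme, and density-image output are illustrative choices rather than the paper's actual implementation:

```python
import numpy as np

def orthographic_project(points, axis=2, resolution=256):
    """Orthographically project a 3D point cloud onto a 2D plane.

    Drops the coordinate along `axis` and rasterizes the remaining two
    coordinates into a `resolution` x `resolution` density image, on
    which an off-the-shelf 2D saliency model could then be applied.
    (Illustrative sketch, not the paper's implementation.)
    """
    # Keep the two coordinates orthogonal to the projection axis.
    keep = [i for i in range(3) if i != axis]
    xy = points[:, keep]

    # Normalize the kept coordinates to [0, 1] and map to pixel indices.
    mins, maxs = xy.min(axis=0), xy.max(axis=0)
    norm = (xy - mins) / np.maximum(maxs - mins, 1e-9)
    idx = np.minimum((norm * resolution).astype(int), resolution - 1)

    # Accumulate point counts per pixel to form a simple density image.
    img = np.zeros((resolution, resolution))
    np.add.at(img, (idx[:, 1], idx[:, 0]), 1.0)
    return img

# Example: project a random cloud along the z-axis.
cloud = np.random.rand(1000, 3)
depth_map = orthographic_project(cloud, axis=2, resolution=64)
```

Projecting along each coordinate axis in turn yields a set of 2D views, and a thresholding step can then decide which projected saliency values are mapped back onto the 3D points.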