Currently, 3D deep learning based on point cloud data has become a research hotspot in the field of computer vision. However, the high cost of acquiring point cloud data, the tedious processing and labeling workflow, and the scarcity of high-quality, task-appropriate datasets remain prominent problems for researchers. In this paper, we propose a method for rapidly producing point cloud datasets from BIM 3D models using computer simulation technology, comprising the following steps: classifying and labeling BIM models, converting 3D object data formats, extracting point clouds with the PyTorch3D and Open3D libraries, and improving efficiency through Revit secondary development and DOS batch processing. Finally, we demonstrate the effectiveness of the method through semantic segmentation experiments with the PointNet++ network and analyze the impact of point cloud sampling density, sampling method, and 3D model accuracy on the performance of the virtual point clouds. As digital twins of the real world, BIM models constitute a natural database rich in scenes and element types. We hope that the method studied in this paper can help researchers produce datasets applicable to their own research and facilitate the application of 3D deep learning techniques in engineering and other fields.
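As a concrete illustration of the point-cloud extraction step mentioned above, the sketch below samples a virtual point cloud from a mesh exported from a BIM element using Open3D. The file names, point count, and the choice between uniform and Poisson-disk sampling are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch, assuming the BIM element has already been converted to a
# mesh format such as OBJ; paths and parameters are hypothetical examples.
import open3d as o3d

# Load a mesh exported from the BIM model (e.g. via an OBJ conversion step).
mesh = o3d.io.read_triangle_mesh("wall_element.obj")
mesh.compute_vertex_normals()

# Uniform surface sampling: fast, point density proportional to triangle area.
pcd_uniform = mesh.sample_points_uniformly(number_of_points=4096)

# Poisson-disk sampling: more evenly spaced points, closer to scanner output.
pcd_poisson = mesh.sample_points_poisson_disk(number_of_points=4096)

# Save the virtual point cloud for labeling and downstream network training.
o3d.io.write_point_cloud("wall_element.ply", pcd_poisson)
```

The two sampling calls correspond to the sampling-method comparison discussed in the experiments: uniform sampling is cheaper, while Poisson-disk sampling yields a more even point distribution.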