“…Table 2 compares the classification accuracy of the proposed method with that of alternative scalable 3D representation techniques on the ModelNet40 dataset. As observed, the proposed method performs better than VoxNet (Maturana and Scherer, 2015), 3DGAN (Wu et al., 2016), 3DShapeNets (Wu et al., 2015), NormalNet, VACWGAN-GP (Wang et al., 2019a; Ergün and Sahillioglu, 2023), DPRNet (Arshad et al., 2019), Pointwise (Hua et al., 2018), BV-CNNs (Ma et al., 2017), NPCEM (Song et al., 2020), ECC (Simonovsky and Komodakis, 2017), PointNet (Charles et al., 2017), Geometry image (Sinha et al., 2016), VSL (Liu et al., 2018), GIFT (Bai et al., 2016), FPNN (Li et al., 2016), DGCB-Net (Tian et al., 2020), and DeepNN (Gao et al., 2022), which utilized 3D mesh data. The recent RECON (Qi et al., 2023) and PointConT (Liu et al., 2023) slightly outperformed our technique, which could be attributed to their use of transformers and pre-trained models.…”