Although neural network methodologies have been established for decades, only recently have they achieved strong performance in practical deployments, owing largely to advances in hardware computational capacity and the availability of large training datasets. Nonetheless, substantial challenges remain in applying deep learning in many domains, mainly because of the scarcity of labeled data that is large and varied enough for models to learn useful representations. In mechanical assembly inspection, for instance, annotating data for every type of mechanical part is labor-intensive; moreover, the data must be re-annotated whenever a part's specification changes, and the inspection system typically cannot collect data until the first physical samples have been built.

This paper proposes a solution to these challenges for visual mechanical assembly inspection by processing point cloud data acquired with a three-dimensional (3D) scanner. To reduce the need for manually labeling large amounts of data, we train and validate the neural network on point clouds synthetically generated from computer-aided design (CAD) models, reserving real sensor data exclusively for testing. The domain gap between synthetic and real data is a significant obstacle to this strategy. To narrow it, we apply several preprocessing techniques and adopt a neural network architecture that emphasizes features shared between the synthetically generated data and the real data from the 3D sensor, that is, features that do not change significantly between the two domains.
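As a minimal illustration of the synthetic-data strategy described above, the sketch below samples a point cloud from a CAD mesh and applies simple preprocessing (centering, unit-sphere scaling, and Gaussian jitter) of the kind commonly used to narrow the synthetic-to-real gap. This is an assumed example, not the paper's actual pipeline: the file name, point count, and noise level are illustrative placeholders, and `trimesh` is used only as one convenient way to sample a mesh surface.

```python
# A minimal sketch (assumed names and values): generate a synthetic
# training point cloud from a CAD mesh and preprocess it so it more
# closely resembles data from a 3D sensor.
import numpy as np
import trimesh


def synthesize_point_cloud(cad_path: str, n_points: int = 4096,
                           noise_std: float = 0.005,
                           seed: int = 0) -> np.ndarray:
    """Sample a point cloud from a CAD mesh and normalize it.

    Centering and unit-sphere scaling remove pose and scale differences,
    and Gaussian jitter mimics sensor noise; these are two simple
    preprocessing steps often used to reduce the synthetic-to-real gap.
    """
    rng = np.random.default_rng(seed)
    mesh = trimesh.load(cad_path, force="mesh")

    # Uniformly sample points on the mesh surface.
    points, _ = trimesh.sample.sample_surface(mesh, n_points)
    points = np.asarray(points, dtype=np.float64)

    # Center at the origin and scale into the unit sphere so that
    # synthetic and real clouds share the same coordinate range.
    points -= points.mean(axis=0)
    points /= np.linalg.norm(points, axis=1).max()

    # Add Gaussian jitter as a crude stand-in for 3D-scanner noise.
    points += rng.normal(scale=noise_std, size=points.shape)
    return points


if __name__ == "__main__":
    # "bracket.stl" is a hypothetical CAD file path.
    cloud = synthesize_point_cloud("bracket.stl")
    print(cloud.shape)  # (4096, 3)
```

In practice, a pipeline of this kind would typically also simulate scanner-specific effects such as partial views and occlusion, but the normalization and noise-injection steps shown here capture the basic idea of making synthetic training data resemble real sensor data.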