3D part segmentation is an essential step in advanced CAM/CAD workflows. Precise 3D segmentation contributes to a lower defect rate of work-pieces produced by manufacturing equipment (such as CNC machines), thereby improving work efficiency and yielding the attendant economic benefits. Most existing works on 3D model segmentation are based on fully supervised learning, which trains AI models with large annotated datasets. However, models resulting from the fully supervised learning methodology are highly reliant on the completeness of the available dataset, and their generalization ability to new unknown/unseen segmentation types (i.e., additional so-called novel classes) is relatively poor. In this work, we propose and develop a few-shot learning-based approach for effective part segmentation in CAM/CAD, designed to significantly enhance generalization ability and to adapt flexibly to new segmentation tasks using only relatively few samples. As a result, it not only reduces the requirement for exhaustively complete supervision datasets, which are usually unattainable, but also improves flexibility for real-world applications. In the development, drawing inspiration from the attMPTI network described in the open literature, we propose and develop a multi-prototype approach (with a self-attention mechanism) for few-shot point cloud part segmentation. As further improvement and innovation, we additionally adopt a transform net and a center loss block in the network. These components serve to improve the comprehension of the 3D features of the various possible instances of the whole work-piece and to ensure a close distribution of same-class features in the embedding space.
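As a point of reference for the center loss block mentioned above, a minimal sketch of the commonly used center-loss formulation is given below (this is an illustrative assumption of the standard form; the symbols $\boldsymbol{f}_i$, $y_i$, $\boldsymbol{c}_{y_i}$, and $m$ are introduced here only for exposition):

$$
\mathcal{L}_c \;=\; \frac{1}{2}\sum_{i=1}^{m} \bigl\lVert \boldsymbol{f}_i - \boldsymbol{c}_{y_i} \bigr\rVert_2^2 ,
$$

where $\boldsymbol{f}_i$ denotes the embedded feature of the $i$-th point, $y_i$ its class label, $\boldsymbol{c}_{y_i}$ the learned center of class $y_i$ in feature space, and $m$ the number of points in a batch. Minimizing $\mathcal{L}_c$ pulls same-class features toward their class center, which is the mechanism behind the close same-class distribution in feature space noted above.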