Traditionally, 3D segmentation tasks have been tackled in silos, with semantic and instance segmentation addressed separately. This disjointed approach limits interoperability and fails to exploit the potential of an integrated, multitask solution. To overcome this limitation, we introduce TUS-Net, a transformer-based architecture designed for both semantic and instance segmentation of point clouds. Our model introduces two pivotal advancements. First, it employs a superpoint-based pre-processing step that reduces computational overhead without compromising precision. Second, it leverages a dual-branch design within the transformer architecture, allowing it to adapt dynamically to the demands of both segmentation tasks. Extensive experiments on the ScanNet dataset demonstrate that TUS-Net surpasses prevailing specialized models by a substantial margin while maintaining remarkable computational efficiency. Notably, it achieves a 5.7% improvement in mean Average Precision (mAP) for instance segmentation and strikes a favorable balance between accuracy and runtime for semantic segmentation. These results underscore the versatility, efficiency, and strong performance of TUS-Net, positioning it as a robust framework for 3D point cloud segmentation.