6D target object detection is of great importance to many applications such as robotics, industrial automation, and unmanned vehicles, and is increasingly influencing broad industries including manufacturing, transportation, and retail. Unlike the more common object detection methods that use two-dimensional data such as RGB or depth images, the method proposed here relies on the three-dimensional point clouds of target objects to detect them in cluttered scenes in an end-to-end fashion. Conventional point cloud-based 6D object detection methods, however, depend on key-point detection results whose robustness is not straightforward for humans to assess; this drawback means that such methods require expert knowledge to tune. In this paper, a point cloud-based 6D target object detection method is introduced that uses segmented object point cloud patches to predict object 6D poses and identities. In this way, key-point detection is replaced with a point cloud segmentation procedure that is easier to visualize and tune. To extract the object pose and identity information from point cloud patches, we propose an end-to-end data-driven pose correction model that can be trained on synthetic data and run on real-world data. A simple yet efficient basis spanning layer booster is proposed to accelerate the learning process and improve pose estimation precision. An alignment error-based loss function is also introduced to make the proposed pose correction model efficiently trainable. Experiments show that although the proposed model is trained using only object CAD models, its 6D detection performance matches that of models trained on view data. Thus, the proposed method is suitable for 6D detection applications where object CAD models are available instead of labeled scene data.