There is growing interest in the kinematic analysis of human functional upper extremity movement (FUEM) for applications such as health monitoring and rehabilitation. Deconstructing functional movements into activities, actions, and primitives is a necessary step for many of these kinematic analyses. Advances in machine learning have led to progress in human activity and action recognition. However, the utility of these approaches for analyzing the FUEM primitives of reaching and targeting during reach-to-grasp and reach-to-point tasks remains limited. Domain experts use a variety of methods, such as kinematic thresholds, for segmenting the reaching and targeting motion primitives, with no consensus on which methods are best. Additionally, current studies are small enough that segmentation results can be manually inspected for correctness. As interest in FUEM kinematic analysis expands, such as in the clinic, the amount of data needing segmentation will likely exceed the capacity of the segmentation workflows used in research laboratories, requiring new methods and workflows to make segmentation less cumbersome. This paper investigates five reaching and targeting motion primitive segmentation methods in two different domains (haptic simulation and real world) and how to evaluate these methods. This work finds that most of the segmentation methods evaluated perform reasonably well given current limitations in our ability to evaluate segmentation results. Furthermore, we propose a method to automatically flag potentially incorrect segmentation results for further review by a human evaluator. Clinical impact: This work supports efforts to automate aspects of processing the upper extremity kinematic data used to evaluate reaching and grasping, which will be necessary for more widespread usage in clinical settings.
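As an illustration of the kinematic-threshold family of segmentation methods mentioned above, a common variant marks movement onset and offset where the hand's speed profile crosses a fraction of its peak speed. The sketch below is not the paper's implementation; the function name, the 10% default threshold, and the assumption of a single-peaked speed profile are ours.

```python
import numpy as np

def threshold_segment(speed, fraction=0.1):
    """Segment one reach from a 1-D speed profile (samples over time).

    Returns (onset, offset) sample indices: the first and last samples
    at or above `fraction` of the peak speed. Illustrative sketch only;
    assumes a single, roughly bell-shaped speed profile.
    """
    speed = np.asarray(speed, dtype=float)
    thresh = fraction * speed.max()          # e.g. 10% of peak speed
    above = np.flatnonzero(speed >= thresh)  # samples above threshold
    return int(above[0]), int(above[-1])

# Usage on a synthetic bell-shaped speed profile
speed = np.sin(np.linspace(0, np.pi, 101))
onset, offset = threshold_segment(speed)
```

In practice such thresholds are applied to tangential hand velocity, and the chosen fraction (or an absolute cutoff in m/s) varies across studies, which is part of the lack of consensus the paper notes.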