Abstract—Developing robots capable of making sense of their environment requires the ability to learn from observations. Action primitive discovery is an important paradigm that allows robots both to imitate humans and to gain an understanding of the tasks people perform. Action primitives serve as a representation of the main building blocks that compose motion. Automatic primitive discovery is an active area of research, and existing methods provide viable solutions for learning primitives from demonstrations. However, when we learn primitives directly from raw data, we need a mechanism to determine which primitives are appropriate for the task at hand: is brushing one's teeth a suitable primitive, or are the actions of grabbing the toothbrush, applying toothpaste to it, and executing the brushing motion better suited? This level of granularity is key to determining primitives that are well suited to a given application. Existing methods for learning primitives do not provide a solution for discovering their granularity. Rather, these techniques stop at arbitrarily chosen levels and often rely on clear, repetitive actions to simplify primitive labeling. Our contribution is a framework for discovering the granularity level of learned primitives that is appropriate for a task. We apply our framework to action primitives learned from motion capture data of human demonstrations that include hand and object motions. This helps identify a well-suited granularity level for our task, avoiding levels that are too low to capture the core pattern of an action as well as levels that are too high to preserve important differences between actions. Our results show that this framework is able to discover the best-suited primitive granularity level for a specific application.