There is currently great interest in analyzing the workflow of minimally invasive operations performed in a physical or simulation setting, with the aim of extracting information that can be used for skill improvement, optimization of intraoperative processes, and comparison of different interventional strategies. The first step toward this goal is to segment the operation into its key interventional phases, which is currently approached by modeling a multivariate signal that describes the temporal usage of a predefined set of tools. Although this technique has shown promising results, it is hampered by the manual extraction of the tool usage sequence and by its inability to simultaneously evaluate the surgeon's skills. In this paper we describe an alternative methodology for surgical phase segmentation and performance analysis based on Gaussian mixture multivariate autoregressive (GMMAR) models of the hand kinematics. Unlike previous work in this area, our technique employs signals from orientation sensors attached to the endoscopic instruments of a virtual reality simulator, without considering which tools are employed at each time-step of the operation. First, based on pre-segmented hand motion signals, a training set of regression coefficients is created for each surgical phase using multivariate autoregressive (MAR) models. Then, a signal from a new operation is processed with GMMAR, wherein each phase is modeled by a Gaussian component of regression coefficients. These coefficients are compared to those of the training set, and the operation is segmented according to the prior probabilities of the surgical phases estimated via GMMAR. The method also allows for the study of motor behavior and hand motion synchronization demonstrated in each phase, a capability that can be incorporated into modern laparoscopic simulators for skill assessment.
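The core idea above, fitting MAR coefficients to windows of multivariate kinematic data and matching them against per-phase training coefficients, can be illustrated with a minimal numpy sketch. This is not the authors' GMMAR implementation: the full method fits a Gaussian mixture over the coefficient space, whereas here, purely for illustration, a test segment is assigned to the phase whose training coefficients are nearest in Euclidean distance. The signal dimensions, AR dynamics, and noise level are invented for the example.

```python
import numpy as np

def fit_mar_coeffs(X, p=1):
    """Least-squares fit of a MAR(p) model x_t = sum_k A_k x_{t-k} + e_t.

    X: (T, d) multivariate signal (e.g. hand orientation channels).
    Returns the flattened coefficient matrix as a feature vector.
    """
    T, d = X.shape
    Y = X[p:]                                            # targets, (T-p, d)
    Z = np.hstack([X[p - k:T - k] for k in range(1, p + 1)])  # lags, (T-p, d*p)
    A, *_ = np.linalg.lstsq(Z, Y, rcond=None)            # (d*p, d)
    return A.ravel()

rng = np.random.default_rng(0)

def simulate(A, T, d=2, noise=0.1):
    """Generate a stable first-order MAR signal with dynamics matrix A."""
    X = np.zeros((T, d))
    for t in range(1, T):
        X[t] = A @ X[t - 1] + noise * rng.standard_normal(d)
    return X

# Two hypothetical surgical phases with distinct hand dynamics:
# phase 0 ~ slow drift, phase 1 ~ oscillatory (rotation-like) motion.
A_phase0 = 0.9 * np.eye(2)
A_phase1 = np.array([[0.0, 0.8], [-0.8, 0.0]])

# Training: one coefficient vector per pre-segmented phase.
train = [fit_mar_coeffs(simulate(A_phase0, 500)),
         fit_mar_coeffs(simulate(A_phase1, 500))]

# Test: a new segment generated by the phase-1 dynamics.
test = fit_mar_coeffs(simulate(A_phase1, 500))
dists = [np.linalg.norm(test - c) for c in train]
phase = int(np.argmin(dists))  # nearest-coefficient phase label
```

Replacing the nearest-neighbor rule with a Gaussian mixture over the coefficient vectors, so that each phase contributes a component with its own mean and covariance, recovers the probabilistic assignment the abstract describes.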