We introduce a synchronized and calibrated multi-view video and motion capture dataset for motion analysis and gait identification. The 3D gait dataset consists of 166 data sequences recorded for 32 people. In 128 data sequences, each of the 32 individuals wore his/her own clothes; in 24 data sequences, 6 of the 32 performers changed clothes; and in 14 data sequences, 7 of the performers wore a backpack. In a single recording session, every performer walked from right to left, then from left to right, and afterwards diagonally from the upper-right to the bottom-left and from the bottom-left to the upper-right corner of a rectangular scene. We demonstrate that a baseline algorithm achieves promising results in a challenging scenario in which the gallery/training data were collected during walks perpendicular to or facing the cameras, whereas the probe/testing data were collected during diagonal walks. We compare the biometric gait recognition performance achieved on marker-less and marker-based 3D data. We report the recognition performance achieved by a convolutional neural network and by classic classifiers operating on gait signatures obtained through multilinear principal component analysis. The availability of synchronized multi-view image sequences together with the 3D locations of body markers creates a number of possibilities for the extraction of discriminative gait signatures. The gait data are available at http://bytom.pja.edu.pl/projekty/hm-gpjatk/.
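To illustrate the kind of pipeline mentioned above (multilinear PCA signatures fed to a classic classifier), the following is a minimal sketch, not the paper's implementation. It uses a one-pass, non-iterative variant of MPCA in which per-mode projection matrices are taken from the leading eigenvectors of the mode covariances; the tensor layout (frames × markers × coordinates), the ranks, the synthetic data, and the helper names (`fit_mpca`, `transform`) are all illustrative assumptions, and a 1-nearest-neighbour classifier stands in for the "classic classifiers".

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def unfold(t, mode):
    """Mode-n unfolding: move the chosen mode to the front and flatten the rest."""
    return np.moveaxis(t, mode, 0).reshape(t.shape[mode], -1)

def fit_mpca(tensors, ranks):
    """One-pass MPCA (hypothetical helper): for each tensor mode, take the
    top-k eigenvectors of the summed mode covariance over the training set."""
    projs = []
    for mode, k in enumerate(ranks):
        C = sum(unfold(t, mode) @ unfold(t, mode).T for t in tensors)
        _, vecs = np.linalg.eigh(C)        # eigenvalues in ascending order
        projs.append(vecs[:, -k:])         # keep the top-k eigenvectors
    return projs

def transform(t, projs):
    """Project a gait tensor into the learned multilinear subspace and flatten
    it into a signature vector."""
    for mode, U in enumerate(projs):
        t = np.moveaxis(np.tensordot(U.T, t, axes=(1, mode)), 0, mode)
    return t.ravel()

rng = np.random.default_rng(0)
# Synthetic stand-in data (chance-level by construction): 64 sequences,
# each 100 frames x 20 markers x 3D coordinates, from 8 hypothetical subjects.
X = rng.normal(size=(64, 100, 20, 3))
y = rng.integers(0, 8, size=64)

projs = fit_mpca(X[:48], ranks=(10, 5, 3))
gallery = np.array([transform(t, projs) for t in X[:48]])
probe = np.array([transform(t, projs) for t in X[48:]])
clf = KNeighborsClassifier(n_neighbors=1).fit(gallery, y[:48])
print("probe accuracy:", clf.score(probe, y[48:]))
```

On real gallery/probe splits, such as the perpendicular-versus-diagonal protocol described above, the gallery walks would be used to fit the projections and the classifier, and the diagonal walks would serve as probes.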