Dynamic texture (DT) segmentation, and video processing in general, is currently dominated by methods based on deep neural networks that require the deployment of a large number of layers. Although this parametric approach has shown superior performance for dynamic texture segmentation, all current deep learning methods suffer from one main weakness: they require a large amount of reference annotations to train the models and make them functional. In addition, their results can deteriorate significantly when the network is fed with images or videos dissimilar (in shape, texture, color, etc.) to those previously included in the training dataset. This study explores an unsupervised segmentation approach that can be used in the absence of training data to segment new videos; in particular, it tackles the task of dynamic texture segmentation. This task consists of clustering complex phenomena and characteristics that are repetitive in both space and time, automatically assigning a single class label to each region or group. We present an effective unsupervised learning consensus model for the segmentation of dynamic textures (ULCM). The model is designed to merge multiple segmentation maps, each containing regions of weak quality, in order to achieve a more accurate final segmentation. The diverse labeling fields required for the combination process are obtained by a simplified grouping scheme applied to the input video on three orthogonal planes: xy, yt and xt. In the proposed model, the values of the requantized local binary pattern (LBP) histogram around the pixel to be classified are used as features representing both the spatial and the temporal information repeated in the video. Experiments conducted on the challenging SynthDB dataset show that, in contrast to current dynamic texture segmentation approaches that require parameter estimation or a training step, ULCM is significantly faster, easier to code, simple, and has few parameters. Further qualitative experiments on the YUP++ dataset confirm the efficiency and competitiveness of ULCM.
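To make the feature-extraction idea concrete, the following is a minimal sketch (not the authors' exact implementation) of building a per-pixel spatio-temporal descriptor from requantized LBP histograms computed on the three orthogonal planes of a video volume. The function names and parameters (`win`, `n_bins`) are illustrative assumptions.

```python
import numpy as np

def lbp_codes(img):
    """Basic 8-neighbour LBP codes for a 2-D array (borders excluded)."""
    c = img[1:-1, 1:-1]
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(shifts):
        nb = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (nb >= c).astype(np.uint8) << bit
    return codes

def plane_features(video, t, y, x, win=7, n_bins=16):
    """Requantized LBP histograms of the xy, xt and yt planes passing
    through pixel (t, y, x), concatenated into one feature vector."""
    planes = [video[t], video[:, y, :], video[:, :, x]]   # xy, xt, yt
    centers = [(y, x), (t, x), (t, y)]
    feats = []
    h = win // 2
    for plane, (r, c) in zip(planes, centers):
        codes = lbp_codes(plane.astype(np.float32))
        # window of LBP codes around the (border-shifted) centre pixel
        r0, c0 = max(r - 1 - h, 0), max(c - 1 - h, 0)
        patch = codes[r0:r - 1 + h + 1, c0:c - 1 + h + 1]
        # requantize the 256 possible LBP codes into n_bins histogram bins
        hist, _ = np.histogram(patch, bins=n_bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))
    return np.concatenate(feats)

# toy usage: descriptor for the central pixel of a random video volume
video = np.random.rand(20, 64, 64)   # (t, y, x)
f = plane_features(video, t=10, y=32, x=32)
print(f.shape)   # (48,) = 3 planes x 16 bins
```

Such a descriptor could then feed any clustering scheme to produce the individual labeling fields that the consensus model combines.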