Group re-identification (G-ReID) aims to associate group images that contain the same persons across different cameras. The key challenge of G-ReID is that intra-group member and layout variations are too numerous to enumerate exhaustively. To this end, we propose a novel uncertainty modeling scheme, which treats each image as a distribution conditioned on its current members and layout, and then mines potential group features through random sampling. Based on both the potential and the original group features, the uncertainty modeling learns better decision boundaries; it is implemented by two modules, the member variation module (MVM) and the layout variation module (LVM). Furthermore, we propose a novel second-order transformer framework (SOT), inspired by the observation that the position modeling in transformers is well suited to the G-ReID task. SOT consists of an intra-member module and an inter-member module. Specifically, the intra-member module extracts a first-order token for each member, and the inter-member module then learns a second-order token, which can be regarded as a token of tokens, from these first-order tokens to serve as the group feature. Extensive experiments have been conducted on three public datasets: CSG, DukeGroup, and RoadGroup. The results show that the proposed SOT outperforms all previous state-of-the-art methods.
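To make the "token of tokens" idea concrete, the following is a minimal PyTorch sketch of how an inter-member module could aggregate per-member first-order tokens into a single second-order group token. The class name, feature dimension, the use of a learnable group token prepended to the member tokens, and the standard transformer encoder are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class SecondOrderTokenSketch(nn.Module):
    """Illustrative inter-member module: per-member (first-order) tokens are
    fused into a learnable group token (the "token of tokens") via self-attention."""

    def __init__(self, dim=256, num_heads=4, depth=2):
        super().__init__()
        # Learnable second-order token, one per group image (assumed design choice).
        self.group_token = nn.Parameter(torch.zeros(1, 1, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=num_heads,
                                           batch_first=True)
        self.inter_member = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, member_tokens):
        # member_tokens: (batch, num_members, dim) first-order tokens,
        # one per member, assumed to come from an intra-member backbone.
        b = member_tokens.size(0)
        tokens = torch.cat([self.group_token.expand(b, -1, -1), member_tokens], dim=1)
        tokens = self.inter_member(tokens)
        # The refined group token is taken as the group-level feature.
        return tokens[:, 0]

# Usage: 3 group images, each with 4 detected members and 256-d first-order tokens.
group_feature = SecondOrderTokenSketch()(torch.randn(3, 4, 256))
print(group_feature.shape)  # torch.Size([3, 256])
```

Attending over member tokens rather than pooling them lets the group feature remain order-invariant with respect to members while still exploiting their pairwise relations, which is the property the abstract attributes to the second-order token.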