Grid cells are crucial for path integration and the representation of the external world. The spikes of grid cells spatially cluster into grid fields, which encode important information about allocentric position. To decode this information, studying the spatial structure of grid fields is a key task for both experimenters and theorists. Experiments reveal that grid fields form a hexagonal lattice during planar navigation and become anisotropic beyond planar navigation; during volumetric navigation, they lose global order but retain local order. How grid cells form these different field structures under different navigation modes remains an open theoretical question, and to date few models connect to the latest discoveries or explain the formation of the various grid field structures. To fill this gap, we propose an interpretive plane-dependent model of three-dimensional (3D) grid cells for representing both two-dimensional (2D) and 3D space. The model first evaluates motion with respect to planes, such as the plane the animal stands on or the tangent plane of the motion manifold. Projecting the motion onto these planes leads to anisotropy, and error in the perception of the planes degrades grid field regularity. A training-free recurrent neural network (RNN) then maps the processed motion information to grid fields. We verify that our model can generate regular grid fields, anisotropic grid fields, and grid fields with only local order, and that it is compatible with switching between navigation modes. Furthermore, simulations predict that the degradation of grid field regularity is inversely proportional to the interval between two consecutive perceptions of the planes. In conclusion, our model is among the first to address grid field structure in the general case. Compared with other pioneering models, our theory argues that the anisotropy and loss of global order result from uncertain perception of planes rather than from insufficient training.
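To illustrate the plane-dependent mechanism described above, the following is a minimal sketch of the plane-projection step only: 3D velocity is projected onto a noisily perceived plane and path-integrated in plane coordinates. It is not the paper's implementation; the Gaussian noise model of plane perception, the re-perception interval, and all function names are assumptions for illustration, and the training-free RNN that maps the projected motion to grid fields is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def perceive_plane(true_normal, noise_std):
    """Noisy estimate of the plane's unit normal (Gaussian perturbation is an assumption)."""
    n = true_normal + rng.normal(scale=noise_std, size=3)
    return n / np.linalg.norm(n)

def plane_basis(normal):
    """Two orthonormal in-plane axes spanning the perceived plane."""
    a = np.array([1.0, 0.0, 0.0])
    if abs(normal @ a) > 0.9:          # avoid a near-parallel reference vector
        a = np.array([0.0, 1.0, 0.0])
    e1 = a - (a @ normal) * normal
    e1 /= np.linalg.norm(e1)
    e2 = np.cross(normal, e1)
    return e1, e2

def integrate_in_plane(velocities, true_normal, dt=0.02,
                       noise_std=0.05, reperceive_every=50):
    """Project 3D velocities onto the perceived plane and accumulate the 2D
    displacement that would drive a downstream grid network."""
    pos2d = np.zeros(2)
    for t, v in enumerate(velocities):
        if t % reperceive_every == 0:              # plane is re-perceived at intervals
            n = perceive_plane(true_normal, noise_std)
            e1, e2 = plane_basis(n)
        v_in_plane = np.array([v @ e1, v @ e2])    # projection discards off-plane motion
        pos2d += v_in_plane * dt
    return pos2d
```

In this toy picture, the projection step discards motion components orthogonal to the plane (a source of anisotropy), while noise in the perceived normal and the interval between re-perceptions jointly perturb the integrated position, which is the kind of degradation of regularity the abstract refers to.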