The observed sequence of an object's segmentation masks reveals how its visible geometric form changes over time, and predicting these changes can help resolve several difficulties in multi-object tracking and segmentation (MOTS). To this end, we propose the entangled appearance and motion structures network (EAMSN), which predicts object segmentation masks at the pixel level by integrating a VAE with an LSTM. Independently of the surroundings, each EAMSN retains complete knowledge of the probable changes in an object's observable map and their associated dynamics. This implies that EAMSN captures the object in a meaningful way rather than depending on specific training examples. Building on this, we propose a novel MOTS algorithm: by employing a separate EAMSN for each object category and training it offline, ambiguities in the detected segmentation mask can be resolved and the object's true boundaries estimated precisely at each step. To demonstrate the effectiveness of the proposed method, we evaluate our tracker on the KITTI MOTS and MOTSChallenge datasets, which contain car and pedestrian objects. Accordingly, we build distinct EAMSNs for cars and pedestrians, trained on the ModelNet40 and Human3.6M datasets, respectively. The discrepancy between the training and testing data indicates that EAMSN does not depend on the particular training data. Finally, we compare our approach with a range of alternative methods; relative to the published results, our technique achieves the best overall performance.
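To make the VAE-plus-LSTM pairing concrete, the following is a minimal PyTorch sketch of a mask-sequence model in that spirit. It is an illustration only: the class name MaskSeqVAE, all layer sizes, and the overall wiring are assumptions, not the paper's actual EAMSN architecture.

```python
# Hypothetical sketch: a VAE encodes each mask frame into a latent Gaussian,
# an LSTM models the dynamics of the latent codes over time, and a decoder
# maps the LSTM states back to pixel-level mask logits. All names and sizes
# below are illustrative assumptions, not the published EAMSN design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MaskSeqVAE(nn.Module):
    def __init__(self, latent_dim=64, hidden_dim=128, mask_size=64):
        super().__init__()
        self.mask_size = mask_size
        # Encoder: flatten each binary mask frame into a feature vector.
        self.encoder = nn.Sequential(
            nn.Linear(mask_size * mask_size, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)
        # LSTM: capture temporal dynamics of the latent mask codes.
        self.lstm = nn.LSTM(latent_dim, hidden_dim, batch_first=True)
        # Decoder: map each hidden state back to a mask logit map.
        self.decoder = nn.Sequential(
            nn.Linear(hidden_dim, 256), nn.ReLU(),
            nn.Linear(256, mask_size * mask_size))

    def forward(self, masks):
        # masks: (batch, time, H, W) binary segmentation masks.
        b, t, h, w = masks.shape
        feat = self.encoder(masks.reshape(b * t, h * w))
        mu, logvar = self.to_mu(feat), self.to_logvar(feat)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        out, _ = self.lstm(z.reshape(b, t, -1))
        logits = self.decoder(out).reshape(b, t, h, w)
        return logits, mu, logvar

# Usage: reconstruct a toy sequence of five 64x64 masks; shifting the
# targets by one step would turn reconstruction into next-step prediction.
model = MaskSeqVAE()
seq = (torch.rand(2, 5, 64, 64) > 0.5).float()
logits, mu, logvar = model(seq)
recon = F.binary_cross_entropy_with_logits(logits, seq)
kld = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
loss = recon + kld  # standard VAE ELBO objective
```

Combining a variational latent space with a recurrent backbone in this way lets the model represent a distribution over plausible mask shapes while the LSTM tracks how that distribution evolves, which matches the abstract's description of jointly entangled appearance and motion structures.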