Most eye-tracking experiments are limited to a single subject because gaze points are difficult to track when multiple users are involved and environmental factors can cause interference. To overcome this problem, this paper proposes a gaze-tracking method that can be applied to multiple users simultaneously. Four models are proposed, including FASEM, FAEM, and FAFRCM for the single-user environment as well as FAEM and FAMAM for the multiple-user environment, and we collected raw data on gazing behavior to train them. Using a modified VGG19 architecture and adjusting the Number of Convolutional Layers (NoCL), we obtained and compared the accuracy of the various models to determine the most suitable architecture. Because data for multiple users are not easy to obtain, we first trained the model with single users and then extended it to multiple users through transfer learning. Finally, we propose an adaptive method that integrates the benefits of FAEM and FAMAM.

INDEX TERMS Multiple users, gaze-tracking, position clustering, deep learning.
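To illustrate the two ideas summarized above, adjusting the number of convolutional layers of a VGG19 backbone and transferring single-user weights to the multi-user setting, the following is a minimal sketch, assuming a PyTorch implementation with a 2-D gaze-point regression head. The function name build_gaze_model, the checkpoint file name, and the interpretation of NoCL as "keep the first NoCL convolutional layers of the VGG19 feature extractor" are illustrative assumptions, not the paper's exact implementation.

```python
# Hypothetical sketch: truncate a VGG19 feature extractor to a chosen
# number of convolutional layers (NoCL) and fine-tune it for gaze regression.
import torch
import torch.nn as nn
from torchvision.models import vgg19

def build_gaze_model(nocl: int) -> nn.Module:
    """Keep the first `nocl` conv layers of VGG19 (with the layers between
    them), then attach a small regression head for a 2-D gaze point."""
    backbone = vgg19(weights=None).features  # pretrained weights could be loaded instead
    kept, conv_seen = [], 0
    for layer in backbone:
        kept.append(layer)
        if isinstance(layer, nn.Conv2d):
            conv_seen += 1
            if conv_seen == nocl:
                break
    out_channels = next(m.out_channels for m in reversed(kept)
                        if isinstance(m, nn.Conv2d))
    features = nn.Sequential(*kept, nn.AdaptiveAvgPool2d(1), nn.Flatten())
    head = nn.Sequential(nn.Linear(out_channels, 128), nn.ReLU(), nn.Linear(128, 2))
    return nn.Sequential(features, head)

# Transfer learning sketch: reuse single-user backbone weights, freeze them,
# and train only the head on the smaller multi-user data set.
model = build_gaze_model(nocl=12)
# model[0].load_state_dict(torch.load("single_user_backbone.pt"))  # hypothetical checkpoint
for p in model[0].parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(model[1].parameters(), lr=1e-4)
```

Comparing models trained with different NoCL values, as the abstract describes, would then amount to calling build_gaze_model with several candidate depths and evaluating each on held-out gaze data.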