In recent times, smartphone usage has become increasingly popular for learning. Users exhibit multiple gesture interactions with smartphones while reading, which can provide valuable implicit feedback about the content consumed. Smartphones have many embedded sensors that capture a plethora of user-interaction data. The on-device gyroscope and accelerometer can be enabled to capture the motion variations produced by gesture interactions such as scrolling, pinch-to-zoom, tap, orientation change, and screen capture. This research work trains machine learning classifier models on smartphone sensor readings to identify users' screen gesture interactions. Forty-seven users performed various screen gestures through an Android application and contributed to the data collection activity. Aggregated time-domain features were extracted from the preprocessed data, and four groups of data were used to train the models. Extensive experiments are conducted to evaluate the proposed system using Random Forest (RFC), Support Vector Machine (SVM), Extreme Gradient Boosting (XGB), Adaptive Boosting (AdaBoost), Naïve Bayes (NB), and K-Nearest Neighbour (KNN) classifiers. A detailed analysis of the success rate and accuracy is performed. The best identification accuracy of 97.58% is achieved by the Random Forest classifier, followed by Extreme Gradient Boosting (XGB) and K-Nearest Neighbour (KNN) with accuracies of 95.97% and 93.55%, respectively.
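
To make the described pipeline concrete, the following minimal Python sketch aggregates time-domain features over fixed windows of accelerometer and gyroscope readings and trains a Random Forest classifier. The window length, the specific statistics, and the column names (acc_*, gyr_*, gesture) are illustrative assumptions, not the paper's exact configuration.

    # Minimal sketch of the described pipeline. Assumptions: window size,
    # feature set, and column names are illustrative, not the paper's setup.
    import numpy as np
    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    AXES = ["acc_x", "acc_y", "acc_z", "gyr_x", "gyr_y", "gyr_z"]  # assumed names

    def extract_time_domain_features(window: pd.DataFrame) -> np.ndarray:
        """Aggregate time-domain statistics per sensor axis for one window."""
        stats = []
        for col in AXES:
            values = window[col].to_numpy()
            stats.extend([values.mean(), values.std(), values.min(), values.max()])
        return np.array(stats)

    def windowed_dataset(df: pd.DataFrame, window_size: int = 128):
        """Slice the sensor stream into fixed-length windows; label each
        window with its majority gesture label (assumed 'gesture' column)."""
        X, y = [], []
        for start in range(0, len(df) - window_size + 1, window_size):
            window = df.iloc[start:start + window_size]
            X.append(extract_time_domain_features(window))
            y.append(window["gesture"].mode()[0])
        return np.vstack(X), np.array(y)

    # Usage with a hypothetical preprocessed recording:
    # df = pd.read_csv("sensor_readings.csv")  # columns: acc_*, gyr_*, gesture
    # X, y = windowed_dataset(df)
    # X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y)
    # clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
    # print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))

The same feature matrix can be reused to compare the other classifiers reported in the study (SVM, XGB, AdaBoost, NB, KNN) by swapping the estimator, which is presumably how the accuracy comparison was carried out.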